Responding to Buckingham’s Article

Responding to the Buckingham (2020) article, “Systematic phonics instruction belongs in evidence-based reading programs: A response to Bowers”

Buckingham (2020) published an article challenging the Bowers (2020) claim that there is little or no evidence in support of systematic phonics. Despite a long list of falsehoods (please see the note at the end of this blogpost) and mischaracterizations in the Buckingham article, and despite publishing that article without asking Bowers to review it, the journal The Educational and Developmental Psychologist has rejected a response from Bowers and Bowers (2021) and refused to issue a correction of straightforward and significant errors.

We have now published our response on PsyArXiv (https://doi.org/10.31234/osf.io/f5qyu) and added this blogpost to encourage discussion. To simply ignore the challenge and continue to publish the claim that the evidence for systematic phonics is strong is poor academic practice. And if there are no good responses to the list of errors and mischaracterizations summarized in Table 1 of Bowers and Bowers (2021), there is a serious problem with Buckingham’s article.

There is also no excuse for continuing to mischaracterize an alternative approach to phonics we have been advocating, namely, Structured Word Inquiry (SWI). It is one thing to disagree with the approach; it is quite another to get basic facts wrong, such as Buckingham’s claim that SWI does not teach grapheme-phoneme correspondences (GPCs). In a future blogpost we will focus on how politics and ideology have compromised the science of reading instruction. But for now, let us highlight one small example of this: the refusal of The Educational and Developmental Psychologist to correct this false claim regarding SWI.


After the paper was rejected, J. Bowers wrote the following letter to the action editor:

Dear **, yes, I plan to self-publish the complete article, but I expect only a small percentage of your readers will come across it. It is not just a question of being disappointed; it is that I don’t understand the procedure or the decision. You published a paper by Buckingham critiquing my work that was full of falsehoods and highly misleading statements without asking me to review it. In a revision, I summarized many examples of these falsehoods and misleading statements in a new Table 1, and neither of the reviewers nor you have challenged any of these points. In addition, one of the two reviewers appears to have recommended publishing a response, and nevertheless you have rejected the manuscript. In the same special issue, there was another seriously flawed article by Stainthorp supporting phonics (as I detailed in a previous letter to you, the main claims of this paper are straightforwardly contradicted by Bowers, 2000 [should have written Bowers, 2020], but the paper did not cite this work), and subsequently, another pro-phonics paper by Siegel that included 24 self-citations out of a total of 28 references. This all seems to support my concern that politics is biasing the science of reading.

I would be happy to publish a “letter to the editor” style response, but given the challenges with this, would you consider publishing a simple correction with a citation to my self-published version of the paper (I’m planning on posting the paper on PsyArXiv)? There are multiple falsehoods, but the most egregious is Buckingham’s claim that SWI does not teach GPCs:

“Students are encouraged to ‘spell words out’ using letter names rather than sound them out using GPCs. This contradicts any claim that SWI teaches GPCs, given that GPCs by definition involve the speech sounds associated with the letters, not their names.” (p. 109)

As noted in our rejected paper, we have repeatedly emphasized that GPCs are an essential aspect of SWI. For example, in Bowers and Bowers (2017) we wrote:

“SWI emphasizes that English spellings are organized around the interrelation of morphology, etymology, and phonology and that it is not possible to accurately characterize grapheme–phoneme correspondences in isolation of these other sublexical constraints” (p. 124) … “We have no doubt that learning grapheme–phoneme correspondences is essential” (p. 133).

And in Bowers and Bowers (2018b) we wrote:

“To avoid any confusion, it is important to emphasize that the explicit instruction of orthographic phonology — how grapheme-phoneme correspondences work—is a core feature of SWI. However, unlike phonics, SWI considers grapheme-phonemes within the context of morphology and etymology.” (pp. 409-410)


We have also made this point multiple times directly to Buckingham in personal communications, for example in Twitter exchanges:

5 July 2019 | Replying to @buckingham_j, @BrettYoud, and 2 others

SWI teaches GPCs from the start. In the context of morphemes. It teaches GPCs, morphemes, vocabulary together. More data needed to support hypothesis, but the evidence for teaching GPC by themselves is not strong.


19 February 2020 | Replying to @buckingham_j, @Borto74, and @BrettYoud

SWI proposes a better way to teach GPCs, so makes sense from start. Not defined by when it starts, but how it teaches literacy, namely a common set of tools to teach the interactions between morphology, etymology, and phonology for reading, spelling, & vocabulary.

I would be pleased if you would consider publishing a correction to this one specific point (this false claim makes it even harder to get researchers to consider SWI as an alternative to phonics, which was the main goal of publishing my critique of phonics in the first place), along with a reference to our self-published PsyArXiv paper. Thanks so much for getting back to me,

Jeff


In response we received the following:

Dear Jeffrey,

We have completed a review of your request and I appreciate your patience.

In line with journal policy all submitted articles must go through a peer-review process and on this basis the article by Jennifer Buckingham was approved for publication. In addition, and consistent with acceptable practice, Editors have the final say in rejecting manuscripts that are not judged to meet the remit of the journal and to determine content. On behalf of the Publisher and Editor, the editorial decision to not accept your manuscript has been reviewed and upheld. The alleged falsehoods recorded in Table 1 have also been reviewed by the Editor, Special Issue Editors, and Publisher. As Editor, and based on the review of the above parties, I see no basis to seek a retraction or correction of any of the content in the published article or publish additional work on the topic and consider this to be the end of the matter.

Warm regards,


The conclusion that there is “no basis to seek a retraction or correction of any of the content in the published article” is unambiguously false as documented above, and indeed, a simple Google search for “structured word inquiry” identifies many illustrations of SWI providing explicit instruction in GPCs. You could not ask for a more straightforward demonstration that Buckingham was wrong about a key feature of SWI. Note that this is not a mistake on a trivial point; it is a falsehood that will discourage researchers from taking this alternative approach seriously. And this is just one of many straightforward and significant mistakes we identified.

Of course, everyone in academia faces challenges when trying to publish, and if this was just an isolated example, it would not matter much. But our experience trying to publish this response has been more or less typical. That is, the reviewers were strongly committed to phonics, the reviews were full of errors and ignored the key issues we raised (in this case summarized in Table 1), and the action editor was not willing to engage with the science when confronted with the straightforward errors of the reviewers (there have been a few honorable exceptions to this). For more examples of the politics of reading instruction making it difficult to publish work critical of phonics, see the final section of a previous blogpost you can find here: https://jeffbowers.blogs.bristol.ac.uk/buckingham/. And this has led to a situation in which most researchers studying literacy instruction believe that the science of reading strongly supports systematic phonics, when in fact, there is almost no evidence at all. If this seems like an absurd claim, it should be straightforward for someone to identify flaws in our arguments. Please post your responses below.


Note: Dr. Buckingham has contacted me by email claiming that I have made “some serious allegations that are defamatory and untrue”. She notes that I used the word “falsehood” six times, and that according to the Cambridge, Oxford, and Collins dictionaries, the term means “a lie, or a deliberate act of deception”. She has asked me to take down the blog, or alternatively, amend the defamatory statements.

I am happy to note that I was not suggesting that Buckingham was lying. I was using the primary definition found in the Oxford dictionary: “the state of being untrue” (see: https://www.lexico.com/definition/falsehood). This is not in any way a retraction, as Buckingham has indeed made multiple false statements. Buckingham has indicated that she is not willing to engage with my claims at this stage, so hopefully someone else will address the list of points we raised in Table 1 of Bowers and Bowers (2021).


References:

Bowers, J. S. (2020). Reconsidering the evidence that systematic phonics is more effective than alternative methods of reading instruction. Educational Psychology Review, 32, 681–705. https://doi.org/10.1007/s10648-019-09515-y

Bowers, J. S., & Bowers, P. N. (2021). The science of reading provides little or no support for the widespread claim that systematic phonics should be part of initial reading instruction: A response to Buckingham. PsyArXiv. https://doi.org/10.31234/osf.io/f5qyu

Buckingham, J. (2020). Systematic phonics instruction belongs in evidence-based reading programs: A response to Bowers. The Educational and Developmental Psychologist, 37(2), 105-113. https://doi.org/10.1017/edp.2020.12

1 thought on “Responding to Buckingham’s Article”

  1. It is striking that since publishing my responses to the Buckingham (2020) and Fletcher et al. (2020) articles, I have received zero responses to my blogpost from researchers who claim that the science of reading strongly supports systematic phonics. One person who has responded in his own blogpost is Greg Ashman. I say “responded”, but he refused to respond to any of the errors listed in Table 1 of Bowers & Bowers (2021) when I posed questions on his blogpost. Indeed, he introduced a whole new set of mistakes, including a mistake in the title of the blogpost “Structured word inquiry: Welcome to a world where systematic phonics does not work and where young children who cannot read should begin with prefixes and suffixes”. In fact, SWI does not begin instruction with prefixes and suffixes for any child. To see more of his errors, read the comments at: https://fillingthepail.substack.com/p/structured-word-inquiry

    In a subsequent Twitter exchange, Ashman clarified why he is unwilling to address the errors in Table 1:

    Jeff Bowers
    “Yes, I (and Peter Bowers) responded in some detail to the points you made in your blogpost. Do you not have any comments to the points I’ve made in my Buckingham or Fletcher response articles or blogposts?”

    Greg Ashman
    “Yes. Your analysis is post-hoc, introduces too many researcher degrees of freedom, is therefore invalid and not worth addressing in detail, as I write in the post.”

    Jeff Bowers
    “You have written 8 blogposts directed at me regarding phonics and SWI. But not worth addressing any of the specific mistakes/mischaracterizations listed in Table 1 of my Buckingham response or the errors I note in the Fletcher et al. response? It is all just too post-hoc?”

    Greg Ashman
    “Yes”

    OK, so the reason why Ashman will not respond to the points in Table 1 is that it is all just post-hoc. I want to briefly respond (again) to his repeated false claim that my analysis is post-hoc. But first, let’s go through the points in Table 1 and see whether the post-hoc claim makes sense here.

    1) Buckingham claims that phonics occurs alongside morphological instruction. We provide quotes from Buckingham and others that show this is misleading at best (all proponents of phonics say phonics should come before morphology). The post-hoc criticism does not apply.

    2) Regarding the NRP, Buckingham writes: “…the effect sizes are moderate, including for synthetic phonics, and are certainly stronger than the evidence found for any other method, including whole language.” We point out that the NRP included only 4 studies that compared synthetic phonics to whole language, and the effect sizes were d = 0.91, d = 0.12, d = 0.07, and d = −0.47. Again, the post-hoc criticism does not apply.

    3) Regarding the Camilli et al. (2006) meta-analysis, Buckingham claims there was an advantage of systematic over unsystematic phonics. We note that the effect of d = .123 was not statistically significant. The post-hoc criticism does not apply.

    4) Regarding the Torgerson et al. (2006) meta-analysis, Buckingham claims that the word reading results were moderate and significant after an outlier study was removed. We note that when *Torgerson et al.* themselves removed the study, they found the spelling effect was significant under a fixed-effects model (d = .20, p = 0.03) and nonsignificant under a random-effects model (d = .21, p = 0.09), with neither effect moderate in size (see the sketch after this list for how the two models can disagree). The post-hoc criticism does not apply.

    5) Regarding the McArthur et al. (2012) meta-analysis, Buckingham claims that there is no reason to remove the Levy studies. We detail why the design of those studies does not provide an estimate of the effect size of phonics in general (the claim of the meta-analysis). But this is an example of removing studies from a meta-analysis, a point we return to below.

    6) Regarding the Galuschka et al. (2014) meta-analysis, Buckingham claims the findings support the conclusion that phonics was the most effective method assessed. As detailed in the article, this is just a statistical error, and accordingly, the post-hoc criticism does not apply.

    7) Buckingham claims that SWI does not teach GPCs. This is just false, and accordingly, the post-hoc criticism does not apply.

    8) Buckingham claims that the reading and writing SAT scores improved more than the math and science scores between 2011 and 2015. This is misleading, as detailed in Figure 1 of the paper. Accordingly, the post-hoc criticism does not apply.

    9) Buckingham claims that the Walker et al. (2015) analysis of SAT scores does not contradict her claim that reading and writing improved after the introduction of the PSC (the phonics screening check). This is again misleading, and indeed, she provides incorrect numbers to support her case. Again, the post-hoc criticism does not apply.

    10) Buckingham claims that the Devonshire et al. study that provided evidence for SWI used a control condition that included both phonics and whole word instruction. This is again misleading, as detailed in the paper. The post-hoc criticism does not apply.

    11) Regarding SWI, Buckingham claims that there “is no evidence of effectiveness at all”. This is false, so again, the post-hoc criticism does not apply.
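
    To make the fixed- versus random-effects contrast in point 4 concrete, here is a minimal Python sketch of inverse-variance (fixed-effects) and DerSimonian-Laird (random-effects) pooling. The per-study effect sizes and variances below are hypothetical, chosen only to show how the same set of studies can fall below the p < .05 threshold under one model but not the other; they are not the Torgerson et al. data, and the sketch illustrates the general technique rather than re-analyzing any published meta-analysis.

```python
# Minimal sketch of fixed-effects (inverse-variance) vs random-effects
# (DerSimonian-Laird) pooling. All study values below are HYPOTHETICAL,
# chosen only to illustrate how the same studies can yield p < .05 under
# a fixed-effects model and p > .05 under a random-effects model.
import math

effects   = [0.60, 0.05, 0.30, -0.20, 0.35]   # hypothetical per-study d
variances = [0.02, 0.03, 0.05, 0.04, 0.06]    # hypothetical sampling variances

# Fixed effects: weight each study by the inverse of its sampling variance
w_fe = [1 / v for v in variances]
d_fe = sum(w * d for w, d in zip(w_fe, effects)) / sum(w_fe)
se_fe = math.sqrt(1 / sum(w_fe))

# DerSimonian-Laird estimate of the between-study variance (tau^2)
Q = sum(w * (d - d_fe) ** 2 for w, d in zip(w_fe, effects))
df = len(effects) - 1
C = sum(w_fe) - sum(w ** 2 for w in w_fe) / sum(w_fe)
tau2 = max(0.0, (Q - df) / C)

# Random effects: tau^2 is added to every study's variance, so the
# pooled standard error grows and the p-value rises
w_re = [1 / (v + tau2) for v in variances]
d_re = sum(w * d for w, d in zip(w_re, effects)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

def two_sided_p(d, se):
    """Two-sided p-value from a normal approximation of d / se."""
    z = abs(d / se)
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

print(f"Fixed effects:  d = {d_fe:.2f}, p = {two_sided_p(d_fe, se_fe):.3f}")
print(f"Random effects: d = {d_re:.2f}, p = {two_sided_p(d_re, se_re):.3f}")
```

    With these hypothetical inputs the pooled estimate barely changes between the two models, but adding the between-study variance to the weights inflates the pooled standard error, which is how a fixed-effects analysis of a given set of studies can report p = 0.03 while a random-effects analysis of the same studies reports p = 0.09.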

    So, of the 11 points in our Table 1, the claim that our analysis was “post-hoc, introduces too many researcher degrees of freedom” clearly does not apply to 10 of them. It is a shame, therefore, that neither Ashman, Buckingham, nor anyone else has been willing to rebut these points.

    What about point 5 above, where we did argue that two studies by Levy et al. should be removed? Our reasoning is sound (read the paper), but for the sake of argument, let’s say it is post-hoc to remove them. If the two studies are included in the meta-analysis, it still finds no significant effect on word reading fluency, reading comprehension, spelling, or nonword reading fluency. The only difference is that the authors would have reported a significant effect on word reading accuracy (along with nonword reading accuracy). And even these two significant results are largely meaningless because the meta-analysis compared small-group reading instruction with phonics to NO reading instruction. Given this control condition, it is not clear whether phonics instruction was critical or whether ANY form of small-group instruction would have supported the same outcomes. To use this as evidence for the importance of systematic phonics is to make a logical error.

    Of all the other studies included in all the meta-analyses, I suggested that only two more should be removed (one removed by Torgerson et al. *themselves*, and another because it was carried out in Hebrew, making it unclear how it relates to the teaching of English). Including all the studies in all the meta-analyses would not make any difference to the overall conclusion. How does this square with Ashman’s claim: “Your analysis is post-hoc, introduces too many researcher degrees of freedom, is therefore invalid and not worth addressing in detail, as I write in the post”?

    But more generally, what does it mean to suggest that re-analyzing and criticizing meta-analyses is post-hoc? If I show that a few studies reporting big effects are flawed and argue that they should be removed, am I being post-hoc (should I do the same for the studies that failed to obtain an effect)? No, it is not post-hoc, but you need to appreciate the nature of my claim. If a researcher wants to argue that the evidence *supports* systematic phonics, the burden is on him or her to provide that evidence. I have shown that the evidence is flawed, and accordingly, the conclusion is not supported. That is all I have shown. Of course, a pro-phonics researcher is free to run another, better meta-analysis that fixes the flaws I have identified, but until then, there is no basis for claiming that the science of reading supports phonics. And if that meta-analysis is done, and that is what is found, I will be happy to conclude that there is indeed some evidence that systematic phonics is effective (whether there are still better methods is another question).

    A common criticism I have received is that I should run a new meta-analysis if I want to avoid being post-hoc. The irony here is that Camilli et al. (2003, 2006) did carry out two NEW meta-analyses, applying the same selection criteria as the NRP but asking a different theoretical question. But somehow, Ashman considers new meta-analyses that ask a new question using the same selection criteria as the NRP to be post-hoc. You just can’t win…
