What Michael said #1

The news that Firstborn has elected to take a module on statistics during her Master’s year brought a smile to my face. Actually, I laughed hysterically until she pointed out that Management or I may have to help her out – it’s a long time since I studied statistics!

In fact, she may do well to ignore anything that I say – after all, why change the habit of a lifetime! More seriously, that’s because my perspective on statistics, and particularly on significance testing in classical statistics, is that a lot of it seems rather arbitrary. Why should a particular outcome be considered statistically significant just because the odds of it happening by chance and chance alone are 1 in 20? Why not 1 in 19, or 1 in 21, or 1 in 50,000,000? And I’m not alone in thinking this.
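To see just how knife-edge that 1-in-20 convention is, consider a purely hypothetical p-value of 0.049: it would be declared significant against a cut-off of 1 in 19 or 1 in 20, but not against 1 in 21. A quick sketch in Python (the p-value is invented for illustration; the cut-offs are the ones above):

```python
# A toy illustration of how the 'significant or not' verdict depends
# entirely on where the arbitrary cut-off is drawn.
from fractions import Fraction

def verdict(p: float, cutoff: Fraction) -> str:
    """Declare a result 'significant' iff its p-value falls below the cut-off."""
    return "significant" if p < cutoff else "not significant"

p_value = 0.049  # hypothetical result, sitting just under the conventional 1 in 20

for odds in (19, 20, 21, 50_000_000):
    cutoff = Fraction(1, odds)
    print(f"1 in {odds:>10,} (cut-off {float(cutoff):.8f}): {verdict(p_value, cutoff)}")
```

The same evidence flips from ‘significant’ to ‘not significant’ between 1 in 20 and 1 in 21 – nothing about the data changed, only the line in the sand.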

As a post-graduate student at York University in the 1980s (maths, stats and computing for masochists and the innumerate), one of the highlights of the week was a pub quiz in the adjacent village of Heslington. The question master, Peter Lee, was a lecturer in the Maths and Statistics Department who later wrote a book on Bayesian statistics and commented in his foreword that he had delved into the Bayesian world because he was dissatisfied with the arbitrary nature of significance testing in classical statistics. Now, I’m nowhere near numerate enough to discuss the finer points of frequentist versus Bayesian statistics, but the same underlying concern of arbitrariness struck me too.

Even within the more classical school, you can still see concerns. Modern papers with titles such as “The insignificance of significance testing”, or variants thereof, abound and exist alongside the older and more literary disclaimer of John Nelder, one of the founders of generalised linear modelling, who, in his overview of a 1971 British Ecological Society Symposium on Mathematical Models in Biology, proclaimed:

“Fisher’s famous paper of 1922, which quantified information almost half a century ago, may be taken as the fountainhead from which developed a flow of statistical papers, soon to become a flood. This flood, as most floods, contains flotsam much of which, unfortunately, has come to rest in many text books. Everyone will have his own pet assortment of flotsam; mine include most of the theory of significance testing, including multiple comparison tests, and non parametric statistics”.

Interestingly, Nelder was a later successor to Fisher as Director of the Statistics Department at the Rothamsted Research Station. I don’t know whether his quote deprecates Fisher’s work itself or the way that followers often follow blindly, without the insight into the subject that the originator had – in Fisher’s case I suspect the latter – and you certainly see that in fisheries research, my own profession.

I did wonder whether the apparent post-1960s disenchantment with classical significance testing was due in part to the advent of electronic computers, as a result of which more numerically intensive approaches to statistical modelling could be developed. Then I remembered a quote I once read in a book first published in 1943, The Fish Gate, by Michael Graham (one of the most insightful leaders of fisheries research in the 20th century and the chap after whom the title of this post is named). There will be more about him in future posts, but for now his concerns with statistical testing and statistical power had nothing to do with developments in computing power:

What Michael said:

“In this century we have admitted this ‘Normal’ curve into all our counsels. It is of so wide an application that its professors have come to smell of priestcraft, setting up arbitrary standards by which to judge the significance of everything that we have claimed to achieve. They have real power; but it is of necromancy, as when they solve a problem by a short excursion into n-dimensional space. They ride brooms if ever man did”.

Postscript: I intentionally used the phrase ‘electronic computers’ a few paragraphs above, even though it has a sort of antiquated feel to it; isn’t ‘computer’ enough? Well, no actually! At least not in the context of commenting on work from an era predating the modern age of computing. Delving into the fisheries research literature, it is possible to find reference to ‘an experienced computer’ in the bible of fisheries research, Beverton and Holt’s 1957 magnum opus: ‘On the dynamics of exploited fish populations’. In this instance, the ‘computer’ is actually a living, breathing person, not a machine. So there!

