Abstract
The paper presents and illustrates two areas of widespread abuse of statistics in social science research. The
first is the use of techniques premised on random sampling with cases that are not random and often not
even samples. The second is that, even where the assumptions for such techniques are met, researchers
almost universally report the results incorrectly. Significance tests and confidence intervals cannot answer
the kinds of analytical questions most researchers want to answer. Once their reporting is corrected, the use
of these techniques will almost certainly cease. There is nothing to replace them with, but there is no
pressing need for a replacement anyway. As this paper illustrates, removing the erroneous elements of an
analysis is usually improvement enough, enabling readers to judge claims more fairly. Without these
techniques, it is hoped that analysts will focus rather more on the meaning and limitations of their numeric
results.
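The first abuse can be sketched with a small simulation (mine, not the paper's; the selection mechanism and all parameter values below are illustrative assumptions). A 95% confidence interval's nominal coverage presumes random sampling; when cases self-select, the stated coverage no longer describes anything real.

```python
import math
import random
import statistics

def ci_covers(sample, true_mean, z=1.96):
    # Normal-approximation 95% confidence interval for the mean.
    m = statistics.fmean(sample)
    se = statistics.stdev(sample) / len(sample) ** 0.5
    return m - z * se <= true_mean <= m + z * se

def coverage(draw_sample, true_mean, trials=2000):
    # Fraction of repeated samples whose interval covers the true mean.
    hits = sum(ci_covers(draw_sample(), true_mean) for _ in range(trials))
    return hits / trials

random.seed(1)
n = 100

# Genuinely random samples: coverage is close to the nominal 95%.
random_cov = coverage(lambda: [random.gauss(0, 1) for _ in range(n)], 0)

def biased():
    # Self-selected cases: units with larger values are more likely to
    # appear, so the resulting "sample" is not a random sample at all.
    out = []
    while len(out) < n:
        x = random.gauss(0, 1)
        if random.random() < 1 / (1 + math.exp(-2 * x)):
            out.append(x)
    return out

biased_cov = coverage(biased, 0)

print(f"coverage with random sampling:  {random_cov:.3f}")
print(f"coverage with self-selection:   {biased_cov:.3f}")
```

Under random sampling the interval behaves as advertised; under self-selection its coverage collapses, even though the formula is computed identically in both cases. The computation does not know, and cannot repair, how the cases were obtained.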
| Original language | English |
| --- | --- |
| Pages (from-to) | 22-23 |
| Journal | Psychology of Education Review |
| Volume | 38 |
| Issue number | 1 |
| Publication status | E-pub ahead of print - 1 Mar 2014 |