A New View of Statistics

© 1997 Will G Hopkins



Generalizing to a Population:
ESTIMATING SAMPLE SIZE

Update Oct 2007: The following pages are now largely superseded by an extensive article on sample-size estimation published in Sportscience in 2006 with an accompanying slideshow and spreadsheet. I suggest you read the article first. There are a few formulae on the following pages that are not in the article.

I get more requests for information about sample sizes than about any other aspect of stats. I've come up with approaches and formulae that you won't find anywhere else, and that's not because they're wrong, either!

First I'll deal with the need for the right number of subjects in a study: the main considerations are the publishability of your findings and the ethics of wasting resources. Then I'll spend a page taking a new look at the traditional approach to what determines sample size, which leads to the formulae. After that I'll present a new approach, sample-size estimation based on confidence intervals, with the good news that you need half the usual number of subjects. You'll almost certainly get away with an even smaller sample if you use sample size "on the fly". Finally I'll encourage you to use simulation to work out sample size for complex designs or unusual outcome statistics.

THE RIGHT NUMBER OF SUBJECTS


With too few subjects, the confidence interval on your outcome is too wide to allow any useful conclusion. For example, you could get a big positive effect, but that's not very exciting or publishable if the wide confidence interval shows that the effect could actually be negative; in other words, if it's not statistically significant. Even if you observe a trivial effect, a small sample means a wide confidence interval, so the effect could still be large and positive or large and negative. Such results are hard for journals to accept.

With the right number of subjects, you have a narrow confidence interval on your outcome. It's sufficiently narrow that any worthwhile effects are statistically significant, which means you won't have missed anything. And even statistically non-significant results are publishable, because you can say that the effect is trivial. In my view, being able to say that an effect is too small to worry about is just as important as saying that it is large.
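To put numbers on these first two scenarios, here's a minimal sketch in Python. It's my illustration rather than a formula from these pages, and the observed effect of 5 units and standard deviation of 10 units are made-up values:

  # 95% confidence interval for a mean effect at several sample sizes.
  # Illustrative numbers only: effect of 5 units, between-subject SD of 10.
  from math import sqrt
  from scipy.stats import t

  effect, sd = 5.0, 10.0

  for n in (10, 40, 160):
      half = t.ppf(0.975, n - 1) * sd / sqrt(n)   # 95% CI half-width
      print(f"n={n:3d}: {effect} ± {half:.1f} "
            f"(CI {effect - half:.1f} to {effect + half:.1f})")

With 10 subjects the interval runs from about -2.2 to 12.2, so the big positive effect could actually be negative; with 160 it runs from about 3.4 to 6.6. The half-width shrinks as 1/√n, so halving the width of the interval costs four times as many subjects.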

With too many subjects, you get a nice narrow confidence interval, but it's narrower than you need. For example, it would be silly to have so many subjects that you could say a correlation lies between 0.725 and 0.729. That's far too much precision. Most of the time you'd be happy to say that it's 0.7, but not 0.8 or 0.6.
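To see what that third decimal place would cost, here's another sketch of mine, using the standard Fisher z transform to get the confidence interval for a correlation:

  # 95% confidence interval for a correlation r via the Fisher z transform.
  from math import sqrt, atanh, tanh

  def correlation_ci(r, n, z_crit=1.96):
      z = atanh(r)                # Fisher transform of r
      se = 1.0 / sqrt(n - 3)      # standard error on the z scale
      return tanh(z - z_crit * se), tanh(z + z_crit * se)

  for n in (100, 1000, 200000):
      lo, hi = correlation_ci(0.727, n)
      print(f"n={n:6d}: 95% CI {lo:.3f} to {hi:.3f}")

About 200,000 subjects buy you the interval 0.725 to 0.729, whereas a thousand or so already pin the correlation down to roughly ±0.03. Here too the half-width shrinks as 1/√n, so each extra decimal place of precision costs about 100 times as many subjects.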

The ethical committees that grant approval for research projects are becoming more aware of the need for the right number of subjects in a study. They require you to document your estimate of the required sample size, and they will not approve a project with too few or too many subjects. Small samples are unethical, because you can't be specific enough about the size of the effect in the population. Large samples are also unethical, because they represent a waste of resources.

You can sometimes justify a suboptimal sample size by arguing it's for a pilot study to determine reliability or validity, which in turn will allow you to estimate the sample size for a larger-scale study. A suboptimal sample size is also the starting point for sample size on the fly. But let's continue with the traditional approach and some formulae on the next page.

