A New View of Statistics Go to: Next · Previous · Contents · Search · Home
Generalizing to a Population:
SAMPLE SIZE ON THE FLY continued

ON THE FLY FOR DIFFERENCES BETWEEN FREQUENCIES

Now you're interested in things like the difference in the frequency of injury in two groups. For example, if you found that 47% of runners and 15% of cyclists have an injury each year, how many runners and cyclists would you have needed in the study for the result to be publishable? Publishability depends on the confidence interval for the difference between the frequencies, of course. Obviously 10 runners and 10 cyclists would give a hopelessly wide, unpublishable confidence interval, and equally obviously 10,000 of each has got to be overkill!

You can use sample size on the fly to get the minimum number of subjects, but you don't get quite the same saving as for correlations or means. I've used simulation to see how many subjects you need to give acceptable confidence limits for a wide range of frequency differences. I've found that it's at least 100 subjects, even for very large effects, so that will have to be our starting number.

The other thing we need for sample size on the fly is an acceptably narrow confidence interval for the outcome statistic. It's straightforward if we use the difference in frequencies as the outcome, but it gets really complicated if we use relative risk or the odds ratio. Let me explain with the example of injury in runners and cyclists.

The difference in rates of injury can be expressed either as a difference in the percentage rates (47 - 15 = 32%), or as a relative risk of injury (runners have 47/15 = 3.1 times the risk of cyclists). The acceptable width of the interval for a difference in the percentage rates is a fixed 20%, as I explained earlier. In our example the difference is 32%, so the required publishable confidence limits are 22% to 42%. Expressed as a relative risk, these frequencies correspond to 3.1, with confidence limits 2.1 to 5.1. But suppose the original frequencies were 67% and 35%. The difference in frequencies is still 32%, and the acceptable confidence limits on this difference are still 22% to 42%. But now the corresponding relative risk is 1.9, with confidence limits 1.5 to 2.5. What a mess! The odds ratio misbehaves in the same way for case-control data.
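A quick Python check makes the contrast plain. This is just a sketch of the arithmetic above, taking the second pair of frequencies as 67% and 35% so that the difference is again 32%:

```python
# Two pairs of group frequencies (%) with the same 32% difference
# but very different relative risks.
for p_group1, p_group2 in [(47, 15), (67, 35)]:
    diff = p_group1 - p_group2        # difference in percentage rates
    rel_risk = p_group1 / p_group2    # relative risk
    print(f"{p_group1}% vs {p_group2}%: "
          f"difference = {diff}%, relative risk = {rel_risk:.1f}")
```

The difference in frequencies stays at 32% for both pairs, while the relative risk drops from 3.1 to 1.9, which is why a fixed acceptable interval works for the difference but not for the relative risk.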

So here's the method, based on the confidence interval for the differences in frequencies between the groups, expressed as percents.

1. Start with a sample size of 100 (50 in each group).
2. Do the practical work. That often means interviewing subjects, or waiting for them to get sick or injured!
3. For each group, count up the number of subjects with the thing you're interested in (e.g. an injury). Express it as a percent for each group, then subtract one from the other. That's the frequency difference.
4. You are aiming for a confidence interval of 20% for that frequency difference. What's the current confidence interval? Once again, stats programs don't produce it, but it can be derived from something called the normal approximation to the binomial distribution. Here it is, in the right percent units:
392·sqrt((n1(n - n1) + n2(n - n2))/n^3),
where n1 and n2 are the numbers (not %) of subjects with the thing of interest in groups 1 and 2, and n is the number of subjects in EACH group (50 to start with). My simulations show that this formula is surprisingly accurate, even for very low n1 and n2 (~1%, with only 50 in each group!).
5. If this confidence interval is less than 20, the study is finished. Otherwise go to the next step.
6. To estimate the number of subjects required to bring the confidence interval down to 20, we make use of the fact that the width of the confidence interval is inversely proportional to the square root of the sample size. So, divide the current confidence interval by 20, square the result, and multiply it by the current number of subjects in each group. The result is the predicted number of subjects needed in each group.
7. Subtract the current number of subjects in each group from the predicted number. The result is the number of subjects needed in each group for the next round of practical work. You can "cheat" by doing the practical work on fewer than this number, if it's a big leap to nearly 400 from the previous number. This trick will help make sure you don't test too many subjects, as I described for correlations and effect sizes. If the difference in frequencies turns out to be trivial, you may still end up with a final sample size of up to 200 in each group.
8. Do the practical work on the extra subjects, add them to all the previous subjects, then go to Step 3.
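One iteration of Steps 3-7 can be sketched in a few lines of Python. The injury counts here (24 of 50 in group 1, 8 of 50 in group 2) are made up for illustration:

```python
from math import sqrt

def ci_width_percent(n1, n2, n):
    """Approximate width (in %) of the 95% confidence interval for the
    difference in frequencies, from the normal approximation to the
    binomial (the formula in Step 4). n1 and n2 are counts of subjects
    with the thing of interest; n is the number in EACH group."""
    return 392 * sqrt((n1 * (n - n1) + n2 * (n - n2)) / n**3)

def predicted_n_per_group(width, n, target=20):
    """Step 6: predicted subjects per group to shrink the interval to
    the target width, since width is proportional to 1/sqrt(n)."""
    return round((width / target) ** 2 * n)

# Hypothetical first round: 50 subjects per group.
n = 50
n1, n2 = 24, 8                      # e.g. injured in each group
width = ci_width_percent(n1, n2, n)
print(f"frequency difference: {100 * (n1 - n2) / n:.0f}%")
print(f"current confidence interval: {width:.0f}%")   # still above 20%
if width >= 20:
    needed = predicted_n_per_group(width, n)
    print(f"predicted subjects per group: {needed}")
    print(f"extra subjects for next round: {needed - n}")
```

With these made-up counts the interval comes out well above 20%, so you would recruit the extra subjects, add them to the originals, and return to Step 3.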

All computations in the above procedure are available on the spreadsheet, which includes the case of unequal numbers of subjects in the groups.

How do you present the final outcome? Obviously you need to show the frequency of the injury or whatever as a percent in the two groups. You should also show the confidence limits for the difference in frequencies (confidence limits = the difference in frequencies ± half the confidence interval, which you will have calculated in the last iteration of the sampling process). That's it, as far as I am concerned, but for a clinical journal you may have to show a relative risk or an odds ratio. If the editor of the journal insists on one or the other of these effect statistics, put it in, and get your stats program to calculate its confidence limits.
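As a minimal sketch of that presentation step, using the 32% difference and a final 20% interval from the running example:

```python
diff = 32    # difference in frequencies (%) from the last iteration
width = 20   # confidence interval (%) from the last iteration
lower, upper = diff - width / 2, diff + width / 2
print(f"difference: {diff}%; confidence limits: {lower:.0f}% to {upper:.0f}%")
```

This reproduces the publishable limits of 22% to 42% quoted earlier.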

To describe the outcome of your research in qualitative terms, check where the confidence limits of the frequency difference fall on the scale of magnitudes. Here's a version of it for frequency differences:

For example, if the limits are 22% and 42%, the effect is small-moderate; if they are -5% and 15%, the effect is trivial-small, and so on.
