Subject: Bayesian summary/critique
From: Mike Evans mevans@utstat.utoronto.ca
To: Will Hopkins will.hopkins@otago.ac.nz
Date: Mon, 17 Apr 2000
Your comments on Bayesian inference are reasonable, and certainly comments like these have been made before. Of course, many people are affronted, and perhaps rightly so, by the inclusion of so-called personal beliefs (the prior) in a statistical analysis. One of the classic Bayesian retorts to this is roughly "well, where do you think the rest of the model ingredients in an analysis come from?" The point here is that all model ingredients are subjective choices. This is true, I think, in virtually any branch of science. However the researcher arrives at a model or theory, in the end the choices are guided by personal beliefs. Science deals with this subjectivity by requiring that the model ingredients be tested against real data. In statistics this is roughly what we call model checking, and it applies equally well to frequentist statistical methods as it does to Bayesian methods. By the way, model checking never says an analysis is right, only that it doesn't seem to be wrong.
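To make "tested against real data" concrete, here is a minimal sketch of one common style of model check (not from the original letter, and only one of many): fit a model, simulate replicate data sets from the fitted model, and ask whether a chosen statistic of the observed data looks typical of the replicates. The normal model, the sample range as the check statistic, and all numbers are illustrative assumptions.

```python
import random
import statistics

random.seed(2)
# Illustrative "observed" data; in practice this would be the real data set.
data = [random.gauss(5.0, 2.0) for _ in range(40)]

# Fit the working model: a Normal with parameters estimated from the data.
mu_hat = statistics.fmean(data)
sd_hat = statistics.stdev(data)

# Check statistic: the sample range (sensitive to tails the model might miss).
obs_stat = max(data) - min(data)

# Simulate replicate data sets from the fitted model and see how often
# they produce a range at least as large as the one observed.
reps = 2000
count = 0
for _ in range(reps):
    rep = [random.gauss(mu_hat, sd_hat) for _ in range(len(data))]
    if max(rep) - min(rep) >= obs_stat:
        count += 1
p_value = count / reps

# A very small p_value flags model inadequacy; a moderate one only says
# the model "doesn't seem to be wrong" for this aspect of the data.
print(round(p_value, 3))
```

Note the asymmetry the letter points out: a comfortable p-value here does not validate the model, it merely fails to discredit it for the particular feature checked.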
Still, this is not an argument in favour of Bayesian methods. I am pointing out only that one should be a little more open about what objectivity in science really means. Of course we would like to minimize these subjective inputs, so one could argue that, if we don't need the prior, why bother with it? That's the real question. But is there an acceptable theory of frequentist statistics that behaves in a sensible, logical way across the broad spectrum of problems a statistician confronts? In many of the simpler problems, concepts like confidence, likelihood, etc. seem to perform reasonably well, but close examination reveals them to be rife with contradictions and even bizarre behaviour. Some choose simply to ignore these problems, claiming some kind of practical insight, or to suggest that in the end some sensible theory of frequentist statistics will come forward to resolve the issue. On the other hand, there is lots of evidence that there will be no reasonable theory of statistical inference that doesn't include the notion of prior beliefs. This is where Bayesian methodology comes in, and this is the process that leads a lot of people to becoming Bayesians.
There is a minority who argue for Bayesian methodology on other, more philosophical grounds. Your comments seem somewhat aimed at those kinds of arguments. Personally, I think my reasons for considering Bayesian methodology the right approach are very practical and are based on a global perspective of statistical methodology and theory. (I spent many years as a frequentist.)
Another point worth making is that Bayesian methodology is by no means a settled issue. Many Bayesians simply use posterior beliefs when deriving inferences. I'm a bit of an outlier in that regard, however, as for me inferences should be derived from how beliefs change from a priori to a posteriori; that is, they are data-driven. This approach leads to inference methods closer to frequentist methods. It is also worth noting that, for many (not all) of the simpler problems where frequentist methodology seems to give satisfying answers, the Bayesian approach will yield basically the same answers, provided you start with diffuse (but still proper) priors.
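The agreement under diffuse proper priors can be illustrated with the textbook case of a normal mean with known standard deviation. The sketch below (not part of the original letter; all numbers are illustrative) compares the frequentist 95% confidence interval with the 95% credible interval from a conjugate Normal prior whose standard deviation (tau = 100) is large relative to the data scale.

```python
import math
import random

random.seed(1)
n, sigma = 50, 1.0
# Illustrative data from a Normal with true mean 0.3 and known sigma.
data = [random.gauss(0.3, sigma) for _ in range(n)]
xbar = sum(data) / n

# Frequentist 95% confidence interval for the mean (sigma known).
se = sigma / math.sqrt(n)
freq_ci = (xbar - 1.96 * se, xbar + 1.96 * se)

# Bayesian: conjugate Normal(mu0, tau^2) prior, diffuse but proper.
mu0, tau = 0.0, 100.0
post_prec = n / sigma**2 + 1 / tau**2          # posterior precision
post_mean = (n * xbar / sigma**2 + mu0 / tau**2) / post_prec
post_sd = math.sqrt(1 / post_prec)
cred = (post_mean - 1.96 * post_sd, post_mean + 1.96 * post_sd)

# With tau this diffuse, the prior contributes almost no precision,
# so the two intervals agree to several decimal places.
print(freq_ci)
print(cred)
```

The design point is that the prior stays proper (tau is large but finite), as the letter stipulates, yet its precision (1/tau^2) is negligible next to the data precision (n/sigma^2), so the posterior is dominated by the likelihood.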
Please note that these are my personal views. I'm not attempting to speak for all Bayesians. You might be surprised at the diversity of opinions that exist on this topic!
Secretary, International Society for Bayesian Analysis: www.bayesian.org