This may make a good case for the rule of thumb "report odds ratios, not posterior probabilities". If the statistician simply reports the likelihood function, then people who know the context of the experiment can decide how much a very skinny peak slightly to the right of p = 1/2 matters.
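As a sketch of what "just report the evidence" could look like for coin-flip data (the binomial setup, the counts, and the grid here are my own illustration, not anything from the thread):

```python
def likelihood(p, k, n):
    """Binomial likelihood (up to a constant) of heads-probability p,
    given k heads in n flips."""
    return p**k * (1 - p)**(n - k)

# Report the whole curve on a grid instead of a single summary number,
# so a reader can see the shape of the peak near p = 1/2 for themselves.
k, n = 53, 100
grid = [i / 100 for i in range(1, 100)]
curve = [likelihood(p, k, n) for p in grid]
peak = grid[curve.index(max(curve))]  # maximum-likelihood estimate, k/n
```

A reader handed the curve itself, rather than a verdict, can weigh a narrow peak at 0.53 however the context of the experiment demands.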

Suppose I have a prior that puts nonzero probability on p = 1/2. I'm worried that you can construct a stopping rule such that, as p approaches 1/2 without ever equaling it, the probability that I conclude p = 1/2 approaches 1.

Or, worse, maybe there’s a neighborhood around p=1/2 in which the stopping rule guarantees that I’ll conclude p=1/2.

In fact, the more I think about it, the more strongly I’m driven to conjecture that this is the case. So I don’t really believe that putting nonzero prior probability on p=1/2 protects Bayesian analysis from stopping rules.
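The conjecture is easy to poke at numerically. Here is a hedged sketch of such an adversarial setup (my own construction, assuming a prior that mixes a point mass at p = 1/2 with a uniform density, and a rule that stops as soon as the Bayes factor favours the point null by some threshold):

```python
import math
import random

def bayes_factor(k, n):
    """Bayes factor for H0: p = 1/2 against H1: p ~ Uniform(0, 1),
    after observing k heads in n flips.
    P(data | H0) = (1/2)^n; P(data | H1) = 1 / ((n + 1) * C(n, k))."""
    return 0.5**n * (n + 1) * math.comb(n, k)

def run_trial(true_p, threshold=3.0, max_flips=5000, seed=0):
    """Flip until the Bayes factor for p = 1/2 exceeds the threshold
    (the adversarial stopping rule), or give up after max_flips."""
    rng = random.Random(seed)
    k = 0
    for n in range(1, max_flips + 1):
        k += rng.random() < true_p
        if bayes_factor(k, n) > threshold:
            return True, n  # stopped while the evidence favours p = 1/2
    return False, max_flips
```

Running many trials with true_p slightly above 1/2 shows how often such a rule can manufacture a stop at a chosen evidence threshold, which is exactly the worry being conjectured here.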

I am always comfortable using a Bayesian analysis with a prior that matches my prior beliefs. In practice this is hard, and I use frequentist analyses a lot because I don't feel like dealing with prior specification. But what I'm mainly concerned with here is whether Bayesianism is correct.

All this example shows me is that there are certain experimental designs where you have to include prior information that you can normally get away with leaving out (specifically, the possibility that the parameter is exactly 1/2). Nobody says that systems with significant air resistance disprove Newtonian physics, just because it's common practice to leave out air resistance.
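For concreteness, here is a sketch of what "including that prior information" amounts to, assuming (my choice of numbers, not the commenter's) a prior that puts mass pi0 on exactly p = 1/2 and spreads the remaining 1 - pi0 uniformly over (0, 1):

```python
import math

def posterior_prob_half(k, n, pi0=0.5):
    """Posterior probability that p is exactly 1/2, given k heads in
    n flips, under a point-mass-plus-uniform mixture prior."""
    bf = 0.5**n * (n + 1) * math.comb(n, k)  # BF for p = 1/2 vs Uniform
    return pi0 * bf / (pi0 * bf + (1 - pi0))
```

With 50 heads in 100 flips the point null ends up favoured (posterior roughly 0.89 for pi0 = 0.5), while with 70 heads it is all but ruled out; the point mass does real work, it just has to be written into the prior.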

By the way, it occurs to me that the drug example may not be the best story to wrap around this problem, because it shouldn't really matter whether a drug is exactly equal to placebo or just barely better. As an alternate story, suppose the experimenter is Chien-Shiung Wu trying to demonstrate parity violation in weak interactions. Now any nonzero effect, in either direction, is Nobel Prize material; we really do care whether the value is exactly zero or not. Are you comfortable analyzing that data by this method?
