Abstract
DeGroot (1962) developed a general framework for constructing Bayesian measures of the expected information that an experiment will provide for estimation. We propose an analogous framework for measures of information for hypothesis testing. In contrast to estimation information measures, which are typically used for surface estimation, test information measures are more useful in experimental design for hypothesis testing and model selection. In particular, we obtain a probability-based measure, which has more appealing properties than variance-based measures in design contexts where decision problems are of interest. The underlying intuition of our design proposals is straightforward: to distinguish between models, we should collect data from regions of the covariate space where the models differ most. Nicolae et al. (2008) gave an asymptotic equivalence between their test information measures and Fisher information. We extend this result to all test information measures under our framework. Simulation studies and an application in astronomy demonstrate the utility of our approach and provide comparisons with other methods, including that of Box and Hill (1967).
Keywords Statistical information; Optimal design; Bayes factors; Hypothesis testing; Model selection; Power.