Is there a mathematical basis for the idea that this interpretation of confidence intervals is incorrect, or is it just frequentist philosophy?
Suppose the mean time it takes all workers in a particular city to get to work is estimated as $21$. A $95\%$ confidence interval is calculated to be $(18.3, 23.7).$
According to this website, the following statement is incorrect:
There is a $95\%$ chance that the mean time it takes all workers in this city to get to work is between $18.3$ and $23.7$ minutes.
Indeed, a lot of websites echo a similar sentiment. This one, for example, says:
It is not quite correct to ask about the probability that the interval contains the population mean. It either does or it doesn't.
The meta-concept at work seems to be the idea that population parameters cannot be random, only the data we obtain about them can be random (related). This doesn't sit right with me, because I tend to think of probability as being fundamentally about our certainty that the world is a certain way. Also, if I understand correctly, there's really no mathematical basis for the notion that probabilities only apply to data and not parameters; in particular, this seems to be a manifestation of the frequentist/bayesianism debate.
Question. If the above comments are correct, then it would seem that the kinds of statements made on the aforementioned websites shouldn't be taken too seriously. To make a stronger claim: if an exam grader were to mark a student down for the aforementioned "incorrect" interpretation of confidence intervals, my impression is that this would be inappropriate (this hasn't happened to me; it's a hypothetical).
In any event, based on the underlying mathematics, are these fair comments I'm making, or is there something I'm missing?
We need to distinguish between two claims here:
- Population parameters cannot be random, only the data we obtain about them can be random.
- Interpreting confidence intervals as containing a parameter with a certain probability is wrong.
The first is a sweeping statement that you correctly describe as frequentist philosophy (in some cases “dogma” would seem more appropriate) and that you don't need to subscribe to if you find a subjectivist interpretation of probabilities to be interesting, useful or perhaps even true. (I certainly find it at least useful and interesting.)
The second statement, however, is true. Confidence intervals are inherently frequentist animals: they're constructed so that, whatever the fixed value of the unknown parameter, the procedure has the same prescribed probability of producing an interval that contains the “true” value. You can't construct them according to this frequentist prescription and then reinterpret them in a subjectivist way; that leads to a statement that's false not because it violates frequentist dogma but because the interval wasn't derived for a subjective probability. A Bayesian approach leads to a different interval, which is rightly given a different name: a credible interval.
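That frequentist guarantee is easy to check by simulation. The sketch below uses made-up numbers (a known population SD and a simple z-interval, neither taken from the question): it repeats the same experiment many times and counts how often the interval covers the fixed true mean.

```python
import numpy as np

rng = np.random.default_rng(0)

true_mu = 21.0   # the fixed "true" mean (unknown in practice)
sigma = 8.0      # assume a known population SD for a simple z-interval
n = 40           # sample size per repetition of the experiment
z = 1.96         # two-sided 95% normal quantile

trials = 10_000
covered = 0
for _ in range(trials):
    sample = rng.normal(true_mu, sigma, n)
    half = z * sigma / np.sqrt(n)
    # Does this realization of the random interval contain the fixed parameter?
    covered += (sample.mean() - half <= true_mu <= sample.mean() + half)

print(f"coverage ≈ {covered / trials:.3f}")  # close to 0.95
```

The randomness here lives entirely in the interval's endpoints, not in `true_mu`; that is exactly the sense in which the 95% figure is a property of the procedure, not of any one realized interval.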
An instructive example is afforded by the confidence intervals for the unknown rate parameter of a Poisson process with known background noise rate. In this case, there are values of the data for which it is certain that the confidence interval does not contain the “true” parameter. This is not an error in the construction of the intervals; they have to be constructed like that to allow them to be interpreted in a frequentist manner. Interpreting such a confidence interval in a subjectivist manner would result in nonsense. The Bayesian credible intervals, on the other hand, always have a certain probability of containing the “random” parameter.
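A minimal numerical sketch of that phenomenon, using a naive Wald-style interval rather than the exact construction discussed in the literature (the background rate and observed count are made up for illustration): with known background $b = 3$ and an observed count of $0$, the interval lies entirely below zero, so it certainly excludes every admissible signal rate $s \ge 0$.

```python
import math

b = 3.0          # known background rate (assumed for illustration)
n_obs = 0        # an unlucky but perfectly possible observation

# Naive Wald-style 95% interval for the signal rate s, centered at n - b.
s_hat = n_obs - b
half = 1.96 * math.sqrt(n_obs)
lo, hi = s_hat - half, s_hat + half
print(lo, hi)    # the whole interval sits below 0, so it cannot contain any s >= 0
```

For this particular data set we know with certainty that the interval misses the true parameter, yet over repeated experiments the procedure can still achieve its nominal coverage; that is the sense in which the frequentist and subjectivist readings come apart.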
I read a nice exposition of this example recently but I can't find it right now – I'll post it if I find it again, but for now I think this paper is also a useful introduction. (Example $11$ on p. $20$ is particularly amusing.)
Here is how it appears in Larry Wasserman's *All of Statistics*:
Warning! There is much confusion about how to interpret a confidence interval. A confidence interval is not a probability statement about $\theta$ (the parameter of the problem), since $\theta$ is a fixed quantity, not a random variable. Some texts interpret confidence intervals as follows: if I repeat the experiment over and over, the interval will contain the parameter 95 percent of the time. This is correct but useless, since you rarely repeat the same experiment over and over. A better interpretation is this: On day 1, you collect data and construct a 95 percent confidence interval for a parameter $\theta_1$. On day 2, you collect new data and construct a 95 percent confidence interval for an unrelated parameter $\theta_2$. [...] You continue this way, constructing confidence intervals for a sequence of unrelated parameters $\theta_1, \theta_2, \dots$. Then 95 percent of your intervals will trap the true parameter value. There is no need to introduce the idea of repeating the same experiment over and over.
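Wasserman's "unrelated parameters" reading can also be checked by simulation. In the sketch below (all distributions and parameter ranges are invented for illustration), each "day" brings a fresh, unrelated estimation problem, yet about 95 percent of the intervals trap their own true parameter.

```python
import numpy as np

rng = np.random.default_rng(1)
z = 1.96         # two-sided 95% normal quantile
days = 10_000
trapped = 0

for _ in range(days):
    # A fresh, unrelated problem each "day": new true mean, new known SD, new n.
    theta = rng.uniform(-50, 50)
    sigma = rng.uniform(0.5, 10)
    n = int(rng.integers(20, 100))
    sample = rng.normal(theta, sigma, n)
    half = z * sigma / np.sqrt(n)
    # Did today's interval trap today's parameter?
    trapped += (sample.mean() - half <= theta <= sample.mean() + half)

print(f"fraction trapped ≈ {trapped / days:.3f}")  # about 0.95
```

No experiment is ever repeated here; the 95 percent figure emerges across a sequence of one-off, unrelated problems, exactly as the quoted passage describes.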