Why do we need Wald's confidence interval to estimate p in a Bernoulli distribution?

I'm studying statistics and I'm a bit confused about why the Wald confidence interval is needed to estimate $p$ in a Bernoulli distribution.

Let's say I am modeling some phenomenon with a Bernoulli distribution: $ X \sim B_p $.
From a sample with $n > 30$, I want to estimate $p$.

To my knowledge, the Central Limit Theorem guarantees that the sampling distribution of $ \hat{p} $ is approximately Normal once the sample size exceeds 30. I understand that the CLT talks about sums and means, but a proportion is calculated the same way: we usually say "mean" for continuous data and "proportion" for 0/1 data, yet the computation is identical.

And based on this, I can construct a confidence interval using the $t$-distribution (since I also have to estimate the variance). In other words, constructing a confidence interval for $p$ would be no different from constructing one for the mean of any distribution.
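To make the "a proportion is just a mean" point concrete, here is a minimal sketch (the counts, 13 successes out of 40, are made up for illustration) showing that for 0/1 data the sample mean is the proportion and the sample variance is determined entirely by $\hat{p}$:

```python
import statistics

# Hypothetical sample: 13 successes out of n = 40 Bernoulli trials
x = [1] * 13 + [0] * 27
n = len(x)

p_hat = statistics.mean(x)   # the proportion IS the sample mean of 0/1 data
s2 = statistics.variance(x)  # sample variance (n - 1 denominator)

# For 0/1 data the sample variance is a function of p_hat alone:
#   s^2 = n/(n-1) * p_hat * (1 - p_hat)
print(p_hat, s2, n / (n - 1) * p_hat * (1 - p_hat))
```

So "estimating the variance" for Bernoulli data adds nothing beyond $\hat{p}$ itself, which is part of why this case feels different from the general mean-estimation problem.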

Apologies, but if this is the case, why do we need the Wald confidence interval? I have also come across other methods for computing a confidence interval for $p$. Did I misunderstand the CLT, or how proportions and means are estimated?


You don't need the Wald interval. I think this link will help you. The $t$ version also works, and may even work better (see link), but it's at least partly a matter of simplicity.
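For concreteness, the Wald interval is just $\hat{p} \pm z_{\alpha/2}\sqrt{\hat{p}(1-\hat{p})/n}$, the normal-approximation interval with the variance plugged in at $\hat{p}$. A minimal sketch (the 13-out-of-40 counts are made up):

```python
import math
from statistics import NormalDist

def wald_interval(successes, n, conf=0.95):
    """Wald interval: p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n)."""
    p_hat = successes / n
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)  # ~1.96 for 95%
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half, p_hat + half

lo, hi = wald_interval(13, 40)
print(lo, hi)
```

The only difference from the $t$ version is the critical value ($z$ instead of $t_{n-1}$), which is part of why the choice is largely a matter of simplicity.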

As general advice, there are often multiple ways to construct a confidence interval, and they can be better or worse on various criteria: simplicity, exact versus approximate coverage, rate of convergence (if approximate), coverage probability, interval length, and others.
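To make "coverage probability" concrete, here is a rough Monte Carlo sketch (the choice of $p = 0.05$, $n = 40$, and the replication count are arbitrary) estimating how often a nominal 95% Wald interval actually contains the true $p$; for $p$ near 0 or 1 it is known to fall well short of 95%:

```python
import math
import random
from statistics import NormalDist

random.seed(0)
Z = NormalDist().inv_cdf(0.975)  # two-sided 95% critical value

def wald_covers(p, n):
    """Draw one Bernoulli(p) sample of size n; does the Wald interval cover p?"""
    k = sum(random.random() < p for _ in range(n))
    p_hat = k / n
    half = Z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half <= p <= p_hat + half

def coverage(p, n, reps=20000):
    """Monte Carlo estimate of the interval's actual coverage probability."""
    return sum(wald_covers(p, n) for _ in range(reps)) / reps

cov = coverage(0.05, 40)
print(cov)  # noticeably below the nominal 0.95
```

Part of the problem is visible in the code: when $k = 0$ the estimated standard error is zero and the interval collapses to a point, so it can never cover a true $p > 0$.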

If you look at the Wikipedia article on binomial proportion confidence intervals, you can see a bunch of different confidence intervals for $p$.
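One of the alternatives listed there is the Wilson score interval, which tends to behave better than the Wald interval for small samples or extreme $p$. A sketch using the same made-up 13-out-of-40 counts as above:

```python
import math
from statistics import NormalDist

def wilson_interval(successes, n, conf=0.95):
    """Wilson score interval: invert the score test rather than plugging
    p_hat into the standard error."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom  # shrinks p_hat toward 1/2
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

w_lo, w_hi = wilson_interval(13, 40)
print(w_lo, w_hi)
```

Unlike the Wald interval, it never collapses to a point at $\hat{p} = 0$ or $1$ and always stays inside $[0, 1]$.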