Intuitive Explanation of Bessel's Correction
Solution 1:
http://en.wikipedia.org/wiki/Bessel%27s_correction
The Wikipedia article linked above has a section (written by me) titled "The source of the bias". It explains the source of the bias via a concrete example.
But note also that correcting for bias, when it can be done, is not always a good idea. I wrote this paper about that: http://arxiv.org/pdf/math/0206006.pdf
Solution 2:
To me, the main idea is that the sample mean is not the distribution (or population) mean. The sample mean is "closer" to the sample data than the distribution mean is, so the variance computed around the sample mean is smaller than the true variance.
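That "closer" claim can be made concrete: for any fixed data set, the sum of squared deviations $\sum_k (x_k - c)^2$ is minimized at $c = m_s$, so measuring spread around the sample mean can only understate the spread around the (unknown) distribution mean. A small sketch with arbitrary made-up data:

```python
# Sketch: the sum of squared deviations is minimized at the sample mean,
# so spread measured around m_s understates spread around any other center.
sample = [2.0, 3.0, 7.0]           # arbitrary illustrative data
m_s = sum(sample) / len(sample)    # sample mean = 4.0

def ssd(c):
    """Sum of squared deviations of the sample about the point c."""
    return sum((x - c) ** 2 for x in sample)

print(ssd(m_s))   # 14.0 -- the minimum over all c
print(ssd(5.0))   # 17.0 -- larger, as for every c != m_s
```

Since the distribution mean $m_d$ almost never equals $m_s$, the quantity $\frac{1}{n}\sum_k (x_k - m_s)^2$ is systematically too small as an estimate of the variance about $m_d$.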
Suppose the distribution mean is $m_d$ and the distribution variance is $v_d$. The sum of the $n$ variates in the sample is $n m_s$, where $m_s$ is the sample mean. Recall that the mean of a sum of variates is the sum of their means, and the variance of a sum of independent variates is the sum of their variances. That is, the distribution mean of the sum of $n$ variates is $n m_d$ and the distribution variance of the sum is $n v_d$. In other words,
$$
\mathrm{E}[(n m_s-n m_d)^2]=n v_d
$$
or equivalently,
$$
\mathrm{E}[(m_s-m_d)^2]=\frac{1}{n}v_d
$$
Let us compute the expected sample variance, using $\frac{1}{n}\sum_{k=1}^n(x_k-m_d)=m_s-m_d$ in the third step:
$$
\begin{align}
\mathrm{E}[v_s]
&=\mathrm{E}\left[\frac{1}{n}\sum_{k=1}^n(x_k-m_s)^2\right]\\
&=\mathrm{E}\left[\frac{1}{n}\sum_{k=1}^n\left((x_k-m_d)^2+2(x_k-m_d)(m_d-m_s)+(m_d-m_s)^2\right)\right]\\
&=\mathrm{E}\left[\frac{1}{n}\sum_{k=1}^n\left((x_k-m_d)^2+2(m_s-m_d)(m_d-m_s)+(m_d-m_s)^2\right)\right]\\
&=\mathrm{E}\left[\frac{1}{n}\sum_{k=1}^n\left((x_k-m_d)^2-(m_d-m_s)^2\right)\right]\\
&=v_d-\frac{1}{n}v_d\\
&=\frac{n{-}1}{n}v_d
\end{align}
$$
Thus,
$$
v_d=\frac{n}{n{-}1}\mathrm{E}[v_s]
$$
This is why, to estimate the distribution variance, we multiply the sample variance by $\frac{n}{n{-}1}$. Thus, it appears as if we are dividing by $n{-}1$ instead of $n$.
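The identity $\mathrm{E}[v_s]=\frac{n-1}{n}v_d$ is easy to check by simulation. A minimal Monte Carlo sketch (the normal distribution, sample size, trial count, and seed are all arbitrary choices):

```python
import random

random.seed(0)
n = 5             # sample size
trials = 200_000  # number of simulated samples
v_d = 4.0         # true distribution variance (normal with std dev 2)

biased_vars = []
for _ in range(trials):
    sample = [random.gauss(0.0, 2.0) for _ in range(n)]
    m_s = sum(sample) / n
    # v_s: the "divide by n" sample variance about the sample mean
    biased_vars.append(sum((x - m_s) ** 2 for x in sample) / n)

mean_vs = sum(biased_vars) / trials
print(mean_vs)                # close to (n-1)/n * v_d = 3.2
print(mean_vs * n / (n - 1))  # close to v_d = 4.0
```

The uncorrected average lands near $\frac{n-1}{n}v_d$, and multiplying by $\frac{n}{n-1}$ recovers $v_d$, matching the derivation above.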