Standardizing A Random Variable That is Normally Distributed

To standardize a normally distributed random variable, it makes absolute sense to subtract the expected value $\mu$ from each value the random variable can assume: this shifts all of the values so that the expected value sits at the origin. But what role does dividing by the standard deviation play in the standardization? That part is not as intuitive to me as subtracting $\mu$.


Let $X \sim N(\mu,\sigma^2)$.

Let $Y = \large\frac{X-\mu}{\sigma}$.

$E[Y] = E\left[\large\frac{X-\mu}{\sigma}\right] = \large\frac{E[X] - \mu}{\sigma} = \large\frac{\mu-\mu}{\sigma} = 0$ by linearity of expectation.

$\text{Var}(Y) = \large\frac{1}{\sigma^2}\text{Var}(X) = \large\frac{1}{\sigma^2}\sigma^2 = 1$, since shifting by the constant $\mu$ does not change the variance and $\text{Var}(aX) = a^2\,\text{Var}(X)$.

Since a linear transformation of a normal random variable is again normal, $Y \sim N(0,1)$.

This is precisely why we subtract the mean and divide by the standard deviation.
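
If you want to see this numerically, here is a minimal NumPy sketch; the particular $\mu$, $\sigma$, and sample size are arbitrary choices for illustration, not anything taken from the derivation above. The sample mean of $Y$ comes out near $0$ and the sample variance near $1$.

```python
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 5.0, 2.0                         # arbitrary parameters for illustration
x = rng.normal(mu, sigma, size=1_000_000)    # draws from X ~ N(mu, sigma^2)

y = (x - mu) / sigma                         # subtract the mean, divide by the standard deviation

print(y.mean())                              # approximately 0
print(y.var())                               # approximately 1
```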


Recall that $\operatorname{Var}(X)=E\left[(X-\mu)^2\right]$, where $\mu$ is the mean of $X$. So $\operatorname{Var}(X)$ is an average of squares. Thus if we scale $X$ by a factor $\rho$, then the variance gets multiplied by $\rho^2$.
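
To spell that step out from the definition (writing the scaled variable as $\rho X$, whose mean is $\rho\mu$):

$$\operatorname{Var}(\rho X) = E\left[(\rho X - \rho\mu)^2\right] = E\left[\rho^2 (X-\mu)^2\right] = \rho^2\, E\left[(X-\mu)^2\right] = \rho^2 \operatorname{Var}(X).$$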

To get a feeling for this, recall that if we scale a geometric figure, such as a square or a triangle, by the factor $\rho$, then area gets scaled by a factor of $\rho^2$.

Now suppose that the variance of $X$ is $\sigma^2$. Then we must scale $X$ by the factor $\dfrac{1}{\sigma}$ to bring the variance to $1$.
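
As a quick numerical illustration of this scaling rule (again, the particular $\sigma$, $\rho$, and sample size below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

sigma = 3.0                                  # arbitrary: Var(X) = sigma**2 = 9
x = rng.normal(0.0, sigma, size=1_000_000)

rho = 0.5                                    # arbitrary scale factor
print((rho * x).var())                       # approximately rho**2 * sigma**2 = 2.25
print((x / sigma).var())                     # scaling by 1/sigma brings the variance to about 1
```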