Generating multivariate normal samples - why Cholesky?
Solution 1:
After Rahul's comment you understood that in any parametrization $x=Av+μ$ you need $$ Σ=\Bbb E(x-μ)(x-μ)^T=A·\Bbb E(vv^T)·A^T=AA^T. $$ There are infinitely many possibilities to choose $A$: for any orthogonal matrix $Q$, the matrix $\tilde A=AQ$ also satisfies this condition.
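This non-uniqueness is easy to check numerically. A minimal numpy sketch, with an illustrative $2\times 2$ covariance and a rotation as the orthogonal matrix:

```python
import numpy as np

# Illustrative covariance matrix (any s.p.d. matrix works)
Sigma = np.array([[4.0, 1.2],
                  [1.2, 1.0]])

A = np.linalg.cholesky(Sigma)  # one valid factor with A @ A.T == Sigma

# Any orthogonal Q gives another valid factor A_tilde = A @ Q
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
A_tilde = A @ Q

print(np.allclose(A @ A.T, Sigma))              # True
print(np.allclose(A_tilde @ A_tilde.T, Sigma))  # True
```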
One could even choose the symmetric square root of $Σ$ (which exists and is unique among the s.p.d. matrices).
The advantage of using the Cholesky factorization is that there is a cheap and easy algorithm to compute it.
Solution 2:
If all the variables in the multivariate Gaussian were independent, we would face no issue: we could simply use the formula $X_i =\sigma_i \nu_i+\mu_i$. Since they are correlated, we have (in the bivariate case, for example) $X_1 = \sigma_1\nu_1+\mu_1$ and $X_2 = \sigma_2[\rho_{12}\nu_1+\sqrt{1-\rho_{12}^2}\,\nu_2]+\mu_2$, and this can be extended to $N$ dimensions. Note: $$\Sigma = \begin{bmatrix}\sigma_1^2 &\rho_{12} \sigma_1\sigma_2 &\rho_{13} \sigma_1\sigma_3 & \dots \\ \rho_{12} \sigma_1\sigma_2 &\sigma_2^2 &\rho_{23} \sigma_2\sigma_3 & \dots \\ \vdots & \vdots & \vdots & \ddots\end{bmatrix}$$ By decomposing $\Sigma=LL^T$ through Cholesky, we get $X = L\nu+\mu$ without manual calculations, which are otherwise quite tedious in higher dimensions.
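One can check that the bivariate closed form is exactly the $2\times 2$ Cholesky factor. A small sketch with hypothetical parameter values:

```python
import numpy as np

# Hypothetical bivariate parameters
sigma1, sigma2, rho = 2.0, 0.5, 0.3
Sigma = np.array([[sigma1**2,         rho*sigma1*sigma2],
                  [rho*sigma1*sigma2, sigma2**2]])

L = np.linalg.cholesky(Sigma)

# Manual factor read off from the closed form:
# X1 = sigma1*v1 + mu1
# X2 = sigma2*(rho*v1 + sqrt(1 - rho^2)*v2) + mu2
L_manual = np.array([[sigma1,     0.0],
                     [sigma2*rho, sigma2*np.sqrt(1 - rho**2)]])

print(np.allclose(L, L_manual))  # True
```

For $N > 2$ the entries of $L$ no longer have such simple closed forms, which is why one lets the Cholesky algorithm do the work.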