Understanding the distribution of two correlated random variables.
Well, you can't, in general. Simply knowing the marginals of $X$ and $Y$ and their correlation is not enough to determine the joint distribution of $(X,Y)$, even if we know that both $X$ and $Y$ are normal.
To see this, let $S\sim \operatorname{Ber}(p)$ for $p\in [0,1]$, let $X\sim \mathcal{N}(0,1)$ be independent of $S$, and define $Y=(-1)^SX$. Then $\mathbb{P}(Y\leq t)=\mathbb{P}(S=0)\mathbb{P}(X\leq t)+\mathbb{P}(S=1)\mathbb{P}(-X\leq t)=\mathbb{P}(X\leq t),$ since $-X\sim\mathcal{N}(0,1)$ by symmetry. Hence, $Y\sim \mathcal{N}(0,1)$. Furthermore, by direct computation (using $\mathbb{E}[X]=\mathbb{E}[Y]=0$ and independence), $\operatorname{Cov}(X,Y)=\mathbb{E}[(-1)^S X^2]=\mathbb{E}[(-1)^S]\,\mathbb{E}[X^2]=1-2p$, so by varying $p$ we can make $\operatorname{Cov}(X,Y)$ equal to anything in $[-1,1]$. For instance, taking $p=1/2$ makes $X$ and $Y$ uncorrelated without being independent.
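Here is a minimal numpy sketch of this construction (the sample size and the choice $p=0.3$ are mine, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1_000_000, 0.3

# S ~ Ber(p) and X ~ N(0,1), independent; then Y = (-1)^S * X
S = rng.binomial(1, p, size=n)
X = rng.standard_normal(n)
Y = (-1.0) ** S * X

print(Y.mean(), Y.var())               # ~0 and ~1: Y looks standard normal
print(np.cov(X, Y)[0, 1], 1 - 2 * p)   # empirical covariance ~ 1 - 2p = 0.4
```

With $p=1/2$ the empirical covariance is near $0$, even though $Y$ is a deterministic function of $(S,X)$.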
However, the distribution of $(X,Y)$ does not have a density (with respect to the two-dimensional Lebesgue measure): since $Y=\pm X$ always, for $A=\{(t,t)\mid t\in \mathbb{R}\}\cup \{(t,-t)\mid t\in \mathbb{R}\}$ we have $\mathbb{P}((X,Y)\in A)=1$, but $A$ is a union of two lines and hence has Lebesgue measure $0$.
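This degeneracy is visible numerically as well; a self-contained sketch in the same spirit (parameters again illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.binomial(1, 0.3, size=100_000)
X = rng.standard_normal(100_000)
Y = (-1.0) ** S * X

# Every sample satisfies |X| == |Y| exactly, i.e. (X, Y) lands in the
# Lebesgue-null set A with probability 1 -- so no 2D density can exist.
print(np.all(np.abs(X) == np.abs(Y)))  # True
```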
So what's wrong? Well, we can't just pretend that knowing $X$ and $Y$ are normal with some covariance pins down the joint law. It is absolutely vital that we know that $(X,Y)$ is a (regular) normal vector. And this is just defined as a vector such that there exist iid standard normal variables $Z_1,Z_2$, an invertible matrix $B$, and $\mu\in \mathbb{R}^2$ such that $(X,Y)=B(Z_1,Z_2)+\mu$.
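Concretely, this definition is also how one simulates such a vector: factor a desired covariance matrix $\Sigma$ as $BB^T$ (e.g. via Cholesky) and apply $B$ to iid standard normals. A hedged numpy sketch, with the correlation $\rho=0.7$ chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.7                                    # illustrative target correlation
Sigma = np.array([[1.0, rho], [rho, 1.0]])   # desired covariance of (X, Y)

B = np.linalg.cholesky(Sigma)                # lower-triangular B with B @ B.T == Sigma
Z = rng.standard_normal((2, 1_000_000))      # iid N(0,1) pairs (Z1, Z2)
X, Y = B @ Z                                 # (X, Y) = B (Z1, Z2), here with mu = 0

print(np.cov(X, Y))                          # close to Sigma
```

Unlike the $(-1)^S X$ construction above, every sample here genuinely comes from a bivariate normal law.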
In this case, we can use the multidimensional change of variables formula (https://en.wikipedia.org/wiki/Integration_by_substitution#Substitution_for_multiple_variables) to deduce that $(X,Y)$ has density $$ \frac{1}{2\pi \sqrt{\det(BB^T)}}\exp\left(-\frac{1}{2}(x-\mu)^T (B B^T)^{-1}(x-\mu)\right), $$ where $\Sigma=BB^T$ is the covariance matrix of $(X,Y)$; this agrees with what you have above.
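As a quick numerical sanity check (a sketch only; scipy's `multivariate_normal` serves as the reference, and $B$, $\mu$, and the test point are arbitrary choices of mine):

```python
import numpy as np
from scipy.stats import multivariate_normal

B = np.array([[1.0, 0.0], [0.7, 0.5]])    # any invertible B (illustrative)
mu = np.array([0.3, -0.2])                # illustrative mean vector
Sigma = B @ B.T                           # covariance of (X, Y)

def density(x):
    # Closed form: exp(-(x-mu)^T Sigma^{-1} (x-mu) / 2) / (2*pi*sqrt(det Sigma))
    d = x - mu
    quad = d @ np.linalg.solve(Sigma, d)
    return np.exp(-0.5 * quad) / (2 * np.pi * np.sqrt(np.linalg.det(Sigma)))

x = np.array([0.5, 1.0])
print(density(x))
print(multivariate_normal(mean=mu, cov=Sigma).pdf(x))  # same value
```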