Under what circumstance will a covariance matrix be positive semi-definite rather than positive definite?
Well, in the $1 \times 1$ case, a matrix is positive semi-definite precisely when its single entry is non-negative, and a random variable $X$ has zero variance if and only if it is a.s. constant. (If you don't know what ‘a.s.’ means, you may ignore it throughout this discussion.) Indeed, assuming $\mathbb{E}[X] = 0$ (which is no loss of generality), we have $$\operatorname{Var} X = \int_{\Omega} X^2 \,\mathrm{d} \mathbb{P}$$ and since the integrand is non-negative, this is zero if and only if the integrand is a.s. zero, i.e. if and only if $X = 0$ a.s. In the case where $\mathbb{E}[X] \ne 0$, applying the above to $X - \mathbb{E}[X]$ shows that $\operatorname{Var} X = 0$ if and only if $X = \mathbb{E}[X]$ a.s.
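As a quick numerical sanity check of the $1 \times 1$ case (a sketch using NumPy, where arrays of samples stand in for the random variable $X$):

```python
import numpy as np

rng = np.random.default_rng(0)

# X = 3 almost surely: every sample equals the same constant.
const = np.full(10_000, 3.0)
# X with the same mean but genuine randomness around it.
noisy = rng.normal(loc=3.0, scale=1.0, size=10_000)

var_const = const.var()  # exactly 0.0: the integrand (X - E[X])^2 vanishes
var_noisy = noisy.var()  # strictly positive

print(var_const, var_noisy)
```

The sample variance of the constant array is exactly zero, mirroring the fact that $\operatorname{Var} X = 0$ precisely when $X$ is a.s. constant.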
In general, if you have an $n \times n$ symmetric matrix $V$, there is an orthogonal matrix $Q$ such that $Q V Q^{\sf T}$ is a diagonal matrix $D$, and $V$ is positive semi-definite if and only if the diagonal entries of $D$ (the eigenvalues of $V$) are all non-negative. But if $V$ is the covariance matrix of $\mathbf{X}$, then $D$ is the covariance matrix of $Q \mathbf{X}$, and so $V$ is positive semi-definite but not positive definite if and only if some component of $Q \mathbf{X}$ is a.s. constant, i.e. if and only if some non-trivial linear combination of the components of $\mathbf{X}$ is a.s. constant. This is exactly the situation in which the components of $\mathbf{X}$ are ‘fully correlated’, to use your phrasing.
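This degeneracy is easy to exhibit numerically (a sketch, not part of the argument above): if one component of $\mathbf{X}$ is an exact linear combination of the others, the sample covariance matrix has a zero eigenvalue, so it is positive semi-definite but not positive definite.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent sources of randomness.
x = rng.normal(size=10_000)
y = rng.normal(size=10_000)

# Three components, but the third is a linear combination of the
# first two, so x + 2y - z = 0 a.s. and the covariance is singular.
X = np.stack([x, y, x + 2 * y])

V = np.cov(X)                    # 3x3 sample covariance matrix
eigvals = np.linalg.eigvalsh(V)  # real eigenvalues, ascending order

# The smallest eigenvalue is zero up to floating-point error; its
# eigenvector recovers the a.s.-constant linear combination.
print(eigvals)
```

Here the population covariance matrix has eigenvalues $0$, $1$, and $6$, and the sample eigenvalues land close to these, with the zero eigenvalue witnessing the linear dependence.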