If $\Sigma$ is positive semi-definite, we may decompose it as $$ \Sigma = P^t \left[ \begin{array}{cc} \Gamma^2 & 0 \\ 0 & 0 \end{array} \right] P $$ where $P$ is a permutation matrix and the $m \times m$ diagonal matrix $\Gamma^2$ is positive definite, with $m \le d$. Then the matrix $K$ can be written in the form $$ K = W^t \Gamma^2 W = (\Gamma W)^t (\Gamma W) $$ where $$ W = \left[ \begin{array}{cc} P_{11} & P_{12} \end{array} \right] U = P_{11} U_1 + P_{12} U_2 , $$ $$ P = \left[ \begin{array}{cc} P_{11} & P_{12} \\ P_{21} & P_{22} \end{array} \right] , \;\;\; U = \left[ \begin{array}{c} U_1 \\ U_2 \end{array} \right] . $$ The matrix $W$, consisting of the first $m$ rows of $PU$, is an $m \times n$ matrix, and $P_{11}$ is an $m \times m$ matrix.
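As a sanity check on this reduction, here is a minimal NumPy sketch (all data hypothetical, and $\Sigma$ assumed diagonal so that a permutation suffices to move its positive entries to the top) that builds $W$ and $\Gamma^2$ and verifies $K = W^t \Gamma^2 W$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, m = 8, 4, 5                 # hypothetical sizes; m = rank of Sigma

# Example data: a diagonal PSD Sigma with exactly m positive entries.
sigma_diag = np.zeros(d)
sigma_diag[rng.permutation(d)[:m]] = rng.uniform(1.0, 2.0, size=m)
U = rng.standard_normal((d, n))

# Permutation that moves the positive diagonal entries to the top.
perm = np.argsort(sigma_diag == 0, kind="stable")  # nonzeros first
gamma2 = sigma_diag[perm][:m]                      # diagonal of Gamma^2
W = U[perm][:m]                                    # first m rows of P U

K = U.T @ np.diag(sigma_diag) @ U
assert np.allclose(K, W.T @ np.diag(gamma2) @ W)
```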

The eigenvalues of the matrix $K$ are the squares of the singular values of $\Gamma W$, and the corresponding eigenvectors are the right singular vectors of $\Gamma W$.
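This correspondence is easy to verify numerically; a minimal sketch with hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 4
gamma = rng.uniform(1.0, 2.0, size=m)   # diagonal of Gamma (example data)
W = rng.standard_normal((m, n))

A = gamma[:, None] * W                  # Gamma W, without forming diag(Gamma)
s = np.linalg.svd(A, compute_uv=False)  # singular values, descending

K = A.T @ A                             # K = (Gamma W)^t (Gamma W)
assert np.allclose(np.linalg.eigvalsh(K)[::-1], s**2)
```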

The SVD of $\Gamma W$ is usually computed in two steps:

Step 1: Compute the QR-factorization of $\Gamma W$: $$ QR = \Gamma W $$ where the $n \times n$ matrix $R$ is upper triangular. The matrix $Q$, which has orthonormal columns, is not explicitly computed. Only $R$ is required.
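In NumPy this $R$-only factorization is available directly; a small sketch (the matrix `A` below stands in for $\Gamma W$):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 6, 4
A = rng.standard_normal((m, n))       # stands in for Gamma W

# mode="r" returns only the triangular factor; Q is never formed.
R = np.linalg.qr(A, mode="r")         # n x n upper triangular (since m >= n)
assert np.allclose(A.T @ A, R.T @ R)  # R carries all the Gram information of A
```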

Step 2: Compute the SVD of $R$, $$ R = \Phi S \Psi^t $$ where $\Phi^t \Phi = \Psi^t \Psi = I$ and $S$ is the singular value matrix. This can be done without explicitly computing $\Phi$. The required eigenvectors are given by $\Psi$. The eigenvalues of $K$ are given by the squares of the singular values.
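A sketch of this step with hypothetical data (NumPy's `svd` computes $\Phi$ as well, which is harmless since $R$ is small):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 6, 4
A = rng.standard_normal((m, n))        # stands in for Gamma W
R = np.linalg.qr(A, mode="r")

# Full SVD of the small n x n factor; Psi's columns are the eigenvectors of K.
Phi, s, Psi_t = np.linalg.svd(R)
eigvals = s**2                         # eigenvalues of K, descending

K = A.T @ A
for lam, v in zip(eigvals, Psi_t):     # rows of Psi_t = columns of Psi
    assert np.allclose(K @ v, lam * v) # (lam, v) is an eigenpair of K
```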

The nominally expensive part is computing the SVD of $R$, but since $R$ is a small $n \times n$ matrix, this step is inexpensive.

If it is known that $m$ is nearly equal to $d$, then the reduction using the permutation matrix can be skipped. In that case, we proceed by computing the QR-factorization of $\Sigma^{\frac{1}{2}}U$ instead of $\Gamma W$.
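A sketch of this shortcut, again with hypothetical diagonal data so that $\Sigma^{\frac{1}{2}}$ is trivial to form:

```python
import numpy as np

rng = np.random.default_rng(4)
d, n = 6, 4
sigma_diag = rng.uniform(0.5, 2.0, size=d)  # full rank: m = d (example data)
U = rng.standard_normal((d, n))

B = np.sqrt(sigma_diag)[:, None] * U        # Sigma^(1/2) U, no permutation step
R = np.linalg.qr(B, mode="r")
s = np.linalg.svd(R, compute_uv=False)      # eigenvalues of K are s**2

K = U.T @ np.diag(sigma_diag) @ U
assert np.allclose(np.sort(s**2), np.linalg.eigvalsh(K))
```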