Detecting symmetric matrices of the form (low-rank + diagonal matrix)

If you can accept numerical accuracy on the order of $10^{-14}$ to $10^{-10}$, then the following Linear Matrix Inequality (LMI) solution is robust:

I have tried to solve the related problem: finding a diagonal matrix $D\succ 0$ such that $$S:=\Sigma-D \succeq 0,\quad \operatorname{rank}(S)\leq n-2.$$ Rank constraints are always problematic (non-convex), since going up (increasing the rank) is easy but going down is typically very hard (the rank of a matrix is lower semi-continuous).

Anyway, I have set up a simple LMI problem with a diagonal matrix $D$ as follows: $$ \min_{\displaystyle\substack{ D\succ 0,\\S\succeq 0}} \ \ \operatorname{tr}(S)$$ In other words, I simply minimized the sum of the eigenvalues of $S$, hoping that some of them would come quite close to zero. And much to my surprise, two of them usually turned out to be arbitrarily close to zero (a few cases, so far, ended up with only one). This is also equivalent to maximizing the diagonal entries of $D$ while keeping $S$ positive semidefinite.
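To see the equivalence, note that the trace is linear: $$ \operatorname{tr}(S)=\operatorname{tr}(\Sigma-D)=\operatorname{tr}(\Sigma)-\sum_{i=1}^{n} d_{ii}, $$ and $\operatorname{tr}(\Sigma)$ is a constant, so minimizing $\operatorname{tr}(S)$ is the same as maximizing $\sum_i d_{ii}$.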

If you have MATLAB and the Robust Control Toolbox, you can test it for yourself with the following code (any LMI solver would do):

%Generate a random symmetric positive definite matrix
n = 5;
Sr = rand(n);
Sr = 2*(Sr+Sr');                       % symmetrize
Sr = 2.5*max(abs(eig(Sr)))*eye(n)+Sr;  % shift the spectrum to force pos. def.
% Uncomment the next line for the particular matrix in the question
%Sr = [6 6 7 0;6 11 12 -3;7 12 20 -6;0 -3 -6 9]; n=4;

%Declare LMIs: enforce D < Sr, i.e. S = Sr - D >= 0
setlmis([])
D = lmivar(1,[ones(n,1) zeros(n,1)]); % diagonal variable: n scalar blocks of size 1
lmiterm([1 1 1 D],1,1);               % left-hand side of LMI #1: D
lmiterm([-1 1 1 0],Sr);               % right-hand side of LMI #1: Sr
lmis = getlmis;
c = mat2dec(lmis,-eye(n));            % objective c'x = -tr(D): maximize tr(D)
[cop,xop] = mincx(lmis,c,[1e-12 200 1e9 20 0]); % accuracy 1e-12, max 200 iters
Dnum = dec2mat(lmis,xop,D);           % recover the optimal diagonal matrix
disp('The eigenvalues of the difference')
eig(Sr-Dnum)
disp('The rank of the difference')
rank(Sr-Dnum)
disp('The reciprocal condition number of the difference')
rcond(Sr-Dnum)
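Any other SDP interface should work equally well; for instance, a minimal CVX sketch of the same program (assuming CVX is installed; I have only tested the LMI Lab version above, and Dcvx is just my illustrative name):

cvx_begin sdp
    variable d(n)
    maximize( sum(d) )     % equivalent to minimizing tr(Sr - diag(d))
    Sr - diag(d) >= 0;     % S = Sr - D must be positive semidefinite
    d >= 0;                % the problem statement asks for D > 0
cvx_end
Dcvx = diag(d);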

But, as we can expect, $(S_r-D_{num})$ is numerically a full-rank matrix; however, its reciprocal condition number (rcond) is generally around $10^{-14}$.

Finally, I have tried your matrix and ended up with $$ D = \begin{pmatrix} 0.75669 &0 &0 &0\\ 0 &2.1392 &0 &0\\ 0 &0 &2.6752 &0\\ 0 &0 &0 &4.4885 \end{pmatrix} $$ After making sure that $m$ of the eigenvalues are indeed close to zero, obtaining the low-rank variable is simply numerical massaging. (This part can be substantially improved, in my opinion.)
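If you would rather not hard-code $m$ in the snippet below, you could pick it from a tolerance; a minimal sketch (the threshold tol is my own choice, tune it to your accuracy needs):

tol = 1e-8;                      % hedged tolerance, not part of the method
m = sum(eig(Sr - Dnum) < tol);   % count the near-zero eigenvalues

The massaging itself: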

[K,L,M] = svd(Sr-Dnum);  % SVD of the (numerically) low-rank part
M = M';
m = 2;                   % select how many eigenvalues are close to zero
K = K(:,1:n-m);          % keep the n-m dominant singular directions
L = L(1:n-m,1:n-m);
M = M(1:n-m,:);
T = K*L*M                % truncated reconstruction, exactly rank n-m
disp('The rank of v^t v')
rank(T)                  % indeed n-m
V = K*sqrt(L);           % factor with V*V' equal to the truncation
% Testing the result
disp('The error of \Sigma - D - v^t v')
Sr-Dnum-V*V'
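(Since $S_r-D_{num}$ is symmetric positive semidefinite, its left and right singular vectors coincide up to sign choices, which is why $V=K\sqrt{L}$ satisfies $VV^T\approx T$.)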

Numerical result for your variable $v^t$ (further rounded by the display): $$ v^t = \begin{pmatrix} -1.826 &1.3817\\ -2.9418 &0.45479\\ -4.1423 & -0.40799\\ 1.2817 & 1.6938 \end{pmatrix} $$

We also have a necessary condition, using the orthogonal complement $V_\perp$ of $v^t$: $$ V_\perp^T\Sigma V_\perp = V_\perp^T D V_\perp $$ This difference also suffers from numerical errors (around $10^{-12}$). Nevertheless, if you are searching for a quick-and-dirty approximation, this comes pretty close. By fiddling with the solution $D_{num}$ (which means manual labor), you can even obtain exact results.
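For completeness, a minimal sketch of this check, reusing the V computed above (null is MATLAB's orthonormal null-space basis):

Vp = null(V');          % orthonormal basis for the orthogonal complement of range(V)
disp('Residual of the necessary condition')
Vp'*(Sr - Dnum)*Vp      % should be around 1e-12 or below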

Regarding the theory, I don't know why the number of eigenvalues that come close to zero is generally $2$ (hence rank $n-2$) and not more. I'll try to come back to this if I have some time later.

I don't think an analytical solution would lead to a direct result, since the rank depends so delicately on the matrix entries; it would be really tough (and great, of course) to obtain something other than approximations.

Lastly, I did not understand what you meant by a $1$-parametric family, so I don't have any comments on that :).