Calculating SVD by hand: resolving sign ambiguities in the range vectors.

The left singular vectors are

$$\mathrm{u}_1 \in \left\{ t_1 \begin{bmatrix} 1\\ 1\end{bmatrix} : t_1 \in \mathbb R \setminus \{0\} \right\}$$

$$\mathrm{u}_2 \in \left\{ t_2 \begin{bmatrix} 1\\ -1\end{bmatrix} : t_2 \in \mathbb R \setminus \{0\} \right\}$$

We want the left singular vectors to be orthonormal. They are already orthogonal. Normalizing,

$$\mathrm{u}_1 = \frac{t_1}{\sqrt{2 t_1^2}} \begin{bmatrix} 1\\ 1\end{bmatrix} = \operatorname{sgn} (t_1) \begin{bmatrix} \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}\end{bmatrix}$$

$$\mathrm{u}_2 = \frac{t_2}{\sqrt{2 t_2^2}} \begin{bmatrix} 1\\ -1\end{bmatrix} = \operatorname{sgn} (t_2) \begin{bmatrix} \frac{1}{\sqrt{2}}\\ -\frac{1}{\sqrt{2}}\end{bmatrix}$$

where $\operatorname{sgn}$ denotes the signum function. Hence,

$$\mathrm U = \begin{bmatrix} | & |\\ \mathrm{u}_{1} & \mathrm{u}_{2}\\ | & |\end{bmatrix} = \begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}}\end{bmatrix} \begin{bmatrix} \operatorname{sgn} (t_1) & 0\\ 0 & \operatorname{sgn} (t_2)\end{bmatrix}$$

There are $2^2 = 4$ possible choices.
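As a quick sanity check, here is a NumPy sketch (variable names are mine) confirming that every one of the four sign patterns yields an orthonormal $\mathrm U$:

```python
import numpy as np

# Base frame from the normalized eigenvectors of A A^T
U_base = np.array([[1.0, 1.0],
                   [1.0, -1.0]]) / np.sqrt(2)

# Each of the 2^2 = 4 sign patterns sgn(t1), sgn(t2) gives a valid U
for s1 in (+1, -1):
    for s2 in (+1, -1):
        U = U_base @ np.diag([s1, s2])
        # Columns remain orthonormal regardless of the sign choice
        assert np.allclose(U.T @ U, np.eye(2))
```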


This is a very nice post with an illuminating discussion on the confusion between sign choices and how they propagate.

Define the singular value decomposition of a rank $\rho$ matrix as $$ \mathbf{A} = \mathbf{U} \, \Sigma \, \mathbf{V}^{*}. $$

Connecting vectors between row and column spaces

The sign conventions can be understood by looking at the relationship between the $u$ and $v$ vectors $$ u_{k} = \sigma^{-1}_{k} \mathbf{A}\, v_{k}, \qquad v_{k} = \sigma^{-1}_{k} \mathbf{A}^{*} u_{k}, \qquad k = 1, \dots, \rho \tag{1} $$ Changing the sign of a $u$ vector induces a sign change in the corresponding $v$ vector; changing the sign of a $v$ vector induces a sign change in the corresponding $u$ vector.
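Relation $(1)$ is easy to verify numerically on any full-rank matrix (a NumPy sketch; the random test matrix is only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))  # full rank almost surely, rho = 3

U, s, Vh = np.linalg.svd(A, full_matrices=False)

# Equation (1): u_k = A v_k / sigma_k  and  v_k = A^* u_k / sigma_k
for k in range(3):
    assert np.allclose(U[:, k], A @ Vh[k] / s[k])
    assert np.allclose(Vh[k], A.T @ U[:, k] / s[k])
```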

Example matrix

We certainly agree the singular values are $\sigma = \left\{ 2\sqrt{3}, \sqrt{10} \right\}$. Let's nominate a set of eigenvectors.

$$ \mathbf{A}\,\mathbf{A}^{*}: \quad u_{1} = \frac{1}{\sqrt{2}} \left[ \begin{array}{r} 1 \\ 1 \\ \end{array} \right], \quad u_{2} = \frac{1}{\sqrt{2}} \left[ \begin{array}{r} -1 \\ 1 \\ \end{array} \right] $$ $$ \mathbf{A}^{*} \mathbf{A}: \quad v_{1} = \frac{1}{\sqrt{6}} \left[ \begin{array}{r} 1 \\ 2 \\ 1 \end{array} \right], \quad v_{2} = \frac{1}{\sqrt{5}} \left[ \begin{array}{r} -2 \\ 1 \\ 0 \end{array} \right], \quad v_{3} = \frac{1}{\sqrt{30}} \left[ \begin{array}{r} -1 \\ -2 \\ 5 \end{array} \right] $$ How did we decide to distribute the minus signs in these vectors? Use the convention of making the first entries negative where a sign must be chosen.

Stepping back, the eigenvectors have an inherent global sign ambiguity: if $v$ is a unit eigenvector, so is $-v$. For example $$ \begin{align} \mathbf{A}^{*} \mathbf{A}\, v_{3} &= \mathbf{0} \\ \mathbf{A}^{*} \mathbf{A} \left(-v_{3}\right) &= -\mathbf{0} = \mathbf{0} \end{align} $$ and likewise $\mathbf{A}^{*}\mathbf{A}\left(\pm v_{k}\right) = \sigma_{k}^{2}\left(\pm v_{k}\right)$ for the nonzero eigenvalues. Using this arbitrary set of column vectors, construct the default SVD: $$ \mathbf{A} = \left[ \begin{array}{cc} u_{1} & u_{2} \end{array} \right] \Sigma \, \left[ \begin{array}{ccc} v_{1} & v_{2} & v_{3} \end{array} \right]^{*} $$ If we select the sign convention $\color{blue}{\pm}$ on the $k$th column vector, we induce the sign convention $\color{red}{\pm}$ on the $k$th column vector in the complementary range space. The $\color{blue}{choice}$ induces the $\color{red}{consequence}$.
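To check that this default set really is an SVD, here is a NumPy sketch. The example matrix is not restated in the post, so $\mathbf{A} = \begin{bmatrix} 3 & 1 & 1 \\ -1 & 3 & 1 \end{bmatrix}$ is my inference from the quoted singular values and eigenvectors:

```python
import numpy as np

# Inferred example matrix (consistent with the quoted data; not given in the post)
A = np.array([[3.0, 1.0, 1.0],
              [-1.0, 3.0, 1.0]])

u1 = np.array([1, 1]) / np.sqrt(2)
u2 = np.array([-1, 1]) / np.sqrt(2)
v1 = np.array([1, 2, 1]) / np.sqrt(6)
v2 = np.array([-2, 1, 0]) / np.sqrt(5)
v3 = np.array([-1, -2, 5]) / np.sqrt(30)

U = np.column_stack([u1, u2])
V = np.column_stack([v1, v2, v3])
S = np.array([[np.sqrt(12), 0, 0],
              [0, np.sqrt(10), 0]])

# The default SVD reconstructs A
assert np.allclose(U @ S @ V.T, A)
```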

Choices and consequences

For example, find the SVD by resolving the column space to produce the vectors $u$. The vectors $v$ will be constructed using the second equality in $(1)$. Flipping the global sign on $u_{1}$ flips the global sign on $v_{1}$: $$ \mathbf{A} = \left[ \begin{array}{cc} \color{blue}{\pm}u_{1} & u_{2} \end{array} \right] \Sigma \, \left[ \begin{array}{ccc} \color{red}{\pm}v_{1} & v_{2} & v_{3} \end{array} \right]^{*} $$ A nice feature of @Crimson's post is the demonstration that we can choose which range space to compute and which to construct: compute $\mathbf{V}$ and construct $\mathbf{U}$, or compute $\mathbf{U}$ and construct $\mathbf{V}$.

The other path is to resolve the row space vectors $v$ and use the first equality in $(1)$ to construct the vectors $u$. The point is that flipping the global sign on $v_{1}$ flips the global sign on $u_{1}$: $$ \mathbf{A} = \left[ \begin{array}{cc} \color{red}{\pm}u_{1} & u_{2} \end{array} \right] \Sigma \, \left[ \begin{array}{ccc} \color{blue}{\pm}v_{1} & v_{2} & v_{3} \end{array} \right]^{*} $$ Choice and consequence swap places. The process for the first vector extends to all $\rho$ vectors in the range space.

Counting choices

The number of unique sign choices is $2^\rho$, two choices for each range space vector.

Here the matrix rank is $\rho = 2$ and there are $2^{2} = 4$ unique vector sets. If $\mathbf{A}\mathbf{A}^{*}$ is resolved, then: $$ \begin{array}{cc|cc} u_{1} & u_{2} & v_{1} & v_{2} \\\hline \color{blue}{+} & \color{blue}{+} & \color{red}{+} & \color{red}{+} \\ \color{blue}{+} & \color{blue}{-} & \color{red}{+} & \color{red}{-} \\ \color{blue}{-} & \color{blue}{+} & \color{red}{-} & \color{red}{+} \\ \color{blue}{-} & \color{blue}{-} & \color{red}{-} & \color{red}{-} \\ \end{array} $$ If instead $\mathbf{A}^{*}\mathbf{A}$ is resolved, then: $$ \begin{array}{cc|cc} v_{1} & v_{2} & u_{1} & u_{2} \\\hline \color{blue}{+} & \color{blue}{+} & \color{red}{+} & \color{red}{+} \\ \color{blue}{+} & \color{blue}{-} & \color{red}{+} & \color{red}{-} \\ \color{blue}{-} & \color{blue}{+} & \color{red}{-} & \color{red}{+} \\ \color{blue}{-} & \color{blue}{-} & \color{red}{-} & \color{red}{-} \\ \end{array} $$
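The tables can be verified exhaustively (a NumPy sketch; the matrix $\mathbf{A} = \begin{bmatrix} 3 & 1 & 1 \\ -1 & 3 & 1 \end{bmatrix}$ is my inference from the quoted singular values and eigenvectors, as the post does not restate it):

```python
import numpy as np
from itertools import product

# Inferred example matrix (assumption, consistent with the quoted data)
A = np.array([[3.0, 1.0, 1.0],
              [-1.0, 3.0, 1.0]])

U = np.column_stack([np.array([1, 1]) / np.sqrt(2),
                     np.array([-1, 1]) / np.sqrt(2)])
V = np.column_stack([np.array([1, 2, 1]) / np.sqrt(6),
                     np.array([-2, 1, 0]) / np.sqrt(5),
                     np.array([-1, -2, 5]) / np.sqrt(30)])
S = np.array([[np.sqrt(12), 0, 0],
              [0, np.sqrt(10), 0]])

# Each of the 2^rho = 4 matched sign patterns reconstructs A:
# flipping u_k (the blue choice) together with v_k (the red consequence).
for s1, s2 in product((+1, -1), repeat=2):
    Du = np.diag([s1, s2])        # signs on u_1, u_2
    Dv = np.diag([s1, s2, 1])     # matching signs on v_1, v_2; v_3 is free
    assert np.allclose((U @ Du) @ S @ (V @ Dv).T, A)
```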

Another example is in SVD and the columns — I did this wrong but it seems that it still works, why?


For the eigenvalue $12$, $u_1=\begin{bmatrix} 1/\sqrt{2}\\ 1/\sqrt{2}\end{bmatrix}$ is a normalised eigenvector. Now, compute the first column of $V$, $$v_1=\frac{1}{\sqrt{12}}A^T u_1= \begin{bmatrix} 2/\sqrt{24}\\ 4/\sqrt{24}\\ 2/\sqrt{24}\end{bmatrix}.$$ Corresponding to $10$, a normalised eigenvector is $u_2=\begin{bmatrix}1/\sqrt{2}\\ -1/\sqrt{2}\end{bmatrix}$. Calculate the second column of $V$, $$v_2=\frac{1}{\sqrt{10}}A^T u_2=\begin{bmatrix}4/\sqrt{20}\\ -2/\sqrt{20}\\ 0\end{bmatrix}.$$

For $v_3$, choose any normalised eigenvector of $A^TA$ corresponding to the eigenvalue $0$. The sign doesn't matter. You can choose $v_3=\begin{bmatrix}1/\sqrt{30}\\ 2/\sqrt{30}\\ -5/\sqrt{30}\end{bmatrix}$.

Take $U=[u_1\ u_2]$, $V=[v_1\ v_2\ v_3]$, $S=\begin{bmatrix}\sqrt{12}&0&0\\ 0&\sqrt{10}&0\end{bmatrix}$. Then $A=USV^T$.
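Numerically (a NumPy sketch, with the example matrix inferred from the stated eigenvalues $12$ and $10$ of $AA^T$, since the post does not restate $A$):

```python
import numpy as np

# Inferred example matrix: A A^T = [[11, 1], [1, 11]], eigenvalues 12 and 10
A = np.array([[3.0, 1.0, 1.0],
              [-1.0, 3.0, 1.0]])

u1 = np.array([1, 1]) / np.sqrt(2)
u2 = np.array([1, -1]) / np.sqrt(2)

# Columns of V constructed from u via v = A^T u / sqrt(lambda)
v1 = A.T @ u1 / np.sqrt(12)
v2 = A.T @ u2 / np.sqrt(10)
v3 = np.array([1, 2, -5]) / np.sqrt(30)   # any unit null vector of A

U = np.column_stack([u1, u2])
V = np.column_stack([v1, v2, v3])
S = np.array([[np.sqrt(12), 0, 0],
              [0, np.sqrt(10), 0]])

# The constructed columns match nicely: A = U S V^T
assert np.allclose(U @ S @ V.T, A)
```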

You don't have to worry about the sign of the eigenvectors for $AA^T$. Choose any eigenvector, but make sure that things match nicely by choosing the columns of $V$ as I indicated.

Note further that the non-zero eigenvalues of $AA^T$ and $A^TA$ are the same. So, after finding the eigenvalues of $AA^T$, you need not find the eigenvalues of $A^TA$ again. Also, you get the normalised eigenvectors for the nonzero eigenvalues of $A^TA$ virtually 'for free' from the eigenvectors of $AA^T$.
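For instance (same inferred example matrix as above; an assumption, since the post does not restate $A$):

```python
import numpy as np

# Inferred example matrix (assumption)
A = np.array([[3.0, 1.0, 1.0],
              [-1.0, 3.0, 1.0]])

ev_left = np.sort(np.linalg.eigvalsh(A @ A.T))    # 2 eigenvalues
ev_right = np.sort(np.linalg.eigvalsh(A.T @ A))   # 3 eigenvalues

# A^T A has one extra zero eigenvalue; the nonzero ones agree with A A^T
assert np.allclose(ev_right[0], 0)
assert np.allclose(ev_right[1:], ev_left)
```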