SVD and the columns of $V$ -- I did this wrong, but it seems that it still works. Why?

I want to decompose $A = \begin{pmatrix} 3 & 1 & 2 \\ -2 & 1 & 3 \end{pmatrix}$ using the SVD. So $A = U \Sigma V^\star$.

Now, I calculated the matrices $U$ and $\Sigma$, which are $\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & -1\\ 1 & 1 \end{pmatrix}$ and $\begin{pmatrix} \sqrt{15} & 0 & 0 \\ 0&\sqrt{13} & 0 \end{pmatrix}$ respectively.

Now this is where my problem comes in. I know the correct matrix for $V$, which should be the matrix with columns $$v_1 = \frac{1}{\sqrt{30}}\begin{pmatrix}1 \\ 2 \\ 5 \end{pmatrix}, v_2 = \frac{1}{\sqrt{26}}\begin{pmatrix}-5 \\ 0 \\ 1 \end{pmatrix}, v_3 = \frac{1}{\sqrt{195}}\begin{pmatrix} 1 \\ -13 \\ 5 \end{pmatrix}.$$

Now, I made a mistake in computing $v_3$ and got $v_3 = \frac{1}{\sqrt{611}}\begin{pmatrix}21 \\ -13 \\ 1 \end{pmatrix}$, and upon substituting this in, it still worked. In fact, I found that any vector of the form $v_3 = b \begin{pmatrix} \frac{26}{a} - 5 \\ \frac{-13}{a} \\ 1\end{pmatrix}$ still worked, with $a$ any nonzero real number and $b$ the reciprocal of that vector's norm (so that $v_3$ is a unit vector).
Why does this vector $v_3$ still work?
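
A quick numerical check of this observation (a minimal sketch using numpy, with the matrices entered from above):

```python
import numpy as np

A = np.array([[3.0, 1.0, 2.0],
              [-2.0, 1.0, 3.0]])

U = np.array([[1.0, -1.0],
              [1.0,  1.0]]) / np.sqrt(2)
Sigma = np.array([[np.sqrt(15), 0.0, 0.0],
                  [0.0, np.sqrt(13), 0.0]])

v1 = np.array([1.0, 2.0, 5.0]) / np.sqrt(30)
v2 = np.array([-5.0, 0.0, 1.0]) / np.sqrt(26)
v3_wrong = np.array([21.0, -13.0, 1.0]) / np.sqrt(611)   # the mistaken v3

V = np.column_stack([v1, v2, v3_wrong])

# U @ Sigma @ V* still reproduces A, even with the wrong third column
print(np.allclose(U @ Sigma @ V.T, A))   # True
```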


Solution 1:

Your question provides a forum to clear up a common confusion about the singular value decomposition: $$ \mathbf{A} = \mathbf{U} \, \Sigma \, \mathbf{V}^{*} \tag{1} $$ The specific example will use $$ \mathbf{A} = \left[ \begin{array}{rcc} 3 & 1 & 2 \\ -2 & 1 & 3 \\ \end{array} \right] \tag{2} $$


# Computing the SVD

Consider the general matrix $\mathbf{A} \in \mathbb{C}^{m \times n}_{\rho}$, that is, an $m \times n$ complex matrix of rank $\rho$. The steps for computing the SVD are

  1. Resolve the eigensystem for a product matrix $\mathbf{W}$
  2. Compute the remaining component matrix from $(1)$.

## Recipes

The choice is whether to first resolve the row space or the column space. The table below compares the two parallel paths. The notation $$ \lambda \left( \mathbf{W} \right) $$ represents the eigenvalue spectrum of the matrix $\mathbf{W}$, while $$ \tilde{\lambda} \left( \mathbf{W} \right) $$ represents the eigenvalue spectrum sorted in decreasing order with the zero eigenvalues removed. There will be $\rho$ nonzero eigenvalues.

$$
\begin{array}{lll}
\text{Operation} & \text{Row space first} & \text{Column space first} \\\hline
\text{1. Construct product matrix} & \mathbf{W} = \mathbf{A}^{*} \mathbf{A} & \mathbf{W} = \mathbf{A} \, \mathbf{A}^{*} \\
\text{2. Solve for eigenvalues} & \sigma = \sqrt{\tilde{\lambda} \left( \mathbf{W} \right)} & \sigma = \sqrt{\tilde{\lambda} \left( \mathbf{W} \right)} \\
\color{blue}{\text{3. Solve for eigenvectors }} w_{k},\ k = 1, \dots, \rho & \left( \mathbf{W} - \lambda_{k} \mathbf{I}_{n} \right) w_{k} = \mathbf{0} & \left( \mathbf{W} - \lambda_{k} \mathbf{I}_{m} \right) w_{k} = \mathbf{0} \\
\text{4. Assemble domain matrix} & \mathbf{V}_{k} = \frac{w_{k}}{\lVert w_{k} \rVert_{2}} & \mathbf{U}_{k} = \frac{w_{k}}{\lVert w_{k} \rVert_{2}} \\
\text{5. Compute complementary domain matrix} & \mathbf{U}_{k} = \sigma_{k}^{-1} \mathbf{A} \, \mathbf{V}_{k} & \mathbf{V}_{k} = \sigma_{k}^{-1} \mathbf{A}^{*} \mathbf{U}_{k} \\
\end{array}
$$

The step highlighted in $\color{blue}{\text{blue}}$ is the step where we must pick a sign (or phase); this is where the ambiguity enters.

The following examples compute the SVD by (I) resolving the row space first and then (II) resolving the column space first. The decompositions are expressed in terms of complex phase factors, and so represent the most general case of sign conventions.

Notice that life is easier when we work with the smaller of the two product matrices $\mathbf{A}\mathbf{A}^{*}$ and $\mathbf{A}^{*}\mathbf{A}$; here that is the $2\times 2$ matrix $\mathbf{A}\,\mathbf{A}^{*}$.
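
Before the worked examples, here is a minimal programmatic sketch of the column-space-first recipe, using numpy; the helper name `svd_column_space_first` is just for illustration and is not standard.

```python
import numpy as np

def svd_column_space_first(A, tol=1e-12):
    """Thin SVD via the eigensystem of W = A A* (illustrative sketch)."""
    W = A @ A.conj().T                      # step 1: product matrix (m x m)
    lam, Q = np.linalg.eigh(W)              # steps 2-3: eigenvalues and eigenvectors
    order = np.argsort(lam)[::-1]           # sort eigenvalues in decreasing order
    lam, Q = lam[order], Q[:, order]
    keep = lam > tol                        # discard zero eigenvalues
    sigma = np.sqrt(lam[keep])              # singular values
    U = Q[:, keep]                          # step 4: domain matrix U
    V = (A.conj().T @ U) / sigma            # step 5: complementary matrix, V_k = A* U_k / sigma_k
    return U, sigma, V

A = np.array([[3.0, 1.0, 2.0],
              [-2.0, 1.0, 3.0]])
U, sigma, V = svd_column_space_first(A)
print(np.round(sigma**2, 6))                    # [15. 13.]
print(np.allclose((U * sigma) @ V.conj().T, A)) # True
```

Whatever sign `eigh` happens to return for each eigenvector of $\mathbf{W}$ is inherited by the corresponding column of $\mathbf{V}$ in step 5, so the reconstruction is unaffected; this is precisely the ambiguity of the blue step.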


## Example I: Row space first

  1. Construct product matrix $$ \mathbf{W} = \mathbf{A}^{*} \, \mathbf{A} = \left[ \begin{array}{ccc} 13 & 1 & 0 \\ 1 & 2 & 5 \\ 0 & 5 & 13 \\ \end{array} \right] $$

  2. Solve for eigenvalues

The characteristic polynomial is computed using $$ p(\lambda) = \det \left( \mathbf{W} - \lambda \mathbf{I}_{3}\right) = \det \left[ \begin{array}{ccc} 13-\lambda & 1 & 0 \\ 1 & 2-\lambda & 5 \\ 0 & 5 & 13-\lambda \\ \end{array} \right] $$ Solve for the determinant by computing the minors: $$ \left| \begin{array}{ccc} 13-\lambda & 1 & 0 \\ 1 & 2-\lambda & 5 \\ 0 & 5 & 13-\lambda \\ \end{array} \right| = % \boxed{13-\lambda} \left| \begin{array}{cc} 2-\lambda & 5 \\ 5 & 13-\lambda \\ \end{array} \right| % -\boxed{1} \left| \begin{array}{cc} 1 & 5 \\ 0 & 13-\lambda \\ \end{array} \right| % +\boxed{0} \left| \begin{array}{cc} 1 & 2-\lambda \\ 0 & 5 \\ \end{array} \right| $$ The characteristic polynomial is $$ p \left( \lambda \right) = -\lambda ^3+28 \lambda ^2-195 \lambda = -\lambda \left( \lambda - 13 \right) \left( \lambda - 15 \right) $$

The eigenvalue spectrum is $$ \lambda\left( \mathbf{W} \right) = \left\{ 0, 13, 15 \right\} $$ The truncated, ordered eigenvalue spectrum $$ \tilde{\lambda} \left( \mathbf{W} \right) = \left\{ 15, 13 \right\} $$ is the foundation for the singular values: $$ \sigma = \sqrt{\tilde{\lambda}} $$ The matrix of singular values, $$ \mathbf{S} = \left[ \begin{array}{cc} \sqrt{15} & 0 \\ 0 & \sqrt{13} \\ \end{array} \right], $$ is embedded in the sabot matrix: $$ \Sigma = \left[ \begin{array}{cc} \mathbf{S} & \mathbf{0} \end{array} \right] = \left[ \begin{array}{cc|c} \sqrt{15} & 0 & 0 \\ 0 & \sqrt{13} & 0 \\ \end{array} \right] $$
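
As a quick cross-check of this factorization (a numpy sketch, not part of the hand computation):

```python
import numpy as np

W = np.array([[13.0, 1.0, 0.0],
              [1.0,  2.0, 5.0],
              [0.0,  5.0, 13.0]])

# roots of p(lambda) = -lambda (lambda - 13)(lambda - 15)
print(np.sort(np.linalg.eigvalsh(W)))   # approximately [ 0. 13. 15.]
print(np.sqrt([15.0, 13.0]))            # singular values sqrt(15), sqrt(13)
```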

  3. $\color{blue}{\text{Solve for eigenvectors}}$

First eigenvector

$$ \begin{align} \left( \mathbf{W} - \lambda_{1} \mathbf{I}_{3} \right) w_{1} &= \mathbf{0} \\ % \left[ \begin{array}{rrr} -2 & 1 & 0 \\ 1 & -13 & 5 \\ 0 & 5 & -2 \\ \end{array} \right] % \left[ \begin{array}{rr} w_{x} \\ w_{y} \\ w_{z} \\ \end{array} \right] % &= % \left[ \begin{array}{c} 0 \\ 0 \\ 0 \\ \end{array} \right] % \end{align} $$ The general solution is $$ \left[ \begin{array}{c} w_{x} \\ w_{y} \\ w_{z} \\ \end{array} \right] = e^{i \theta_{1}} \left[ \begin{array}{c} 1 \\ 2 \\ 5 \\ \end{array} \right] $$ with $0 \le \theta_{k} \le 2\pi$. This is the general phase angle.

The normalized vector is the first column vector in the domain matrix $$ \mathbf{V}_{1} = \frac{e^{i \theta_{1}}}{\sqrt{30}} \left[ \begin{array}{c} 1 \\ 2 \\ 5 \\ \end{array} \right] $$

Second eigenvector

$$ \begin{align} \left( \mathbf{W} - \lambda_{2} \mathbf{I}_{3} \right) w_{2} &= \mathbf{0} \\ % \left[ \begin{array}{crr} 0 & 1 & 0 \\ 1 & -11 & 5 \\ 0 & 5 & 0 \\ \end{array} \right] % \left[ \begin{array}{rr} w_{x} \\ w_{y} \\ w_{z} \\ \end{array} \right] % &= % \left[ \begin{array}{c} 0 \\ 0 \\ 0 \\ \end{array} \right] % \end{align} $$ The general solution is $$ \left[ \begin{array}{c} w_{x} \\ w_{y} \\ w_{z} \\ \end{array} \right] = e^{i \theta_{2}} \left[ \begin{array}{r} -5 \\ 0 \\ 1 \\ \end{array} \right] $$

For the purposes of the thin SVD, we are done as we have found the $\rho=2$ eigenvectors. However, we can compute the null space term by solving for the eigenvector of the $0$ eigenvalue.

Third eigenvector

$$ \begin{align} \left( \mathbf{W} - \lambda_{3} \mathbf{I}_{3} \right) w_{3} &= \mathbf{0} \\ % \left[ \begin{array}{ccc} 13 & 1 & 0 \\ 1 & 2 & 5 \\ 0 & 5 & 13 \\ \end{array} \right] % \left[ \begin{array}{rr} w_{x} \\ w_{y} \\ w_{z} \\ \end{array} \right] % &= % \left[ \begin{array}{c} 0 \\ 0 \\ 0 \\ \end{array} \right] % \end{align} $$ The general solution is $$ \left[ \begin{array}{c} w_{x} \\ w_{y} \\ w_{z} \\ \end{array} \right] = e^{i \theta_{3}} \left[ \begin{array}{r} 1 \\ -13 \\ 5 \\ \end{array} \right] $$

This normalized vector is the third and final column vector in the domain matrix $$ \mathbf{V}_{3} = \frac{e^{i \theta_{3}}}{\sqrt{195}} \color{gray}{\left[ \begin{array}{r} 1 \\ -13 \\ 5 \\ \end{array} \right]} $$ Null space vectors are shaded in gray.

  4. Assemble domain matrix

$$ \mathbf{V} = \left[ \begin{array}{cc} % c1 \frac{e^{i \theta_{1}}}{\sqrt{30}} \left[ \begin{array}{c} 1 \\ 2 \\ 5 \\ \end{array} \right] % c2 \frac{e^{i \theta_{2}}}{\sqrt{26}} \left[ \begin{array}{r} -5 \\ 0 \\ 1 \\ \end{array} \right] % c3 \frac{e^{i \theta_{3}}}{\sqrt{195}} \color{gray}{\left[ \begin{array}{r} 1 \\ -13 \\ 5 \\ \end{array} \right]} % \end{array} \right] $$

  5. Compute columns of $\mathbf{U}$

$$ \mathbf{U}_{1} = \sigma^{-1}_{1} \mathbf{A} \mathbf{V}_{1} = \frac{e^{i \theta_{1}}} {\sqrt{2}} \left[ \begin{array}{r} 1 \\ 1 \\ \end{array} \right] $$

$$ \mathbf{U}_{2} = \sigma^{-1}_{2} \mathbf{A} \mathbf{V}_{2} = \frac{e^{i \theta_{2}}} {\sqrt{2}} \left[ \begin{array}{r} -1 \\ 1 \\ \end{array} \right] $$

We're done. The singular value decomposition is $$ \begin{align} \mathbf{A} &= \mathbf{U} \, \Sigma \, \mathbf{V}^{*} \\ \\ & = % U \underbrace{ \frac{1}{\sqrt{2}} \left[ \begin{array}{cc} % c1 e^{i \theta_{1}} \left[ \begin{array}{c} 1\\ 1\\ \end{array} \right] % c2 e^{i \theta_{2}} \left[ \begin{array}{r} -1\\ 1\\ \end{array} \right] % \end{array} \right]}_{\mathbf{U}} % Sigma \underbrace{ \left[ \begin{array}{cc|c} \sqrt{15} & 0 & 0 \\ 0 & \sqrt{13} & 0 \\ \end{array} \right]}_{\Sigma} % V* \underbrace{ \left[ \begin{array}{cc} % c1 \frac{e^{i \theta_{1}}} {\sqrt{30}} \left[ \begin{array}{r} 1 \\ 2 \\ 5 \\ \end{array} \right] % c2 \frac{e^{i \theta_{2}}} {\sqrt{26}} \left[ \begin{array}{r} -5 \\ 0 \\ 1 \\ \end{array} \right] \color{gray}{ \frac{e^{i \theta_{3}}}{\sqrt{195}} \left[ \begin{array}{r} 1 \\ -13 \\ 5 \\ \end{array} \right]} % \end{array} \right] ^{*} }_{\mathbf{V}} % A \\[3pt] & = \left[ \begin{array}{rcc} 3 & 1 & 2 \\ -2 & 1 & 3 \\ \end{array} \right] % \end{align} $$
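
Taking $\theta_{1}=\theta_{2}=\theta_{3}=0$ gives one concrete member of this family of decompositions; a minimal numpy check that the columns of $\mathbf{V}$ are orthonormal and that the product reproduces $\mathbf{A}$:

```python
import numpy as np

A = np.array([[3.0, 1.0, 2.0],
              [-2.0, 1.0, 3.0]])
U = np.array([[1.0, -1.0],
              [1.0,  1.0]]) / np.sqrt(2)
Sigma = np.array([[np.sqrt(15), 0.0, 0.0],
                  [0.0, np.sqrt(13), 0.0]])
V = np.column_stack([
    np.array([1.0, 2.0, 5.0]) / np.sqrt(30),
    np.array([-5.0, 0.0, 1.0]) / np.sqrt(26),
    np.array([1.0, -13.0, 5.0]) / np.sqrt(195),   # null space column, shown gray above
])

print(np.allclose(V.T @ V, np.eye(3)))   # True: V is unitary (here real orthogonal)
print(np.allclose(U @ Sigma @ V.T, A))   # True: U Sigma V* reproduces A
```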


## Example II: Column space first
  1. Construct product matrix $$ \mathbf{W} = \mathbf{A} \, \mathbf{A}^{*} = \left[ \begin{array}{cc} 14 & 1 \\ 1 & 14 \\ \end{array} \right] $$

  2. Solve for eigenvalues

The eigenvalues are the roots of the characteristic polynomial $$ p \left( \lambda \right) = \lambda^{2} - \lambda \, \text{tr }\mathbf{W} + \det \mathbf{W} $$ The trace and determinant are $$ \text{tr }\mathbf{W} = 28, \qquad \det \mathbf{W} = 195 $$ The eigenvalue spectrum is $$ \lambda \left( \mathbf{W} \right) = \tilde{\lambda} \left( \mathbf{W} \right) = \left\{ 15, 13 \right\}, $$ already in a form suitable to compute the singular values: $$ \sigma = \sqrt{\tilde{\lambda}} $$ The matrix of singular values is $$ \mathbf{S} = \left[ \begin{array}{cc} \sqrt{15} & 0 \\ 0 & \sqrt{13} \\ \end{array} \right], $$ and is embedded in the sabot matrix like so: $$ \Sigma = \left[ \begin{array}{cc} \mathbf{S} & \mathbf{0} \end{array} \right] = \left[ \begin{array}{cc|c} \sqrt{15} & 0 & 0 \\ 0 & \sqrt{13} & 0 \\ \end{array} \right] $$
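
Equivalently, the eigenvalues follow from the quadratic formula applied to $\lambda^{2} - 28\lambda + 195$; a two-line numpy check:

```python
import numpy as np

tr, det = 28.0, 195.0
disc = np.sqrt(tr**2 - 4 * det)             # discriminant of lambda^2 - tr*lambda + det
print((tr + disc) / 2, (tr - disc) / 2)     # 15.0 13.0
```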

  3. $\color{blue}{\text{Solve for eigenvectors}}$

First eigenvector

$$ \begin{align} \left( \mathbf{W} - \lambda_{1} \mathbf{I}_{2} \right) w_{1} &= \mathbf{0} \\ % \left[ \begin{array}{rr} -1 & 1 \\ 1 & -1 \\ \end{array} \right] % \left[ \begin{array}{rr} w_{x} \\ w_{y} \\ \end{array} \right] % &= % \left[ \begin{array}{rr} 0 \\ 0 \\ \end{array} \right] % \end{align} $$ The general solution is $$ \left[ \begin{array}{rr} w_{x} \\ w_{y} \\ \end{array} \right] = e^{i \phi_{1}} \left[ \begin{array}{rr} 1 \\ 1 \\ \end{array} \right] $$ with $0 \le \phi_{k} \le 2\pi$. This is the juncture where one picks a sign; instead, we leave the general phase.

The normalized vector is the first column vector in the domain matrix $$ \mathbf{U}_{1} = \frac{e^{i \phi_{1}}}{\sqrt{2}} \left[ \begin{array}{rr} 1 \\ 1 \\ \end{array} \right] $$

Second eigenvector

$$ \begin{align} \left( \mathbf{W} - \lambda_{2} \mathbf{I}_{2} \right) w_{2} &= \mathbf{0} \\ % \left[ \begin{array}{cc} 1 & 1 \\ 1 & 1 \\ \end{array} \right] % \left[ \begin{array}{rr} w_{x} \\ w_{y} \\ \end{array} \right] % &= % \left[ \begin{array}{rr} 0 \\ 0 \\ \end{array} \right] % \end{align} $$ The general solution is $$ \left[ \begin{array}{rr} w_{x} \\ w_{y} \\ \end{array} \right] = e^{i \phi_{2}} \left[ \begin{array}{rr} -1 \\ 1 \\ \end{array} \right] $$ The minus sign could go in the top entry, as shown, or in the bottom entry.

The normalized vector is the second column vector in the domain matrix $$ \mathbf{U}_{2} = \frac{e^{i \phi_{2}}}{\sqrt{2}} \left[ \begin{array}{r} -1 \\ 1 \\ \end{array} \right] $$

  4. Assemble domain matrix

$$ \mathbf{U} = \frac{1}{\sqrt{2}} \left[ \begin{array}{cc} % c1 e^{i \phi_{1}} \left[ \begin{array}{c} 1\\ 1\\ \end{array} \right] % c2 e^{i \phi_{2}} \left[ \begin{array}{r} -1\\ 1\\ \end{array} \right] % \end{array} \right] $$

  5. Compute columns of $\mathbf{V}$

$$ \mathbf{V}_{1} = \sigma^{-1}_{1} \mathbf{A}^{*} \mathbf{U}_{1} = \frac{e^{i \phi_{1}}} {\sqrt{30}} \left[ \begin{array}{r} 1 \\ 2 \\ 5 \\ \end{array} \right] $$

$$ \mathbf{V}_{2} = \sigma^{-1}_{2} \mathbf{A}^{*} \mathbf{U}_{2} = \frac{e^{i \phi_{2}}} {\sqrt{26}} \left[ \begin{array}{r} -5 \\ 0 \\ 1 \\ \end{array} \right] $$

The third and final column for $\mathbf{V}$ is in the null space $\mathcal{N}(\mathbf{A})$. One way to compute this vector is to use the cross product (available here because the row-space vectors are real vectors in $\mathbb{R}^{3}$): $$ \left[ \begin{array}{c} 1 \\ 2 \\ 5 \\ \end{array} \right] \times \left[ \begin{array}{r} -5 \\ 0 \\ 1 \\ \end{array} \right] = 2\left[ \begin{array}{r} 1 \\ -13 \\ 5 \\ \end{array} \right] $$ The third and final vector is the normalized version $$ \color{gray}{\mathbf{V}_{3}} = \color{gray}{ \frac{1}{\sqrt{195}} \left[ \begin{array}{r} 1 \\ -13 \\ 5 \\ \end{array} \right]} $$ The lighter shading is a reminder that this vector lives in the null space.
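
The same null-space vector can be produced numerically; a small numpy sketch using the cross product:

```python
import numpy as np

v1 = np.array([1.0, 2.0, 5.0])
v2 = np.array([-5.0, 0.0, 1.0])
n = np.cross(v1, v2)                 # [  2. -26.  10.] = 2 * (1, -13, 5)
V3 = n / np.linalg.norm(n)           # normalized null-space column
print(V3 * np.sqrt(195))             # [  1. -13.   5.]
print(np.array([[3.0, 1.0, 2.0], [-2.0, 1.0, 3.0]]) @ V3)   # ~[0. 0.]: V3 lies in N(A)
```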

The singular value decomposition is $$ \begin{align} \mathbf{A} &= \mathbf{U} \, \Sigma \, \mathbf{V}^{*} \\ & = % U \underbrace{ \frac{1}{\sqrt{2}} \left[ \begin{array}{cc} % c1 e^{i \phi_{1}} \left[ \begin{array}{c} 1\\ 1\\ \end{array} \right] % c2 e^{i \phi_{2}} \left[ \begin{array}{r} -1\\ 1\\ \end{array} \right] % \end{array} \right]}_{\mathbf{U}} % Sigma \underbrace{ \left[ \begin{array}{cc|c} \sqrt{15} & 0 & 0 \\ 0 & \sqrt{13} & 0 \\ \end{array} \right]}_{\Sigma} % V* \underbrace{ \left[ \begin{array}{cc} % c1 \frac{e^{i \phi_{1}}} {\sqrt{30}} \left[ \begin{array}{r} 1 \\ 2 \\ 5 \\ \end{array} \right] % c2 \frac{e^{i \phi_{2}}} {\sqrt{26}} \left[ \begin{array}{r} -5 \\ 0 \\ 1 \\ \end{array} \right] \color{gray}{ \frac{1}{\sqrt{195}} \left[ \begin{array}{r} 1 \\ -13 \\ 5 \\ \end{array} \right]} % \end{array} \right] ^{*} }_{\mathbf{V}} % A \\[3pt] & = \left[ \begin{array}{rcc} 3 & 1 & 2 \\ -2 & 1 & 3 \\ \end{array} \right] \end{align} \tag{1} $$
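
A library SVD returns the same factors up to the sign/phase freedom discussed above; a minimal numpy comparison (any sign flips in the rows of `Vh` are exactly the blue-step ambiguity):

```python
import numpy as np

A = np.array([[3.0, 1.0, 2.0],
              [-2.0, 1.0, 3.0]])
U, s, Vh = np.linalg.svd(A)          # full SVD: U is 2x2, Vh is 3x3
print(np.round(s**2, 6))             # [15. 13.]
print(np.round(Vh * np.sqrt([[30], [26], [195]]), 6))
# rows are +/-(1, 2, 5), +/-(-5, 0, 1), +/-(1, -13, 5)
```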



## Error in post

The null space vector of a valid SVD is unique up to the general phase: for $\mathbf{U}\,\Sigma\,\mathbf{V}^{*}$ to be an SVD, $\mathbf{V}$ must be unitary, so its third column must be a unit vector orthogonal to $\mathbf{V}_{1}$ and $\mathbf{V}_{2}$, which forces it to be $e^{i\theta_{3}}\mathbf{V}_{3}$ as computed above. The vector cited in the question fails this test. It nonetheless reproduces $\mathbf{A}$, because the third column of $\mathbf{V}$ is multiplied by the zero column of $\Sigma$ and never enters the product -- which is why the substitution appeared to work.

For example, when $a=1$ the candidate third column is $\tilde{\mathbf{V}}_{3} = \frac{1}{\sqrt{611}} \left[ \begin{array}{r} 21 \\ -13 \\ 1 \\ \end{array} \right]$, and $$ \mathbf{V}_{2}^{*} \, \tilde{\mathbf{V}}_{3} = \frac{1}{\sqrt{26}\,\sqrt{611}} \Big( (-5)(21) + (0)(-13) + (1)(1) \Big) = \frac{-104}{\sqrt{15886}} \ne 0, $$ so $\mathbf{V}^{*} \mathbf{V} \ne \mathbf{I}_{3}$: the columns are not orthonormal, and $\mathbf{U}\,\Sigma\,\mathbf{V}^{*}$ is not a singular value decomposition of $\mathbf{A}$, even though the product still evaluates to $\mathbf{A}$.
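
A numerical restatement of this point (a minimal numpy sketch): with the question's $v_{3}$, the product still evaluates to $\mathbf{A}$ because the zero column of $\Sigma$ annihilates the third column of $\mathbf{V}$, but $\mathbf{V}$ is no longer unitary, so the factorization is not an SVD.

```python
import numpy as np

A = np.array([[3.0, 1.0, 2.0],
              [-2.0, 1.0, 3.0]])
U = np.array([[1.0, -1.0],
              [1.0,  1.0]]) / np.sqrt(2)
Sigma = np.array([[np.sqrt(15), 0.0, 0.0],
                  [0.0, np.sqrt(13), 0.0]])
V_bad = np.column_stack([
    np.array([1.0, 2.0, 5.0]) / np.sqrt(30),
    np.array([-5.0, 0.0, 1.0]) / np.sqrt(26),
    np.array([21.0, -13.0, 1.0]) / np.sqrt(611),   # the question's v3 (a = 1)
])

print(np.allclose(U @ Sigma @ V_bad.T, A))       # True: the product still gives A
print(np.allclose(V_bad.T @ V_bad, np.eye(3)))   # False: V is not unitary
```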

Solution 2:

The SVD is not unique (see e.g. here, here, or here).

Quoting Wikipedia:

> Even if all singular values are nonzero, if m > n then the cokernel is nontrivial, in which case U is padded with m − n orthogonal vectors from the cokernel. Conversely, if m < n, then V is padded by n − m orthogonal vectors from the kernel. However, if the singular value of 0 exists, the extra columns of U or V already appear as left or right singular vectors.

In your case, $m = 2 < n = 3$, so $V$ is padded with an extra vector from the kernel. That extra column is multiplied by the zero column of $\Sigma$, so it never affects the product $U \, \Sigma \, V^{*}$ -- which is why your incorrect $v_3$ still appeared to work, even though only the kernel vector gives a valid SVD.
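
To see the padding explicitly, one can ask a library SVD for the full $V$ and check that the extra column spans the kernel $\mathcal{N}(A)$; a minimal numpy sketch:

```python
import numpy as np

A = np.array([[3.0, 1.0, 2.0],
              [-2.0, 1.0, 3.0]])
U, s, Vh = np.linalg.svd(A, full_matrices=True)
v3 = Vh[2]                                   # the padding column of V (third row of Vh)
print(np.round(v3 * np.sqrt(195), 6))        # +/- [1. -13. 5.]
print(np.allclose(A @ v3, 0))                # True: v3 lies in the kernel of A
```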