Solution 1:

Of course you can have a non-unique solution when $A$ has a non-trivial null space. The point of the least squares solution is to find the orthogonal projection of $b$ onto the image (column space) of $A$. When the columns of $A$ are linearly dependent, there is always more than one solution, in fact infinitely many.
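A minimal NumPy sketch of this point (the matrix and data are illustrative assumptions, not from the question): the two columns below are linearly dependent, so adding any null-space vector to a least squares solution leaves the residual unchanged.

```python
import numpy as np

# A rank-deficient matrix: the second column copies the first,
# so the columns are linearly dependent and N(A) is non-trivial.
A = np.array([[1.0, 1.0],
              [2.0, 2.0],
              [3.0, 3.0]])
b = np.array([1.0, 0.0, 1.0])

# Minimum-norm least squares solution via the pseudoinverse.
x_min = np.linalg.pinv(A) @ b

# Any vector in N(A) can be added without changing the residual.
z = np.array([1.0, -1.0])          # A @ z = 0
x_other = x_min + 5.0 * z

print(np.linalg.norm(A @ x_min - b))    # same residual ...
print(np.linalg.norm(A @ x_other - b))  # ... for a different solution
```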

Solution 2:

Your theorem statement is incomplete. Requirements have been omitted.

To amplify the insights of @Troy Woo: given a matrix $\mathbf{A}\in\mathbb{C}^{m \times n}$, a solution vector $x\in\mathbb{C}^{n}$, and a data vector $b\in\mathbb{C}^{m}$ such that $b\notin\mathcal{N}(\mathbf{A}^{*})$, where $m, n\in\mathbb{N}$, the least squares solutions of the linear system $$ \mathbf{A} x = b $$ can be expressed in terms of the Moore-Penrose pseudoinverse $\mathbf{A}^{\dagger}$: $$ x_{LS} = \mathbf{A}^{\dagger}b + \left(\mathbf{I}_{n} - \mathbf{A}^{\dagger}\mathbf{A} \right) y, $$ where $y\in\mathbb{C}^{n}$ is an arbitrary vector.

If the matrix rank $\rho < n$, the null space $\mathcal{N}\left(\mathbf{A}\right)$ is non-trivial and the projection operator $\left(\mathbf{I}_{n} - \mathbf{A}^{\dagger}\mathbf{A} \right)$, which projects onto $\mathcal{N}\left(\mathbf{A}\right)$, is non-zero.
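A quick numerical check of the general formula (NumPy; the rank-1 matrix and random data are illustrative assumptions): when $\rho < n$ the projector $\mathbf{I}_{n} - \mathbf{A}^{\dagger}\mathbf{A}$ is non-zero, and every choice of $y$ produces a least squares solution with the same residual norm.

```python
import numpy as np

rng = np.random.default_rng(0)

# An illustrative rank-1 matrix (m=3, n=4), so rho = 1 < n
# and N(A) has dimension n - rho = 3.
A = np.outer(rng.standard_normal(3), rng.standard_normal(4))
b = rng.standard_normal(3)

A_pinv = np.linalg.pinv(A)          # Moore-Penrose pseudoinverse
P_null = np.eye(4) - A_pinv @ A     # orthogonal projector onto N(A)

print(np.linalg.matrix_rank(A))     # 1
print(np.linalg.norm(P_null))       # non-zero, since rho < n

# Every choice of y gives a least squares solution with the same residual norm.
for _ in range(3):
    y = rng.standard_normal(4)
    x_ls = A_pinv @ b + P_null @ y
    print(np.linalg.norm(A @ x_ls - b))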

Example

The linear system $$ \begin{align} \mathbf{A} x &= b \\ \left[ \begin{array}{cc} 1 & 0 \end{array} \right] \left[ \begin{array}{c} x_{1} \\ x_{2} \end{array} \right] &= \left[ \begin{array}{c} b_{1} \end{array} \right] \end{align} $$ has the least squares solution $$ \begin{align} x_{LS} &= \mathbf{A}^{\dagger} b + \left( \mathbf{I}_{2} - \mathbf{A}^{\dagger} \mathbf{A}\right) y \\ &= \left[ \begin{array}{c} b_{1} \\ 0 \end{array} \right] + \alpha \left[ \begin{array}{c} 0 \\ 1 \end{array} \right] \end{align} $$ with $\alpha \in \mathbb{C}$.

The affine space of solutions satisfies $$ \mathbf{A} \left( \left[ \begin{array}{c} b_{1} \\ 0 \end{array} \right] + \alpha \left[ \begin{array}{c} 0 \\ 1 \end{array} \right] \right) = \mathbf{A} \left[ \begin{array}{c} b_{1} \\ 0 \end{array} \right] + \alpha \mathbf{A} \left[ \begin{array}{c} 0 \\ 1 \end{array} \right] = \mathbf{A} \left[ \begin{array}{c} b_{1} \\ 0 \end{array} \right]. $$

The solution vector of least norm minimizes $$\Bigg\lVert \left[ \begin{array}{c} b_{1} \\ 0 \end{array} \right] + \alpha \left[ \begin{array}{c} 0 \\ 1 \end{array} \right] \Bigg\rVert_{2}^{2} = \lvert b_{1} \rvert^{2} + \lvert \alpha \rvert^{2},$$ which corresponds to $\alpha = 0$.
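The whole example can be checked numerically; here is a sketch with NumPy, where the value of $b_{1}$ is an arbitrary illustrative choice. The pseudoinverse solution is exactly the $\alpha = 0$ member of the affine solution set, and any non-zero $\alpha$ leaves $\mathbf{A}x$ unchanged while increasing the norm.

```python
import numpy as np

A = np.array([[1.0, 0.0]])    # the 1x2 matrix from the example
b1 = 7.0                       # arbitrary illustrative value
b = np.array([b1])

# The pseudoinverse gives the minimum-norm least squares solution,
# i.e. the alpha = 0 member of the affine solution set.
x_min = np.linalg.pinv(A) @ b
print(x_min)                   # [7. 0.]

# Adding alpha * [0, 1] leaves A @ x unchanged but increases ||x||.
for alpha in (0.0, 1.0, 2.0):
    x = x_min + alpha * np.array([0.0, 1.0])
    print(A @ x, np.linalg.norm(x))
```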