Method to reverse a Kronecker product
Solution 1:
This problem (reversing a Kronecker product) has a known solution, called the "Nearest Kronecker Product" problem, which is formulated for matrices in general.
Given $A\in \mathbb R^{m\times n}$ with $m = m_1m_2$ and $n = n_1n_2$, find $B\in \mathbb R^{m_1\times n_1}$ and $C\in \mathbb R^{m_2\times n_2}$ that minimize
$$\phi(B,C) = \| A- B\otimes C\|_F,$$
where $\|\cdot\|_F$ denotes the Frobenius norm.
This is reformulated as minimizing
$$\phi(B,C) = \| R- \mathrm{vec}(B)\,\mathrm{vec}(C)^T\|_F.$$
Here $\mathrm{vec}$ is the vectorization operator, which stacks the columns of a matrix on top of each other, and $A$ is rearranged into $R \in \mathbb R^{m_1n_1\times m_2n_2}$ such that the sum of squares in $\| A- B\otimes C\|_F$ is exactly the same as in $\| R- \mathrm{vec}(B)\,\mathrm{vec}(C)^T\|_F$.
Example of the rearrangement, for $m_1=3,\ n_1=m_2=n_2=2$:
$$ \phi(B,C) = \left\| \left[ \begin{array}{cc|cc} a_{11}& a_{12} & a_{13} & a_{14} \\ a_{21}& a_{22} & a_{23} & a_{24} \\ \hline a_{31}& a_{32} & a_{33} & a_{34} \\ a_{41}& a_{42} & a_{43} & a_{44} \\ \hline a_{51}& a_{52} & a_{53} & a_{54} \\ a_{61}& a_{62} & a_{63} & a_{64} \end{array} \right] - \begin{bmatrix} b_{11}& b_{12} \\ b_{21}& b_{22} \\ b_{31}& b_{32} \end{bmatrix} \otimes \begin{bmatrix} c_{11}& c_{12} \\ c_{21}& c_{22} \end{bmatrix} \right\|_F \\ \phi(B,C) = \left\| \begin{bmatrix} a_{11}& a_{21} & a_{12} & a_{22} \\ \hline a_{31}& a_{41} & a_{32} & a_{42} \\ \hline a_{51}& a_{61} & a_{52} & a_{62} \\ \hline a_{13}& a_{23} & a_{14} & a_{24} \\ \hline a_{33}& a_{43} & a_{34} & a_{44} \\ \hline a_{53}& a_{63} & a_{54} & a_{64} \end{bmatrix} - \begin{bmatrix} b_{11} \\ b_{21} \\ b_{31} \\ b_{12} \\ b_{22} \\ b_{32} \end{bmatrix} \begin{bmatrix} c_{11}&c_{21} & c_{12} & c_{22} \end{bmatrix} \right\|_F $$
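To make the rearrangement concrete, here is a minimal NumPy sketch of it (the function name `rearrange` and the loop structure are my own choices; the block ordering follows the example above):

```python
import numpy as np

def rearrange(A, m1, n1, m2, n2):
    """Build R from A so that ||A - kron(B, C)||_F == ||R - vec(B) vec(C)^T||_F.

    A is partitioned into m1 x n1 blocks, each of size m2 x n2; row
    (j*m1 + i) of R is vec(A_ij)^T, with the blocks taken in
    column-major order, as in the 6 x 4 example above.
    """
    R = np.empty((m1 * n1, m2 * n2))
    for j in range(n1):            # block column index
        for i in range(m1):        # block row index
            block = A[i * m2:(i + 1) * m2, j * n2:(j + 1) * n2]
            R[j * m1 + i, :] = block.flatten(order="F")  # vec = column-major
    return R
```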
The problem has now turned into a rank-1 approximation problem for a rectangular matrix. The solution is given by the singular value decomposition $R = USV^T$ [1,2]: $$ \mathrm{vec}(B) = \sqrt{\sigma_1}\,u_1, \quad \mathrm{vec}(C) = \sqrt{\sigma_1}\,v_1 $$
If $R$ is a rank-1 matrix, the solution is exact, i.e. $A$ is fully separable [3].
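Putting the pieces together, here is a minimal sketch of the whole procedure, reusing the `rearrange` sketch above (`nearest_kronecker` is a hypothetical name, not a library routine):

```python
def nearest_kronecker(A, m1, n1, m2, n2):
    """Return B (m1 x n1) and C (m2 x n2) minimizing ||A - kron(B, C)||_F."""
    R = rearrange(A, m1, n1, m2, n2)
    U, S, Vt = np.linalg.svd(R)
    b = np.sqrt(S[0]) * U[:, 0]        # vec(B) = sqrt(sigma_1) u_1
    c = np.sqrt(S[0]) * Vt[0, :]       # vec(C) = sqrt(sigma_1) v_1
    # undo vec: reshape in column-major (Fortran) order
    return b.reshape((m1, n1), order="F"), c.reshape((m2, n2), order="F")

# Sanity check: an exact Kronecker product is recovered exactly
B0, C0 = np.random.randn(3, 2), np.random.randn(2, 2)
B, C = nearest_kronecker(np.kron(B0, C0), 3, 2, 2, 2)
assert np.allclose(np.kron(B, C), np.kron(B0, C0))
```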
[1] Golub G., Van Loan C. Matrix Computations. The Johns Hopkins University Press, 1996.
[2] Van Loan C., Pitsianis N. Approximation with Kronecker Products. Cornell University, Ithaca, NY, 1992.
[3] Genton M. G. Separable approximations of space–time covariance matrices. Environmetrics 2007; 18:681–695.
Solution 2:
First of all: there isn't going to be any unique answer for how to factorize such a vector. To take a very simple example,
$\begin{align} [0\;\;6\;\;0\;\;0] = \left\{\begin{array}{c} [6\;\;0] \otimes [0\;\;1] \\ \\ [2\;\;0] \otimes [0\;\;3] \\ \\ [1\;\;0] \otimes [0\;\;6] \end{array}\right.\end{align}$
so that any of the products on the right (as well as a continuum of others!) could be a valid answer.
We can remedy this situation only by generalizing your criterion for a factorization: require a function mapping any Kronecker product vector $\mathbf p$ to an output consisting of a scalar $s \in \mathbb C$ and two unit vectors $\mathbf v, \mathbf w \in \mathbb C^2$ such that $\mathbf p = s \, (\mathbf v \otimes \mathbf w)$. In order for $\mathbf v$ and $\mathbf w$ to be unique, we also have to fix the degree of freedom in their coefficients represented by the complex argument, by requiring that the first non-zero coefficient be a positive real. And if $s = 0$, we will fix $\mathbf v = \mathbf 0$ and $\mathbf w = \mathbf 0$ as well, rather than unit vectors. It's clumsy, but only with such conventions does it make sense to talk about a function at all.
Secondly, we need some way of identifying whether a vector $\mathbf p$ is a Kronecker product at all. There's a pretty simple way to do this, actually. Consider the definition of the Kronecker product, in terms of blocks:
$\mathbf v \otimes \mathbf w = [\;v_1 \mathbf w \;\;|\;\; v_2 \mathbf w \;\;|\;\; \cdots \;\;|\;\; v_m \mathbf w\;]$
where $\mathbf v \in \mathbb C^m$ and $\mathbf w \in \mathbb C^n$. If we took those blocks and made them separate rows, we'd get a matrix:
$\begin{bmatrix} v_1 \mathbf w \\ \hline v_2 \mathbf w \\ \hline \vdots \\ \hline v_m \mathbf w\end{bmatrix} = \mathbf v^\top \mathbf w$.
There's an isomorphism between the elements of $\mathbb C^m \otimes \mathbb C^n$ and $m \times n$ matrices; the mapping just shuffles the coefficients around. The Kronecker products, as we see, get mapped to outer products of vectors, and the salient thing about these matrices is that their rows are multiples of a common row vector (and similarly for the columns), by construction. To see whether a (non-zero) matrix is an outer product, it suffices to find out whether it has rank 1.

To find out what it is an outer product of, you can find any non-zero row $\mathbf r_\ast\,$, and find the coefficients of a vector $\mathbf q_\ast$ such that the matrix is equal to $\mathbf q_\ast^\top \mathbf r_\ast\,$. You can do the same for any Kronecker product vector; and to return a "canonical" result, you should compute unit vectors $\mathbf v \propto \mathbf q_\ast$ and $\mathbf w \propto \mathbf r_\ast$ (together with the scalar factor $s$). If you do this for the matrix, you get an answer for the Kronecker product as well.

From these remarks, it should be fairly clear that undoing a Kronecker product is in $\mathsf P$, as it amounts to simple computations on $m \times n$ matrices.
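Here is a sketch of this procedure in NumPy, under the conventions above (the name `split_kronecker` and the tolerance handling are my own choices; the rank test uses `np.linalg.matrix_rank`):

```python
import numpy as np

def split_kronecker(p, m, n, tol=1e-12):
    """Try to write p (length m*n) as s * kron(v, w), with unit vectors
    v, w whose first non-zero coefficient is a positive real; return
    None if p is not a Kronecker product."""
    M = p.reshape(m, n).astype(complex)   # row i of M is v_i * w
    if np.linalg.norm(p) < tol:           # s = 0: fix v = w = 0 as well
        return 0.0, np.zeros(m, complex), np.zeros(n, complex)
    if np.linalg.matrix_rank(M, tol=tol) > 1:
        return None                       # not an outer product
    r = M[np.argmax(np.linalg.norm(M, axis=1))]   # a non-zero row r*
    q = M @ r.conj() / (r @ r.conj())     # multipliers: M == outer(q, r)
    v, w = q / np.linalg.norm(q), r / np.linalg.norm(r)
    for u in (v, w):                      # fix the phase freedom
        j = np.flatnonzero(np.abs(u) > tol)[0]
        u *= np.abs(u[j]) / u[j]          # first non-zero entry -> positive real
    s = v.conj() @ M @ w.conj()           # now p == s * kron(v, w)
    return s, v, w
```

On the example above, `split_kronecker(np.array([0, 6, 0, 0]), 2, 2)` returns $s = 6$ with $\mathbf v = (1, 0)$ and $\mathbf w = (0, 1)$, the canonical form of all three factorizations listed.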
Solution 3:
I prefer to say that the Kronecker product of two vectors is reversible, but only up to a scale factor. In fact, Niel de Beaudrap has given the right answer; here I attempt to present it in a concise way.
Let $a\in\mathbb{R}^N$ and $b\in\mathbb{R}^M$ be two column vectors. The question is: given $a\otimes b$, how do we determine $a$ and $b$?
Note that $a\otimes b$ is nothing but $\mathrm{vec}(ba^T)$, where the $\mathrm{vec}$ operator reshapes a matrix into a column vector by stacking its columns, and is invertible. Therefore, given $c=a\otimes b$, first reshape it into the $M\times N$ matrix $C=ba^T$. Since $C$ is a rank-one matrix, you can simply decompose it (e.g. by SVD) to get $kb$ and $a/k$ for some unknown scalar $k$.
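A minimal sketch of this recipe (the function name `reverse_kron` is mine; the rank-one factors are taken from the leading SVD term):

```python
import numpy as np

def reverse_kron(c, N, M):
    """Given c = kron(a, b) with a of length N and b of length M, recover
    the factors up to a scale k, via the rank-one matrix C = b a^T."""
    C = c.reshape(M, N, order="F")        # undo vec (column-major reshape)
    U, S, Vt = np.linalg.svd(C)
    kb = np.sqrt(S[0]) * U[:, 0]          # k * b
    a_over_k = np.sqrt(S[0]) * Vt[0, :]   # a / k
    return a_over_k, kb

# The recovered factors reproduce the original vector, as the scale cancels
a, b = np.random.randn(4), np.random.randn(3)
a2, b2 = reverse_kron(np.kron(a, b), N=4, M=3)
assert np.allclose(np.kron(a2, b2), np.kron(a, b))
```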