Matrix factorization
Solution 1:
Short answer: your problem can be transformed into an equivalent diagonalization problem, except in a few degenerate and easier cases.
Longer answer: I’m sorry, but I don’t like your notation at all. I will write
$$ \left(\begin{array}{cc} {\mathbf{X_1}}& {\mathbf{X_2}}\\ {\mathbf{X_3}}& {\mathbf{X_4}}\end{array}\right) =\left(\begin{array}{cc} a{\mathbf I}& b{\mathbf I}\\ c{\mathbf I}& d{\mathbf I}\end{array}\right)\left(\begin{array}{cc}{\mathbf P_1}&\\&{\mathbf P_2}\end{array}\right)\left(\begin{array}{cc} e{\mathbf I}& f{\mathbf I}\\ g{\mathbf I}& h{\mathbf I}\end{array}\right) $$ instead of $$ \left(\begin{array}{cc}X_1&X_2\\X_3&X_4\end{array}\right) = \left(\begin{array}{cc}D_a&D_b\\D_c&D_d\end{array}\right)\left(\begin{array}{cc}P_1&\\&P_2\end{array}\right)\left(\begin{array}{cc}D'_a&D'_b\\D'_c&D'_d\end{array}\right) $$
We can then write
$$ \begin{array}{l} {\mathbf X_1}=(ae){\mathbf P_1}+(bg){\mathbf P_2} \\ {\mathbf X_2}=(af){\mathbf P_1}+(bh){\mathbf P_2} \\ {\mathbf X_3}=(ce){\mathbf P_1}+(dg){\mathbf P_2} \\ {\mathbf X_4}=(cf){\mathbf P_1}+(dh){\mathbf P_2} \\ \end{array} $$
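(If you want to check this expansion numerically, here is a small sketch with numpy; the block size $n=3$ and the random entries are arbitrary choices of mine, not part of the problem.)

```python
import numpy as np

n = 3                                   # block size, arbitrary
rng = np.random.default_rng(0)
a, b, c, d, e, f, g, h = rng.standard_normal(8)
P1, P2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
I, O = np.eye(n), np.zeros((n, n))

# the factorization written above, assembled as 2x2 block matrices
X = (np.block([[a * I, b * I], [c * I, d * I]])
     @ np.block([[P1, O], [O, P2]])
     @ np.block([[e * I, f * I], [g * I, h * I]]))

# each n x n block matches the formulas above
assert np.allclose(X[:n, :n], a * e * P1 + b * g * P2)   # X1
assert np.allclose(X[:n, n:], a * f * P1 + b * h * P2)   # X2
assert np.allclose(X[n:, :n], c * e * P1 + d * g * P2)   # X3
assert np.allclose(X[n:, n:], c * f * P1 + d * h * P2)   # X4
```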
So an obvious necessary condition for your problem to have a solution is that the vector space $\cal X$ spanned by $\mathbf{X_1},\mathbf{X_2},\mathbf{X_3},\mathbf{X_4}$ has dimension at most 2.
If the dimension of $\cal X$ is $0$, all the $\mathbf{X_i}$ are zero and we are done. If the dimension of $\cal X$ is $1$, all the $\mathbf{X_i}$ are multiples of a single matrix $\mathbf Z$: ${\mathbf{X_i}}=x_i{\mathbf{Z}}$ for all $1 \le i \le 4$. Then we have a solution as follows:
$$ \left(\begin{array}{cc} {\mathbf X_1}& {\mathbf X_2}\\ {\mathbf X_3}& {\mathbf X_4}\end{array}\right) =\left(\begin{array}{cc} x_1{\mathbf I}& x_2{\mathbf I}\\ x_3{\mathbf I}& x_4{\mathbf I}\end{array}\right)\left(\begin{array}{cc}{\mathbf Z}&\\&{\mathbf Z}\end{array}\right)\left(\begin{array}{cc} {\mathbf I}& \\ & {\mathbf I}\end{array}\right) $$
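(A quick numerical sanity check of this rank-one construction, with the same kind of arbitrary random data as before; the rightmost factor is just the identity, so it is dropped below.)

```python
import numpy as np

n = 3
rng = np.random.default_rng(1)
Z = rng.standard_normal((n, n))
x1, x2, x3, x4 = rng.standard_normal(4)
I, O = np.eye(n), np.zeros((n, n))

lhs = np.block([[x1 * Z, x2 * Z], [x3 * Z, x4 * Z]])      # the given X_i = x_i Z
rhs = (np.block([[x1 * I, x2 * I], [x3 * I, x4 * I]])
       @ np.block([[Z, O], [O, Z]]))                       # the proposed factorization
assert np.allclose(lhs, rhs)
```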
So we can assume that the dimension of $\cal X$ is exactly $2$. Then there are two indices $1 \le i <j \le 4$ such that $\lbrace \mathbf{X_i},\mathbf{X_j}\rbrace$ is a basis of $\cal X$. Since every $\mathbf{X_k}$ is a combination of $\mathbf{P_1}$ and $\mathbf{P_2}$, the span of $\lbrace \mathbf{P_1},\mathbf{P_2}\rbrace$ contains $\cal X$, hence equals $\cal X$, so $\lbrace \mathbf{P_1},\mathbf{P_2}\rbrace$ is another basis of $\cal X$ and there is a $2 \times 2$ matrix (let’s call it $M=\left(\begin{array}{cc}m_{11}&m_{12}\\m_{21}&m_{22}\end{array}\right)$) such that
$$ \left(\begin{array}{c} {\mathbf P_1}\\ {\mathbf P_2}\end{array}\right)=M\left(\begin{array}{c} {\mathbf X_i}\\ {\mathbf X_j}\end{array}\right) $$
Suppose for example that $i=1$ and $j=2$. Then, there are numbers $x_1,x_2,y_1,y_2$ such that
$$ \begin{array}{l} {\mathbf {X_3}}=x_1 {\mathbf {X_1}} + x_2 {\mathbf {X_2}} \\ {\mathbf {X_4}}=y_1 {\mathbf {X_1}} + y_2 {\mathbf {X_2}} \end{array} $$
Then the equality we want can be rewritten as
$$ \left(\begin{array}{cc} {\mathbf{X_1}}& {\mathbf{X_2}}\\ x_1 {\mathbf {X_1}} + x_2 {\mathbf {X_2}} & y_1 {\mathbf {X_1}} + y_2 {\mathbf {X_2}}\end{array}\right) =\left(\begin{array}{cc} a{\mathbf I}& b{\mathbf I}\\ c{\mathbf I}& d{\mathbf I}\end{array}\right)\left(\begin{array}{cc}m_{11}{\mathbf X_1}+m_{12}{\mathbf X_2}&\\&m_{21}{\mathbf X_1}+m_{22}{\mathbf X_2} \end{array}\right)\left(\begin{array}{cc} e{\mathbf I}& f{\mathbf I}\\ g{\mathbf I}& h{\mathbf I}\end{array}\right) $$
Since $\mathbf X_1$ and $\mathbf X_2$ are linearly independent, we may identify the coefficients of $\mathbf X_1$ and of $\mathbf X_2$ in each block; this yields:
$$ \left(\begin{array}{cc} 1 & 0\\ x_1 & y_1 \end{array}\right) =\left(\begin{array}{cc} a& b\\ c& d\end{array}\right)\left(\begin{array}{cc}m_{11}&\\&m_{21} \end{array}\right)\left(\begin{array}{cc} e& f\\ g& h\end{array}\right) $$
and
$$ \left(\begin{array}{cc} 0 & 1\\ x_2 & y_2 \end{array}\right) =\left(\begin{array}{cc} a& b\\ c& d\end{array}\right)\left(\begin{array}{cc}m_{12}&\\&m_{22} \end{array}\right)\left(\begin{array}{cc} e& f\\ g& h\end{array}\right) $$
If we put
$$ {\mathbf A}=\left(\begin{array}{cc} a& b\\ c& d\end{array}\right) \ \ {\text{and}} \ \ {\mathbf E}=\left(\begin{array}{cc} e& f\\ g& h\end{array}\right), $$
we deduce
$$ \left(\begin{array}{cc} 1 & 0\\ x_1 & y_1 \end{array}\right) ={\mathbf A}\left(\begin{array}{cc}m_{11}&\\&m_{21} \end{array}\right){\mathbf E} \ \ {\text{and}} \ \ \left(\begin{array}{cc} 0 & 1\\ x_2 & y_2 \end{array}\right) ={\mathbf A}\left(\begin{array}{cc}m_{12}&\\&m_{22} \end{array}\right){\mathbf E} $$
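(Here is a numerical illustration of this coefficient identification: starting from an arbitrary $\mathbf A$, $\mathbf E$, $M$ and independent $\mathbf{X_1},\mathbf{X_2}$, the $\mathbf{X_1}$- and $\mathbf{X_2}$-coordinates of the four blocks of the product reproduce exactly these two $2\times 2$ products. The least-squares coordinate extraction is my own device, not part of the argument.)

```python
import numpy as np

n, rng = 3, np.random.default_rng(2)
X1, X2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))  # generically independent
A = rng.standard_normal((2, 2))                                    # [[a, b], [c, d]]
E = rng.standard_normal((2, 2))                                    # [[e, f], [g, h]]
M = rng.standard_normal((2, 2))                                    # [[m11, m12], [m21, m22]]
P1 = M[0, 0] * X1 + M[0, 1] * X2
P2 = M[1, 0] * X1 + M[1, 1] * X2
I, O = np.eye(n), np.zeros((n, n))

# the right-hand side of the factorization (np.kron(A, I) is the block-scalar matrix)
prod = np.kron(A, I) @ np.block([[P1, O], [O, P2]]) @ np.kron(E, I)

def coords(Y):
    # coordinates of Y in the basis {X1, X2}, via least squares on vectorized matrices
    basis = np.stack([X1.ravel(), X2.ravel()], axis=1)
    return np.linalg.lstsq(basis, Y.ravel(), rcond=None)[0]

blocks = [prod[:n, :n], prod[:n, n:], prod[n:, :n], prod[n:, n:]]
co = np.array([coords(B) for B in blocks])    # shape (4, 2)
C1 = co[:, 0].reshape(2, 2)                   # X1-coefficients of the four blocks
C2 = co[:, 1].reshape(2, 2)                   # X2-coefficients of the four blocks

assert np.allclose(C1, A @ np.diag([M[0, 0], M[1, 0]]) @ E)
assert np.allclose(C2, A @ np.diag([M[0, 1], M[1, 1]]) @ E)
```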
If $x_2$ and $y_1$ are both nonzero, all the matrices above are invertible (and in particular, all the $m_{ij}$ are nonzero), and we can write
$$ \frac{1}{x_2}\left(\begin{array}{cc} -y_2 & 1\\ x_2 & 0 \end{array}\right) =\left(\begin{array}{cc} 0 & 1\\ x_2 & y_2 \end{array}\right)^{-1} ={\mathbf E}^{-1}\left(\begin{array}{cc}\frac{1}{m_{12}}&\\&\frac{1}{m_{22}} \end{array}\right){\mathbf A}^{-1} $$
and multiplying,
$$ \left(\begin{array}{cc} 1 & 0\\ x_1 & y_1 \end{array}\right)\frac{1}{x_2}\left(\begin{array}{cc} -y_2 & 1\\ x_2 & 0 \end{array}\right) ={\mathbf A}\left(\begin{array}{cc}m_{11}&\\&m_{21} \end{array}\right){\mathbf E} {\mathbf E}^{-1}\left(\begin{array}{cc}\frac{1}{m_{12}}&\\&\frac{1}{m_{22}} \end{array}\right){\mathbf A}^{-1} $$
which means that
$$ \frac{1}{x_2} \left(\begin{array}{cc} -y_2 & 1\\ -x_1y_2+x_2y_1 & x_1 \end{array}\right) ={\mathbf A}\left(\begin{array}{cc}\frac{m_{11}}{m_{12}}&\\&\frac{m_{21}}{m_{22}} \end{array}\right){\mathbf A}^{-1} $$
So the matrix $N=\left(\begin{array}{cc} -y_2 & 1\\ -x_1y_2+x_2y_1 & x_1 \end{array}\right)$ must be diagonalizable (the nonzero scalar factor $\frac{1}{x_2}$ does not affect this). Conversely, if $N$ is diagonalizable, the computations above can be reversed to find a solution to the initial problem.
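(In practice the criterion is easy to test: compute the coordinates $x_1,x_2,y_1,y_2$ of $\mathbf{X_3}$ and $\mathbf{X_4}$ in the basis $\lbrace\mathbf{X_1},\mathbf{X_2}\rbrace$, form $N$, and check whether $N$ is diagonalizable. Below is a rough numerical sketch; the least-squares coordinate extraction, the eigenvector-determinant test, and its tolerance are my own choices, and they only approximate diagonalizability numerically.)

```python
import numpy as np

def coords(Y, X1, X2):
    # coordinates of Y in the basis {X1, X2}
    basis = np.stack([X1.ravel(), X2.ravel()], axis=1)
    return np.linalg.lstsq(basis, Y.ravel(), rcond=None)[0]

def criterion_matrix(X1, X2, X3, X4):
    # the matrix N above, built from X3 = x1 X1 + x2 X2 and X4 = y1 X1 + y2 X2
    x1, x2 = coords(X3, X1, X2)
    y1, y2 = coords(X4, X1, X2)
    return np.array([[-y2, 1.0],
                     [-x1 * y2 + x2 * y1, x1]])

def looks_diagonalizable(N, tol=1e-10):
    # crude numerical check: N is diagonalizable iff its eigenvectors span the plane
    _, V = np.linalg.eig(N)
    return abs(np.linalg.det(V)) > tol

# example with X3, X4 built from random X1, X2 (so x1 = 2, x2 = -1, y1 = 0.5, y2 = 3)
rng = np.random.default_rng(3)
X1, X2 = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
X3 = 2.0 * X1 - 1.0 * X2
X4 = 0.5 * X1 + 3.0 * X2
N = criterion_matrix(X1, X2, X3, X4)
print(N, looks_diagonalizable(N))
```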