Question on linear independence of particular vectors of $\mathbb{R}^8$

Solution 1:

I'm writing $z_k=a_k+ib_k$ with $a_k,b_k\in\mathbb R$ for your complex numbers.

The $7$ vectors are independent over the reals iff none of the following conditions is satisfied:

  1. $a_2b_3 = a_3b_2$
  2. $a_1b_2 + a_2b_4 + a_4b_1 = a_1b_4 + a_4b_2 + a_2b_1$
  3. $a_1b_3 + a_3b_4 + a_4b_1 = a_1b_4 + a_4b_3 + a_3b_1$

You can also reformulate these conditions in terms of determinants:

$$ \left|\begin{array}{ccc}a_2&a_3&0\\b_2&b_3&0\\1&1&1\end{array}\right|=0 \qquad \left|\begin{array}{ccc}a_1&a_2&a_4\\b_1&b_2&b_4\\1&1&1\end{array}\right|=0 \qquad \left|\begin{array}{ccc}a_1&a_3&a_4\\b_1&b_3&b_4\\1&1&1\end{array}\right|=0 $$

You could also take the product of these three determinants: the vectors are independent precisely when the product is nonzero, since the product vanishes exactly when one of its factors does.

Interpreting these determinants geometrically (interpreting $z_k$ as points in the complex plane), the first is zero if $z_2,z_3,0$ are collinear (i.e. $z_2$ is a real multiple of $z_3$ or vice versa), the second for $z_1,z_2,z_4$ collinear and the third for $z_1,z_3,z_4$ collinear.
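As a quick numeric sanity check (my addition, not part of the original argument), one can stack the seven vectors as rows of a real $7\times8$ matrix and watch the rank drop when one of the collinearity conditions is triggered:

```python
# My own numeric sanity check: stack the seven vectors as rows of a real
# 7x8 matrix; the rank drops when a collinearity condition holds.
import numpy as np

def vectors(z1, z2, z3, z4):
    ri = lambda w: [w.real, w.imag]  # identify C with R^2
    return np.array([
        [0, 0] + ri(z2)    + [0, 0] + [0, 0],
        [0, 0] + [0, 0]    + ri(z3) + [0, 0],
        ri(z1-z2) + ri(z2-z1) + [0, 0] + [0, 0],
        [0, 0] + ri(z2-z4) + [0, 0] + ri(z4-z2),
        [0, 0] + [0, 0] + ri(z3-z4) + ri(z4-z3),
        ri(z1-z3) + [0, 0] + ri(z3-z1) + [0, 0],
        ri(z1-z4) + [0, 0] + [0, 0] + ri(z4-z1),
    ])

z1, z2, z3, z4 = 1+2j, 3+0.5j, -1+1j, 2-3j            # generic choice
print(np.linalg.matrix_rank(vectors(z1, z2, z3, z4)))  # 7: independent

# force condition 1: z2 a real multiple of z3 -> dependent
print(np.linalg.matrix_rank(vectors(z1, 2.5*z3, z3, z4)))  # < 7
```

For generic values the rank is $7$; forcing $z_2$ to be a real multiple of $z_3$ (condition 1) drops it below $7$.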

I got these conditions out of the following Sage computation, which I'll explain below:

sage: R.<a1,b1,a2,b2,a3,b3,a4,b4> = QQ[]  # declare polynomial ring
sage: M = matrix([
....: [    0,     0,    a2,    b2,     0,     0,     0,     0],
....: [    0,     0,     0,     0,    a3,    b3,     0,     0],
....: [a1-a2, b1-b2, a2-a1, b2-b1,     0,     0,     0,     0],
....: [    0,     0, a2-a4, b2-b4,     0,     0, a4-a2, b4-b2],
....: [    0,     0,     0,     0, a3-a4, b3-b4, a4-a3, b4-b3],
....: [a1-a3, b1-b3,     0,     0, a3-a1, b3-b1,     0,     0],
....: [a1-a4, b1-b4,     0,     0,     0,     0, a4-a1, b4-b1]])
sage: ideal(M.minors(7)).minimal_associated_primes()
[Ideal (-b1*a3 + a1*b3 + b1*a4 - b3*a4 - a1*b4 + a3*b4) of Multivariate Polynomial Ring in a1, b1, a2, b2, a3, b3, a4, b4 over Rational Field,
 Ideal (-b1*a2 + a1*b2 + b1*a4 - b2*a4 - a1*b4 + a2*b4) of Multivariate Polynomial Ring in a1, b1, a2, b2, a3, b3, a4, b4 over Rational Field,
 Ideal (-b2*a3 + a2*b3) of Multivariate Polynomial Ring in a1, b1, a2, b2, a3, b3, a4, b4 over Rational Field]

The vectors are linearly dependent if and only if all the $7\times7$ minors are zero. There are $8$ such minors, one for each column that can be left out. Now you could simply write down all the polynomial equations you get from the minors and declare that to be your condition, but you are probably looking for something simpler in terms of the number of formulas you have to write down.
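As a cross-check outside Sage (a sketch of mine using sympy, on which Sage builds), one can verify that on the first component, where $z_2 = t\,z_3$ for real $t$, all eight $7\times7$ minors vanish identically:

```python
# My cross-check of the Sage result with plain sympy: substituting
# z2 = t*z3 with t real (condition 1) kills all eight 7x7 minors.
import sympy as sp

a1, b1, a3, b3, a4, b4, t = sp.symbols('a1 b1 a3 b3 a4 b4 t')
a2, b2 = t*a3, t*b3  # z2 is a real multiple of z3

M = sp.Matrix([
    [0,     0,     a2,    b2,    0,     0,     0,     0    ],
    [0,     0,     0,     0,     a3,    b3,    0,     0    ],
    [a1-a2, b1-b2, a2-a1, b2-b1, 0,     0,     0,     0    ],
    [0,     0,     a2-a4, b2-b4, 0,     0,     a4-a2, b4-b2],
    [0,     0,     0,     0,     a3-a4, b3-b4, a4-a3, b4-b3],
    [a1-a3, b1-b3, 0,     0,     a3-a1, b3-b1, 0,     0    ],
    [a1-a4, b1-b4, 0,     0,     0,     0,     a4-a1, b4-b1],
])
minors = [M.extract(list(range(7)), [c for c in range(8) if c != j]).det()
          for j in range(8)]
print(all(sp.expand(m) == 0 for m in minors))  # True
```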

The solutions of this system of polynomial equations form an algebraic set, which can be described via the ideal generated by these minors in $R:=\mathbb Q[a_1,b_1,a_2,b_2,a_3,b_3,a_4,b_4]$: every polynomial in that ideal is zero at every solution.

The ideal is not prime, so the algebraic set is not an irreducible algebraic variety but a union of several such varieties. To describe these irreducible varieties, one can compute a primary decomposition of the ideal and then take the associated prime of each component.

The documentation for associated_primes says:

Return a list of the associated primes of primary ideals of which the intersection is $I$ = self.

An ideal $Q$ is called primary if it is a proper ideal of the ring $R$ and whenever $ab \in Q$ and $a \not\in Q$, then $b^n \in Q$ for some positive integer $n$.

If $Q$ is a primary ideal of the ring $R$, then its radical $P = \{a \in R : a^n \in Q \text{ for some positive integer } n\}$ is a prime ideal, called the associated prime of $Q$.

If $I$ is a proper ideal of the ring $R$ then there exists a decomposition in primary ideals $Q_i$ such that

  • their intersection is $I$
  • none of the $Q_i$ contains the intersection of the rest, and
  • the associated prime ideals of $Q_i$ are pairwise different.

This method returns the associated primes of the $Q_i$.

The intersection of ideals corresponds to the union of algebraic sets. (This aspect took me a while to wrap my head around.) And passing from a primary component $Q_i$ to its associated prime simplifies things. For example, if you have an ideal which contains $a_1^3$ as a generator, then the only way to satisfy the condition $a_1^3=0$ is $a_1=0$. Thus dropping the power means you get the same zero set with simpler generators.
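A tiny textbook example (my addition, not from the Sage output) of how the pieces fit together: in $\mathbb Q[x,y]$,

$$ (x^2,\, xy) \;=\; (x) \cap (x^2,\, y), $$

with associated primes $(x)$ and $(x,y)$. The minimal associated prime is $(x)$ alone, and indeed the zero set of $(x^2, xy)$ is just the line $x=0$: the embedded component at the origin contributes no extra points, which is why restricting to minimal associated primes still describes the full zero set.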

The function minimal_associated_primes has insufficient documentation. Reading the code suggests looking at the Singular function MinAssGTZ for details, but the documentation for that isn't too great either. Anyway, I would assume that the minimal associated primes are all associated primes (as described above) which are not supersets of associated primes of other components. I'm using this function as computing all associated primes takes longer than I care to wait.

Solution 2:

Because $\mathbb{R}^8$ and $\mathbb{C}^4$ are isomorphic as $\mathbb{R}$-vector spaces, this is equivalent to finding conditions under which $V = \{v_1, \dots, v_7\}$ is linearly independent over $\mathbb{R}$.

Let $x = (x_1, \dots, x_7) \in \mathbb{R}^7$. We want to obtain conditions on $V$ such that $$ \sum\limits_{n=1}^7 x_n v_n = 0 \implies x_n = 0 \text{ for all } n \in \{1,\dots,7\} $$

Since a zero vector would immediately ruin the linear independence of $V$, assume for now that $z_1 \neq z_2$, $z_2 \neq z_4$, $z_3 \neq z_4$, $z_1 \neq z_3$ and $z_1 \neq z_4$. Later some of these will be replaced by stronger conditions.

Writing the equation $\sum\limits_{n=1}^7 x_n v_n = 0$ in matrix form we get

$$ \underbrace{ \left[ \begin{array}{ccccccc} 0 & 0 & z_1-z_2 & 0 & 0 & z_1-z_3 & z_1-z_4 \\ z_2 & 0 & z_2-z_1 & z_2-z_4 & 0 & 0 & 0 \\ 0 & z_3 & 0 & 0 & z_3-z_4 & z_3-z_1 & 0 \\ 0 & 0 & 0 & z_4-z_2 & z_4-z_3 & 0 & z_4-z_1 \\ \end{array} \right] }_{\displaystyle A} x = 0 $$

Row reducing $A$ $$ \left[ \begin{array}{ccccccc} \displaystyle 1 & 0 & 0 & 0 & \dfrac{z_4-z_3}{z_2} & \dfrac{z_1-z_3}{z_2} & 0 \\ 0 & 1 & 0 & 0 & \dfrac{z_3-z_4}{z_3} & \dfrac{z_3-z_1}{z_3} & 0 \\ 0 & 0 & 1 & 0 & 0 & \dfrac{z_1-z_3}{z_1-z_2} & \dfrac{z_1-z_4}{z_1-z_2} \\ 0 & 0 & 0 & 1 & \dfrac{z_3-z_4}{z_2-z_4} & 0 & \dfrac{z_4-z_1}{z_4-z_2} \\ \end{array} \right] x = 0$$

Combining the rows of the reduced system and solving for $x_2$, $x_3$, $x_4$ and $x_6$ in terms of the free variables $x_1$, $x_5$, $x_7$ yields $$ x_2 = -x_1 \frac{z_2}{z_3} \tag{1} $$ $$ x_3 = x_1 \frac{z_2}{z_1-z_2} - x_7 \frac{z_1-z_4}{z_1-z_2} - x_5 \frac{z_3-z_4}{z_1-z_2} \tag{2} $$ $$ x_4 = - x_5 \frac{z_3-z_4}{z_2-z_4} - x_7 \frac{z_4-z_1}{z_4-z_2} \tag{3} $$ $$ x_6 = -x_1 \frac{z_2}{z_1-z_3} -x_5 \frac{z_4-z_3}{z_1-z_3} \tag{4} $$
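Equations $(1)$–$(4)$ can be confirmed symbolically (a verification sketch of mine using sympy, treating the $z_k$ as formal symbols): substituting them back into $Ax$ gives the zero vector identically.

```python
# My symbolic check (sympy) that (1)-(4) parametrize the kernel of A:
# with x1, x5, x7 free, substituting back into A x gives the zero vector.
import sympy as sp

z1, z2, z3, z4 = sp.symbols('z1 z2 z3 z4')
x1, x5, x7 = sp.symbols('x1 x5 x7')

A = sp.Matrix([
    [0,     0,  z1-z2, 0,     0,     z1-z3, z1-z4],
    [z2,    0,  z2-z1, z2-z4, 0,     0,     0    ],
    [0,     z3, 0,     0,     z3-z4, z3-z1, 0    ],
    [0,     0,  0,     z4-z2, z4-z3, 0,     z4-z1],
])
x2 = -x1*z2/z3                                                 # (1)
x3 = x1*z2/(z1-z2) - x7*(z1-z4)/(z1-z2) - x5*(z3-z4)/(z1-z2)   # (2)
x4 = -x5*(z3-z4)/(z2-z4) - x7*(z4-z1)/(z4-z2)                  # (3)
x6 = -x1*z2/(z1-z3) - x5*(z4-z3)/(z1-z3)                       # (4)

x = sp.Matrix([x1, x2, x3, x4, x5, x6, x7])
print(sp.simplify(A * x))  # Matrix([[0], [0], [0], [0]])
```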

Note that because $\dim\ker A \ge 3$ there will be at least $3$ free variables and this means that there are other ways of manipulating these equations. Nonetheless, every way will result in equivalent conclusions.

Starting with the simplest equation, $(1)$: since $x_1$ and $x_2$ are real, requiring $\operatorname{Im}{\left(\dfrac{z_2}{z_3}\right)} \neq 0$ forces $x_1 = x_2 = 0$ (otherwise $x_2$ would have a nonzero imaginary part).

Now setting $x_1 = 0$ in $(4)$, the same reasoning leads to requiring $\operatorname{Im}{(\dfrac{z_4-z_3}{z_1-z_3})} \neq 0$. Hence $x_5 = x_6 = 0$.

Similarly for $(3)$, we require $\operatorname{Im}{(\dfrac{z_4-z_1}{z_4-z_2})} \neq 0$. Then $x_4 = x_7 = 0$.

Thus $(2)$ yields $x_3 = 0$.

Therefore we conclude that the following six conditions are sufficient for $V$ to be linearly independent over $\mathbb{R}$:

  1. $z_1 \neq z_2$
  2. $z_1 \neq z_3$
  3. $z_2 \neq z_4$
  4. $\operatorname{Im}{(\dfrac{z_2}{z_3})} \neq 0$
  5. $\operatorname{Im}{(\dfrac{z_4-z_3}{z_1-z_3})} \neq 0$
  6. $\operatorname{Im}{(\dfrac{z_4-z_1}{z_4-z_2})} \neq 0$
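A Monte-Carlo spot-check of sufficiency (my own sketch, not part of the derivation above): for random samples satisfying conditions 1–6, the seven vectors, viewed in $\mathbb{R}^8$, always have rank $7$.

```python
# My Monte-Carlo spot-check: whenever conditions 1-6 hold, the seven
# real vectors have full rank 7.
import numpy as np

def real_vectors(z1, z2, z3, z4):
    ri = lambda w: [w.real, w.imag]  # identify C with R^2
    return np.array([
        [0, 0] + ri(z2) + [0, 0, 0, 0],
        [0, 0, 0, 0] + ri(z3) + [0, 0],
        ri(z1-z2) + ri(z2-z1) + [0, 0, 0, 0],
        [0, 0] + ri(z2-z4) + [0, 0] + ri(z4-z2),
        [0, 0, 0, 0] + ri(z3-z4) + ri(z4-z3),
        ri(z1-z3) + [0, 0] + ri(z3-z1) + [0, 0],
        ri(z1-z4) + [0, 0, 0, 0] + ri(z4-z1),
    ])

def conditions_hold(z1, z2, z3, z4, eps=1e-6):
    return (abs(z1-z2) > eps and abs(z1-z3) > eps and abs(z2-z4) > eps
            and abs((z2/z3).imag) > eps
            and abs(((z4-z3)/(z1-z3)).imag) > eps
            and abs(((z4-z1)/(z4-z2)).imag) > eps)

rng = np.random.default_rng(1)
for _ in range(200):
    z1, z2, z3, z4 = rng.standard_normal(4) + 1j*rng.standard_normal(4)
    if conditions_hold(z1, z2, z3, z4):
        assert np.linalg.matrix_rank(real_vectors(z1, z2, z3, z4)) == 7
print("no counterexample found")
```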

Solution 3:

This is not a full answer, but an attempt that will hopefully be helpful. We can consider $\mathbb{C}^n$ to be $\mathbb{R}^{2n}$ by identifying $\begin{bmatrix}x_1+iy_1\\\vdots\\x_n+iy_n\end{bmatrix}\in\mathbb{C}^n$ with $\begin{bmatrix}x_1\\\vdots\\ x_n\\y_1\\\vdots\\ y_n\end{bmatrix}\in\mathbb{R}^{2n}$. The given problem looks like an instance of the following setting.

Let $\alpha_1,\ldots,\alpha_n\in\mathbb{C}$ with $\alpha_j=\beta_j+i\gamma_j$. Let $\beta=\begin{bmatrix}\beta_1\\\vdots\\\beta_n\end{bmatrix},\,\gamma=\begin{bmatrix}\gamma_1\\\vdots\\\gamma_n\end{bmatrix}\in\mathbb{R}^n$. Then by our identification, we have $\alpha=\begin{bmatrix}\alpha_1\\\vdots\\\alpha_n\end{bmatrix}=\begin{bmatrix}\beta\\\gamma\end{bmatrix}$.

Now we have vectors $v_l,\,l=1,\ldots,s$ in $\mathbb{C}^n$ where, with respect to the standard basis, each coordinate of $v_l$ is an $\mathbb{R}$-linear combination of the $\alpha_i$'s. So we have matrices $T_l\in M_{n\times n}(\mathbb{R})$ such that $v_l=\begin{bmatrix}T_l&0\\0&T_l\end{bmatrix}\begin{bmatrix}\beta\\\gamma\end{bmatrix},\quad l=1,\ldots,s$.

For the given example, we have $\beta=\begin{bmatrix}x_1\\x_2\\x_3\\x_4\end{bmatrix},\,\gamma=\begin{bmatrix}y_1\\y_2\\y_3\\y_4\end{bmatrix}$. Further,

$$T_1=\begin{bmatrix}0&0&0&0\\0&1&0&0\\0&0&0&0\\0&0&0&0\end{bmatrix},\quad T_2=\begin{bmatrix}0&0&0&0\\0&0&0&0\\0&0&1&0\\0&0&0&0\end{bmatrix},\quad T_3=\begin{bmatrix}1&-1&0&0\\-1&1&0&0\\0&0&0&0\\0&0&0&0\end{bmatrix},\quad T_4=\begin{bmatrix}0&0&0&0\\0&1&0&-1\\0&0&0&0\\0&-1&0&1\end{bmatrix}$$

$$T_5=\begin{bmatrix}0&0&0&0\\0&0&0&0\\0&0&1&-1\\0&0&-1&1\end{bmatrix},\quad T_6=\begin{bmatrix}0&0&0&0\\1&0&-1&0\\0&0&0&0\\-1&0&1&0\end{bmatrix},\quad T_7=\begin{bmatrix}1&0&0&-1\\0&0&0&0\\0&0&0&0\\-1&0&0&1\end{bmatrix}$$

It can be checked that $\{T_1,\ldots,T_7\}$ is linearly independent in $M_{n\times n}(\mathbb{R})$. Define $\phi:M_{n\times n}(\mathbb{R})\to\mathbb{R}^{2n}$ as $\phi(T)=\begin{bmatrix}T&0\\0&T\end{bmatrix}\begin{bmatrix}\beta\\\gamma\end{bmatrix}$. Then $\phi(T_i)=v_i,\,i=1,\ldots,7$. Let $W=\text{span}\{T_1,\ldots,T_7\}$. Then $\{v_1,\ldots,v_7\}$ are linearly independent if and only if $\phi|_W$ is injective.
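The claimed linear independence of $\{T_1,\ldots,T_7\}$ is easy to confirm numerically (a quick check I added): flatten each $T_l$ into a $16$-vector and compute the rank of the resulting $7\times16$ matrix.

```python
# My numeric confirmation that T_1,...,T_7 are linearly independent in
# M_4(R): flatten each into a 16-vector and take the rank of the stack.
import numpy as np

T = [
    np.array([[0,0,0,0],  [0,1,0,0],  [0,0,0,0],  [0,0,0,0]]),   # T_1
    np.array([[0,0,0,0],  [0,0,0,0],  [0,0,1,0],  [0,0,0,0]]),   # T_2
    np.array([[1,-1,0,0], [-1,1,0,0], [0,0,0,0],  [0,0,0,0]]),   # T_3
    np.array([[0,0,0,0],  [0,1,0,-1], [0,0,0,0],  [0,-1,0,1]]),  # T_4
    np.array([[0,0,0,0],  [0,0,0,0],  [0,0,1,-1], [0,0,-1,1]]),  # T_5
    np.array([[0,0,0,0],  [1,0,-1,0], [0,0,0,0],  [-1,0,1,0]]),  # T_6
    np.array([[1,0,0,-1], [0,0,0,0],  [0,0,0,0],  [-1,0,0,1]]),  # T_7
]
S = np.stack([t.ravel() for t in T])  # 7 x 16
print(np.linalg.matrix_rank(S))  # 7
```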