Did I just discover a new way to calculate the signature of a matrix?

Solution 1:

Very nice question and observation! The fact that your method works for the examples provided is not a coincidence and is indeed quite general.

First, let me give you an example which shows that your conjecture cannot hold as stated. Consider the following sequence of operations:

$$ \underbrace{\begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}}_{A_1} \xrightarrow{R_1 = R_1 - R_2} \underbrace{\begin{bmatrix} -1 & 1 \\ 0 & -1 \end{bmatrix}}_{A_2} \xrightarrow{R_2 = R_2 + 2R_1} \underbrace{\begin{bmatrix} -1 & 1 \\ -2 & 1 \end{bmatrix}}_{A_3} \xrightarrow{R_1 = R_1 - R_2} \underbrace{\begin{bmatrix} 1 & 0 \\ -2 & 1 \end{bmatrix}}_{A_4} \xrightarrow{R_2 = R_2 + 2R_1} \underbrace{\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}}_{A_5}. $$

The matrix $A_1$ is symmetric with signature $(0,2)$. The matrix $A_2$ is neither symmetric nor diagonalizable but still has $-1$ as a double root of the characteristic polynomial, so you might say it still has the "correct signature". The matrix $A_3$, however, has eigenvalues $\pm i$. The matrix $A_4$ is not diagonalizable but now has $1$ as a double root of the characteristic polynomial. Finally, the matrix $A_5$ is symmetric with signature $(2,0)$.
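As a quick numerical sanity check, here is a small NumPy sketch that recomputes the eigenvalues along the sequence above (the matrices are copied directly from the displayed chain of row operations):

```python
import numpy as np

# The five matrices from the sequence of row operations above.
A1 = np.array([[-1.0, 0.0], [0.0, -1.0]])
A2 = np.array([[-1.0, 1.0], [0.0, -1.0]])   # after R_1 = R_1 - R_2
A3 = np.array([[-1.0, 1.0], [-2.0, 1.0]])   # after R_2 = R_2 + 2 R_1
A4 = np.array([[1.0, 0.0], [-2.0, 1.0]])    # after R_1 = R_1 - R_2
A5 = np.array([[1.0, 0.0], [0.0, 1.0]])     # after R_2 = R_2 + 2 R_1

for name, M in [("A1", A1), ("A2", A2), ("A3", A3), ("A4", A4), ("A5", A5)]:
    print(name, np.round(np.linalg.eigvals(M), 6))
```

The output confirms the drift described above: $A_1$ and $A_2$ have eigenvalue $-1$ (twice), $A_3$ has eigenvalues $\pm i$, and $A_4$ and $A_5$ have eigenvalue $1$ (twice).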

You might object that this sequence of operations is "silly", but it shows that if you start with an arbitrary symmetric matrix and perform arbitrary row-addition operations until you reach some upper triangular matrix, the signature cannot always be read off the diagonal entries.

So why did you get the correct answers? Let's first consider what happened in your first and second examples and try to formalize it. You started with a symmetric matrix $A$ and, using only the operation of adding a multiple of one row to another, brought $A$ into an upper triangular matrix $U$. More precisely, you did not use a completely arbitrary sequence of row operations but performed them in the "standard way" that usually appears in Gaussian elimination. For the $3 \times 3$ examples, you first used $R_2 = R_2 + cR_1$ and $R_3 = R_3 + c'R_1$ to eliminate the elements below the diagonal in the first column, and then used $R_3 = R_3 + c''R_2$ to eliminate the element below the diagonal in the second column. Each row operation corresponds to multiplying $A$ on the left by an elementary matrix $P$, so you have made a sequence of operations

$$ A \rightarrow P_1 A \rightarrow P_2 P_1 A \rightarrow \dots \rightarrow \underbrace{ \left( P_k \cdots P_1 \right)}_{L} A=U. $$

Since you performed your operations in the "standard way", the matrix $L$ which encodes all the operations is lower triangular with $1$'s on the diagonal. Let's multiply the equation $LA = U$ on the right by $L^T$ to get $LAL^T = UL^T$. Now here comes the magic. Both $U$ and $L^T$ are upper triangular, and $L^T$ has $1$'s on the diagonal, so $UL^T$ is upper triangular. But the equation $LAL^T = UL^T$ shows that $UL^T$ is also symmetric, so it must be diagonal! Hence $A$ is congruent to a diagonal matrix whose entries are the diagonal entries of $U$, and by Sylvester's law of inertia the signature of $A$ can be read off those entries.
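The argument can be checked numerically. Here is a minimal NumPy sketch (the $3 \times 3$ symmetric matrix $A$ below is my own illustrative choice, with all leading pivots nonzero): it performs the "standard" elimination, accumulates $L$ by applying the same row operations to the identity, and verifies that $UL^T$ is diagonal with the same diagonal as $U$.

```python
import numpy as np

# An illustrative symmetric matrix with nonzero leading pivots.
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 0.0],
              [1.0, 0.0, -1.0]])

n = A.shape[0]
U = A.copy()
L = np.eye(n)                        # accumulates the elementary row operations
for j in range(n - 1):
    for i in range(j + 1, n):
        c = -U[i, j] / U[j, j]       # R_i = R_i + c * R_j
        U[i, :] += c * U[j, :]
        L[i, :] += c * L[j, :]       # same operation applied to the identity

# L A = U, hence L A L^T = U L^T, which is simultaneously symmetric and
# upper triangular, i.e. diagonal.
D = U @ L.T
print(np.round(D, 10))               # a diagonal matrix

# Signature read off the diagonal of U (= diagonal of D):
diag = np.diag(U)
print((int(np.sum(diag > 0)), int(np.sum(diag < 0))))
```

For this particular $A$ the diagonal of $U$ comes out as $(2, 2.5, -1.6)$, giving signature $(2,1)$, which agrees with the eigenvalue signs of $A$.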

This explains why you got the correct answer in the first two examples. Note that it doesn't matter whether $A$ is nondegenerate or not (both your examples are degenerate/possibly degenerate). However, this is not a general algorithm: if $a_{11} = 0$, you obviously can't use it to cancel the entries below it. In Gaussian elimination you would pivot, using a row swap to make this element nonzero, but a row swap alone can change the signature. In your case you can instead use a simultaneous row-and-column operation to make $a_{11}$ nonzero and then proceed with your algorithm. But it might then happen that $a_{22}$ is zero, and you are "stuck" because you have only performed "half the operations", so there is no reason the "signature" will be preserved.
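To illustrate the simultaneous row-and-column fix, here is a small sketch with the illustrative matrix $\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$ (my own choice, not one of your examples), which has signature $(1,1)$ but a zero $(1,1)$ entry. A congruence $EAE^T$ applies the row operation and the matching column operation together, which keeps the signature intact:

```python
import numpy as np

# Signature (1, 1), but a zero pivot: the row-operations-only scheme can't start.
A = np.array([[0.0, 1.0], [1.0, 0.0]])

# Simultaneous row AND column operation, i.e. a congruence E A E^T.
# Here E encodes R_1 = R_1 + R_2 (and E^T the matching C_1 = C_1 + C_2).
E = np.array([[1.0, 1.0], [0.0, 1.0]])
B = E @ A @ E.T          # [[2, 1], [1, 0]] -- the pivot is now nonzero
print(B)

# Now eliminate below (and, by congruence, to the right of) the pivot.
F = np.array([[1.0, 0.0], [-0.5, 1.0]])
D = F @ B @ F.T          # diagonal with entries 2 and -0.5
print(D)                 # signature (1, 1), matching A
```

Because each step is a full congruence rather than "half the operations", Sylvester's law of inertia guarantees the signature survives.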

The third example is more interesting because this is precisely the case where you need pivoting. If you keep track of the elementary operations you used, you will see that $P_k \cdots P_1$ is not lower triangular anymore (even though it is "almost" lower triangular), so my argument fails, but you still got the correct answer. I'm not sure whether this is coincidental and will try to analyze it more carefully when I have the time.