Reverse engineering "nice" matrices from the eigenvalues
Before beginning, I should note some similarities to the question: Building matrices from eigenvalues. However, I have a different objective.
The Setup
When writing problems for an introductory linear algebra course, a generic question would be
Find the eigenvalues of $A= \begin{bmatrix} \dots \end{bmatrix}$.
From an instructor's perspective, it is generally desirable that the computation be reasonably straightforward (e.g., not tasking students with finding roots of an irreducible quintic). Furthermore, the goal of the question is to test whether students can correctly find the characteristic polynomial and solve for its roots, not whether they can carefully add numerous fractions. As such, matrices $A$ with integer entries are generally preferable, perhaps with a bias towards positive entries that aren't too large. However, we don't want to make the problem too easy. For example, tasking students with finding the eigenvalues of a diagonal/triangular matrix doesn't make for a particularly good question (unless it is specifically a question about recognizing this property).
As such, it is probably best to write such a question by starting with the eigenvalues and working backwards to generate a matrix $A$ with those eigenvalues.
These thoughts lead me to an (admittedly vague) definition:
A "nifty matrix" $A$ should
- Have a given set of eigenvalues $\{\lambda_1, \dots, \lambda_n\}$.
- Contain only non-zero integers: $A = (a_{ij}) \in (\mathbb{Z}\setminus\{0\})^{n \times n}$.
Note that our conditions necessitate that the eigenvalues be algebraic integers and that the list of eigenvalues contain all of their Galois conjugates.
For example,
\[ D= \begin{bmatrix} 3 & 0 \\ 0 & 2 \end{bmatrix}, \quad T= \begin{bmatrix} 3 & 7 \\ 0 & 2 \end{bmatrix}, \quad \text{ and } A=\begin{bmatrix} 1 & 1 \\ -2 & 4 \end{bmatrix} \] all have the same eigenvalues, but only $A$ is a "nifty matrix" for the eigenvalues $\{3,2\}$. Similarly, $$A= \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}$$ is a "nifty matrix" for the eigenvalues $\lambda_\pm = \frac{3 \pm \sqrt{5}}{2}$.
The Goal
Imagine that you're stuck in a textbook factory and need to write hundreds of "compute the eigenvalues" problems. How would you set about your task? Since you are picking the eigenvalues at the start, you can also control the characteristic and minimal polynomials of $A$, $\chi_A$ and $\mu_A$.
The way I envision this working is a function / algorithm that gives a (perhaps non-unique) mapping
\[ \text{eigenvalues \& min. or char. poly} \longrightarrow \text{nifty matrix}. \]
A simple approach
- Start with a list of integer eigenvalues.
- Form a diagonal matrix (or Jordan block matrix or upper-triangular matrix) $M$ that has the desired eigenvalues.
- Build a matrix $P$ by doing some row operations on the identity matrix, without rescaling rows or adding non-integer multiples of rows. Then $\det(P) = \pm 1$, so by Cramer's rule $P^{-1}$ will have only integer entries as well.
- Form $A= P M P^{-1}$. See how we did. Keep trying different $P$ matrices until something works (a rough sketch of this search loop is given below).
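For concreteness, here is a rough sketch of this search loop in Python (using sympy for exact integer arithmetic); the function names, the number of row operations, and the multiplier range are arbitrary choices of mine, not part of the recipe itself.

```python
import random
from sympy import diag, eye

def random_unimodular(n, ops=8, max_mult=3):
    # Apply integer row operations (no rescaling) to the identity matrix,
    # so det(P) = +/-1 and P^{-1} again has only integer entries.
    P = eye(n)
    for _ in range(ops):
        i, j = random.sample(range(n), 2)
        a = random.randint(-max_mult, max_mult)
        P[i, :] = P[i, :] + a * P[j, :]
    return P

def try_nifty(eigenvalues, attempts=500):
    # Conjugate the diagonal matrix of the desired integer eigenvalues by
    # random unimodular matrices until no entry of A is zero.
    M = diag(*eigenvalues)
    for _ in range(attempts):
        P = random_unimodular(len(eigenvalues))
        A = P * M * P.inv()
        if all(entry != 0 for entry in A):
            return A
    return None  # no zero-free conjugate found within the attempt budget

A = try_nifty([3, 1, 2])
if A is not None:
    print(A)
    print(A.eigenvals())  # {3: 1, 1: 1, 2: 1}
```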
A slightly more sophisticated approach
This approach has the benefit of starting with the characteristic polynomial, which allows you to generate nifty matrices for eigenvalues that aren't necessarily integers (unlike the above approach).
- Pick a suitable characteristic polynomial $\chi$ and minimal polynomial $\mu$, each with integer coefficients.
- Build a matrix $C$ that will be the rational canonical form of $A$, using the characteristic and minimal polynomials. Note that $C$ will have integer entries.
- Use our trick from the previous algorithm: $A=P C P^{-1}$ where $\det(P)= \pm 1$. See how we did. Keep trying different $P$ matrices until something works (the conjugation loop is the same as in the sketch above; a sketch for building $C$ follows below).
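As a sketch of the second step in the simplest case $\mu = \chi$, the rational canonical form is a single companion block, which can be written down directly from the coefficients of $\chi$ (for $\mu$ a proper divisor of $\chi$ one would stack several companion blocks on the diagonal instead). The helper below and its coefficient convention are my own illustrative choices; the conjugation search is the same as in the previous sketch.

```python
from sympy import zeros

def companion(coeffs):
    # Companion matrix of the monic polynomial
    #   x^n + c_{n-1} x^{n-1} + ... + c_1 x + c_0,
    # given its lower-order coefficients coeffs = [c_0, c_1, ..., c_{n-1}].
    n = len(coeffs)
    C = zeros(n)
    for i in range(1, n):
        C[i, i - 1] = 1           # ones on the subdiagonal
    for i in range(n):
        C[i, n - 1] = -coeffs[i]  # last column carries -c_0, ..., -c_{n-1}
    return C

# chi(x) = x^2 - 3x + 1, the characteristic polynomial of the 2x2 example above
C = companion([1, -3])
print(C)             # Matrix([[0, -1], [1, 3]])
print(C.charpoly())  # recovers lambda**2 - 3*lambda + 1
```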
The Question
Is there a better way? Note that both of these algorithms are ill-defined: how do we find a $P$ that ensures $A$ has no zero entries?
As an extension of this problem, suppose we define a super nifty matrix to be a nifty matrix that minimizes $f(A) = \max_{i,j} |a_{ij}|$ over all similar nifty matrices. How could we find a (probably non-unique) minimizer? What if we minimized with respect to $g(A) = \sum_{i,j} |a_{ij}|$, or any other suitable norm on the entries?
What if we decided that negative numbers are too messy and wanted $A=(a_{ij}) \in \mathbb{N}^{n\times n}$? What conditions would this impose on the eigenvalues and our algorithms?
Edited to Add:
I should note that such "nifty matrices" don't truly make for nice problems outside of the $2 \times 2$ case. This is part of why I didn't call them "nice matrices," but rather "nifty" as in "Hey! This messy looking matrix of integers has my desired eigenvalues/characteristic polynomial! Isn't that nifty?!" I am primarily interested in this problem as a mathematical construct; generating a clever linear algebra problem is only a motivating idea.
For another, perhaps more "realistic" motivation, you can recast the problem as:
Given a list of $n$ suitable eigenvalues, is it possible to create a complete graph on $n$ vertices such that each edge has a non-zero (possibly negative) integer weight and the weighted transition matrix has the desired eigenvalues? What is the "simplest" such graph? How can you algorithmically construct this graph / weighted transition matrix?
A possible solution looks as follows. The idea is that we start with a diagonal matrix and then alter the matrix without changing its characteristic polynomial.
Consider the following example. We start with the matrix $$ \begin{pmatrix} 3 & 0 & 0\\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix} \,, $$ which obviously has the eigenvalues $1$, $2$, and $3$. Its characteristic polynomial is given by $$ \det \begin{pmatrix} 3 - t & 0 & 0\\ 0 & 1 - t & 0 \\ 0 & 0 & 2 - t \end{pmatrix} \,. $$

The determinant does not change if we add a multiple of one row of the matrix to another row (nor if we add a multiple of one column to another column). For example, let us add two times the first row to the second row. We obtain that $$ \det \begin{pmatrix} 3 - t & 0 & 0\\ 6 - 2t & 1 - t & 0 \\ 0 & 0 & 2 - t \end{pmatrix} $$ is still the same polynomial as before. The problem, however, is that this is no longer the characteristic polynomial of the matrix obtained by setting $t=0$, because the variable $t$ appears in an off-diagonal entry. We can eliminate this $t$ by adding a multiple of a suitable column to another column. In this example, we add $(-2)$ times the second column to the first one. We obtain that $$ \det \begin{pmatrix} 3 - t & 0 & 0\\ 4 & 1 - t & 0 \\ 0 & 0 & 2 - t \end{pmatrix} $$ is still the same polynomial, and this is now the characteristic polynomial of the matrix $$ \begin{pmatrix} 3 & 0 & 0\\ 4 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix} \,. $$

This procedure can be repeated until a satisfying result is reached. In summary, the procedure is:
1. Choose a diagonal matrix having the desired eigenvalues (from $\mathbb{Z}$) on the diagonal.
2. Pick $i \neq j$ and $a \in \mathbb{Z}$.
3. Add $a$ times the $j$th row to the $i$th row.
4. Add $-a$ times the $i$th column to the $j$th column.
5. Repeat from step 2 until satisfied.
This procedure works as long as not all of the eigenvalues are identical (if they all coincide, the starting matrix is a multiple of the identity, and the paired row/column operations cancel each other, so the matrix never changes).
I turned this idea into a Python script, which can be found on GitHub. The script might generate matrices that sometimes have zeros in random places; however, this could be fixed with more sophisticated program logic.
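For concreteness, a minimal sketch of the procedure above might look as follows (this is not the linked script; the step count and multiplier range are arbitrary choices, and stray zeros may remain, as noted).

```python
import random
from sympy import diag

def scramble(eigenvalues, steps=20, max_mult=3):
    # Start from the diagonal matrix of the desired integer eigenvalues and
    # repeatedly apply the paired row/column operation from steps 2-4 above.
    # Each pair amounts to conjugation by the elementary matrix I + a*e_ij,
    # so the characteristic polynomial never changes.
    A = diag(*eigenvalues)
    n = A.rows
    for _ in range(steps):
        i, j = random.sample(range(n), 2)
        a = random.randint(-max_mult, max_mult)
        A[i, :] = A[i, :] + a * A[j, :]  # add a times row j to row i
        A[:, j] = A[:, j] - a * A[:, i]  # add -a times column i to column j
    return A

A = scramble([3, 1, 2])
print(A)
print(A.charpoly())  # still (x - 3)(x - 1)(x - 2), expanded
```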