A real matrix whose eigenvalues all have negative real parts
While looking through some lecture notes for an ODE course, I came across the following claim, which appeared in the text as an exercise:
Let $A$ be a real $n\times n$ matrix whose eigenvalues all have negative real parts. Then there is some $\beta>0$ such that $$\forall x\in\mathbb{R}^n\quad\langle Ax,x\rangle\leq-\beta\|x\|^2.$$

I think the claim is false, as can be shown by taking, for example, $$A=\left(\begin{array}{cc}0&-1\\1&-1\end{array}\right).$$ The characteristic polynomial is $$f_A(x)=\left|\begin{array}{rr}x&1\\-1&x+1\end{array}\right|=x^2+x+1,$$ and its roots are $$x=\frac{-1\pm\sqrt{-3}}{2},$$ so both real parts are negative. But taking $x=(1,0)$ gives $$\langle Ax,x\rangle=\langle(0,1),(1,0)\rangle=0,$$ which shows that the above claim cannot hold for any $\beta>0$.
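For what it's worth, the counterexample is easy to check numerically. A minimal sketch using NumPy (the vector `x` is the one chosen above):

```python
import numpy as np

# The proposed counterexample.
A = np.array([[0.0, -1.0],
              [1.0, -1.0]])

# Both eigenvalues are (-1 ± i*sqrt(3))/2, so the real parts are -1/2 < 0.
print(np.linalg.eigvals(A))   # [-0.5+0.866j, -0.5-0.866j]

# Yet <Ax, x> = 0 at x = (1, 0), so no beta > 0 can give <Ax, x> <= -beta*||x||^2.
x = np.array([1.0, 0.0])
print(np.dot(A @ x, x))       # 0.0
```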
My problem: This claim is used in the text to prove a theorem that seems to be well known and important. I doubt the proof is totally wrong, so I suspect the claim above can be modified and turned into a true one. This is where I would be happy to hear any suggestions.
Some ODE intuition: Consider the linear equation $$\frac{dy}{dt}=Ay.$$ Any solution is of the form $$y(t)=e^{tA}y_0,$$ and if all of $A$'s eigenvalues have negative real parts, it turns out that every solution tends to $0$ as $t\to\infty$. The above (false) claim would imply that $\|y\|$ is decreasing at every time, since $$\frac{d}{dt}\|y\|^2=2\langle Ay,y\rangle\leq-2\beta\|y\|^2.$$ It would be nice to state a condition that actually guarantees this.
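One can watch this happen numerically. A small sketch (using SciPy's `expm` for the matrix exponential, applied to the matrix $A$ from my counterexample):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -1.0],
              [1.0, -1.0]])
y0 = np.array([1.0, 0.0])

# Sample ||y(t)|| = ||exp(tA) y0|| along the trajectory; it tends to 0.
for t in [0.0, 1.0, 2.0, 5.0, 10.0]:
    print(t, np.linalg.norm(expm(t * A) @ y0))
```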
Any thoughts?
The condition for $\|y\|$ to be (strictly) decreasing for all $y_0 \ne 0$, where $y = \exp(At) y_0$, is that $A + A^T$ is negative definite, since $\frac{d}{dt}\|y\|^2 = \langle (A + A^T) y, y\rangle$. If $A + A^T$ is negative semidefinite (as in your example), $\|y\|$ will be nonincreasing. But you can also have cases where all eigenvalues of $A$ have negative real parts and $A + A^T$ is indefinite, e.g. $$A = \pmatrix{1 & 3\cr -3 & -4\cr}$$ and then $\|y\|$ will not always be decreasing.
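A numerical illustration of this example (a sketch; the starting vector $y_0 = (1,0)$ is chosen to point into the region where $\langle Ax,x\rangle > 0$):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[ 1.0,  3.0],
              [-3.0, -4.0]])

print(np.linalg.eigvals(A))        # (-3 ± i*sqrt(11))/2: real parts -3/2 < 0
print(np.linalg.eigvals(A + A.T))  # 2 and -8: indefinite

# ||y(t)|| grows at first, then decays: not monotonically decreasing.
y0 = np.array([1.0, 0.0])
for t in [0.0, 0.1, 0.3, 1.0, 5.0]:
    print(t, np.linalg.norm(expm(t * A) @ y0))
```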
However, it is still true that $\|\exp(tA) y_0\| \to 0$ as $t \to \infty$ whenever all eigenvalues of $A$ have negative real parts. In fact, using the Jordan canonical form, we can write $\exp(t A) = \sum_j P_j(t) \exp(\lambda_j t)$, where the $\lambda_j$ are the eigenvalues of $A$ and the $P_j(t)$ are matrices whose entries are polynomials in $t$. If all $\text{Re}(\lambda_j) < 0$, this implies that all entries of $\exp(tA)$ go to $0$ as $t \to \infty$, since exponential decay beats polynomial growth.
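The polynomial factors $P_j(t)$ genuinely appear when $A$ is defective. For a single Jordan block the entries of $\exp(tA)$ can be written down exactly; a small check (the matrix is my own choice of example):

```python
import numpy as np
from scipy.linalg import expm

# A single Jordan block with eigenvalue -1 (a defective matrix).
A = np.array([[-1.0,  1.0],
              [ 0.0, -1.0]])

# Here exp(tA) = e^{-t} * [[1, t], [0, 1]]: polynomial entries times a
# decaying exponential, exactly the P_j(t) exp(lambda_j t) form above.
for t in [1.0, 5.0, 20.0]:
    E = expm(t * A)
    print(t, E[0, 1], t * np.exp(-t))  # the (0,1) entry equals t*e^{-t}
```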
If you don't want to use the Jordan canonical form, you can also obtain $\exp(t A) = \sum_j P_j(t) \exp(\lambda_j t)$ from a Laplace transform. Let $$G(s) = \int_0^\infty \exp(-st) \exp(tA)\; dt = \int_0^\infty \exp(t(A-sI))\; dt$$ be the Laplace transform of $\exp(tA)$ (i.e. the matrix whose entries are the Laplace transforms of the entries of $\exp(tA)$). For $\text{Re}(s)$ sufficiently large, the improper integral converges, since $\|\exp(tA)\| \le \exp(t\|A\|)$ for $t \ge 0$. Note that $$A G(s) = \int_0^\infty \exp(-st) A \exp(tA)\; dt = \int_0^\infty \exp(-st) \dfrac{d}{dt} \exp(tA)\; dt $$ and using integration by parts (for $\text{Re}(s)$ large enough that the boundary term at $\infty$ vanishes) this is $$ -I + s \int_0^\infty \exp(-st) \exp(tA)\; dt = s G(s) - I. $$ Thus $(sI - A) G(s) = I$, or $$G(s) = (sI - A)^{-1}.$$ The entries of $(sI - A)^{-1}$ are rational functions of $s$ with poles at the eigenvalues $\lambda_j$ of $A$ (obtainable from Cramer's rule), and they go to $0$ as $s \to \infty$. Expand this in partial fractions: $$G(s) = \sum_j \sum_{k=1}^{n_j} \dfrac{R_{jk}}{(s - \lambda_j)^{k}}$$ for some matrices $R_{jk}$ and positive integers $n_j$. Now take the inverse Laplace transform, using $\mathcal{L}^{-1}\left\{(s-\lambda_j)^{-k}\right\} = \dfrac{t^{k-1}}{(k-1)!}\exp(\lambda_j t)$, which recovers the form above.
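If you want to see this symbolically, here is a sketch using SymPy (assuming its `inverse_laplace_transform` can handle each rational entry of the resolvent), applied to the $2\times 2$ matrix from the question:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
A = sp.Matrix([[0, -1],
               [1, -1]])

# The Laplace transform of exp(tA) is the resolvent G(s) = (sI - A)^{-1}.
G = (s * sp.eye(2) - A).inv()

# Invert the transform entrywise and compare with the matrix exponential.
E = G.applyfunc(lambda g: sp.inverse_laplace_transform(g, s, t))
E = E.subs(sp.Heaviside(t), 1)         # t > 0, so drop the step factor
print(sp.simplify(E - (A * t).exp()))  # the zero matrix
```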