Is there a list of all typos in Hoffman and Kunze, Linear Algebra?

Where can I find a list of typos for Linear Algebra, 2nd Edition, by Hoffman and Kunze? I searched on Google, but to no avail.


This list does not repeat the typos mentioned in the other answers.

Chapter 1

  1. Page 6, last paragraph.

An elementary row operation is thus a special type of function (rule) $e$ which associated with each $m \times n$ matrix . . .

It should be "associates".

  2. Page 10, proof of Theorem 4, second paragraph.

say it occurs in column $k_r \neq k$.

It should be $k' \neq k$.

  3. Page 18, last paragraph.

If $B$ is an $n \times p$ matrix, the columns of $B$ are the $1 \times n$ matrices . . .

It should be $n \times 1$.

  4. Page 24, statement of second corollary.

Let $\text{A} = \text{A}_1 \text{A}_2 \cdots A_\text{k}$, where $\text{A}_1 \dots,A_\text{k}$ are . . .

The formatting of $A_\text{k}$ is incorrect in both instances. Also, there should be a comma after $\text{A}_1$ in the second instance. So, it should be "Let $\text{A} = \text{A}_1 \text{A}_2 \cdots \text{A}_\text{k}$, where $\text{A}_1, \dots,\text{A}_\text{k}$ are . . .".

Chapter 2

  1. Page 52, below equation (2–16).

Thus from (2–16) and Theorem 7 of Chapter 1 . . .

It should be Theorem 13.

  2. Page 57, second last displayed equation.

$$ \beta = (0,\dots,0,\ \ b_{k_s},\dots,b_n), \quad b_{k_s} \neq 0$$

The formatting on the right-hand side is not correct. There is too much space before $b_{k_s}$. It should be $$\beta = (0,\dots,0,b_{k_s},\dots,b_n), \quad b_{k_s} \neq 0$$ instead.

  3. Page 57, last displayed equation.

$$ \beta = (0,\dots,0,\ \ b_t,\dots,b_n), \quad b_t \neq 0.$$

The formatting on the right-hand side is not correct. There is too much space before $b_t$. It should instead be $$\beta = (0,\dots,0,b_t,\dots,b_n), \quad b_t \neq 0.$$

  4. Page 62, second last paragraph.

So $\beta = (b_1,b_2,b_3,b_4)$ is in $W$ if and only if $b_3 - 2b_1$. . . .

It should be $b_3 = 2b_1$.

Chapter 3

  1. Page 76, first paragraph.

let $A_{ij},\dots,A_{mj}$ be the coordinates of the vector . . .

It should be $A_{1j},\dots,A_{mj}$.

  2. Page 80, Example 11.

For example, if $U$ is the operation 'remove the constant term and divide by $x$': $$ U(c_0 + c_1 x + \dots + c_n x^n) = c_1 + c_2 x + \dots + c_n x^{n-1}$$ then . . .

There is a subtlety in the quoted phrase: what if $x = 0$? Rather than having to treat this case separately, the sentence can be worded more simply as, "For example, if $U$ is the operator defined by $$U(c_0 + c_1 x + \dots + c_n x^n) = c_1 + c_2 x + \dots + c_n x^{n-1}$$ then . . .".

  3. Page 81, last line.

(iv) If $\{ \alpha_1,\dots,\alpha_{\text{n}}\}$ is basis for $\text{V}$, then $\{\text{T}\alpha_1,\dots,\text{T}\alpha_{\text{n}}\}$ is a basis for $\text{W}$.

It should read "(iv) If $\{ \alpha_1,\dots,\alpha_{\text{n}}\}$ is a basis for $\text{V}$, then . . .".

  4. Page 90, second last paragraph.

We should also point out that we proved a special case of Theorem 13 in Example 12.

It should be "in Example 10."

  5. Page 91, first paragraph.

For, the identity operator $I$ is represented by the identity matrix in any order basis, and thus . . .

It should be "ordered".

  6. Page 92, statement of Theorem 14.

Let $\text{V}$ be a finite-dimensional vector space over the field $\text{F}$ and let $$\mathscr{B} = \{ \alpha_1,\dots,\alpha \text{i} \} \quad \textit{and} \quad \mathscr{B}'=\{ \alpha'_1,\dots,\alpha'_\text{n}\}$$ be ordered bases . . .

It should be $\mathscr{B} = \{ \alpha_1,\dots,\alpha_\text{n}\}$.

  7. Page 100, first paragraph.

If $f$ is in $V^*$, and we let $f(\alpha_i) = \alpha_i$, then when . . .

It should be $f(\alpha_i) = a_i$.

  8. Page 101, paragraph following the definition.

If $S = V$, then $S^0$ is the zero subspace of $V^*$. (This is easy to see when $V$ is finite dimensional.)

It is equally easy to see this when $V$ is infinite-dimensional, so the statement in the brackets is redundant. Perhaps the authors meant to say that $\{ v \in V : f(v) = 0\ \forall\ f \in V^* \}$ is the zero subspace of $V$. This question asks for details on this point.
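
In the finite-dimensional case the intended statement has a quick proof (a sketch): if $v \neq 0$, extend $\{ v \}$ to an ordered basis of $V$ and let $f$ be the first functional of the corresponding dual basis, so that $f(v) = 1 \neq 0$. Hence $$\{ v \in V : f(v) = 0\ \forall\ f \in V^* \} = \{ 0 \}.$$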

  9. Page 102, proof of the second corollary.

By the previous corollaries (or the proof of Theorem 16) there is a linear functional $f$ such that $f(\beta) = 0$ for all $\beta$ in $W$, but $f(\alpha) \neq 0$. . . .

It should be "corollary", since there is only one previous corollary. Also, $W$ should be replaced by $W_1$.

  10. Page 112, statement of Theorem 22.

(i) rank $(T^t) = $ rank $(T)$

There should be a semi-colon at the end of the line.

Chapter 4

  1. Page 118, last displayed equation, third line.

$$=\sum_{i=0}^n \sum_{j=0}^i f_i g_{i-j} h_{n-i} $$

It should be $f_j$. It is also not immediately clear how to go from this line to the next line.
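
One way to bridge the gap (a sketch; the intermediate step may be written differently in the book) is to interchange the order of summation and then substitute $m = i - j$ in the inner sum: $$\sum_{i=0}^n \sum_{j=0}^i f_j g_{i-j} h_{n-i} = \sum_{j=0}^n f_j \sum_{i=j}^n g_{i-j} h_{n-i} = \sum_{j=0}^n f_j \sum_{m=0}^{n-j} g_m h_{(n-j)-m}.$$ The inner sum on the right is the $(n-j)$th coefficient of $gh$, so the whole expression is the $n$th coefficient of $f(gh)$.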

  2. Page 126, proof of Theorem 3.

By definition, the mapping is onto, and if $f$, $g$ belong to $F[x]$ it is evident that $$(cf+dg)^\sim = df^\sim + dg^\sim$$ for all scalars $c$ and $d$. . . .

It should be $(cf+dg)^\sim = cf^\sim + dg^\sim$.

  3. Page 126, proof of Theorem 3.

Suppose then that $f$ is a polynomial of degree $n$ such that $f' = 0$. . . .

It should be $f^\sim = 0$.

  4. Page 128, statement of Theorem 4.

(i) $f = dq + r$.

The full stop should be a semi-colon.

  5. Page 129, paragraph before statement of Theorem 5. The notation $D^0$ needs to be introduced, so the sentence "We also use the notation $D^0 f = f$" can be added at the end of the paragraph.

  6. Page 131, first displayed equation, second line.

$$ = \sum_{m = 0}^{n-r} \frac{(D^m g)}{m!}(x-c)^{r+m} $$

There should be a full stop at the end of the line.

  7. Page 135, proof of Theorem 8.

Since $(f,p) = 1$, there are polynomials . . .

It should be $\text{g.c.d.}{(f,p)} = 1$.

  8. Page 137, first paragraph.

This decomposition is also clearly unique, and is called the primary decomposition of $f$. . . .

For the sake of clarity, the following sentence can be added after the quoted line: "Henceforth, whenever we refer to the prime factorization of a non-scalar monic polynomial we mean the primary decomposition of the polynomial."

  9. Page 137, proof of Theorem 11. The chain rule for the formal derivative of a product of polynomials is used, but this needs proof.
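
For what it is worth, here is a sketch (not from the book) of the underlying product rule $D(fg) = (Df)g + f(Dg)$, from which $D(p^n) = n p^{n-1} Dp$ follows by induction. Both sides are bilinear in $f$ and $g$, so it suffices to check monomials $f = x^m$, $g = x^n$ with $m, n \geq 1$ (the cases $m = 0$ or $n = 0$ are immediate): $$D(x^m x^n) = (m+n) x^{m+n-1} = (m x^{m-1}) x^n + x^m (n x^{n-1}) = (Dx^m) x^n + x^m (Dx^n).$$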

  10. Page 139, Exercise 7.

Use Exercise 7 to prove the following. . . .

It should be "Use Exercise 6 to prove the following. . . ."

Chapter 5

  1. Page 142, second last displayed equation.

$$\begin{align} D(c\alpha_i + \alpha'_{iz}) &= [cA(i,k_i) + A'(i,k_i)]b \\ &= cD(\alpha_i) + D(\alpha'_i) \end{align}$$

The left-hand side should be $D(c\alpha_i + \alpha'_i)$.

  2. Page 166, first displayed equation.

$$\begin{align*}L(\alpha_1,\dots,c \alpha_i + \beta_i,\dots,\alpha_r) &= cL(\alpha_1,\dots,\alpha_i,\dots,\alpha_r {}+{} \\ &\qquad \qquad \qquad \qquad L(\alpha_1,\dots,\beta_i,\dots,\alpha_r)\end{align*}$$

The first term on the right has a missing closing bracket, so it should be $cL(\alpha_1,\dots,\alpha_i,\dots,\alpha_r)$.

  3. Page 167, second displayed equation, third line.

$${}={} \sum_{j=1}^n A_{1j} L\left( \epsilon_j, \sum_{j=1}^n A_{2k} \epsilon_k, \dots, \alpha_r \right) $$

The second summation should run over the index $k$ instead of $j$.

  4. Page 170, proof of the lemma. To show that $\pi_r L \in \Lambda^r(V)$, the authors show that $(\pi_r L)_\tau = (\operatorname{sgn}{\tau})(\pi_rL)$ for every permutation $\tau$ of $\{1,\dots,r\}$. This implies that $\pi_r L$ is alternating only when $K$ is a ring in which $2x = 0$ implies $x = 0$. A proof over arbitrary commutative rings with identity is still needed.
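
To spell out the difficulty (a sketch): if $\alpha_i = \alpha_j$ with $i \neq j$ and $\tau$ is the transposition of $i$ and $j$, then $(\alpha_{\tau 1},\dots,\alpha_{\tau r}) = (\alpha_1,\dots,\alpha_r)$, so $$(\pi_r L)(\alpha_1,\dots,\alpha_r) = (\pi_r L)_\tau(\alpha_1,\dots,\alpha_r) = -(\pi_r L)(\alpha_1,\dots,\alpha_r).$$ This only yields $2(\pi_r L)(\alpha_1,\dots,\alpha_r) = 0$, which does not force $(\pi_r L)(\alpha_1,\dots,\alpha_r) = 0$ when $2$ is a zero divisor in $K$.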

  5. Page 170, first paragraph after proof of the lemma.

In (5–33) we showed that the determinant . . .

It should be (5–34).

  6. Page 171, equation (5–39).

$$\begin{align} D_J &= \sum_\sigma (\operatorname{sgn} \sigma)\ f_{j_{\sigma 1}} \otimes \dots \otimes f_{j_{\sigma r}} \tag{5–39}\\ &= \pi_r (f_{j_1} \otimes \dots \otimes f_{j_r}) \end{align}$$

The equation tag should be centered instead of being aligned at the first line.

  7. Page 173, equation (5–42).

$$ D_J(\alpha_1,\dotsc,\alpha_r) = \sum_\sigma (\operatorname{sgn} \sigma) A(1,j_{\sigma 1})\dotsm A(n,j_{\sigma n}) \tag{5–42}$$

There are only $r$ terms in the product. Hence the equation should instead be: $D_J(\alpha_1,\dotsc,\alpha_r) = \sum_\sigma (\operatorname{sgn} \sigma) A(1,j_{\sigma 1})\dotsm A(r,j_{\sigma r})$.

  8. Page 174, below the second displayed equation.

The proof of the lemma following equation (5–36) shows that for any $r$-linear form $L$ and any permutation $\sigma$ of $\{1,\dots,r\}$ $$ \pi_r(L_\sigma) = \operatorname{sgn} \sigma\ \pi_r(L) $$

The proof of the lemma actually shows $(\pi_r L)_\sigma = \operatorname{sgn} \sigma\ \pi_r(L)$; the stated identity $\pi_r(L_\sigma) = \operatorname{sgn} \sigma\ \pi_r(L)$ still needs proof. Also, there should be a full stop at the end of the displayed equation.

  9. Page 174, below the third displayed equation.

Hence, $D_{ij} \cdot f_k = 2\pi_3(f_i \otimes f_j \otimes f_k)$.

This is not immediate from just the preceding equations. The authors implicitly assume the identity $(f_{j_1} \otimes \dots \otimes f_{j_r})_\sigma = f_{j_{\sigma^{-1} 1}}\! \otimes \dots \otimes f_{j_{\sigma^{-1} r}}$. This identity needs proof.
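
A possible verification (a sketch): evaluating both sides at $(\alpha_1,\dots,\alpha_r)$ and reindexing the product via $l = \sigma m$ gives $$(f_{j_1} \otimes \dots \otimes f_{j_r})_\sigma(\alpha_1,\dots,\alpha_r) = \prod_{m=1}^r f_{j_m}(\alpha_{\sigma m}) = \prod_{l=1}^r f_{j_{\sigma^{-1} l}}(\alpha_l) = (f_{j_{\sigma^{-1} 1}} \otimes \dots \otimes f_{j_{\sigma^{-1} r}})(\alpha_1,\dots,\alpha_r).$$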

  10. Page 174, sixth displayed equation.

$$(D_{ij} \cdot f_k) \cdot f_l = 6 \pi_4(f_i \otimes f_j \otimes f_k \otimes f_l)$$

The factor $6$ should be replaced by $12$.

  11. Page 174, last displayed equation.

$$ (L \otimes M)_{(\sigma,\tau)} = L_\sigma \otimes L_\tau$$

The right-hand side should be $L_\sigma \otimes M_\tau$.

  12. Page 177, below the third displayed equation.

Therefore, since $(N\sigma)\tau = N\tau \sigma$ for any $(r+s)$-linear form . . .

It should be $(N_\sigma)_\tau = N_{\tau \sigma}$.
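
For completeness, the corrected identity can be checked directly from the definition $N_\sigma(\alpha_1,\dots,\alpha_{r+s}) = N(\alpha_{\sigma 1},\dots,\alpha_{\sigma (r+s)})$ (a sketch): $$(N_\sigma)_\tau(\alpha_1,\dots,\alpha_{r+s}) = N_\sigma(\alpha_{\tau 1},\dots,\alpha_{\tau (r+s)}) = N(\alpha_{\tau\sigma 1},\dots,\alpha_{\tau\sigma (r+s)}) = N_{\tau\sigma}(\alpha_1,\dots,\alpha_{r+s}).$$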

  13. Page 179, last displayed equation.

$$ (L \wedge M)(\alpha_1,\dots,\alpha_n) = \sum (\operatorname{sgn} \sigma) L(\alpha \sigma_1,\dots,\alpha_{\sigma r}) M(\alpha_{\sigma(r+1)},\dots,\alpha_{\sigma_n}) $$

The right-hand side should have $L(\alpha_{\sigma 1},\dots,\alpha_{\sigma r})$ and $M(\alpha_{\sigma (r+1)},\dots,\alpha_{\sigma n})$.

Chapter 6

  1. Page 183, first paragraph.

If the underlying space $V$ is finite-dimensional, $(T-cI)$ fails to be $1 : 1$ precisely when its determinant is different from $0$.

It should instead be "precisely when its determinant is $0$."

  2. Page 186, proof of second lemma.

one expects that $\dim W < \dim W_1 + \dots \dim W_k$ because of linear relations . . .

It should be $\dim W \leq \dim W_1 + \dots + \dim W_k$.

  3. Page 194, statement of Theorem 4 (Cayley-Hamilton).

Let $\text{T}$ be a linear operator on a finite dimensional vector space $\text{V}$. . . .

It should be "finite-dimensional".

  4. Page 195, first displayed equation.

$$T\alpha_i = \sum_{j=1}^n A_{ji} \alpha_j,\quad 1 \leq j \leq n.$$

It should be $1 \leq i \leq n$.

  5. Page 195, above the last paragraph.

since $f$ is the determinant of the matrix $xI - A$ whose entries are the polynomials $$(xI - A)_{ij} = \delta_{ij} x - A_{ji}.$$

Here $xI-A$ should be replaced by $(xI-A)^t$ in both places, and it could read "since $f$ is also the determinant of" for more clarity.

  6. Page 203, proof of Theorem 5, last paragraph.

The diagonal entries $a_{11},\dots,a_{1n}$ are the characteristic values, . . .

It should be $a_{11},\dots,a_{nn}$.

  7. Page 207, proof of Theorem 7.

this theorem has the same proof as does Theorem 5, if one replaces $T$ by $\mathscr{F}$.

It would make more sense if it read "replaces $T$ by $T \in \mathscr{F}$."

  8. Pages 207–208, proof of Theorem 8.

We could prove this theorem by adapting the lemma before Theorem 7 to the diagonalizable case, just as we adapted the lemma before Theorem 5 to the diagonalizable case in order to prove Theorem 6.

The adaptation of the lemma before Theorem 5 is not explicitly done. It is hidden in the proof of Theorem 6.

  9. Page 212, statement of Theorem 9.

and if we let $\text{W}_\text{i}$ be the range of $\text{E}_\text{i}$, then $\text{V} = \text{W}_\text{i} \oplus \dots \oplus \text{W}_\text{k}$.

It should be $\text{V} = \text{W}_1 \oplus \dots \oplus \text{W}_\text{k}$.

  10. Page 216, last paragraph.

One part of Theorem 9 says that for a diagonalizable operator . . .

It should be Theorem 11.

  11. Page 220, statement of Theorem 12.

Let $\text{p}$ be the minimal polynomial for $\text{T}$, $$\text{p} = \text{p}_1^{\text{r}_1} \cdots \text{p}_k^{r_k}$$ where the $\text{p}_\text{i}$ are distinct irreducible monic polynomials over $\text{F}$ and the $\text{r}_\text{i}$ are positive integers. Let $\text{W}_\text{i}$ be the null space of $\text{p}_\text{i}(\text{T})^{\text{r}_j}$, $\text{i} = 1,\dots,\text{k}$.

The displayed equation is improperly formatted. It should read $\text{p} = \text{p}_1^{\text{r}_1} \cdots \text{p}_\text{k}^{\text{r}_\text{k}}$. Also, in the second sentence it should be $\text{p}_\text{i}(\text{T})^{\text{r}_\text{i}}$.

  12. Page 221, below the last displayed equation.

because $p^{r_i} f_i g_i$ is divisible by the minimal polynomial $p$.

It should be $p_i^{r_i} f_i g_i$.

Chapter 7

  1. Page 233, proof of Theorem 3, last displayed equation in statement of Step 1. The formatting of "$\alpha$ in $V$" underneath the "$\max$" operator on the right-hand side is incorrect. It should be "$\alpha$ in $\text{V}$".

  2. Page 233, proof of Theorem 3, displayed equation in statement of Step 2. The formatting of "$1 \leq i < k$" underneath the $\sum$ operator on the right-hand side is incorrect. It should be "$1 \leq \text{i} < \text{k}$".

  3. Page 238, paragraph following the corollary.

If we have the operator $T$ and the direct-sum decomposition of Theorem 3, let $\mathscr{B}_i$ be the ‘cyclic ordered basis’ . . .

It should be “of Theorem 3 with $W_0 = \{ 0 \}$, . . .”.

  4. Page 239, Example 2.

If $T = cI$, then for any two linear independent vectors $\alpha_1$ and $\alpha_2$ in $V$ we have . . .

It should be "linearly".

  5. Page 240, second last displayed equation.

$$f = (x-c_1)^{d_1} \cdots (x - c_k)^{d_k}$$

It should just be $(x-c_1)^{d_1} \cdots (x - c_k)^{d_k}$ because later (on page 241) the letter $f$ is again used, this time to denote an arbitrary polynomial.

  6. Page 244, last paragraph.

where $f_i$ is a polynomial, the degree of which we may assume is less than $k_i$. Since $N\alpha = 0$, for each $i$ we have . . .

It should be “where $f_i$ is a polynomial, the degree of which we may assume is less than $k_i$ whenever $f_i \neq 0$. Since $N\alpha = 0$, for each $i$ such that $f_i \neq 0$ we have . . .”.

  7. Page 245, first paragraph.

Thus $xf_i$ is divisible by $x^{k_i}$, and since $\deg (f_i) > k_i$ this means that $$f_i = c_i x^{k_i - 1}$$ where $c_i$ is some scalar.

It should be $\deg (f_i) < k_i$. Also, the following sentence should be added at the end: "If $f_j = 0$, then we can take $c_j = 0$ so that $f_j = c_j x^{k_j - 1}$ in this case as well."
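
Explicitly, the inference is (a sketch): if $f_i \neq 0$, then $\deg(x f_i) = \deg(f_i) + 1 \leq k_i$, so divisibility of $x f_i$ by $x^{k_i}$ forces $$x f_i = c_i x^{k_i}, \quad \text{that is,} \quad f_i = c_i x^{k_i - 1},$$ for some non-zero scalar $c_i$.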

  8. Page 245, last paragraph.

Furthermore, the sizes of these matrices will decrease as one reads from left to right.

It should be “Furthermore, the sizes of these matrices will not increase as one reads from left to right.”

  9. Page 246, first paragraph.

Also, within each $A_i$, the sizes of the matrices $J_j^{(i)}$ decrease as $j$ increases.

It should be “Also, within each $A_i$, the sizes of the matrices $J_j^{(i)}$ do not increase as $j$ increases.”

  10. Page 246, third paragraph.

The uniqueness we see as follows.

This part is not clearly written. What the authors want to show is the following. Suppose that the linear operator $T$ is represented in some other ordered basis by the matrix $B$ in Jordan form, where $B$ is the direct sum of the matrices $B_1,\dots,B_s$. Suppose each $B_i$ is an $e_i \times e_i$ matrix that is a direct sum of elementary Jordan matrices with characteristic value $\lambda_i$. Suppose the matrix $B$ induces the invariant direct-sum decomposition $V = U_1 \oplus \dots \oplus U_s$. Then, $s = k$, and there is a permutation $\sigma$ of $\{ 1,\dots,k\}$ such that $\lambda_i = c_{\sigma i}$, $e_i = d_{\sigma i}$, $U_i = W_{\sigma i}$, and $B_i = A_{\sigma i}$ for each $1 \leq i \leq k$.

  11. Page 246, third paragraph.

The fact that $A$ is the direct sum of the matrices $\text{A}_i$ gives us a direct sum decomposition . . .

The formatting of $\text{A}_i$ is incorrect. It should be $A_i$.

  12. Page 246, third paragraph.

then the matrix $A_i$ is uniquely determined as the rational form for $(T_i - c_i I)$.

It should be "is uniquely determined by the rational form . . .".

  13. Page 248, Example 7.

Since $A$ is the direct sum of two $2 \times 2$ matrices, it is clear that the minimal polynomial for $A$ is $(x-2)^2$.

It should read "Since $A$ is the direct sum of two $2 \times 2$ matrices when $a \neq 0$, and of one $2 \times 2$ matrix and two $1 \times 1$ matrices when $a = 0$, it is clear that the minimal polynomial for $A$ is $(x-2)^2$ in either case."

  14. Page 249, first paragraph.

Then as we noted in Example 15, Chapter 6 the primary decomposition theorem tells us that . . .

It should be Example 14.

  15. Page 249, last displayed equation.

$$\begin{align} Ng &= (r-1)x^{r-2}h \\ \vdots\ & \qquad \ \vdots \\ N^{r-1}g &= (r-1)! h \end{align}$$

There should be a full stop at the end.

  16. Page 257, definition.

(b) on the main diagonal of $\text{N}$ there appear (in order) polynomials $\text{f}_1,\dots,\text{f}_l$ such that $\text{f}_\text{k}$ divides $\text{f}_{\text{k}+1}$, $1 \leq \text{k} \leq l - 1$.

The formatting of $l$ is incorrect in both instances. So, it should be $\text{f}_1,\dots,\text{f}_\text{l}$ and $1 \leq \text{k} \leq \text{l} - 1$.

  17. Page 259, paragraph following the proof of Theorem 9.

Two things we have seen provide clues as to how the polynomials $f_1,\dots,f_{\text{l}}$ in Theorem 9 are uniquely determined by $M$.

The formatting of $l$ is incorrect. It should be $f_1,\dots,f_l$.

  18. Page 260, third paragraph.

For the case of a type (c) operation, notice that . . .

It should be (b).

  19. Page 260, statement of Corollary.

The polynomials $\text{f}_1,\dots,\text{f}_l$ which occur on the main diagonal of $N$ are . . .

The formatting of $l$ is incorrect. It should be $\text{f}_1,\dots,\text{f}_\text{l}$.

  20. Page 265, first displayed equation, third line.

$$ = (W \cap W_1) + \dots + (W \cap W_k) \oplus V_1 \oplus \dots \oplus V_k.$$

It should be $$ = (W \cap W_1) \oplus \dots \oplus (W \cap W_k) \oplus V_1 \oplus \dots \oplus V_k.$$

  21. Page 266, proof of second lemma. The chain rule for the formal derivative of a product of polynomials is used, but this needs proof (see the sketch under the Page 137 item in Chapter 4 above).

Chapter 8

  1. Page 274, last displayed equation, first line.

$$ (\alpha | \beta) = \left( \sum_k x_n \alpha_k \bigg|\, \beta \right) $$

It should be $x_k$.

  2. Page 278, first line.

Now using (c) we find that . . .

It should be (iii).

  3. Page 282, second displayed equation, second last line.

$$ = (2,9,11) - 2(0,3,4) - -4,0,3) $$

The right-hand side should be $(2,9,11) - 2(0,3,4) - (-4,0,3)$.

  4. Page 284, first displayed equation.

$$ \alpha = \sum_k \frac{(\beta | \alpha_k)}{\| \alpha_k \|^2} \alpha_k $$

This equation should be labelled (8–11).

  5. Page 285, paragraph following the first definition.

For $S$ is non-empty, since it contains $0$; . . .

It should be $S^\perp$.

  6. Page 285, line following the first displayed equation.

thus $c\alpha + \beta$ also lies in $S$. . . .

It should be $S^\perp$.

  7. Page 289, Exercise 7, displayed equation.

$$\| (x_1,x_2 \|^2 = (x_1 - x_2)^2 + 3x_2^2. $$

The left-hand side should be $\| (x_1,x_2) \|^2$.

  8. Page 316, first line.

matrix $\text{A}$ of $\text{T}$ in the basis $\mathscr{B}$ is upper triangular. . . .

It should be "upper-triangular".

  9. Page 316, statement of Theorem 21.

Then there is an orthonormal basis for $\text{V}$ in which the matrix of $\text{T}$ is upper triangular.

It should be "upper-triangular".

Chapter 9

  1. Page 344, statement of Corollary.

Under the assumptions of the theorem, let $\text{P}_\text{j}$ be the orthogonal projection of $\text{V}$ on $\text{V}(\text{r}_\text{j})$, $(1 \leq \text{j} \leq \text{k})$. . . .

The parentheses around $1 \leq \text{j} \leq \text{k}$ should be removed.


I'm using the second edition. I think that the definition before Theorem $9$ (Chapter $1$) should be

Definition. An $m\times m$ matrix is said to be an elementary matrix if it can be obtained from the $m\times m$ identity matrix by means of a single elementary row operation.

instead of

Definition. An $\color{red}{m\times n}$ matrix is said to be an elementary matrix if it can be obtained from the $m\times m$ identity matrix by means of a single elementary row operation.

Check out this question for details.