Motivation for adjoint operators in finite-dimensional inner product spaces

Given a finite-dimensional inner product space $(V,\langle\;,\rangle)$ and an endomorphism $A\in\mathrm{End}(V)$, we can define its adjoint $A^*$ as the unique endomorphism satisfying $\langle Ax, y\rangle=\langle x, A^*y\rangle$ for all $x,y\in V$.
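For concreteness: in $\mathbb{C}^n$ with the standard inner product, $A^*$ is the conjugate transpose of $A$. Here is a quick numerical sanity check of the defining identity (a sketch, assuming $V=\mathbb{C}^3$ and NumPy):

```python
import numpy as np

# Assumption: V = C^3 with the standard inner product <x, y> = sum(conj(x_i) * y_i),
# so the adjoint A* is the conjugate transpose of A.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A_star = A.conj().T

x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)

inner = lambda u, v: np.vdot(u, v)   # conjugate-linear in the first argument
print(np.isclose(inner(A @ x, y), inner(x, A_star @ y)))   # True
```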

While all of this lets us prove results about unitary, normal, and Hermitian matrices, I'd like to know whether there is some other motivation behind its introduction. (Is there a geometric interpretation? Any other algebraic remark?)


Here is an algebraic approach to adjoint operators. Let us strip away the existence of an inner product and instead take two vector spaces $V$ and $W$. Furthermore, let $V^*$ and $W^*$ be the linear duals of $V$ and $W$, that is, the collection of linear maps $V\to k$ and $W\to k$, where $k$ is the base field. If you're working over $\mathbb R$ or $\mathbb C$, or some other topological field, you might want to work with continuous linear maps between topological vector spaces.

Given a linear operator $A: V\to W$, we can define a dual map $A^*: W^* \to V^*$ by $(A^*(\phi))(v)=\phi(A(v))$. It is straightforward to verify that this gives a well-defined linear map between the vector spaces. This dual map is the adjoint of $A$. For most sensible choices of dual topologies, this map should also be continuous.
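In coordinates this is a familiar object: if functionals are written out by their coefficients in the dual bases, the dual map is precomposition with $A$, i.e. the transpose matrix. A small sketch, assuming $V=\mathbb{R}^3$ and $W=\mathbb{R}^2$ with their standard bases:

```python
import numpy as np

# Sketch assuming V = R^3, W = R^2 with their standard (and dual) bases.
# A functional is stored as its coefficient vector: phi(v) = phi_vec . v.
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))          # a linear map A : V -> W
psi = rng.standard_normal(2)             # a functional psi in W*
v = rng.standard_normal(3)

A_dual = A.T                             # the dual map W* -> V* is the transpose
print(np.isclose((A_dual @ psi) @ v,     # (A* psi)(v)
                 psi @ (A @ v)))         # psi(A v)  -- True
```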

The question is, how does this relate to what you are doing with inner products? Giving an inner product on $V$ is the same as giving an isomorphism between $V$ and $V^*$ as follows:

Given an inner product, $\langle x, y \rangle$, we can define an isomorphism $V\to V^*$ via $x\mapsto \langle x, - \rangle$. This will be an isomorphism by nondegeneracy. Similarly, given an isomorphism $\phi:V\to V^*$, we can define an inner product by $\langle x,y\rangle =\phi(x)(y)$. The "inner products" coming from isomorphisms will not in general be symmetric, and so they are better called bilinear forms, but we don't need to concern ourselves with this difference.
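Concretely, if the form is written as $\langle x,y\rangle = x^\top G y$ for a matrix $G$ (the Gram matrix of a basis), the isomorphism $x\mapsto \langle x,-\rangle$ is just $x\mapsto x^\top G$, and nondegeneracy is invertibility of $G$. A quick sketch, assuming $V=\mathbb{R}^3$ and a positive-definite $G$:

```python
import numpy as np

# Assumption for this sketch: V = R^3 with <x, y> = x^T G y for a
# positive-definite matrix G (the Gram matrix of the chosen basis).
rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3))
G = M @ M.T + 3 * np.eye(3)            # positive definite, hence nondegenerate

def varphi(x):
    """The functional <x, -> in V*, stored as its coefficient vector x^T G."""
    return x @ G

x, y = rng.standard_normal(3), rng.standard_normal(3)
print(np.isclose(varphi(x) @ y, x @ G @ y))   # varphi(x)(y) = <x, y>

# Nondegeneracy: G is invertible, so x -> x^T G is an isomorphism V -> V*.
print(np.linalg.matrix_rank(G) == 3)          # True
```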

So let $\langle x,y \rangle$ be an inner product on $V$, and let $\varphi:V\to V^*$ be the corresponding isomorphism defined above. Then given $A:V\to V$, we have a dual map $A^*:V^* \to V^*$. However, we can use our isomorphism to pull this back to a map $V\to V$ (also commonly denoted $A^*$, but which we will denote by $A^{\dagger}$ to prevent confusion) by $A^{\dagger}(v)=\varphi^{-1}(A^*(\varphi(v)))$. This is the adjoint that you are using.

Let us see why. In what follows, $x\in V, f\in V^*$. Note that $\langle x, \varphi^{-1} f \rangle = f(x)$ and so we have

$$ \langle Ax, \varphi^{-1}f \rangle = f(Ax)=(A^*f)(x)=\langle x, \varphi^{-1}(A^* f) \rangle $$

Now, let $y=\varphi^{-1}f$ so that $\varphi(y)=f$. Then we can rewrite the first and last terms of the above equality as

$$\langle Ax, y \rangle = \langle x, \varphi^{-1}(A^* \varphi(y)) \rangle = \langle x, A^{\dagger}y \rangle.$$
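In coordinates, with $\langle x,y\rangle = x^\top G y$ as above, the chain $\varphi^{-1}\circ A^*\circ\varphi$ works out to the matrix $G^{-1}A^\top G$ (which reduces to the familiar $A^\top$ when $G=I$). Here is a numerical check of the derivation, under the same assumptions as the earlier sketch:

```python
import numpy as np

# Coordinate check of the derivation: on V = R^3 with <x, y> = x^T G y,
# A^dagger = varphi^{-1} o A^* o varphi is given by the matrix G^{-1} A^T G,
# and it satisfies <Ax, y> = <x, A^dagger y>.
rng = np.random.default_rng(3)
M = rng.standard_normal((3, 3))
G = M @ M.T + 3 * np.eye(3)                       # Gram matrix of the inner product
A = rng.standard_normal((3, 3))

inner = lambda u, v: u @ G @ v                    # <u, v> = u^T G v
varphi = lambda v: v @ G                          # v |-> <v, ->, stored as a coefficient vector
varphi_inv = lambda f: np.linalg.solve(G, f)      # solves v^T G = f (G is symmetric)
dual_A = lambda f: f @ A                          # A^* on functionals: f |-> f o A

A_dagger = np.linalg.solve(G, A.T @ G)            # the matrix G^{-1} A^T G

x, y = rng.standard_normal(3), rng.standard_normal(3)
print(np.allclose(varphi_inv(dual_A(varphi(y))), A_dagger @ y))   # both definitions agree
print(np.isclose(inner(A @ x, y), inner(x, A_dagger @ y)))        # <Ax, y> = <x, A^dagger y>
```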


You should think of $A^{\ast}$ as the "backwards" version of $A$ (not to be confused with its inverse).

Let's work in a related setting, namely that of sets and relations. Recall that a relation $R$ from a set $X$ to a set $Y$ is a subset of $X \times Y$. I want to think of a relation as a kind of function from $X$ to $Y$, but which outputs subsets rather than elements of $Y$; we say that $Rx$ is the set of all $y$ such that $(x, y)$ lies in my subset.

More generally, if $E$ is a subset of $X$, then $RE$ is the set of all $y$ such that $(x, y)$ lies in my subset for some $x \in E$.
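Here is a toy example of these definitions in Python (the particular sets and relation are my own):

```python
# A toy relation R from X = {1, 2, 3} to Y = {'a', 'b'}.
X = {1, 2, 3}
Y = {'a', 'b'}
R = {(1, 'a'), (2, 'a'), (2, 'b')}

def apply_relation(R, E):
    """RE = all y related by R to some x in E."""
    return {y for (x, y) in R if x in E}

print(apply_relation(R, {1}))        # {'a'}        -- this is R1 in the notation above
print(apply_relation(R, {1, 3}))     # {'a'}        -- 3 is related to nothing
print(apply_relation(R, {2, 3}))     # {'a', 'b'}
```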

Sets admit an "inner product" taking values in $\{ 0, 1 \}$ defined as follows: if $S, T$ are two subsets of a set $X$, then we define $\langle S, T \rangle = 1$ if $S \cap T$ is non-empty and $\langle S, T \rangle = 0$ otherwise.

Now, I claim that for any relation $R : X \to Y$ there is a unique relation $R^{\ast} : Y \to X$ such that $$\langle RE, F \rangle = \langle E, R^{\ast} F \rangle$$

for all subsets $E$ of $X$ and $F$ of $Y$. It turns out that $R^{\ast}$ is just the subset $\{ (y, x) : (x, y) \in R \}$; thinking of a relation as a matrix with entries in $\{ 0, 1 \}$, it is the transpose matrix.
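A brute-force check of this claim on the same toy relation, with $R^{\ast}$ taken to be the reversed set of pairs:

```python
from itertools import chain, combinations

# Verify <RE, F> = <E, R*F> over all subsets E of X and F of Y,
# for the toy relation used above.
X, Y = {1, 2, 3}, {'a', 'b'}
R = {(1, 'a'), (2, 'a'), (2, 'b')}
R_star = {(y, x) for (x, y) in R}             # reverse every pair ("transpose")

def apply_relation(R, E):
    return {y for (x, y) in R if x in E}

def inner(S, T):
    return 1 if S & T else 0                  # 1 iff the subsets intersect

def subsets(S):
    S = list(S)
    return chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))

print(all(inner(apply_relation(R, set(E)), set(F)) ==
          inner(set(E), apply_relation(R_star, set(F)))
          for E in subsets(X) for F in subsets(Y)))   # True
```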

Thinking of a relation as a collection of arrows from elements of $X$ to elements of $Y$, taking adjoints of relations corresponds to reversing the direction of all of the arrows. This is, roughly speaking, what happens when you take the adjoint of a bounded linear operator between Hilbert spaces.