Intuitive interpretation of the adjacency matrix as a linear operator.

Naturally we can describe a graph via a table of "yes there is an edge" or "no there is not" between each pair of vertices, so the definition of an adjacency matrix is easily understood. Thinking of these tables as matrices, however, adds structure: specifically, an interpretation as a linear operator. Why do we look at them in this light? Is it just for applications, for example, efficiently obtaining a lot of data about a graph by computing its spectrum? Or is there also an intuitive geometric (or algebraic) motivation behind the adjacency matrix?

For example, the $2$-path has adjacency matrix $$\mathcal{A}(P_2)=\left(\begin{array}{cc} 0 & 1\\1 & 0\end{array}\right)$$ which acts on a $2$-dimensional vector space by flipping the coordinates, $(x,y)\mapsto (y,x)$. Can we somehow intuitively connect this action to the $2$-path? What about other simple graphs?
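The coordinate flip is easy to check numerically; a minimal sketch (the vector `x` is an arbitrary example, not from the question):

```python
import numpy as np

# Adjacency matrix of the 2-path P_2 (a single edge between two vertices).
A = np.array([[0, 1],
              [1, 0]])

# Acting on a vector swaps its coordinates: (x, y) -> (y, x).
x = np.array([3.0, 5.0])
print(A @ x)  # [5. 3.]
```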


If $G = (V, E)$ is a graph, then the adjacency matrix is an operator which acts on the space of functions $V \to \mathbb{R}$ (say) via

$$A(f)(v) = \sum_{v \to w} f(w).$$

Constructing the adjacency matrix is therefore a form of linearization. Note that this description of the adjacency matrix makes the connection between powers of the adjacency matrix and walks on the graph immediate.

Said another way, there is a category called the category of spans of sets, and a graph may be regarded as an endomorphism in this category. The category of spans of sets is closely related to the category of relations, except that we keep track not only of whether two things are related but also of a "set of ways" in which they are related. A suitable subcategory of this category admits a linearization functor to the category of vector spaces.