Motivation/intuition behind using linear algebra in these combinatorics problems

What is the motivation behind using linear algebra in these three problems?

  • A pair $(m,n)$ is called nice if there is a directed graph on $n$ vertices (self-loops are allowed, but multiple edges are not) such that every pair of vertices is connected by exactly $m$ paths of length $2$. Prove that $(m,n)$ is nice iff $m \leq n$ and $\sqrt{mn} \in \mathbb{Z}$.

  • Let $S$ be a finite subset of $[0,1]$ containing $0$ and $1$, such that every distance that occurs between pairs of elements of $S$ occurs at least twice, except for the distance $1$. Prove that $S$ contains only rational numbers.

  • Prove that $n$ distinct points, not all of them lying on a line, determine at least $n$ distinct lines.

By "motivation", I mean suppose you are good in linear algebra, but you don't know how to use linear algebra in combinatorics. What structure in the problem would trigger you that linear algebra is a good tool in using that ?

I understand the solutions using linear algebra to all three of the problems above, but I have a hard time figuring out how one could have thought of them in the first place. I mean, linear transformations between vector spaces and directed graphs are two completely different things, so how can one see the connection between them?
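
For concreteness, in the first problem the correspondence that the solution uses (as I understand it) is the adjacency matrix: encode the directed graph as the matrix $A \in \{0,1\}^{n \times n}$ with $A_{ij} = 1$ exactly when there is an edge from $i$ to $j$. Then
$$(A^2)_{ij} \;=\; \sum_{k=1}^{n} A_{ik} A_{kj}$$
counts the directed paths of length $2$ from $i$ to $j$, so the hypothesis turns into the single matrix equation $A^2 = mJ$, where $J$ is the all-ones matrix, and eigenvalue and trace arguments become available. But I would never have thought of making this translation on my own.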


As an example of what I mean, consider this problem:

Let $n$ be an even integer. How many subsets of the set $\{1,2,\dots,n\}$ can you pick if they all have to have odd size but the intersection of any two of them has to have even size?

For this problem, this explanation by Fields Medallist Timothy Gowers gives a really good motivation for how one can think of using linear algebra (this is the kind of answer I am looking for):

If you continue experimenting in this way, you will find the same thing every time: if you have chosen fewer than $n$ sets then you can choose more, but once you get to $n$ sets then you get stuck. But there seems to be a great deal of freedom about how you choose the sets. This contrasts with many problems in extremal combinatorics, where the best possible examples are often unique, or unique up to some obvious symmetry. (A good example of the latter: how many subsets of $\{1,2,\dots,n\}$ can you choose if none of your subsets is contained in any other? The unique best possible collection of sets is all sets of size $n/2$.)

Are there any other circumstances where you have a collection of objects, and a condition on pairs $(x,y)$ of those objects, such that (i) whenever you have chosen objects $x_1,x_2,\dots,x_m$ with $m<n$ there are many different ways of choosing $y$ such that the condition holds for each pair $(x_i,y)$, and (ii) it is impossible to choose more than $n$ of them if any two have to satisfy the condition? Yes there are: a common one is when the objects are vectors in $\mathbb{R}^n$ and the condition on a pair $(x,y)$ is that $x$ and $y$ should be orthogonal. Since orthogonality implies independence, we cannot choose more than $n$ non-zero vectors that are all orthogonal to each other.

Can we relate these two situations?
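
Indeed, a quick brute-force check confirms this "get stuck at exactly $n$" behaviour for small even $n$. Here is a rough sketch (the function name and the backtracking structure are my own, purely for illustration):

```python
from itertools import combinations

def max_family_size(n):
    """Largest family of odd-size subsets of {1,...,n} whose pairwise
    intersections all have even size, found by brute-force backtracking."""
    odd_sets = [frozenset(c)
                for r in range(1, n + 1, 2)          # odd sizes 1, 3, 5, ...
                for c in combinations(range(1, n + 1), r)]
    best = 0

    def extend(family, start):
        nonlocal best
        best = max(best, len(family))
        for i in range(start, len(odd_sets)):
            s = odd_sets[i]
            # keep s only if it meets every chosen set in an even number of points
            if all(len(s & t) % 2 == 0 for t in family):
                extend(family + [s], i + 1)

    extend([], 0)
    return best

for n in (2, 4, 6):
    print(n, max_family_size(n))   # the maximum comes out to n each time
```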


Your question has an answer in Jukna's book Extremal Combinatorics, but it is five chapters long (Part III of the book), so here is a summary.

Dimensionality is a key property that the "linear algebra method" exploits: if the combinatorial objects of interest can be mapped to linearly independent vectors in a vector space of dimension $d$, then there can be at most $d$ of them. Essentially, linear algebra can be used in combinatorial problems where one wants to bound the number of objects satisfying some independence property analogous to linear independence. When the objects in a problem are in one-to-one correspondence with vectors over a finite field, or with polynomials of bounded degree and structured coefficients, that is a hint to consider the linear algebra method.
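
To make the "vectors over a finite field" hint concrete, here is how it plays out on the odd-size/even-intersection problem quoted above (this is just a sketch of the standard encoding). Identify each chosen set $A_i \subseteq \{1,2,\dots,n\}$ with its characteristic vector $v_i \in \mathbb{F}_2^n$. Working modulo $2$,
$$\langle v_i, v_j \rangle \;\equiv\; |A_i \cap A_j| \pmod 2,$$
so "odd size" becomes $\langle v_i, v_i \rangle = 1$ and "even pairwise intersections" becomes $\langle v_i, v_j \rangle = 0$ for $i \neq j$. If $\sum_i c_i v_i = 0$ with $c_i \in \mathbb{F}_2$, then taking the inner product with $v_j$ kills every term except $c_j \langle v_j, v_j \rangle = c_j$, so all coefficients vanish: the vectors are linearly independent, and hence there are at most $\dim \mathbb{F}_2^n = n$ of them. The combinatorial condition has been rewritten as an orthogonality condition, and the count is then capped by a dimension.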