Simple theorems that are instances of deep mathematics

This question asks how useful computational tricks are to mathematics research, and several of the responses amounted to "well, computational tricks are often super cool theorems in disguise." So: which "computational tricks," "easy theorems," or "fun patterns" turn out to be important theorems?

The ideal answer to this question would be a topic that can be understood at two levels separated by a great gulf in sophistication, even though the more elementary level need not be "trivial."

For example, the unique prime factorization theorem is often proven from the division algorithm, via Bézout's lemma and the fact that $p\mid ab\implies p\mid a$ or $p\mid b$. A virtually identical proof establishes that every Euclidean domain is a unique factorization domain, and the problem as a whole, once properly abstracted, gives rise to the notion of ideals and a significant amount of ring theory.
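The whole chain (division algorithm $\to$ Bézout $\to$ Euclid's lemma) is concrete enough to compute. Here is a minimal Python sketch, not tied to any particular source, that produces Bézout coefficients via the extended Euclidean algorithm and uses them to witness why $p\mid ab$ and $p\nmid a$ force $p\mid b$:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y = g,
    computed by the division algorithm (repeated Euclidean division)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    # gcd(a, b) = gcd(b, a % b); back-substitute the Bezout coefficients.
    return g, y, x - (a // b) * y

def euclids_lemma_witness(p, a, b):
    """Given a prime p dividing a*b but not a, exhibit why p must divide b:
    from 1 = x*p + y*a (Bezout), b = x*p*b + y*(a*b), and p divides both terms."""
    assert (a * b) % p == 0 and a % p != 0
    g, x, y = extended_gcd(p, a)
    assert g == 1                            # p prime and p does not divide a
    assert (x * p * b + y * a * b) % p == 0  # b = (x*p + y*a)*b, so p | b
    return b % p == 0

print(euclids_lemma_witness(7, 10, 21))  # True: 7 | 210 and 7 does not divide 10, so 7 | 21
```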

For another example, it's well known that finite-dimensional vector spaces are uniquely determined by their base field and their dimension. However, a far more general theorem in model theory says, roughly: given a class of objects carrying a dimension-like parameter and situated in the right manner, every object of finite "dimension" is uniquely determined by its minimal example together with that "dimension." I don't quite remember the precise statement of this theorem, so if someone wants to explain in detail how vector spaces are a particular example of $\kappa$-categorical theories, that would be great.
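(One standard instance, if I have the example right: for a fixed finite field $\mathbb F_q$, the theory of infinite $\mathbb F_q$-vector spaces is $\kappa$-categorical for every infinite cardinal $\kappa$. An infinite vector space over a finite field necessarily has infinite dimension, in which case $$\left|V\right|=\dim_{\mathbb F_q}V,$$ so any two models of the same infinite cardinality $\kappa$ both have dimension $\kappa$, and a vector space is determined up to isomorphism by its base field and its dimension.)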

From the comments: in a certain sense I'm interested in the inverse of the question asked in this Math Overflow post. Instead of deep mathematics that produces horribly complicated proofs of simple ideas, I want simple ideas that contain within them, or generalize to, mathematics of startling depth.


In school they teach us that

$$\int\frac 1x\;\mathrm dx=\log\left|x\right|+C$$

But as Tom Leinster points out, this is an incomplete solution. The function $x\mapsto 1/x$ has more antiderivatives than just the ones of the above form. This is because the constant $C$ could be different on the positive and negative portions of the axis. So really we should write:

$$\int\frac 1x\;\mathrm dx=\log\left|x\right|+C\cdot1_{x>0}+D\cdot1_{x<0}$$

where $1_{x>0}$ and $1_{x<0}$ are the indicator functions for the positive and negative reals.

This means that the space of antiderivatives of the function $x\mapsto 1/x$ is two-dimensional. What we have really done is compute the zeroth de Rham cohomology of the manifold $\mathbb R-\{0\}$ (the domain on which $x\mapsto 1/x$ is defined). The fact that $\mathrm{H}^0_{\mathrm{dR}}\!\!\left(\mathbb R-\{0\}\right)=\mathbb R^2$ reflects the fact that $\mathbb R-\{0\}$ has two connected components.
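A quick numeric sanity check of the two-parameter family (the values $C=2$ and $D=-5$ below are arbitrary illustrative choices): the derivative matches $1/x$ on both components even though the two constants differ.

```python
import math

def F(x, C=2.0, D=-5.0):
    """One member of the two-parameter family of antiderivatives of 1/x:
    log|x| plus a constant that may differ on each component of R - {0}."""
    return math.log(abs(x)) + (C if x > 0 else D)

# Central finite difference: F'(x) approximates 1/x on BOTH components,
# because the two constants live on disjoint open sets.
h = 1e-6
for x in (-3.0, -0.5, 0.5, 3.0):
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    print(f"x = {x:5.1f}   F'(x) ~ {deriv:8.4f}   1/x = {1/x:8.4f}")
```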


I'm not sure this answer really fits the question, but this nice question prompted me to write down some thoughts I've been mulling over for a while.

I think the simple distributive law is an instance of deep mathematics that comes up early in school.

I hang out in K-3 classrooms these days, and I'm struck by how often understanding a kid's problem turns out to hinge on showing how the distributive law applies. For example, to explain $20+30=50$ (sometimes necessary), you start with "2 apples + 3 apples = 5 apples" and then $$ 20 + 30 = 2 \text{ tens} + 3 \text{ tens} = (2+3)\text{ tens} = 5 \text{ tens} = 50. $$ So the distributive law is behind positional notation, and behind the idea that you "can't add apples to oranges" (unless you generalize to "fruits"). You even get to discuss a little etymology: "fifty" was literally once "five tens".

Euclid relies on the distributive law when he computes products as areas, as in Book II, Proposition 5 (algebraically, essentially the identity $(s+d)(s-d)+d^2=s^2$), illustrated with the figure below.

[Figure: Euclid's diagram for Elements, Book II, Proposition 5.]

The distributive law is behind lots of grade-school algebra exercises in multiplying and factoring. If it were made more explicit, I think kids would understand FOIL instead of just memorizing the rule.
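For instance, FOIL is nothing more than the distributive law applied twice, which a computer algebra system makes visible. A small sketch using sympy (the symbol names are arbitrary):

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')

# Distribute (a + b) over (c + d): the "F, O, I, L" terms are just this, twice.
step1 = a * (c + d) + b * (c + d)   # first application of distributivity
step2 = sp.expand(step1)            # second application: a*c + a*d + b*c + b*d

# Same result as expanding the original product directly.
assert sp.expand((a + b) * (c + d)) == step2
print(step2)  # a*c + a*d + b*c + b*d
```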

Later on, you wish they'd stop assuming that everything distributes; that assumption is behind the familiar algebra errors with square roots (and squares) and with logarithms (and powers).
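A few one-line numeric counterexamples make the point (the values $9$ and $16$ are chosen only to give clean square roots):

```python
import math

# "Everything distributes" fails outside genuinely linear operations.
a, b = 9.0, 16.0
print(math.sqrt(a + b), math.sqrt(a) + math.sqrt(b))  # 5.0 vs 7.0
print(math.log(a + b), math.log(a) + math.log(b))     # log(25) vs log(144)
print(math.log(a * b), math.log(a) + math.log(b))     # these DO agree: log turns * into +
```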

All of this comes before you study linear transformations, abstract algebra, rings, and the ring-like structures in which you can explore what happens when distributivity fails.