What makes a theorem "fundamental"?

Let's examine some of these so-called "fundamental" theorems.

($1$) The Fundamental Theorem of Arithmetic.

"Every natural number greater than $1$ has a unique representation as a product of primes."

This theorem establishes the primes as the fundamental building blocks of the integers. That idea, and the ensuing obsession with the primes themselves, has sparked multiple entire fields of research. Almost every problem about integers and Diophantine equations makes use of this theorem: it makes an enormous body of results vastly simpler to prove, and in many cases makes them provable at all where a direct attack is hopeless. Most fields of mathematics would be crippled without it. And because the immense advantage of unique factorization is not available in every setting, number theorists were led to develop algebraic number theory, where the theorem finds its ultimate generalization in the context of Dedekind domains. In fact, to underscore just how good unique factorization is: if every number ring had unique factorization, old (flawed) proofs of Fermat's Last Theorem would have been valid, and we would have had the result ages ago instead of waiting some $350$ years!
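To see the uniqueness in action, here is a minimal Python sketch (trial division; purely illustrative, not part of the theorem):

```python
# A minimal sketch: trial-division factorization. Repeatedly extracting the
# smallest prime factor always yields the same multiset of primes, which is
# exactly what the Fundamental Theorem of Arithmetic guarantees.
def prime_factors(n: int) -> list[int]:
    """Return the prime factorization of n > 1 in nondecreasing order."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is itself prime
    return factors

assert prime_factors(360) == [2, 2, 2, 3, 3, 5]  # 360 = 2^3 * 3^2 * 5
```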


($2$) The Fundamental Theorem of Algebra.

"Every polynomial of degree $n$ with coefficients in the complex numbers has exactly $n$ complex roots (counting multiplicity)."

Algebra has its roots in the study of polynomials (no pun intended). You learn about other topics in a high school course named "algebra," but good old polynomials were among the first objects mathematicians in this field considered. Ring theory uses polynomials extensively; group theory was created to understand the solutions of polynomial equations; field theory would not exist without polynomials. And the fact that a polynomial over $\Bbb C$ is determined, up to a leading constant, by its roots is--like the fundamental theorem from ($1$)--essential in a huge number of proofs in the subject, without which the field would be hundreds of years behind in results.
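As a quick numerical illustration (a sketch using NumPy; the particular coefficients are arbitrary): a degree-$5$ polynomial has exactly $5$ complex roots, and the monic polynomial can be rebuilt from its roots alone.

```python
import numpy as np

# Illustration of the FTA: a degree-5 polynomial has 5 complex roots
# (with multiplicity), and a monic polynomial is recovered from its roots.
coeffs = np.array([1.0, -2.0, 0.5, 3.0, -1.0, 4.0])  # degree 5, leading coeff 1
roots = np.roots(coeffs)
print(len(roots))                           # 5 roots, as the theorem promises
reconstructed = np.poly(roots)              # rebuild coefficients from the roots
print(np.allclose(reconstructed, coeffs))   # True (up to floating-point error)
```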


($3$) The Fundamental Theorem of Calculus.

"If $f$ is a continuous, $\Bbb R$-valued function on an interval $[a,b]$ and for $a<x<b$ define

$$F(x) = \int_a^x f(t)\,\mathrm{d}t.$$

Then $F$ is differentiable on $(a,b)$ and $F'(x) = f(x)$."

This theorem links the two biggest concepts in calculus: derivatives, which measure rates of change, and integrals, which measure areas underneath curves--and which admit huge numbers of physical interpretations beyond the naive notion of "area." Before this theorem, computing an integral meant evaluating Riemann sums directly, with hard limit arguments and a lot of work. The theorem revolutionized the field by relating integration to differentiation, a process which is much easier to perform: for example, any function expressible in terms of elementary functions has an elementary derivative. It eliminated vast amounts of computation, made both tools far richer in use, and formed the basis for the study of calculus in greater generality and in more applications.

The FTC is what lets one prove the change of variables formula and integration by parts. The former--essentially the chain rule in integral form--is a basic tool for proving countless identities, the bread and butter of any mathematician's toolbox. The latter--Leibniz's product rule in integral form--gives rise to the notion of a weak derivative and, together with suitably general notions of integration, allows one to explore rigorously areas such as partial differential equations, which describe most of the physical world. These ideas founded several fields as well, including differential geometry and the theory of differential forms through Stokes' theorem, which has helped us understand huge areas of mathematical physics and high-dimensional analysis, and has advanced society itself an enormous amount.
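Here is a small numerical sketch of the theorem in action (assuming nothing beyond NumPy; the choice $f=\cos$ is arbitrary): build $F$ from Riemann sums and check that differentiating recovers $f$.

```python
import numpy as np

# Numerical sketch of the FTC: build F(x) = integral_a^x f(t) dt by
# (trapezoidal) Riemann sums, then check that differentiating F recovers f.
f = np.cos
a, b, n = 0.0, 2.0, 200_000
t = np.linspace(a, b, n)
h = t[1] - t[0]
F = np.concatenate(([0.0], np.cumsum((f(t[:-1]) + f(t[1:])) / 2 * h)))
F_prime = np.gradient(F, h)             # numerical derivative of F
print(np.max(np.abs(F_prime - f(t))))   # tiny: F' = f, as the FTC asserts
```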


($4$) The Fundamental Theorem of Galois Theory.

"Let $k$ be a field and $\overline{k}$ its separable closure. Denote by $G_k=\text{Gal}\left(\overline{k}/k\right)$ the group of continuous field automorphisms of $\overline{k}$ which fix $k$ pointwise. Then there is an inclusion-reversing bijection between the subfields $k\subseteq K\subseteq\overline{k}$ and the closed subgroups $H\le G_k$."

This is another theorem that launched a huge enterprise and--along with its applications--made Galois theory a topic which today finds applications in number theory, algebra, and beyond. The field itself is mostly settled now, but this, its main theorem, continues to power advances in all the applications the subject has found, and it plays a central role in the Langlands program, a current area of research which is massive and global in scale.
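To make the correspondence concrete in a finite case (a standard textbook example, rather than the full separable closure): for $K=\Bbb Q(\sqrt2,\sqrt3)$ over $\Bbb Q$ one has

$$\text{Gal}\left(\Bbb Q(\sqrt2,\sqrt3)/\Bbb Q\right)\cong\Bbb Z/2\times\Bbb Z/2,$$

and the three subgroups of order $2$ correspond, with inclusions reversed, to the three intermediate fields $\Bbb Q(\sqrt2)$, $\Bbb Q(\sqrt3)$, and $\Bbb Q(\sqrt6)$: each subgroup is exactly the set of automorphisms fixing the corresponding field pointwise.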


Addendum (non "fundamental" fundamental theorems)

Here I'll note that there are fundamental theorems in all areas, not just the ones whose main theorems carry the word "fundamental" in the title. This also means that what follows is just my opinion, and not necessarily what everyone else would consider the "most fundamental" theorem of each topic.

($1'$) De Morgan's laws. (Fundamental Theorem of Logic)

"If we have two logical statements, $P,Q$. Then the negation of $P\land Q$, is $\lnot P\lor\lnot Q$, and the negation of $P\lor Q$ is $\lnot P\land\lnot Q$."

This set of relations, which fundamentally links logic to set theory--replace statements with sets, $\land$ with $\cap$, $\lor$ with $\cup$, and $\lnot$ with set complement--allows one to explore how mathematics can be done using mathematics itself. It is one of the first theorems in logic, and it also provides a basis for the duality of quantifiers, $\lnot (\forall x : P)\iff \exists x : \lnot P$, an idea used as a proof technique time and again. Never mind content, which the other theorems thus far address: without this, we'd have trouble proving anything!
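Since the laws quantify over only finitely many truth assignments, they can be checked exhaustively; a tiny Python sketch:

```python
from itertools import product

# Exhaustive truth-table check of both De Morgan laws over all four
# assignments -- a small "proof by finite verification".
for P, Q in product([True, False], repeat=2):
    assert (not (P and Q)) == ((not P) or (not Q))
    assert (not (P or Q)) == ((not P) and (not Q))
print("Both laws hold for every truth assignment.")
```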


($2'$) The law of cosines. (Fundamental Theorem of [Analytical] Geometry)

"If three sides of a triangle have lengths $a,b,c$ and the angles opposite those sides are $A,B,C$ respectively, then the relation

$$c^2=a^2+b^2-2ab\cos(C)$$

holds."

I choose this one because the Pythagorean theorem, while excellent, is not the version that generalizes best. The extra term $2ab\cos(C)$ leads to the idea of an inner product, which is exploited throughout functional analysis and is a foundation for the study of linear algebra, a subject without which no professional mathematician could function. It is essential in areas such as functional analysis, applied mathematics, mathematical physics, and much more. It's a revolutionary idea which lets us pass from pure geometry to analytic geometry.
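The inner-product connection can be seen directly: for side vectors $u,v$ emanating from the vertex at $C$, expanding $|u-v|^2$ gives exactly the law of cosines. A small numerical sketch (the random vectors are purely illustrative):

```python
import numpy as np

# The law of cosines as an inner-product identity: with side vectors u, v
# from a common vertex, c^2 = |u - v|^2 = |u|^2 + |v|^2 - 2<u, v>,
# and <u, v> = |u||v| cos(C).
rng = np.random.default_rng(0)
u, v = rng.standard_normal(2), rng.standard_normal(2)
a, b = np.linalg.norm(u), np.linalg.norm(v)
c = np.linalg.norm(u - v)
C = np.arccos(np.dot(u, v) / (a * b))  # angle between u and v
print(np.isclose(c**2, a**2 + b**2 - 2 * a * b * np.cos(C)))  # True
```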


($3'$) The Central Limit Theorem and Law of Large Numbers. (Fundamental Theorems of Statistics)

(CLT) "If we look at a bunch of averages of samples from our population of the same size, then their distribution looks like a bell curve."

(LLN) "If we take the average of some observations, then--so long as we make enough of them--we will get closer to what the real average of all the numbers is."

Here I do not think either can supersede the other. These two theorems together form the basis for statistical analysis. Without the Central Limit Theorem, it would be difficult, if not impossible, to analyze averages of random variables in any efficient way. It takes very general objects one might care about (usually interpreted as samples of data from a population) and allows us to make inferences, with quantified confidence in our errors, to better understand the world around us. Without it, civilization would not exist in this information age of the internet. Information is the basis for the advanced understanding we have of objects and physical laws, and has allowed us to develop things like the internet, smartphones, cars, spaceships, and so much more. The LLN is what allows us to be confident in our use of data at all. That is: without knowing much of anything about the data we're dealing with (i.e. how it is distributed, which can drastically affect what we observe), we can still make good guesses about what happens on average, which gives us the security to build on this information and achieve the same things as with the CLT.
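A small simulation sketch of both phenomena (the exponential distribution and the sample size $30$ are arbitrary choices for the demo):

```python
import numpy as np

# LLN: the mean of many i.i.d. draws converges to the true mean.
# CLT: suitably rescaled sample means look standard normal, whatever the
# (finite-variance) underlying distribution -- here an Exp(1), which has
# mean 1 and standard deviation 1.
rng = np.random.default_rng(42)
mu, sigma = 1.0, 1.0
samples = rng.exponential(scale=1.0, size=(100_000, 30))
means = samples.mean(axis=1)              # 100,000 sample means of size n = 30
print(abs(means.mean() - mu))             # LLN: close to the true mean 1.0
z = (means - mu) / (sigma / np.sqrt(30))  # CLT rescaling
print(z.mean(), z.std())                  # approx 0 and 1, like a standard normal
```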


($4'$) Tychonoff's Theorem and Urysohn's Lemma. (Fundamental Theorems of [Point Set] Topology)

(Tychonoff) "A product of any collection of compact spaces is compact."

(Urysohn) "A topological space is normal if and only if two closed sets can be separated by a continuous function."

These theorems are two of the most fundamental and most often applied theorems of classical topology. The applications of Tychonoff's theorem top those of all other topological theorems by a huge margin. While not necessarily central to topology itself, the field as a whole is built more horizontally than vertically, and Tychonoff's theorem is one of the results that realizes the immense utility and ubiquity of topological methods. With the advent of the modern definition of compactness--every open cover has a finite subcover--replacing the old notion from metric spaces, this result stands at a turning point in the history of topology. Along with the modern definition of continuity, compact spaces are to modern mathematics what finite sets were in the classical days. Almost nowhere in modern mathematics will one see a proof in which topological concepts do not occur, and since compact sets are the "nicest" sort of sets, knowing that your favorite object is compact (or locally compact, et cetera) is an advantage in proving results that cannot be overstated. Furthermore, the idea of using the product topology instead of the box topology originated in Tychonoff's work, and from a very general standpoint this is somehow the most "right" way to think about topological products, so that--aside from the theorem itself--the ideas of its proof are also fundamental.

Urysohn's lemma, while less applied overall, is more central within the field of topology itself. Topology arose as a refinement of the old notion of a metric space, but metric spaces already had a great many theorems proven about them, so it was a worthwhile enterprise to find topological conditions implying that one's favorite space is metrizable, letting all the hard work put into metric spaces be reused in the new context. This is the basic idea behind metrization theorems. Urysohn's lemma is a foundational result for proving many of these theorems, and hence is very useful within the field itself. While it carries less utility overall than Tychonoff, it is used to prove more theorems in topology, whereas Tychonoff is more useful for proving theorems with topology.


($5'$) The Three Isomorphism Theorems. (Fundamental Theorems of Group and Ring Theory)

Again there is a lot here, since there are three theorems and each is slightly different for groups vs. rings, so I'll leave the link for those interested in reading about them.

These three results constitute the bedrock of group theory and ring theory--areas of modern algebra which study symmetry and algebraic relations in their most general forms. Groups and rings are both primarily understood by examining their subobjects and quotients. The idea of a homomorphism is quite possibly the backbone of all modern mathematics: it is a notion of a function which preserves structure, in other words a measure of how similar two objects are, taking into account only the relevant properties we care about. In the case of groups, we ask that the main bit of structure--the group multiplication--be preserved; in the case of rings, both multiplication and addition, the two ring operations. The same holds for fields, Boolean algebras, and so on ad nauseam, so I'll stop here before I write an entire paragraph listing the things we use homomorphisms for. The isomorphism theorems are essential building blocks for understanding the fundamental essence of homomorphisms and for thinking about the objects that naturally arise in their study, namely subobjects (i.e. subgroups and subrings) and quotient objects (i.e. quotient groups and rings). Thanks to the idea of a composition series, the celebrated Jordan-Hölder theorem tells us that a thorough understanding of homomorphisms can give us essentially unlimited insight into the structure of groups.
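For a concrete taste of the first of the three (a standard example): the evaluation homomorphism

$$\varphi:\Bbb R[x]\to\Bbb C,\qquad \varphi(p)=p(i)$$

is surjective with kernel the ideal $(x^2+1)$, so the first isomorphism theorem immediately gives $\Bbb R[x]/(x^2+1)\cong\Bbb C$: the complex numbers arise as a quotient of a polynomial ring.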


($6'$) Cauchy's Integral Formula. (Fundamental Theorem of Complex Analysis)

"Let $f(z)$ be holomorphic function on the open set $U\subseteq\Bbb C$. Let $z\in U$ and $\gamma=\gamma_z: [0,1]\to U$ be a simple, closed curve around $z$, oriented positively. Then

$$f(z)={1\over 2\pi i}\oint_{\gamma}{f(\xi)\over \xi-z}\,\mathrm{d}\xi."$$

Cauchy's integral formula deserves to be called "fundamental" in complex analysis if anything does. It proves the residue theorem, which is hugely important in complex analysis, algebraic geometry, and analytic number theory--where it connects to $\zeta$-functions and the distribution of primes in a very general setting. It proves the argument principle, which led to the ideas of branch cuts and multi-valued functions. This allowed us to understand the complex logarithm and root functions concretely and in sufficient generality, in such an ingenious way that it essentially jump-started the field of topology through the notion of a Riemann surface. Finally, this result establishes that a single complex derivative on an open set ensures infinite differentiability, and indeed a local power series expansion around every point, completely abolishing the notion that complex-differentiable and real-differentiable functions behave similarly. This is what makes classical complex analysis such an immensely beautiful subject: in exchange for the very mild condition of being (once) differentiable, holomorphic functions acquire a rigidity and amazing properties with which real functions cannot compete.
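A quick numerical sanity check of the formula (the choices $f(z)=e^z$, $z=0$, and the unit circle are arbitrary for the demo):

```python
import numpy as np

# Check Cauchy's integral formula for f(z) = exp(z) at z = 0, integrating
# over the positively oriented unit circle xi = e^{it}, t in [0, 2*pi).
t = np.linspace(0.0, 2.0 * np.pi, 10_000, endpoint=False)
xi = np.exp(1j * t)
dxi = 1j * xi * (t[1] - t[0])          # d(xi) = i e^{it} dt
integral = np.sum(np.exp(xi) / (xi - 0.0) * dxi) / (2j * np.pi)
print(integral)                        # approx 1 = exp(0)
```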


($7'$) The Artin Reciprocity Law. (Fundamental Theorem of Class Field Theory)

"Let $K/k$ be an abelian extension of fields and $\mathfrak{c}$ be an admissible cycle for this extension. Then the Artin map

$$I(\mathfrak{c})\to \text{Gal}(K/k)$$

is surjective with kernel $P_{\mathfrak{c}}\mathfrak{N}(\mathfrak{c})$ (described in the articles cited, omitted for brevity)."

Artin's reciprocity law is the ultimate generalization of quadratic reciprocity, a theorem first proved by Gauß and lauded by him as the "golden theorem." Combined with the Hasse principle, it describes an incredible fact about quadratic forms and quadratic fields. These forms are heavily studied objects: even today interesting theorems are proven about them from many directions--such as the dynamical-systems approach of Margulis to problems in Diophantine approximation, and $\vartheta$-functions and hence modular forms--and, alongside the work of Hecke and Tate, they are of huge importance. In the general case, the reciprocity law establishes norms as very special objects, and connects the idèle class group to the Galois group of a field extension in a beautiful and deep way. This is the foundation of modern ideas related to the Hilbert symbol, the Kronecker-Weber theorem, and the Langlands program (mentioned before).


($8'$) Parseval's Formula and the Plancherel Theorem. (Fundamental Theorems of Fourier Analysis)

(Parseval) "Let $f,g$ be functions of period $2\pi$ with Fourier series

$$f(x)\sim \sum_{n\in\Bbb Z} a_ne^{inx},\quad g(x)\sim\sum_{n\in\Bbb Z}b_ne^{inx}$$

Then the inner product of the $\ell^2(\Bbb Z)$ sequences $(a_n),(b_n)$ is equal to

$${1\over 2\pi}\int_{-\pi}^\pi f(x)\overline{g(x)}\,\mathrm{d}x."$$

(Plancherel) "Let $\mathcal{F}$ be the Fourier transform on $(L^1\cap L^2)(\Bbb R)$ with normalization

$$\mathcal{F}(f)(\xi)=\int_{\Bbb R}f(x)e^{-2\pi i x\xi}\,\mathrm{d}x.$$

Then the continuous extension of $\mathcal{F}$ to $L^2(\Bbb R)$ is an isometry (in fact a unitary operator)."

The study of Fourier analysis goes back hundreds of years (and its prototypes go back much farther). Parseval's formula transforms questions about periodic functions--arguably one of the most important classes of functions in all of mathematics--into questions about their Fourier coefficients, where the techniques of sequences and series apply. Together with its generalizations, this reduces the study of such functions to an understanding of their Fourier series (more on this after Plancherel). The Plancherel theorem establishes a truly remarkable duality between functions and their Fourier transforms. Because of the myriad properties of this transform, one can transfer very hard problems into much easier ones. In differential equations, it turns differentiation (a very hard process to deal with in general) into multiplication by polynomials: objects studied since antiquity. In applied mathematics, one uses Fourier analysis and results like the sampling theorem to reconstruct signals from sampled data; this is what powers digital music, your ability to hear people over wireless phones, and a nearly infinite list of other things. Indeed, to do even a modicum of justice to the sheer magnitude and variety of uses of Fourier analysis and the duality established by Plancherel would take three answers as long as this one. It is used in topology, representation theory, all forms of number theory (for instance through the Poisson summation formula), and on and on. Parseval embodies analysis on the circle group--the prototype for general periodic functions--an idea which has proven to have a lot of mileage.

The two of them together underpin the workings of Fourier analysis, a subject which has been generalized to locally compact groups and their characters by the miracle of the Haar measure. This allows this fruitful and powerful field to be brought to bear on very general problems, and it is the source of entire industries and of careers in mathematics.
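For a hands-on version of this duality: the discrete Fourier transform obeys the same identity, which one can check in a few lines (NumPy's FFT normalization, $X_k=\sum_n x_n e^{-2\pi i kn/N}$, is assumed here):

```python
import numpy as np

# Discrete analogue of Parseval/Plancherel: with NumPy's FFT convention,
# sum |x|^2 equals (1/N) sum |X|^2, so the rescaled DFT is an isometry.
rng = np.random.default_rng(1)
x = rng.standard_normal(1024) + 1j * rng.standard_normal(1024)
X = np.fft.fft(x)
print(np.isclose(np.sum(np.abs(x)**2), np.sum(np.abs(X)**2) / len(x)))  # True
```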


The Punchline

There are a lot of things which make a theorem "fundamental" to a field.

The $4$ with "official" status have a variety of reasons for their names. ($1$) is extremely old (due to Euclid). It is also powerful and useful in proving things in arithmetic (i.e. number theory). ($2$) gets the name as a product of history. To see how much, one considers that Cardano's method for solving cubics was first used to win some of the old math contests from the 1500s. Algebra has branched off a lot since, but for a long time it was much simpler in scope, and unique factorization of polynomials found itself as a long-standing central interest and has kept its name due to this history. ($3$), like ($1$), richly deserves its name: the huge conceptual leap to connect the two different ideas of differentiation and integration is perhaps the single result which it is most satisfying to call "fundamental." Finally ($4$) stands at the top of its theory. It qualitatively embodies the philosophy of Galois theory in the capstone of the subject. No other result is as reflects the culmination of a field better than the FTGT. Notice, that it is not used a great deal to prove more results within Galois theory itself, here the result simply stands at the end of a long road, rather than near the beginning.

There are myriad reasons a field may lack a "fundamental theorem." In ($1'$), it would be strange to have a single fundamental result: if the entire field of logic depended on one theorem, then all of mathematics would too, which would make that result unreasonably powerful or mathematics as a whole ridiculously simple.

For ($2'$), as in ($4'$), the results build more horizontally, so because of this diversity it makes less sense to single anything out as "fundamental."

In ($3'$) and ($5'$), one notes that (i) they have multiple results which are fundamental, no single one taking center stage, and (ii) the chosen names are simply better. Calling something "the fundamental theorem of [blah]" underscores--and can slightly over-inflate--a result's importance, but it makes the name a lot less descriptive. The LLN and CLT are best with the names they have, as are the isomorphism theorems.

Finally, ($6'$), ($7'$), and ($8'$) share the distinction that their theorems are named after the geniuses who proved them, which has become more and more common in modern times. Lang calls Artin reciprocity the Fundamental Theorem of Class Field Theory, and most would probably agree; however, its official name is still "Artin reciprocity."

On the whole: what makes the theorems we have "fundamental" in name is often a matter of how much history the subject and the result have. For ($1$) and ($2$) especially, the fields are extremely old, whereas subjects like analysis, group/ring theory, and topology do not date back nearly as far. What makes all these results "fundamental" in spirit is their incredible power and flexibility. These are some of the results which have shaped the fields they belong to: some motivate a great deal of further research, some provide the key ingredients for an absurd number of proofs, and some act as ambassadors between subjects in a way that links them far more strongly than usual.

This multiplicity of reasons means that it is a fundamentally flawed idea to attach particular importance to the official name "fundamental," or to hope for an insightful and "right" definition of what it means. If you look at the two lists, the "unofficial" list ended up twice as long as the official one, and some of its entries--such as ($6'$)-($8'$)--are perhaps more "fundamental" to their subjects than ($2$) is to its own. So any definition tailored to fit the current state of affairs while also doing justice to the name "fundamental" would have to include several results from the unofficial list, and hence could not match how the name is actually used.


Just a little addendum after @Adam Hughes' nice answer.

Let $(\Omega, \mathcal F, P)$ be a probability space. The information $I(A)$ of an event $A$ (any element of the $\sigma$-algebra $\mathcal F$) is given by $I(A)=f(P(A))$, denoting by $f$ a real function on $[0,1]$.

Then one has the

Fundamental theorem of information (FTI)

The Shannon entropy $f(x):=x\ln \frac{1}{x}+(1-x)\ln\frac{1}{1-x}$ is the unique continuous solution of the fundamental equation of information (FEI) $$f(x)+(1-x)f\left(\frac{y}{1-x}\right)=f(y)+(1-y)f\left(\frac{x}{1-y}\right) ~~(*)$$ on $D:=\{(x,y)~|~ x\in [0,1), y\in[0,1), x+y\leq 1\}$.
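One can spot-check numerically that the Shannon entropy satisfies $(*)$ at random interior points of $D$ (a small Python sketch; the sampling range is an arbitrary choice to stay away from the boundary, where $f$ needs its limiting values):

```python
import numpy as np

# Spot-check that f(x) = x ln(1/x) + (1-x) ln(1/(1-x)) satisfies the FEI
# at random interior points of D = {(x, y) : x, y in [0,1), x + y <= 1}.
def f(x):
    return x * np.log(1 / x) + (1 - x) * np.log(1 / (1 - x))

rng = np.random.default_rng(7)
for _ in range(5):
    x, y = rng.uniform(0.01, 0.49, size=2)   # ensures x, y > 0 and x + y < 1
    lhs = f(x) + (1 - x) * f(y / (1 - x))
    rhs = f(y) + (1 - y) * f(x / (1 - y))
    assert np.isclose(lhs, rhs)
print("FEI holds at the sampled points.")
```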

The Shannon entropy plays a fundamental role in information theory, in information geometry (it even generates Kullback-Leibler divergences), and in many applications. There even exists an analogue of the FEI in number theory; for the whole exposition I refer to Kontsevich, The $1\frac{1}{2}$-logarithm, Compositio Math. 130 (2002).

Apart from applications and the like, I believe it is the formulation of the FTI via functional equations that makes it interesting: after all, many important functions in mathematics are defined as solutions of functional equations (first and foremost the exponential function); the Shannon entropy becomes one of them.