Why is complex analysis so nice? And how does it connect to or motivate algebraic topology?

This is very much a soft question, but after seeing Cauchy's integral formula in lecture today I was really struck by how neat complex analysis is. I don't understand how all of these amazing analytic properties (global extrapolations from local properties, holomorphic implies infinitely differentiable) can come from just algebraically adjoining the square root of $-1$.

When I asked my professor about this, he said it was a function of the complementary relationship between complex analysis and algebraic topology and didn't really expand on that.

Even not knowing much algebraic topology, this connection does seem clear in some ways (the importance of simple connectedness in Cauchy's theorem, and the way so many arguments deal with paths in what sounds like coded language for homotopy). However, I am still not sure what it is about the complex plane that lends itself to this special link, especially when it comes to functions. From what I understand, $\mathbb{R}^{2}$ is topologically equivalent to $\mathbb{C}$ (maybe that is just a point-set topology statement?), and yet it definitely isn't as nice.

I would appreciate any sort of discussion or direction towards references (especially for someone who hasn't learned much topology formally; Hatcher is a difficult text for me to grapple with on my own), and I hope this is interesting to other people.


I think one reason Complex Analysis is so nice is that being holomorphic/analytic is an extremely strong condition.

In real analysis, by contrast, differentiability is a rather weak condition: we have functions that are differentiable once but not twice, and so on. Real analysis is full of nasty counterexamples, like the Weierstrass function, which is continuous everywhere but differentiable nowhere.
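For concreteness, Weierstrass's original example is
$$W(x)=\sum_{n=0}^{\infty}a^{n}\cos(b^{n}\pi x),\qquad 0<a<1,\ b\text{ an odd integer},\ ab>1+\tfrac{3\pi}{2},$$
which is continuous on all of $\mathbb{R}$ but differentiable at no point.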

Analytic functions are $C^\infty$, meaning they can be differentiated infinitely many times. Even more than that, an analytic function is locally equal to its own Taylor series.
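Concretely: if $f$ is holomorphic on an open disk $D(a,r)$, then for every $z$ in that disk
$$f(z)=\sum_{n=0}^{\infty}\frac{f^{(n)}(a)}{n!}(z-a)^{n},$$
and the series converges on the whole disk. Nothing like this holds for real $C^\infty$ functions: the standard example $f(x)=e^{-1/x^{2}}$ (with $f(0)=0$) has every derivative equal to $0$ at the origin, so its Taylor series there is identically zero even though the function is not.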

With regard to Algebraic Topology (AT), Hatcher does not focus much on the link between Complex Analysis and AT. Something interesting is that the Fundamental Theorem of Algebra can be proved in two different ways, using either Complex Analysis or Algebraic Topology (the topological proof is in Chapter 1 of Hatcher).
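To give a flavor of the analytic proof: if a nonconstant polynomial $p$ had no complex root, then $1/p$ would be an entire function, and since $|p(z)|\to\infty$ as $|z|\to\infty$ it would also be bounded; Liouville's theorem (itself a quick consequence of Cauchy's formula) then forces $1/p$ to be constant, a contradiction. Hatcher's proof instead runs through the fundamental group of the circle, $\pi_1(S^1)\cong\mathbb{Z}$, i.e. winding numbers.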


I'm going to narrow your question to the following first paragraph:

I don't understand how all of these amazing analytic properties (global extrapolations from local properties, holomorphic implies infinitely differentiable) can come from just algebraically adjoining the square root of $-1$.

There are three questions hidden here: Why $-1$? Why the square root (as opposed to, say, the cube root)? And how do these produce "these amazing analytic properties"?

The other answer and the question John Kyon linked in the comments give excellent answers to the third question. To quickly summarize, holomorphic functions are nice because Cauchy's formula and shifting contours give us the implication $\text{integrable}\Rightarrow\text{differentiable}$. We get Cauchy's formula and shifting contours from the Cauchy-Riemann equations, and the CR equations arise because we want the derivative of a $\mathbb{C}\to\mathbb{C}$ function at a given point to itself be an element of $\mathbb{C}$ (that is, to act by multiplication by a complex number, not as an arbitrary real-linear map).
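For reference, writing $f(x+iy)=u(x,y)+iv(x,y)$, the Cauchy-Riemann equations are
$$\frac{\partial u}{\partial x}=\frac{\partial v}{\partial y},\qquad \frac{\partial u}{\partial y}=-\frac{\partial v}{\partial x},$$
and they say exactly that the Jacobian of $f$ as a map $\mathbb{R}^2\to\mathbb{R}^2$ acts by multiplication by the single complex number $f'(z)$. Cauchy's integral formula,
$$f(a)=\frac{1}{2\pi i}\oint_{\gamma}\frac{f(z)}{z-a}\,dz$$
(for $\gamma$ a positively oriented loop around $a$ inside a region where $f$ is holomorphic), then lets you differentiate under the integral sign as often as you like, which is where "holomorphic implies infinitely differentiable" comes from.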

But this leaves the first two questions more mysterious. We can generalize the construction of $\mathbb{C}$ quite substantially: given a commutative ring $R\leq\mathbb{R}$ and an $R$-algebra $A$, we can ask about the functions $A\to A$ with derivatives given by the multiplication action of $A$ on itself. For example, we could always look at numbers of the form $a+b\sqrt{-2}$ instead of $a+b\sqrt{-1}$. Of course, it turns out that those numbers are just $\mathbb{C}$ again…but can you be sure this isn't just a bad example? Why isn't there just as nice a theory for these other algebras? Why don't we hear about them?
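(Here "derivatives given by the multiplication action" can be taken to mean: $f:A\to A$ is differentiable at $p$ if there is an element $f'(p)\in A$ with
$$f(p+h)=f(p)+f'(p)\,h+o(\lVert h\rVert)\quad\text{as }h\to 0\text{ in }A,$$
where the product $f'(p)\,h$ is taken in $A$; for $A=\mathbb{C}$ this is exactly the usual complex derivative. The question is then whether such an "$A$-calculus" is as rich as complex analysis.)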

The answer is that there are (essentially) no other algebras. We need to have some sort of underlying complete field in order to define derivatives. So we need to start with $R=\mathbb{R}$ above. But then abstract algebra tells us that, since $\mathbb{R}$ is a field, any commutative, finite-dimensional $\mathbb{R}$-algebra with no nonzero nilpotent elements is a direct sum (as $\mathbb{R}$-algebras) of field extensions of $\mathbb{R}$. So the $A$ we wanted to analyze above is built out of objects like $\mathbb{C}=\mathbb{R}(\sqrt{-1})$…or $\mathbb{R}(\sqrt{-2})$.
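For example, $\mathbb{R}[x]/(x^{2}-1)\cong\mathbb{R}\oplus\mathbb{R}$ (since $x^{2}-1=(x-1)(x+1)$ and the Chinese remainder theorem applies), while $\mathbb{R}[x]/(x^{2}+1)\cong\mathbb{C}$. The nilpotent-free hypothesis rules out things like the dual numbers $\mathbb{R}[x]/(x^{2})$, which are not a sum of fields.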

So what sort of numbers can we adjoin to $\mathbb{R}$ to get something bigger than $\mathbb{R}$? By a Galois-theoretic argument (see Dummit and Foote, section 14.6), the answer is precisely "square roots of negative numbers." Moreover, adjoining any one of them already gives all of $\mathbb{C}$, by comparing dimensions as $\mathbb{R}$-vector spaces. So if we create an algebra by adjoining a different square root, we still get $\mathbb{C}$, but with a weird coordinatization that makes no geometric sense. We might as well use the coordinatization that gives good geometry!
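Concretely: $x\mapsto\sqrt{2}\,i$ gives an isomorphism $\mathbb{R}[x]/(x^{2}+2)\cong\mathbb{C}$, i.e. $a+b\sqrt{-2}\leftrightarrow a+b\sqrt{2}\,i$. But in the basis $\{1,\sqrt{-2}\}$, multiplication by $\sqrt{-2}$ has matrix
$$\begin{pmatrix}0&-2\\1&0\end{pmatrix},$$
which is not a rotation composed with a scaling, so in these coordinates multiplication no longer looks like the conformal (angle-preserving) maps that make the geometry of $\mathbb{C}$ so pleasant.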