Cohomology easier to compute (algebraic examples)

Solution 1:

One slogan for cohomology is:

Cohomology is representable, homology is not.

That means that for a particular cohomology group, say $H^1(X;\mathbb{Z})$, there is an honest space[1] (which in this case we write $K(\mathbb{Z},1)$) such that $H^1(X;\mathbb{Z}) \cong [X,K(\mathbb{Z},1)]$, where $[X,Y]$ is the set of homotopy classes of maps from $X$ to $Y$.
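
To make this concrete with two standard examples (not spelled out above): the circle is a $K(\mathbb{Z},1)$ and infinite complex projective space is a $K(\mathbb{Z},2)$, so for a CW complex $X$ we have
$$H^1(X;\mathbb{Z}) \cong [X, S^1] \quad\text{and}\quad H^2(X;\mathbb{Z}) \cong [X,\mathbb{CP}^\infty].$$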

The use of this is that to know your cohomology theory, you just need to know about the spaces $K(\pi,n)$, for $\pi$ an (abelian) group and $n$ an integer. That's a lot simpler than studying all cohomology groups of all spaces in a single go.

It explains (for some definition of the word "explain") why cohomology groups have the structure that they do: if there are suitable maps between the $K(\pi,n)$s, then there are suitable operations on all the corresponding cohomology groups. Proving that homology has nice structure is, by contrast, quite difficult.

(For example, the homology of only some spaces is a co-ring, and the conditions aren't pleasant. However, all (ordinary) cohomology is a ring.)

Another place where this makes life easier is in the theory of operations. Basically, the more stuff you can find out about your theory, cohomology or homology, the more powerful it becomes. There are spaces with the same cohomology groups but different ring structures ($S^1 \times S^1$ and $S^2 \vee S^1 \vee S^1$, for example; incidentally, it's a lot harder to see that these are different looking at just the homology). Operations are a bit like generalised ring structures: they provide more stuff than just multiplication and scalar multiplication. There are spaces with the same cohomology rings but different actions of the cohomology operations. So knowing the operations adds power to your theory.

And it's thanks to representability that we can get at the structure of the operations. Using the Yoneda Lemma, operations on, say, $H^1(-;\mathbb{Z})$ are the same as $[K(\mathbb{Z},1),K(\mathbb{Z},1)]$. So the study of operations comes down to studying one particular set, with all its attendant structure.
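
Spelled out (this is the general pattern behind the example just given, using representability plus Yoneda): for abelian groups $A$ and $B$, natural operations $H^n(-;A) \to H^m(-;B)$ correspond to
$$[K(A,n),K(B,m)] \cong H^m(K(A,n);B),$$
so, for instance, the Steenrod squares $\mathrm{Sq}^i\colon H^n(-;\mathbb{Z}/2) \to H^{n+i}(-;\mathbb{Z}/2)$ are classified by elements of $H^{n+i}(K(\mathbb{Z}/2,n);\mathbb{Z}/2)$.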

If you search the literature, you will see that a lot of work has gone into studying cohomology operations but not much on homology operations. Those are much, much harder to work with because they are so hard to get a hold of.

So, in summary, because cohomology is representable, it is much more accessible and the structure is "on the surface": there is no need to dig deep to study it.

I realise that this is short on examples, but you could extract a silly one from the above: if you compute the homology of $S^1 \times S^1$ and of $S^2 \vee S^1 \vee S^1$ then you get the same groups, and I challenge you to see from the bare homology that the spaces are different (I don't know how hard that is!), whereas the cohomology with its ring structure instantly distinguishes them.
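
For the record, here is the comparison being alluded to (a standard computation, with $\mathbb{Z}$ coefficients): both spaces have $H_0 \cong \mathbb{Z}$, $H_1 \cong \mathbb{Z}^2$ and $H_2 \cong \mathbb{Z}$, but on the torus the two degree-one generators $\alpha,\beta$ satisfy
$$\alpha\cup\beta \neq 0 \quad\text{(it generates } H^2(S^1\times S^1;\mathbb{Z})\text{)},$$
whereas on $S^2\vee S^1\vee S^1$ every cup product of degree-one classes vanishes, because such classes are pulled back along the retraction onto $S^1\vee S^1$ and $H^2(S^1\vee S^1)=0$.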

[1] Technically, it is a homotopy type, as there are several spaces that will do. But any of them works: just choose the one you like and stick with it.

Solution 2:

Here are a few other remarks, to add to the existing answer. I will primarily discuss Poincare duality, and how it is simpler to construct using cohomology rather than homology. Although this is a rather particular thing to focus on, it is quite important, and also quite nicely illustrates some more general aspects of homology vs. cohomology.

I will begin by discussing the homology side of things, since this is the most geometric setting (although, as we will see, one confronts more technical difficulties if one tries to work directly with homology):

The basic ingredient of Poincare duality, phrased in terms of homology, is the following: if $M$ is a connected closed oriented $n$-manifold, then there is a natural bilinear map $H_i \times H_j \to H_{i + j - n}$, sometimes written $(z_1, z_2) \mapsto z_1\cdot z_2$, given by the intersection of cycles. (Here I will implicitly work with $\mathbb Q$ coefficients, to avoid torsion phenomena, which would complicate the discussion somewhat.)
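
For instance (the simplest example, not discussed further in this answer): on the torus $M = S^1\times S^1$, with $a$ and $b$ the two standard generating $1$-cycles, the pairing $H_1\times H_1 \to H_0$ is determined (up to sign conventions) by
$$a\cdot b = [\mathrm{pt}], \qquad a\cdot a = b\cdot b = 0,$$
reflecting the fact that $a$ and $b$ meet transversally in a single point, while each can be pushed off itself.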

The direct construction of this intersection pairing is somewhat non-trivial if you are working from first principles: you have to find reasonably nice representatives of the two cycles to be paired that meet transversally, then interpret the intersection as a homology class, and finally check that the answer is well-defined independent of the original choice of representatives.

One idea for simplifying this process is as follows: rather than directly intersecting two cycles, say $z_1$ and $z_2$, on the given manifold $M$, we can form the product cycle $z_1\times z_2$ on $M\times M$, and intersect that with the diagonal $\Delta_M \subset M\times M$.

As a justification for this, note that a moment's thought will show at least that if we were forming just the intersection of subsets $S_1$ and $S_2$ of $M$, then we would get the same answer by intersecting $S_1\times S_2$ with $\Delta_M$.

Does this carry over to intersecting cycles? Well, we can think of $z_1\times z_2$ as a cycle on $M\times M$ via Eilenberg–Zilber/Künneth. And so if we know how to form intersection with the diagonal $\Delta_M$, we can then intersect $z_1\times z_2$ with $\Delta_M$ (thought of as a cycle on $M\times M$) to obtain a cycle which physically lives on $\Delta_M$ (now thought of as a submanifold of $M$).
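
Concretely, with $\mathbb{Q}$ coefficients the Künneth theorem being invoked here is the isomorphism
$$H_k(M\times M;\mathbb{Q}) \cong \bigoplus_{i+j=k} H_i(M;\mathbb{Q})\otimes H_j(M;\mathbb{Q}),$$
and $z_1\times z_2$ denotes the image of $z_1\otimes z_2$ under this identification.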

The problem is that we now want to identify this cycle $(z_1\times z_2)\cdot \Delta_M$ as a cycle on the original manifold $M$. Of course we have the homeomorphism $M \cong \Delta_M$ given by the diagonal embedding of $M$ into $M\times M$, but we don't have a general mechanism via which we can take a cycle which technically lives in the homology of $M\times M$, but which happens to be supported on $\Delta_M$, and move it to $M$. In this particular case we could try to do something by hand, as it were, but it is normally easier if a construction can be made via general principles rather than by ad hoc methods.

Cohomology presents us with a way around both difficulties: how to actually define the intersection $(z_1\times z_2)\cdot \Delta_M$, and, once it's defined, how to move the resulting cycle back onto $M$. In fact, it deals with both problems at a single stroke.

Of course, we have to begin by recasting things in terms of cohomology. For this, we recall that $H^i$ is the dual to $H_i$ (when we have $\mathbb Q$-coefficients), and working with dual spaces, i.e. with cohomology, should be just as good as working with the original spaces, i.e. with homology; we can always (at least try to) get back to the original context by passing to double duals (since $H_i$ of a closed manifold is finite-dimensional).

Eilenberg–Zilber/Künneth equally well allows us to form the product of cohomology classes.

So now how do we "intersect with diagonal and then move from $\Delta_M$ to $M$"? Well, this is easy with cohomology, because cohomology is contravariant: we simply pull-back the product of our cohomology classes along the diagonal map $M \to M\times M$!

Note: this procedure is exactly the cup-product, and I believe that this is (at least one part of) the origin of the cup-product in cohomology — it comes out of trying to find a nice way to describe intersection pairings.
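
Written out, this is the standard formula expressing the cup product in terms of the cross product: for $\alpha\in H^i(M)$ and $\beta\in H^j(M)$,
$$\alpha\cup\beta = \Delta_M^*(\alpha\times\beta)\in H^{i+j}(M),$$
where $\Delta_M\colon M\to M\times M$ is the diagonal map and $\alpha\times\beta$ is the cross product supplied by Eilenberg–Zilber/Künneth.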


In summary: intersection pairings, which are subtle to construct in terms of homology, are easier to construct in terms of cohomology, because of the nature of its functoriality.


Of course, one has to do more than simply construct the cup-product in order to get intersection pairings on manifolds: cup product will give a map $H^i \times H^j \to H^{i+j}$, or equivalently (passing to duals), a map $H_{i+j} \to H_i \otimes H_j$, whereas what we want is a map $H_i \times H_j \to H_{i+j - n}.$ So cup-product hasn't solved all our problems in constructing the intersection pairing.

To get from what we have to what we want, we need an extra step that identifies $H^{n-i}$ with $(H^{i})^*$, i.e. with $H_{i}$. Then the first pairing (with $i$ and $j$ replaced by $n-i$ and $n-j$) will become $H_i \times H_j \to H_{i +j - n},$ as required.
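
In the usual textbook formulation (using the cap product, which is not introduced explicitly above), this identification is given by capping with the fundamental class $[M]\in H_n(M;\mathbb{Q})$:
$$D\colon H^{n-i}(M;\mathbb{Q}) \to H_i(M;\mathbb{Q}), \qquad D(\alpha) = \alpha\frown [M],$$
and the intersection pairing can then be packaged as $z_1\cdot z_2 = D\big(D^{-1}(z_1)\cup D^{-1}(z_2)\big)$.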

What is the moral reason that $H_i$ can be identified with $H^{n-i},$ i.e. with $H_{n-i}^*$? It is that under intersection pairing they pair into $H_0 = \mathbb Q$, and this pairing is (ultimately going to be proved to be) non-degenerate.

But again, it is tricky to define this, so it is technically easier to go to the cohomology side and consider $H^i \times H^{n-i} \to H^n$. Then we have a well-defined map by cup-product essentially for free, and so we are reduced to showing that $H^n = \mathbb Q$ (this comes from orientability and the related theory of the fundamental class), and that cup-product is non-degenerate. The latter statement is Poincare duality in its cohomological form.
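
Spelled out, the pairing in question is
$$H^i(M;\mathbb{Q}) \times H^{n-i}(M;\mathbb{Q}) \to \mathbb{Q}, \qquad (\alpha,\beta) \mapsto \langle \alpha\cup\beta, [M]\rangle,$$
evaluation of the top-degree cup product against the fundamental class.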


To summarize a second time: working with cohomology allows one to side-step the problem of defining the intersection pairing on homology altogether, by using cup-product instead. (To get the statement of Poincare duality one still has to do work, but at least the "pairing", which is now just cup-product, is there from the beginning.) In fact, because of this, I believe it's not so easy to find a modern text which treats intersection pairing on homology at all, especially from a geometric point of view (rather than, say, working with cohomology first and then just defining the intersection pairing by using the duality between homology and cohomology to move to the homology setting).

Furthermore, cup-product exists in great generality, unlike the intersection pairing on homology, which only exists in the context of manifolds. (In the cohomology treatment, defining the pairing and then proving Poincare duality in the setting of manifolds become two separate issues, the first of which is solved straight away in a completely general setting by the existence of cup-product, while in the homology treatment the two issues are all tangled up together.)


A final remark, closely related to Andrew Stacey's answer: cohomology is contravariant (which is why one can define cup-product so easily). Other basic objects, e.g. functions, are also contravariant. (E.g. if $f$ is a continuous function on $Y$ (to any target), and $\phi:X\to Y$ is continuous, then $\phi^*f = f\circ \phi$ is a continuous function on $X$ to the same target.)

Pulling back is generally very nice, compared to pushing forward. (Think about the way set-theoretic operations like intersections and complements behave under pull-back compared to pushforward, or about the fact that we can pull back covector fields and differential forms, but can't pushforward vector fields in general.)
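
A toy illustration of this asymmetry, just at the level of sets: for any map $\phi\colon X\to Y$ and subsets $A,B\subseteq Y$,
$$\phi^{-1}(A\cap B) = \phi^{-1}(A)\cap\phi^{-1}(B), \qquad \phi^{-1}(Y\setminus A) = X\setminus\phi^{-1}(A),$$
whereas for subsets $A',B'\subseteq X$ one only has $\phi(A'\cap B')\subseteq \phi(A')\cap\phi(B')$ in general, and images interact badly with complements.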

This gives cohomology some intrinsic advantages over homology; the above discussion of Poincaré duality illustrates this.

Solution 3:

One advantage cohomology has over homology is that it is a ring under the cup product operation. For example, $H^*(\mathbb{RP}^n;\mathbb Z_2)$ is the truncated polynomial ring $\mathbb Z_2[x]/(x^{n+1})$, with a single generator $x^i$ in each dimension $i\leq n$.
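
A small illustration of the extra information this ring structure carries (a standard companion example): $\mathbb{RP}^2$ and $S^1\vee S^2$ have the same cohomology groups with $\mathbb{Z}_2$ coefficients, namely $\mathbb{Z}_2$ in degrees $0$, $1$ and $2$, but
$$x\cup x \neq 0 \ \text{ in } H^*(\mathbb{RP}^2;\mathbb{Z}_2)=\mathbb{Z}_2[x]/(x^3), \qquad y\cup y = 0 \ \text{ for } y\in H^1(S^1\vee S^2;\mathbb{Z}_2),$$
so the cup product distinguishes the two spaces even though the groups alone do not.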

Edit: Actually, even if you only care about homology, Poincare duality forces you to enter the world of cohomology. It states that $H_k(M)\cong H^{n-k}(M)$ for a closed orientable $n$-manifold $M$, so there's no way to avoid cohomology. Then, for example, Poincare duality, together with the universal coefficient theorems, tells you how to compute the homology of such a manifold even if you only know the answer up to half the dimension. But you need cohomology to do it.
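
As a sketch of the kind of computation meant here (a standard argument, not spelled out above): for a closed connected orientable $3$-manifold $M$ with $H_1(M;\mathbb{Z})\cong \mathbb{Z}^r\oplus T$, where $T$ is finite, Poincare duality and the universal coefficient theorem give
$$H_2(M;\mathbb{Z}) \cong H^1(M;\mathbb{Z}) \cong \operatorname{Hom}(H_1(M;\mathbb{Z}),\mathbb{Z}) \cong \mathbb{Z}^r, \qquad H_3(M;\mathbb{Z}) \cong H^0(M;\mathbb{Z}) \cong \mathbb{Z},$$
so knowing $H_0$ and $H_1$ already determines all of the homology.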