Geometrically, why do line bundles have inverses with respect to the tensor product?
Here are my thoughts on the problem so far; please excuse their scatteredness.
I know that algebraically it is just because they are locally modules generated by a single element. Basically, it comes down to the fact that for a finitely generated module $M$ over a local ring $R$, there exists a module $M^{-1}$ with $M \otimes_R M^{-1} \cong R$ only when $M$ is free of rank $1$. This is not hard to prove.
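For concreteness, here is a sketch of that local argument, assuming $(R, \mathfrak m, k)$ is local, $M$ is finitely generated, and $M \otimes_R N \cong R$ for some module $N$:
$$
(M \otimes_R k) \otimes_k (N \otimes_R k) \;\cong\; (M \otimes_R N) \otimes_R k \;\cong\; k \quad\Longrightarrow\quad \dim_k M/\mathfrak m M = 1,
$$
so by Nakayama $M$ is generated by one element, say $M \cong R/I$; then $R \cong M \otimes_R N \cong N/IN$ is annihilated by $I$, which forces $I = 0$ and hence $M \cong R$ is free of rank $1$.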
Geometrically, we take a line bundle and sort of "straighten the lines" to make it trivial. In fact, a line bundle is just trivializing neighborhoods glued by transition functions valued in $\mathbb{A}^1 \setminus \{0\}$, i.e. nowhere-vanishing functions. We just take the reciprocal of each transition function, and that gives us the inverse line bundle.
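To spell this out in symbols (nothing beyond the standard cocycle description, which Solution 1 below also uses): if $\mathscr L$ is trivialized over an open cover $\{U_i\}$ with transition functions $g_{ij} \in \mathcal O_X^*(U_i \cap U_j)$, then
$$
g_{ij}\,g_{jk} = g_{ik} \ \text{ on } U_i \cap U_j \cap U_k, \qquad \mathscr L^{-1} \leftrightarrow \{g_{ij}^{-1}\}, \qquad \mathscr L \otimes \mathscr L^{-1} \leftrightarrow \{g_{ij}\,g_{ij}^{-1}\} = \{1\} \leftrightarrow \mathcal O_X.
$$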
So one might think of the following: the tensor product of modules behaves nicely geometrically precisely when there is no "torsion" in some non-precise sense, i.e. for locally free modules? Locally projective modules? Flat modules? (Projective is stronger than flat.) Those are morally vector bundles, and they can be straightened consistently precisely when the rank equals $1$.
As a counterthought, the above isn't exactly right. "Straightening" only really makes sense in the rank $1$ case because the structure sheaf is itself a line bundle and not a higher-rank vector bundle. Since ranks multiply under tensor product, a rank $2$ vector bundle cannot invert another rank $2$ vector bundle: their tensor product has rank $4$, never rank $1$. But in the rank $1$ case, inverting really is straightening the bundle, which can be done. For higher-rank vector bundles, we could instead try to define an "inverse" relative to $O^r$, but even then not everything would be invertible (anything that does not split probably should not be invertible in this sense).
Everything I have so far is rather verbose. On an intuitive level, with respect to the original question, we can think of nontrivial line bundles as structures that force their sections to have zeros or poles. If we have a regular section that vanishes nowhere, we can use it to trivialize the line bundle. A nice geometric picture of this is the Möbius strip viewed as a line bundle over the circle.
Alternatively, we can always construct an inverse by taking the dual of the line bundle and then using the adjoint properties of the tensor product to show it's an inverse. If we take a section, there will have to be some point at which the section crosses the zero section. If we can find a section of another bundle that has poles exactly where our section has zeros, then their tensor product, whose sections behave like products of the two original sections, will be regular and nowhere vanishing.
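The precise statement behind "the dual is the inverse" is the evaluation pairing (this is the tensor/Hom adjunction alluded to above):
$$
\mathscr L \otimes_{\mathcal O_X} \mathscr L^{\vee} \longrightarrow \mathcal O_X, \qquad s \otimes \varphi \longmapsto \varphi(s), \qquad \text{where } \mathscr L^{\vee} = \mathcal{H}om_{\mathcal O_X}(\mathscr L, \mathcal O_X),
$$
and it is an isomorphism exactly because $\mathscr L$ is locally free of rank $1$.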
So now we have to visualize what taking the dual means, i.e. sending zeros to poles. Geometrically, this is kind of like flipping our line bundle over and gluing everything together at infinity, which turns the zeros into poles. But this is all still very algebraic. Anyway, we certainly know how this works for higher-rank bundles, because the tensor product multiplies the dimensions of the fibers.
At this point, one might wonder what I mean by "if we take a section, there will have to be some point at which the section crosses the zero section." Why does a global section have to intersect the zero section?
Think about the Möbius strip. Draw the core circle in the middle; this maps isomorphically to the base circle, and it is the zero section. Thus the Möbius strip is a line bundle over the circle (well, we need to extend the strip out to infinity in the fiber direction). Now try to draw a curve along the Möbius strip that avoids the zero section. This is impossible: after wrapping once around, we end up on the other side of the strip, so to close up into a well-defined section we must cross the zero section. Such a curve represents a section, and thus every section has a zero. This is how we know the bundle is nontrivial: every section vanishes somewhere.
In fact, we know that a line bundle is trivial if and only if it has a nowhere-vanishing regular section: given such a section $f$, we get a trivialization $X \times \mathbb{A}^1 \to L$ (where $X$ is the variety or scheme we are working over and $L$ is the line bundle) by sending $(x, y) \mapsto y \cdot f(x)$, and this is an isomorphism. Hence the zeros and poles of sections really carry the interesting information about a line bundle.
This is really the significance of the Picard group construction. To any set of zeros and poles, i.e. a divisor $D$, we may associate a line bundle $O(D)$, and this becomes an isomorphism once we mod out by linear equivalence, i.e. by the divisors of rational functions on the variety. Hence the line bundle is uniquely determined by its zeros and poles, up to adding the zeros and poles achievable by rational functions on $X$.
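A concrete case to keep in mind (a standard example, not anything specific to the discussion above): on $X = \mathbb P^1$ every divisor is linearly equivalent to a multiple of a single point, so
$$
\operatorname{Pic}(\mathbb P^1) \cong \mathbb Z, \qquad [D] \longmapsto \deg D, \qquad \mathcal O(D) \cong \mathcal O_{\mathbb P^1}(\deg D),
$$
and the inverse of $\mathcal O(n)$ is $\mathcal O(-n)$, since degrees add under tensor product.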
In general, the statement "trivial if and only if there is a nowhere-vanishing section" is intuitively clear by monodromy, as in the Möbius picture. And one can formally verify the tensor/Hom adjunction used in the Picard construction. But what does it mean, really?
The Picard group, the way I think of it, is the group of all divisors: finite formal linear combinations of codimension $1$ subvarieties, i.e. of points if we are working over a curve. And these correspond to zeros and poles of functions.
To which we could ask, is this accurate? Isn't the Picard group more analogous to the class group? And even then, that is not always an isomorphism.
To which we would respond: yes, that is true. The class group, which is the group described above, quotients out by the principal divisors $\operatorname{div}(f)$, obtained by looking at the valuations of a rational function $f$ along all codimension $1$ subvarieties, i.e. by recording its order of vanishing at all the possible zeros and poles. And it is correct that the Picard group and the class group are not always isomorphic. However, for smooth complete curves they most certainly are, and that is the setting where most of this theory gets used.
We already have some issues if we do not work over algebraically closed fields, because then we cannot factor our polynomials all the way.
(Wikipedia gives the explicit construction of the map from line bundles to class group elements, and the fact that a bundle with a nowhere-vanishing global section is trivial makes it clear why the map is well-defined up to principal divisors.)
The bad behavior comes when the scheme is not locally factorial. So we ask: what is an example of a scheme whose local rings are not unique factorization domains? There we will have issues; for example, this is where the distinction between Weil and Cartier divisors shows up, since Weil does not always imply Cartier.
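A standard example to have in mind here (the usual textbook one, not something from the discussion above) is the quadric cone:
$$
X = \operatorname{Spec} k[x,y,z]/(z^2 - xy),
$$
whose local ring at the vertex is not a UFD; the ruling $D = \{x = z = 0\}$ is a Weil divisor that is not Cartier, although $2D = \operatorname{div}(x)$ is.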
Now, in spite of all this rambling, I feel like I still do not have a deep geometric understanding as to why line bundles have inverses with respect to the tensor product. It's quite possible I'm missing a relatively simple way of thinking about it. Could someone assess my statements and tell me if they are correct and/or the right way to think about this problem, and possibly contribute some of their own intuitions/explanations? Thanks in advance.
Solution 1:
I'm going to make this more geometric and less algebraic. But it all translates to the algebro-geometric setting of divisors if you wish.
You should think of a line bundle as a twisted product, and tensor product means that you concatenate or superimpose the twists. For example, if we think of the Möbius strip as a real line bundle $\mathscr L \to S^1$, then $\mathscr L\otimes\mathscr L$ corresponds to the Möbius strip with two half-twists, which is the trivial bundle. You can think of complex line bundles analogously.
To make this rigorous, it's best to think about the transition function construction of a line bundle. We take an appropriate open cover $U_i$ over which the line bundles $\mathscr L$ and $\mathscr L'$ are trivial, and take transition functions $\phi_{ij}$ and $\phi'_{ij}$ on $U_i\cap U_j$. Then the tensor product has transition functions $\phi_{ij}\phi'_{ij}$ (which then concatenates or superimposes the respective twists).
It now follows that the dual or inverse line bundle $\mathscr L^*$ has transition functions $1/\phi_{ij}$. That is, it twists precisely the opposite of the original bundle, and when you tensor the two bundles you have no twist at all.
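For instance, in the Möbius example one convenient choice (the cover and the signs here are just one possible normalization) is to write $S^1 = U \cup V$ as two arcs whose intersection has two components, with the transition function of $\mathscr L$ equal to $+1$ on one component and $-1$ on the other. Then
$$
\mathscr L \otimes \mathscr L \leftrightarrow \{(\pm 1)^2\} = \{1\} \leftrightarrow \text{the trivial bundle}, \qquad \mathscr L^* \leftrightarrow \{(\pm 1)^{-1}\} = \{\pm 1\} \leftrightarrow \mathscr L,
$$
so the Möbius bundle is its own inverse, matching the picture that two half-twists cancel.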
(By the way, this intuitive geometric notion of twist aligns with the tensor product notion in commutative algebra as giving extension of scalars.)
Solution 2:
To my mind, "invertible with respect to the tensor product" is already the correct definition of a line bundle, in full generality. If that isn't the definition you're using I assume you're using something like "locally free of rank $1$," so let me say something about this.
Intuitively you should think of a line bundle as literally a bundle of lines; that is, a bundle of $1$-dimensional vector spaces. (This is why line bundles have something to do with maps to projective spaces: projective spaces are, by definition, moduli spaces of lines.) Among vector spaces, $1$-dimensional vector spaces are precisely the invertible ones with respect to the tensor product, and so among, say, vector bundles, it should be plausible that line bundles are precisely the invertible ones as well.
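The plausibility check is just the dimension count for vector spaces (for vector bundles, replace dimension by rank):
$$
\dim_k(V \otimes_k W) = (\dim_k V)(\dim_k W), \qquad \text{so} \qquad V \otimes_k W \cong k \;\Longrightarrow\; \dim_k V = \dim_k W = 1.
$$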
One way to make this precise is to show that invertibility is a local condition, as follows.
Claim: Let $M$ be an $\mathcal{O}_X$-module on a locally ringed space $(X, \mathcal{O}_X)$. The following conditions on $M$ are equivalent:
1. The evaluation map $M \otimes \text{Hom}(M, \mathcal{O}_X) \to \mathcal{O}_X$ is an isomorphism.
2. $M$ is invertible.
3. $M$ is locally free of rank $1$.
Proof. $1 \Rightarrow 2$: $\text{Hom}(M, \mathcal{O}_X)$ supplies the inverse.
$2 \Rightarrow 3$: Localization is monoidal, so the localization of an invertible module remains invertible. Invertible modules over a ring are finitely generated and projective (there are various ways to prove this), so over a local ring they are free; hence if $M$ is invertible then it is locally free. Since rank is multiplicative under tensor product, it is locally free of rank $1$.
$3 \Rightarrow 1$: whether a morphism of $\mathcal{O}_X$-modules is an isomorphism can be checked locally, and locally a free module of rank $1$ clearly satisfies the condition that the evaluation map is an isomorphism, since all three modules involved are free of rank $1$.
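Spelling out that last local check with explicit generators (just unwinding the statement): if $M|_U \cong \mathcal O_U$ with generator $e$, then $\mathcal{H}om(M, \mathcal O_X)|_U \cong \mathcal O_U$ with generator the dual element $e^*$, and the evaluation map becomes
$$
M|_U \otimes \mathcal{H}om(M, \mathcal O_X)|_U \longrightarrow \mathcal O_U, \qquad f e \otimes g e^* \longmapsto fg \cdot e^*(e) = fg,
$$
which is just multiplication and hence an isomorphism.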
Solution 3:
If I had to give a tl;dr to my incoherent drivel above, it would probably be as follows.
$V \otimes V^*$ has a canonical basis if and only if $V$ is $1$-dimensional. We have a circle; the inverse is the same circle drawn in the opposite direction.
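To unpack the first sentence (a standard linear-algebra fact): under $V \otimes V^* \cong \operatorname{End}(V)$ there is always the canonical element $\operatorname{id}_V$, but it spans only in dimension $1$. Concretely, if $\dim V = 1$ with basis $e$ and dual basis $e^*$, then
$$
V \otimes V^* \xrightarrow{\ \sim\ } k, \qquad v \otimes \varphi \longmapsto \varphi(v),
$$
sends $e \otimes e^*$ to $1$ regardless of the choice of $e$ (rescaling $e$ by $\lambda$ rescales $e^*$ by $\lambda^{-1}$), so it gives a canonical trivialization; for $\dim V \ge 2$ the evaluation map is no longer an isomorphism and no such canonical basis exists.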