Non-integrability of the distribution arising from a 1-form, and the corresponding condition on the 1-form

Solution 1:

This is false. The contact condition is usually described as being *maximally* non-integrable.

Given an integrable (hence involutive) distribution $\xi$ defined by a 1-form $\alpha$, use the invariant formula $$d\alpha(X,Y) = X\alpha(Y) - Y\alpha(X) - \alpha([X,Y]).$$ If $X,Y \in \xi$, then $\alpha(X)=\alpha(Y)=0$, and involutivity gives $[X,Y] \in \xi = \ker\alpha$, so $\alpha([X,Y])=0$ as well; hence $d\alpha(X,Y) = 0$. In particular $\alpha \wedge d\alpha = 0$ everywhere: by multilinearity it suffices to evaluate on basis triples containing at most one vector transverse to $\xi$, and then every term of the expansion either applies $\alpha$ to a vector of $\xi$ or applies $d\alpha$ to two vectors of $\xi$.
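
Here is a quick symbolic sanity check of that identity in local coordinates on $\mathbb{R}^3$: a minimal sketch using sympy, with purely illustrative component names, comparing the coordinate expression of $d\alpha(X,Y)$ with the right-hand side.

```python
# Symbolic check of d(alpha)(X, Y) = X(alpha(Y)) - Y(alpha(X)) - alpha([X, Y]).
import sympy as sp

n = 3
x = sp.symbols('x0:3')
a = [sp.Function(f'a{i}')(*x) for i in range(n)]  # components of the 1-form alpha
X = [sp.Function(f'X{i}')(*x) for i in range(n)]  # components of the field X
Y = [sp.Function(f'Y{i}')(*x) for i in range(n)]  # components of the field Y

def D(V, f):
    """Directional derivative V(f) = V_i df/dx_i."""
    return sum(V[i] * sp.diff(f, x[i]) for i in range(n))

def pair(V):
    """alpha(V) = a_i V_i."""
    return sum(a[i] * V[i] for i in range(n))

# [X, Y]_j = X(Y_j) - Y(X_j)
bracket = [D(X, Y[j]) - D(Y, X[j]) for j in range(n)]

# d(alpha) = sum over i < j of (d_i a_j - d_j a_i) dx_i ^ dx_j, fed (X, Y)
d_alpha_XY = sum((sp.diff(a[j], x[i]) - sp.diff(a[i], x[j])) * (X[i] * Y[j] - X[j] * Y[i])
                 for i in range(n) for j in range(i + 1, n))

rhs = D(X, pair(Y)) - D(Y, pair(X)) - sum(a[j] * bracket[j] for j in range(n))
print(sp.simplify(sp.expand(d_alpha_XY - rhs)))  # 0
```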

For 3-manifolds, the contact condition says exactly that $\xi$ is non-integrable at every point: by the above computation and its converse, integrability of $\xi$ near a point is equivalent to $\alpha \wedge d\alpha = 0$ there. For higher-dimensional manifolds the contact condition can be reinterpreted as "$d\alpha|_{\xi}$ is non-degenerate", which is about as non-integrable as you can get, given that integrability is equivalent to $d\alpha|_{\xi} = 0$.
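
For a concrete picture, take the standard contact form $\alpha = dz - y\,dx$ on $\mathbb{R}^3$: then $$d\alpha = dx\wedge dy, \qquad \alpha\wedge d\alpha = dz\wedge dx\wedge dy \neq 0,$$ so $\ker\alpha$ is a contact structure; whereas the integrable example $\alpha = dz$, whose kernel is tangent to the foliation by horizontal planes, has $d\alpha = 0$ and hence $\alpha\wedge d\alpha = 0$.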

Solution 2:

Posting this to sum up everything that came out in the huge comment discussion under Mike's answer, a discussion so long it takes 4 screenshots, 1 2 3 and 4, to capture. I will accept Mike's answer for the patience he must have needed to carry that discussion through :). What emerged was the following.

  1. I forgot a "maximally" in my contact condition. "Maximally non-integrable" means "as far as possible from being integrable", in a sense that will be made clear in the following points.
  2. If you define integral manifolds as those whose tangent space *equals* the corresponding subspace of the tangent space of the ambient manifold, and not just a subspace of it as Boothby does (and Boothby is apparently the only one doing that), then completely integrable, integrable and involutive are all equivalent: that completely integrable and involutive are equivalent is Frobenius's theorem, proved in both Boothby and Lee, and that integrable implies involutive is Proposition 14.3 on p. 358 of Lee.
  3. Thus if the distribution $\xi$ given by the zeros of $\alpha$ is integrable, it is involutive, and $d\alpha$ vanishes on $\xi$, by: $$d\alpha(X,Y)=X\alpha(Y)-Y\alpha(X)-\alpha([X,Y]).$$ (When I first wrote this I hadn't yet seen a proof of that formula; the updates below work one out.) Conversely, if $d\alpha|_\xi=0$, the same formula gives $\alpha([X,Y])=0$ for all $X,Y\in\xi$, so the distribution is involutive, thus integrable.
  4. So integrability is equivalent to $d\alpha|_\xi=0$, and non-integrability to $d\alpha|_\xi\neq0$. How far can you get from $d\alpha|_\xi$ being zero? By having it nondegenerate on $\xi$. That is why the condition is described as $\xi$ being maximally non-integrable. This is the definition of the contact condition, and of a contact form.
  5. And now the main course: on a manifold of dimension $2k+1$, the contact condition is equivalent to $\alpha\wedge(d\alpha)^k\neq0$. Mike told me to try proving this myself, and I did. First of all, we write the wedge product out explicitly (up to a positive combinatorial constant, which depends on one's wedge conventions and is irrelevant for the nonvanishing): $$\alpha\wedge(d\alpha)^k(v_1,\dotsc,v_{2k+1})=\sum_{\sigma\in S_{2k+1}}\operatorname{sgn}\sigma\cdot\alpha(v_{\sigma(1)})\,d\alpha(v_{\sigma(2)},v_{\sigma(3)})\dotsm d\alpha(v_{\sigma(2k)},v_{\sigma(2k+1)}).$$ Now assume $d\alpha|_\xi$ is nondegenerate. Then it is easy to prove by induction that $\xi$ admits a symplectic basis for $d\alpha$, so on that basis $d\alpha$ is represented by the matrix $J_0$ with $k\times k$ blocks: zeros in the top-left and bottom-right corners, the identity in the bottom-left corner, and minus the identity in the top-right corner. We complete this symplectic basis to a basis of the whole tangent space by adding a vector outside of $\xi$. Plug these vectors into the wedge product above: since $\alpha$ vanishes on $\xi$, the surviving terms all have $\sigma(1)=2k+1$, where $v_{2k+1}$ is the vector outside $\xi_q$ and $v_1,\dotsc,v_{2k}$ is the symplectic basis. So the result is $\alpha(v_{2k+1})$ times the $k$-th power of the canonical symplectic form applied to a symplectic basis, and applying that power to a symplectic basis yields a sum of terms that are each $1$ or $-1$ (see also the numerical check after this list). The terms in question differ from each other in three possible ways:

    1) The sign of $\sigma$;

    2) The order of the arguments inside the factors;

    3) The order of the factors.

    Let us see how changing (2) and (3) affects (1). If I swap two factors as in (3), the permutation $\sigma$ gets composed with two transpositions: to be explicit, to swap the factors of $\omega(v_1,v_3)\omega(v_2,v_4)$ I need only compose $\sigma$ with the permutation $(1,2)(3,4)$. This has even sign, so $\sigma$ keeps its sign, and the product of factors is unchanged, so no change overall. If I swap the arguments inside one factor as in (2), I get a minus sign from that factor, but another one from the sign of $\sigma$, which gets composed with a single transposition. Again, no sign change. So all the terms have the same sign, and this direction is done. Conversely, suppose the wedge product is nonzero at a point $q$. My original idea was that a nonzero term of the expansion produces an "almost symplectic" basis which a couple of normalizations turn into a genuine one; but the cleanest way to conclude is the contrapositive. If $d\alpha|_{\xi}$ were degenerate at $q$, we could take a nonzero $v_1\in\xi_q$ with $d\alpha(v_1,w)=0$ for all $w\in\xi_q$, extend it to a basis $v_1,\dotsc,v_{2k}$ of $\xi_q$, and add a vector $v_{2k+1}\notin\xi_q$. In the expansion above, every term with $\sigma(1)\neq2k+1$ contains the factor $\alpha(v_{\sigma(1)})=0$, while every term with $\sigma(1)=2k+1$ feeds only vectors of $\xi_q$ to the $d\alpha$ factors, and the factor containing $v_1$ vanishes. So $\alpha\wedge(d\alpha)^k$ would vanish on a basis, hence at $q$, a contradiction. Therefore $d\alpha|_\xi$ is nondegenerate.

  6. As a bonus, if $\omega$ is a 2-form on a $2k$-dimensional vector space, nondegeneracy is equivalent to $\omega^k\neq0$. The argument is similar to the above: use the analogous expression for $\omega^k$ applied to $2k$ vectors; if $\omega^k$ is nonzero, the same contrapositive as in the previous point shows $\omega$ is nondegenerate, and if $\omega$ is nondegenerate, then we have a symplectic basis, and the $k$-th power of the canonical symplectic form is nonzero simply by applying it to that basis (again, see the check below). The expression for the $k$-th power is: $$\omega^k(v_1,\dotsc,v_{2k})=\sum_{\sigma\in S_{2k}}\operatorname{sgn}\sigma\cdot\omega(v_{\sigma(1)},v_{\sigma(2)})\dotsm\omega(v_{\sigma(2k-1)},v_{\sigma(2k)}).$$
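
Here is the numerical check promised in the last two points: a minimal Python sketch (the basis choices are the standard ones, and the overall constants depend on one's wedge conventions, so only nonvanishing matters) that evaluates both permutation-sum formulas.

```python
# Evaluating the permutation-sum formulas of points 5 and 6 numerically.
from itertools import permutations

import numpy as np

def sign(p):
    """Sign of a permutation given as a tuple, by counting inversions."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

# Point 5 with k = 1: alpha = dz - y dx on R^3, evaluated at the origin in the
# standard basis, so alpha = dz and d(alpha) = dx ^ dy there.
alpha = np.array([0.0, 0.0, 1.0])
d_alpha = np.array([[0.0, 1.0, 0.0],
                    [-1.0, 0.0, 0.0],
                    [0.0, 0.0, 0.0]])
s3 = sum(sign(p) * alpha[p[0]] * d_alpha[p[1], p[2]] for p in permutations(range(3)))
print(s3)  # 2.0, nonzero: the contact condition holds

# Point 6 with k = 2: the canonical symplectic form on R^4 in a symplectic
# basis, i.e. the matrix J0 described in point 5.
k = 2
J0 = np.block([[np.zeros((k, k)), -np.eye(k)],
               [np.eye(k), np.zeros((k, k))]])
s4 = sum(sign(p) * np.prod([J0[p[2 * m], p[2 * m + 1]] for m in range(k)])
         for p in permutations(range(2 * k)))
print(s4)  # -8.0, nonzero: omega^k != 0, as nondegeneracy predicts
```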

Update: I tried to prove the formula for $\alpha([X,Y])$, but I seem to have disproven it. I am sure there must be something wrong in what I've done, but I just can't see what. I did everything locally. Locally I have a chart, the "canonical" coordinate basis $\partial_i$ of the tangent space, and the dual basis $dx_i$ of the cotangent space, with $dx_i(\partial_j)=\delta_{ij}$. Locally, $\alpha=\alpha_i\,dx_i$, with the repeated-indices convention. $d\alpha$ can be written as: $$d\alpha=d\alpha_i\wedge dx_i=\partial_j\alpha_i\,dx_j\wedge dx_i=(\partial_j\alpha_i-\partial_i\alpha_j)\,dx_j\wedge dx_i.$$

Now, if I plug in $X,Y$, I get: \begin{align*} d\alpha(X,Y)={}&(\partial_j\alpha_i-\partial_i\alpha_j)(dx_j(X)dx_i(Y)-dx_i(X)dx_j(Y))={} \\ {}={}&\partial_j\alpha_idx_j(X)dx_i(Y)-\partial_j\alpha_idx_i(X)dx_j(Y)-\partial_i\alpha_jdx_j(X)dx_i(Y)+\partial_i\alpha_jdx_i(X)dx_j(Y)={} \\ {}={}&\partial_j\alpha_iX_jY_i-\partial_j\alpha_iX_iY_j-\partial_i\alpha_jX_jY_i+\partial_i\alpha_jX_iY_j. \end{align*} The first term up there is $X\alpha(Y)$, the second one is $-Y\alpha(X)$, so the rest should be $-\alpha([X,Y])$.

So I wrote the commutator out: $$[X,Y]=[X_i\partial_i,Y_j\partial_j]=X_i\partial_i(Y_j\partial_j)-Y_j\partial_j(X_i\partial_i)=X_iY_j\partial_i\partial_j+X_i(\partial_iY_j)\partial_j-Y_jX_i\partial_j\partial_i-Y_j(\partial_jX_i)\partial_i.$$ The mixed derivatives cancel out, and the rest is: $$[X,Y]=X_i(\partial_iY_j)\partial_j-Y_j(\partial_jX_i)\partial_i.$$ Apply $\alpha$ to it: \begin{align*} \alpha([X,Y])={}&\alpha_kdx_k[X_i(\partial_iY_j)\partial_j-Y_j(\partial_jX_i)\partial_i]={} \\ {}={}&\alpha_kX_i(\partial_iY_j)dx_k(\partial_j)-\alpha_kY_j(\partial_jX_i)dx_k(\partial_i)={} \\ {}={}&\alpha_kX_i(\partial_iY_j)\delta_{jk}-\alpha_kY_j(\partial_jX_i)\delta_{ik}={} \\ {}={}&\alpha_jX_i(\partial_iY_j)-\alpha_iY_j(\partial_jX_i). \end{align*} Which is evidently not the same as above. What am I doing wrong here?

Update 2: I tried an altogether different approach, and failed again. I am copying it here for the record, and also because my terrible habit of using $i,j$ as indices might have made me mess the indices up and get a wrong result, which is less likely to happen on the computer. I tried using Cartan's formula: $$\mathcal{L}_X\alpha=\iota_Xd\alpha+d(\iota_X\alpha),$$ since evidently: $$d\alpha(X,Y)=(\iota_Xd\alpha)(Y)=(\mathcal{L}_X\alpha-d(\iota_X\alpha))(Y).$$

Let us write out the commutator. Suppose $X=X_i\partial_i,Y=Y_i\partial_i$. Then: \begin{align*} [X,Y]={}&[X_i\partial_i,Y_j\partial_j]=X_i(\partial_iY_j)\partial_j+X_iY_j\partial_i\partial_j-Y_j(\partial_jX_i)\partial_i-Y_jX_i\partial_j\partial_i=X(Y_j)\partial_j-Y(X_i)\partial_i={} \\ {}={}&(X(Y_i)-Y(X_i))\partial_i. \end{align*}

Let us start from the second term. Suppose $\alpha=\alpha_idx_i$. Then: $$d(\iota_X\alpha)(Y)=d(\alpha(X))(Y)=\partial_j(\alpha_iX_i)dx_j(Y)=(\partial_j\alpha_i)X_iY_j+(\partial_jX_i)\alpha_iY_j=Y(\alpha(X)).$$ OK, I had a wrong minus sign over here: I had gotten $Y(\alpha(X))-2\alpha_iY(X_i)$. But then there must be something wrong in the next bit too. Let me see. $$\mathcal{L}_X\alpha=X_i\partial_i(\alpha_jdx_j)=X_i\partial_i(\alpha_j)dx_j+X_i\alpha_j\partial_i(dx_j).$$ Interpreting $\partial_i$ as a vector field, $\partial_i(dx_j)$ would be a Lie derivative, so I use Cartan's formula once more: $$\mathcal{L}_X\alpha=X_i\partial_i(\alpha_j)dx_j+X_i\alpha_j(\iota_{\partial_i}ddx_j+d(\iota_{\partial_i}dx_j)).$$ Now $ddx_j=0$, and $\iota_{\partial_i}dx_j=dx_j(\partial_i)=\delta_{ij}$, so: $$\mathcal{L}_X\alpha=X_i\partial_i(\alpha_j)dx_j+X_i\alpha_jd(\delta_{ij}),$$ OK, that can't be right. Or maybe it is. Let us go on and see what we get. That means the second term is 0. Now we finally insert $Y$: $$(\mathcal{L}_X\alpha)(Y)=X_i(\partial_i\alpha_j)Y_j=X(\alpha_j)Y_j=X(\alpha_jY_j)-X(Y_j)\alpha_j.$$ Is that last term $\alpha([X,Y])$? Remember how $[X,Y]=(X(Y_i)-Y(X_i))\partial_i$. Then: $$\alpha([X,Y])=\alpha((X(Y_i)-Y(X_i))\partial_i)=\alpha_jdx_j((X(Y_i)-Y(X_i))\partial_i)=\alpha_j(X(Y_j)-Y(X_j)).$$ So I am missing half of this above. What is wrong above? (In hindsight, I believe the culprit is the step $\mathcal{L}_X\alpha=X_i\partial_i(\alpha_jdx_j)$: the Lie derivative is not $C^\infty$-linear in the vector field slot, since $\mathcal{L}_{fV}\alpha=f\mathcal{L}_V\alpha+(\iota_V\alpha)\,df$, so actually $\mathcal{L}_X\alpha=X_i\mathcal{L}_{\partial_i}\alpha+\alpha_i\,dX_i$, and the dropped term $\alpha_i\,dX_i$ supplies exactly the missing half.)
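
If that diagnosis is right, it should be checkable symbolically. Here is a minimal sympy sketch (component names illustrative) comparing the Lie derivative computed by Cartan's formula with the naive guess $X_i\mathcal{L}_{\partial_i}\alpha$; their difference is exactly the dropped term $\alpha_i\,\partial_jX_i\,dx_j$.

```python
# Check that L_X alpha = X_i L_{d_i} alpha + a_i dX_i, i.e. the Lie derivative
# is not C^infinity-linear in the vector field slot.
import sympy as sp

n = 3
x = sp.symbols('x0:3')
a = [sp.Function(f'a{i}')(*x) for i in range(n)]  # components of alpha
X = [sp.Function(f'X{i}')(*x) for i in range(n)]  # components of X

# (L_X alpha)_j via Cartan's formula: (iota_X d(alpha) + d(iota_X alpha))_j
cartan = [sum(X[i] * (sp.diff(a[j], x[i]) - sp.diff(a[i], x[j])) for i in range(n))
          + sp.diff(sum(a[i] * X[i] for i in range(n)), x[j])
          for j in range(n)]

# The naive guess "X_i L_{d_i} alpha" has components X_i d_i a_j
naive = [sum(X[i] * sp.diff(a[j], x[i]) for i in range(n)) for j in range(n)]

# The discrepancy should be exactly a_i d_j X_i, the term dropped in Update 2
missing = [sum(a[i] * sp.diff(X[i], x[j]) for i in range(n)) for j in range(n)]
print([sp.simplify(sp.expand(cartan[j] - naive[j] - missing[j])) for j in range(n)])
# [0, 0, 0]
```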

Update 3: Chi la dura, la vince (He conquers who endures). I was stubborn enough to try a third time. We have written before that: $$\alpha([X,Y])=\alpha_i(X(Y_i)-Y(X_i)).$$ We can easily see the following: \begin{align*} X(\alpha(Y))={}&X(\alpha_i)Y_i+X(Y_i)\alpha_i, \\ Y(\alpha(X))={}&Y(\alpha_i)X_i+Y(X_i)\alpha_i; \end{align*} this boils down to writing $X,Y$ and their arguments out in components, as we have done above. Let us then compute the RHS of our claim: \begin{align*} X(\alpha(Y))-Y(\alpha(X))-\alpha([X,Y])={}&X(\alpha_i)Y_i+\underline{X(Y_i)\alpha_i}-Y(\alpha_i)X_i-\overline{Y(X_i)\alpha_i}-\alpha_i(\underline{X(Y_i)}-\overline{Y(X_i)})={} \\ {}={}&X(\alpha_i)Y_i-Y(\alpha_i)X_i. \end{align*}

For the LHS, I must first stress that I had an erroneous definition of $d\alpha$ above: $d\alpha\neq(\partial_i\alpha_j-\partial_j\alpha_i)dx_i\wedge dx_j$. It is NOT a sum over all combinations of $i,j$, but a sum over $i<j$ only. To sum over all combinations, I must put a factor of one half in front of everything. I will now compute the LHS and finally prove the equality: \begin{align*} 2d\alpha(X,Y)={}&(\partial_j\alpha_i-\partial_i\alpha_j)(dx_j(X)dx_i(Y)-dx_i(X)dx_j(Y))={} \\ {}={}&X_jY_i\partial_j\alpha_i-X_jY_i\partial_i\alpha_j-X_iY_j\partial_j\alpha_i+X_iY_j\partial_i\alpha_j={} \\ {}={}&Y_iX(\alpha_i)-X_jY(\alpha_j)-X_iY(\alpha_i)+Y_jX(\alpha_j)={} \\ {}={}&2Y_iX(\alpha_i)-2X_iY(\alpha_i), \end{align*} which, unless I'm much mistaken, is exactly twice the RHS.
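
One last symbolic check of that factor of two (a minimal sympy sketch, component names illustrative): summing the coordinate expression over all index pairs, i.e. omitting the one half, gives exactly twice the invariant right-hand side.

```python
# Check: the all-pairs sum equals 2 * (X(alpha(Y)) - Y(alpha(X)) - alpha([X, Y])).
import sympy as sp

n = 3
x = sp.symbols('x0:3')
a = [sp.Function(f'a{i}')(*x) for i in range(n)]
X = [sp.Function(f'X{i}')(*x) for i in range(n)]
Y = [sp.Function(f'Y{i}')(*x) for i in range(n)]

def D(V, f):
    """Directional derivative V(f)."""
    return sum(V[i] * sp.diff(f, x[i]) for i in range(n))

# Sum over ALL pairs (i, j), with no factor of one half in front:
lhs = sum((sp.diff(a[i], x[j]) - sp.diff(a[j], x[i])) * (X[j] * Y[i] - X[i] * Y[j])
          for i in range(n) for j in range(n))

bracket = [D(X, Y[j]) - D(Y, X[j]) for j in range(n)]
rhs = (D(X, sum(a[i] * Y[i] for i in range(n)))
       - D(Y, sum(a[i] * X[i] for i in range(n)))
       - sum(a[j] * bracket[j] for j in range(n)))

print(sp.simplify(sp.expand(lhs - 2 * rhs)))  # 0
```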

We try and we fail, we try and we fail, but the only true failure is when we stop trying.

So says the fortune-teller in the crystal ball in "The Haunted Mansion". Well, lucky I didn't stop trying :).