A connection between vector space operations and intuitionistic connectives?

Solution 1:

Here's a perspective that unifies all the previous answers. Unsurprisingly, it is based on category theory.

The internal logic of a symmetric monoidally closed category is multiplicative intuitionistic linear logic (MILL). A monoidal category is one that has a suitable notion of tensor product, $\otimes$ (with unit $I$). It is symmetric if we have a natural isomorphism, $\sigma_{AB}:A\otimes B\cong B\otimes A$, such that $\sigma_{BA}\circ\sigma_{AB}=id_{A\otimes B}$. It is monoidally closed if we have $\text{hom}$ objects characterized by the tensor-hom adjunction. I'll write $A\multimap B$ for $\text{hom}(A,B)$.

An archetypal example of a symmetric monoidally closed category is $\mathbf{Vect}_k$, the category of $k$-vector spaces for a field $k$. Moving back to an arbitrary category, in the special case where $\otimes = \times$, the categorical product, we talk of a cartesian monoidal category. A cartesian monoidally closed category is called a cartesian closed category. In this case, the internal hom is the exponential object. For example, $\mathbf{Set}$ is a cartesian closed category with $X\multimap Y$ written $Y^X$, the set of functions from $X$ to $Y$. The internal logic of an arbitrary cartesian closed category is (a fragment of) intuitionistic propositional logic (or, better, minimal logic). $\mathbf{Vect}_k$ is not cartesian closed, though: if it were, $V\times(-)$ would be a left adjoint and so would preserve colimits, but $V\times 0\cong V$, which is not the initial object for $V\neq 0$.
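To make this concrete: in $\mathbf{Set}$, the tensor-hom adjunction is ordinary currying, $\mathsf{Hom}(A\times B, C)\cong\mathsf{Hom}(A, C^B)$. Here is a minimal Python sketch (Python functions standing in for $\mathbf{Set}$; the helper names are mine, not a standard API):

```python
def curry(f):
    """Send f : A x B -> C to a function A -> (B -> C)."""
    return lambda a: lambda b: f(a, b)

def uncurry(g):
    """Send g : A -> (B -> C) back to A x B -> C; inverse to curry."""
    return lambda a, b: g(a)(b)

# The two directions are mutually inverse, witnessing Hom(A x B, C) ~ Hom(A, C^B).
add = lambda x, y: x + y
assert curry(add)(2)(3) == 5
assert uncurry(curry(add))(2, 3) == add(2, 3)
```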

The Galois connections that Stephen A. Meigs mentions generalize directly to adjunctions in symmetric monoidally closed categories. First, we have the tensor-hom adjunction itself: $(-)\otimes B \dashv B\multimap(-)$. This states that $\mathsf{Hom}(A\otimes B,C)\cong\mathsf{Hom}(A,B\multimap C)$, natural in $A$ and $C$ (and in fact also in $B$, though this goes beyond what the adjunction immediately states). (Here $\mathsf{Hom}(A,B)$ is the hom-set of arrows from $A$ to $B$, not to be confused with the internal hom, $\text{hom}$, which is an object of the category itself. As the name suggests, though, the internal hom "internalizes" many aspects of hom-sets.) The symmetry also gives us the adjunction $((-)\multimap C)^{op}\dashv (-)\multimap C$, i.e. $\mathsf{Hom}(A,B\multimap C)\cong\mathsf{Hom}(B,A\multimap C)$. As mentioned before, and as suggested by the notation, these adjunctions correspond to the logical rules for $\otimes$ and $\multimap$ in linear logic, which, in the cartesian case, correspond to $\land$ and $\to$ in (non-linear) propositional logic (minimal, intuitionistic, or classical).
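In $\mathbf{Set}$, this second adjunction is just swapping the order in which a curried function takes its arguments, and the swap is its own inverse, mirroring $\sigma_{BA}\circ\sigma_{AB}=id$. A small sketch under the same conventions as above:

```python
def swap_args(g):
    """Send g : A -> (B -> C) to B -> (A -> C); its own inverse."""
    return lambda b: lambda a: g(a)(b)

power = lambda base: lambda exp: base ** exp
assert swap_args(power)(3)(2) == 2 ** 3              # the exponent 3 now comes first
assert swap_args(swap_args(power))(2)(3) == power(2)(3)
```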

Now MILL doesn't contain $(-)^\perp$. We can, however, do what is done in minimal logic, which is to define the "negation" $\neg A$ (so $A^\perp$ for us) as merely $A\to R$ ($A\multimap R$) for some arbitrary but fixed proposition $R$. One natural choice is $R=I$, the unit of the tensor product. (A more natural choice as far as negation is concerned would be $R=0$ or $R=\bot$, but we have not assumed that structure.) All the rules you mention generalize in linear algebra from statements about $V^\perp$ to statements about $\text{hom}(V,W)$; i.e., we view $V^\perp$ as $\text{hom}(V,k)$, where $k$ is the underlying field. $k$ is in fact the unit for the tensor product, i.e. $k=I$. This corresponds, in a general symmetric monoidally closed category, to defining $A^\perp\equiv A\multimap I$. You can verify almost all the laws you mention.

One way to handle $\lor$ is to add categorical coproducts, of which the direct sum of vector spaces, $\oplus$, is an example. (Note that how we handle disjunction is one of the places where the intuitionistic and classical perspectives start to diverge significantly.) In this case, we have $(A\oplus B)\multimap C \cong (A\multimap C)\times (B\multimap C)$ as a (somewhat disguised) special case of the very general and highly useful fact that right adjoints preserve limits, here taking products in the opposite category (i.e. coproducts) to products. We could instead make these statements about the subobjects $A\cap B$ and $A\cup B$ (given the appropriate structure to interpret them), which fits the original question a bit better.
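As a concrete instance of the coproduct law, take $C=k$ and read everything inside a fixed inner product space: the law specializes to the familiar identity $(U+W)^\perp = U^\perp\cap W^\perp$. Here is a hedged numerical sketch in Python/NumPy (the helpers are my own; subspaces are represented by matrices whose columns span them and compared via their orthogonal projectors):

```python
import numpy as np

def complement(S):
    """Orthogonal complement of the column span of S, as columns (via SVD)."""
    U, s, _ = np.linalg.svd(S)
    rank = int(np.sum(s > 1e-10))
    return U[:, rank:]

def projector(S):
    """Orthogonal projector onto the column span of S."""
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    B = U[:, s > 1e-10]
    return B @ B.T

def intersection(S, T):
    """Intersection of two subspaces: the joint fixed space of their projectors."""
    n = S.shape[0]
    M = np.vstack([np.eye(n) - projector(S), np.eye(n) - projector(T)])
    return complement(M.T)                          # the null space of M

rng = np.random.default_rng(0)
U_, W_ = rng.standard_normal((5, 2)), rng.standard_normal((5, 2))

lhs = complement(np.hstack([U_, W_]))               # (U + W)-perp
rhs = intersection(complement(U_), complement(W_))  # U-perp intersect W-perp
assert np.allclose(projector(lhs), projector(rhs))
```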

All of this is part of a larger pattern connecting logic, type theory, category theory, and computation.

Solution 2:

I don't know the details, but I've always thought something like this was part of the motivation for linear logic -- where the negation is even notated $p^\bot$ rather than $\neg p$, and we have connectives $\otimes$ and $\oplus$ that seem to correspond to tensor product and direct sum of vector spaces in a nice way.

In particular, intuitionistic linear logic seems to be closer to linear algebra than classical linear logic is.

Solution 3:

Here's the connection I see (I'm not an expert, but what you noticed has always fascinated me):

Consider the fragment of intuitionistic propositional logic without $\to$ (but with $\neg$, which is here a primitive, not defined as $A \to \bot$).

Then we can interpret any formula of this fragment in a pre-Hilbert space (real or complex), given any assignment of subspaces to propositional variables: $\neg$ is interpreted as $(-)^\bot$, $\land$ as $\cap$, and $\lor$ as $+$.

I think that if you add $\to$ to the fragment, perform proofs in a reduced sequent calculus (see below for why it must be reduced), and at the end get $H\vdash A$ with $H,A$ not containing $\to$, then the subspace associated to $H$ will be contained in the subspace associated to $A$.

In particular, $\neg\neg\neg p\vdash \neg p$ for each $p$ yields (c), and $p\vdash \neg\neg p$ yields (b), provided these are in fact provable in the reduced sequent calculus in question. I would say that they are (again, see below); I'm not completely sure for vector spaces, but by analogy with rings I would say it's the case.
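In a finite-dimensional inner product space both sequents are easy to check numerically. A self-contained NumPy sketch (the helpers are my own), verifying (b), $V\subseteq V^{\perp\perp}$, and (c), $V^{\perp\perp\perp} = V^{\perp}$, for a random subspace of $\mathbb{R}^5$:

```python
import numpy as np

def perp(S):
    """Orthogonal complement of the column span of S, as columns (via SVD)."""
    U, s, _ = np.linalg.svd(S)
    return U[:, int(np.sum(s > 1e-10)):]

def contains(A, B):
    """Is span(B) contained in span(A)?  Project B onto A and check the residual."""
    Q, _ = np.linalg.qr(A)
    return np.allclose(B - Q @ (Q.T @ B), 0)

rng = np.random.default_rng(1)
V = rng.standard_normal((5, 2))          # a random 2-dimensional subspace of R^5

# (b)  p |- ~~p :  V is contained in its double complement.
assert contains(perp(perp(V)), V)

# (c)  ~~~p |- ~p :  the triple complement equals the single complement.
assert contains(perp(V), perp(perp(perp(V))))
assert contains(perp(perp(perp(V))), perp(V))
```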

The problem, of course, is that we can't interpret $\to$, which is somewhat sad.

Let me give another example where one may in fact interpret $\to$, though we don't always get intuitionistic logic. (A very interesting point, however, is that we can see how linear logic gets two different "and"s.)

Consider a commutative (unital) ring $R$. We'll interpret our logic in the set of ideals of $R$ in the following way: assign an ideal $[p]$ to each propositional variable $p$, set $[\bot] = 0$ and $[\top] = R$, and then define inductively $[A\to B] = ([B]: [A]) = \{x\in R\mid \forall y \in [A],\ xy\in [B]\}$, $[A\land B] = [A][B]$ (note that here we have a choice between the product of ideals and their intersection: this is one way to see where linear logic gets its two different "and"s), and $[A\lor B] = [A]+[B]$. (I didn't give a clause for $\neg$ because here we may define $\neg A := A\to \bot$.)

This structure doesn't necessarily satisfy intuitionistic logic (that is, we may have intuitionistically provable sequents $H\vdash A$ such that $[H]$ is not included in $[A]$), but it does satisfy some basic rules. In particular, it satisfies "implication introduction": if $p\land q \vdash r$ then $p\vdash q\to r$ (assume $IJ \subset K$ and pick $x\in I$; then given $y\in J$, $xy\in IJ \subset K$, so $xy\in K$, so $x\in (K:J)$). It also satisfies "if $p\vdash q$ then $r\land p\vdash r\land q$" (assume $I\subset J$; then $KI\subset KJ$), and a weak form of modus ponens: $p\land (p\to q) \vdash q$ (obvious from the definition of $[A\to B]$). I call it weak modus ponens because it is weaker than "$p\vdash r\to q$ and $p\vdash r$ implies $p\vdash q$" (which would correspond to the other "and"). Finally, it satisfies transitivity: if $p\vdash q$ and $q\vdash r$ then $p\vdash r$ (trivial).
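All of these rules can be checked by brute force in a small concrete ring. Here is a hedged Python sketch over $R=\mathbb{Z}/12\mathbb{Z}$ (the modulus and the helper names are my own choices; the code exploits the fact that every ideal of $\mathbb{Z}/N\mathbb{Z}$ is cyclic, so the ideal generated by a set is determined by a gcd):

```python
from functools import reduce
from math import gcd

N = 12  # interpret the logic in the ideals of Z/12Z (an arbitrary small example)

def span(S):
    """Smallest ideal of Z/N containing S; every ideal of Z/N is cyclic."""
    d = reduce(gcd, S, N)                  # d = N yields the zero ideal {0}
    return frozenset(range(0, N, d))

def prod(I, J):                            # [A and B] = [A][B]
    return span({(x * y) % N for x in I for y in J})

def colon(K, J):                           # [A -> B] = ([B] : [A])
    return frozenset(x for x in range(N) if all((x * y) % N in K for y in J))

ideals = {span({d}) for d in range(N)}     # all ideals of Z/12Z

# Weak modus ponens:  I * (J : I)  is contained in  J.
assert all(prod(I, colon(J, I)) <= J for I in ideals for J in ideals)

# Implication introduction:  if I*J is contained in K, then I is in (K : J).
assert all(I <= colon(K, J)
           for I in ideals for J in ideals for K in ideals
           if prod(I, J) <= K)

# Monotonicity:  if I is contained in J, then K*I is contained in K*J.
assert all(prod(K, I) <= prod(K, J)
           for I in ideals for J in ideals for K in ideals
           if I <= J)
```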

In particular, from these rules one may prove $\neg\neg\neg p \vdash \neg p$ in the following way: $p\land \neg p\vdash \bot$ (definition of $\neg$ and weak modus ponens), thus $p\vdash \neg\neg p$ (implication introduction), thus $\neg\neg\neg p\land p \vdash \neg\neg\neg p \land \neg\neg p$. Moreover, $\neg\neg\neg p \land \neg\neg p\vdash \bot$ (definition of $\neg$ and weak modus ponens again), so $\neg\neg\neg p\land p \vdash \bot$ (transitivity), which yields $\neg\neg\neg p \vdash \neg p$ (implication introduction).

This yields a famous equation: $\mathrm{Anh}^3 = \mathrm{Anh}$ (where $\mathrm{Anh}$ denotes the annihilator of an ideal). Of course we gained essentially nothing from the previous proof compared to the usual ring-theoretic one, but the point is that if we already know the logic that these rings satisfy, and know some of its basic properties (such as $\neg\neg\neg p\vdash \neg p$), then we may apply it to rings and get this equation. In other words, the equation follows from purely formal considerations (note that all the deduction rules I used are derivable in usual intuitionistic logic).
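The equation is easy to confirm directly in a small ring. A self-contained Python check over all ideals of $\mathbb{Z}/12\mathbb{Z}$ (again, my own arbitrary choice of ring):

```python
N = 12  # check Anh^3 = Anh for every ideal of Z/12Z

def ideal(d):
    """The principal ideal (d) in Z/N."""
    return frozenset((k * d) % N for k in range(N))

def anh(I):
    """Annihilator Anh(I) = (0 : I) = {x | x*y = 0 for all y in I}."""
    return frozenset(x for x in range(N) if all((x * y) % N == 0 for y in I))

for d in range(N):
    I = ideal(d)
    assert anh(anh(anh(I))) == anh(I)
```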

One may study the logics that arise in rings this way: in particular, one can characterize when such a logic contains intuitionistic logic or classical logic. An interesting point is that having two "and"s available makes it possible to interpret linear logic (or in fact affine logic).

I got a bit far from vector spaces here, but the point is that we may study interpretations of logics in algebraic structures, and see which properties of said algebraic structures are "purely formal" in a sense.