What is the intuition behind the "par" operator in linear logic?

Solution 1:

⅋ also had me baffled for a long time. The intuition I've arrived at is this: You have both an A and a B, but you can't use them together.

Examples of this (considered as entities in a computer program) include:

  • A⊸A (= $A⅋A^⊥$) is a function value; you can't get rid of it by connecting its input to its output.
  • $A⅋A^⊥$ is a communication channel (future): you can put a value into one end and retrieve it from the other. It is forbidden, however, to do both from the same thread (risk of deadlock, leading to ⊥).

(The computer program analogy may not make sense for everybody, of course - but this has been my approach to the topic, and my basis for intuitions.)
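To make the second bullet concrete, here is a minimal sketch in TypeScript (my own illustration; the names `put` and `get` are made up): a Promise and its resolver act as the two ends of a one-shot channel, and the deadlock appears when one sequential piece of code tries to use both ends in the wrong order.

```typescript
// A small sketch of the "future" reading of A ⅋ A^⊥: one end lets you put a value
// in (the A^⊥ side, an obligation), the other lets you take it out (the A side).
let put!: (n: number) => void;
const get: Promise<number> = new Promise((resolve) => { put = resolve; });

// Fine: the two ends are used by different parties.
get.then((n) => console.log(`received ${n}`)); // consumer reads from one end
put(42);                                       // producer feeds the other end

// Forbidden: the same sequential "thread" first waits on `get` and only then calls `put`.
// That execution never reaches the put, i.e. it deadlocks -- the ⊥ mentioned above:
//   const n = await get;  // blocks forever
//   put(42);              // unreachable
```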

You can read the inference rules in an up-right-down manner. The rule for ⅋ introduction (on the left of the turnstile),

$$ \frac{ Δ_1,A ⊢ Γ_1 \quad Δ_2,B ⊢ Γ_2 }{ Δ_1,Δ_2, A⅋B \ ⊢ \ Γ_1,Γ_2 } $$

can thus be read as "given a value of type A⅋B, you can divide your remaining resources into two, transform them into new ones, and combine the results." And A⅋B cannot be split in any other way: A and B must end up in separate premises and cannot take part in the same transformations.
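For a concrete instance of the rule (my own worked example, taking $Δ_1 = \{A⊸C\}$ and $Δ_2 = \{B⊸D\}$, and using the usual behaviour of ⊸ on the left, $A⊸C, A ⊢ C$):

$$ \frac{ A⊸C, A ⊢ C \quad B⊸D, B ⊢ D }{ A⊸C, B⊸D, A⅋B \ ⊢ \ C, D } $$

The transformation of A into C and of B into D happen in separate premises, so no single step ever gets to use A and B together.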

Solution 2:

Please see Linear Logic on Wikipedia: the symbol ⅋ ("par") is used to denote multiplicative disjunction, whereas ⊗ is used to denote multiplicative conjunction. They are duals of each other. There is also some discussion of this connective there.

Alternatively, one can denote the "par" operation using the symbol $\sqcup$, which the paper (pdf) "Linear Logic Displayed" describes as "join-like."

The logical rules for ⊗ and ⅋ are as follows:

If Γ, A, B, Δ ⊢ Θ, then Γ, A⊗B, Δ ⊢ Θ; conversely, if Γ ⊢ Δ, A and ⊢ B, Θ, then Γ ⊢ Δ, A⊗B, Θ.

Dually, if Γ ⊢ Δ, A, B, Θ, then Γ ⊢ Δ, A⅋B, Θ; conversely, if A, Γ ⊢ Δ and Θ, B ⊢, then Θ, A⅋B, Γ ⊢ Δ.
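As a small worked instance of the ⅋ rule (my own example, also using the usual identity axiom $A ⊢ A$ and the negation rule that moves a formula across the turnstile as its dual): from $A ⊢ A$ we get $⊢ A^⊥, A$, and the ⅋ rule then gives

$$ \frac{\dfrac{A ⊢ A}{⊢ A^⊥, A}}{⊢ A^⊥ ⅋ A} $$

the linear counterpart of the law of excluded middle.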

Multiplication distributes over addition if one is a conjunction and one is a disjunction:

  • A⊗(B⊕C) ≡ (A⊗B)⊕(A⊗C) (and on the other side);

    (multiplicative conjunction distributes over additive disjunction)

  • A⅋(B&C) ≡ (A⅋B)&(A⅋C) (and on the other side);

    (multiplicative disjunction over additive conjunction)

Also:

  • $A⊗0 ≡ 0$ (and on the other side);
  • $A⅋⊤ ≡ ⊤$ (and on the other side).

Linear implication $A\multimap B$ corresponds to the internal hom, which can be defined as $(A⊗B^⊥)^⊥$. There is a de Morgan dual of the tensor called ‘par’, with $A⅋B=(A^⊥⊗B^⊥)^⊥$. Tensor and par are the ‘multiplicative’ connectives, which roughly speaking represent the parallel availability of resources.
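Spelling out how these two descriptions agree, using the standard identification $A⊸B = A^⊥⅋B$ and the involution $(A^⊥)^⊥ = A$:

$$ A⊸B \;=\; A^⊥⅋B \;=\; ((A^⊥)^⊥⊗B^⊥)^⊥ \;=\; (A⊗B^⊥)^⊥. $$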

The ‘additive’ connectives & and ⊕, which correspond in another way to traditional conjunction and disjunction, are modeled as usual by products and coproducts.

For a nice exploration of linear logic, see the following.

(GAME SEMANTICS):

Game semantics for linear logic was first proposed by Andreas Blass (Blass 1992). The semantics here may or may not be the same as that proposed by Blass.

We can interpret any proposition in linear logic as a game between two players: we and they. The overall rules are perfectly symmetric between us and them, although no individual game is. At any given moment in a game, exactly one of these four situations obtains: it is our turn, it is their turn, we have won, or they have won; the last two states continue forever afterwards (and the game is over). If it is our turn, then they are winning; if it is their turn, then we are winning. So there are two ways to win: because the game is over (and a winner has been decided), or because it is forever the other player's turn (either because they have no move or because every move results in its still being their turn).

This is a little complicated, but it's important in order to be able to distinguish the four constants:

In ⊤, it is their turn, but they have no moves; the game never ends, but we win.

Dually, in 0, it is our turn, but we have no moves; the game never ends, but they win.

In contrast, in 1, the game ends immediately, and we have won.

Dually, in ⊥, the game ends immediately, and they have won.

The binary operators show how to combine two games into a larger game:

In A&B, it is their turn, and they must choose to play either A or B. Once they make their choice, play continues in the chosen game, with ending and winning conditions as in that game.

Dually, in A⊕B, it is our turn, and we must choose to play either A or B. Once we make our choice, play continues in the chosen game, with ending and winning conditions as in that game.

In A⊗B, play continues with both games in parallel. If it is our turn in either game, then it is our turn overall; if it is their turn in both games, then it is their turn overall. If either game ends, then play continues in the other game; if both games end, then the overall game ends. If we have won both games, then we have won overall; if they have won either game, then they have won overall.

Dually, in A⅋B, play continues with both games in parallel. If it is their turn in either game, then it is their turn overall; if it is our turn in both games, then it is our turn overall. If either game ends, then play continues in the other game; if both games end, then the overall game ends. If they have won both games, then they have won overall; if we have won either game, then we have won overall.

So we can classify things as follows:

In a conjunction, they choose what game to play; in a disjunction, we have control. Whoever has control must win at least one game to win overall.

In an addition, one game must be played; in a multiplication, all games must be played.

To further clarify the difference between ⊤ and 1 (the additive and multiplicative versions of truth, both of which we win), consider A⅋⊤ and A⅋1. In A⅋⊤, it is always their move (since it is their move in ⊤, hence their move in at least one game), so we win just as we win ⊤. (In fact, A⅋⊤≡⊤.) However, in A⅋1, the game 1 ends immediately, so play continues as in A. We have won 1, so we only have to end the game to win overall, but there is no guarantee that this will happen. Indeed, in 0⅋1, the game never ends and it is always our turn, so they win. (In ⊥⅋1, both games end immediately, and we win. In A⊗1, we must win both games to win overall, so this reduces to A; indeed, A⊗1≡A.)
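Here is a minimal sketch in TypeScript (my own illustration, not anything from Blass's paper): it models only a snapshot of a game, namely whose turn it is or who has already won, not the moves themselves, but it is enough to check the turn and winning rules for ⊗ and ⅋ described above against the examples in the previous paragraph.

```typescript
// A snapshot of a game: whose turn it is, or who has already won.
type Status = "ourTurn" | "theirTurn" | "weWon" | "theyWon";

const isOver = (s: Status) => s === "weWon" || s === "theyWon";

// The four constants, as described above.
const TOP: Status = "theirTurn";  // ⊤: their turn forever, so we win in the limit
const ZERO: Status = "ourTurn";   // 0: our turn forever, so they win in the limit
const ONE: Status = "weWon";      // 1: the game ends immediately and we have won
const BOTTOM: Status = "theyWon"; // ⊥: the game ends immediately and they have won

// A⊗B: our turn if it is our turn in either running game; if one game has ended,
// play continues in the other; once both have ended, we win only if we won both.
function tensor(a: Status, b: Status): Status {
  if (isOver(a) && isOver(b)) return a === "weWon" && b === "weWon" ? "weWon" : "theyWon";
  if (isOver(a)) return b;
  if (isOver(b)) return a;
  return a === "ourTurn" || b === "ourTurn" ? "ourTurn" : "theirTurn";
}

// A⅋B is the exact dual: their turn if it is their turn in either running game;
// once both games have ended, they win only if they won both.
function par(a: Status, b: Status): Status {
  if (isOver(a) && isOver(b)) return a === "theyWon" && b === "theyWon" ? "theyWon" : "weWon";
  if (isOver(a)) return b;
  if (isOver(b)) return a;
  return a === "theirTurn" || b === "theirTurn" ? "theirTurn" : "ourTurn";
}

// The examples from the paragraph above:
console.log(par(ZERO, ONE));    // "ourTurn":   0⅋1 is forever our turn, so they win
console.log(par(BOTTOM, ONE));  // "weWon":     ⊥⅋1 ends immediately and we win
console.log(tensor(TOP, ONE));  // "theirTurn": A⊗1 behaves just like A
```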

Negation is easy: to play $A^⊥$, simply swap roles and play $A$.

A game is valid if we have a strategy to win (whether by putting the game in a state where we have won or by guaranteeing that it is forever their turn). The soundness and completeness of this interpretation is the theorem that A is a valid game if and only if ⊢A is a valid sequent. (Recall that all questions of validity of sequents can be reduced to the validity of single propositions.)

Solution 3:

When I was doing research in interaction nets and linear logic, my personal intuition for the connectives was via a traditional Hintikka game semantics (I think different from Blass).

To me $A⅋B$ can be interpreted as meaning "Given a refutation of $A$, I can prove $B$ and given a refutation of $B$ I can prove $A$".

This has some nice features: first, it brings out the multiplicative nature of the connective (the "and" in the middle); second, it is clear why $A⅋A^⊥$ is true: clearly, if I am given a refutation of $A$, I can refute $A$, and there is nothing non-constructive about it. Third, it does not require the use of parallel execution (though of course that can be an advantage as well).

In my interpretation tensor $A⊗B$ means "I can prove $A$ and I can prove $B$".

In the above "prove" means "win as verifier" and if there is a context it includes the assumption that I use the context linearly. E.g. if there is an $A$ in the context I will use it exactly once.

I don't recall anyone ever explaining the connectives that way precisely, but it may be equivalent to some other formulation.

EDIT: Henning is right; I think the better way to put it is "Given a refutation of $A$, I can prove $B$, or given a refutation of $B$, I can prove $A$".

Par is an assertion that both options are possible and makes no commitment to either, but the surrounding proof context may select one or the other.

Solution 4:

$\DeclareMathOperator{\par}{⅋}$I found it helpful to generalize the resource analogy to deferred promises of resources, similar to the notion of "promises" or "futures" in asynchronous programming. Thus, $\mathit{Apple}$ is not necessarily a real apple, but could be a promise of an apple from someone else.

This distinction makes it possible to interpret the negation $\mathit{Apple}^\bot$ as an obligation to surrender an apple to someone else.

In general, a proposition $A$ in linear logic may contain a mixture of promises and obligations. So something like

$\mathit{Pie}^\bot \otimes \mathit{Apple}$

represents the obligation to provide a pie and, alongside, the promise of an apple.

Distinction between $\par$ and $\otimes$

The interpretation of par ($\par$) is intimately related to the duality of promises and obligations. For example, one may interpret

$\mathit{Flour}^\bot \par \mathit{Cake}$

as the obligation to provide flour with the promise of cake, such that they may be contingent on one another. The chef might require the flour to bake the cake, so you cannot expect to have the cake before the flour has been given.

The presence of contingency in the fulfillment of these promises or obligations is what sets $\par$ apart from $\otimes$.

  • With $A \otimes B$, you have a set of promises and obligations $A$, and a set of promises and obligations $B$, and neither set is contingent on the other. This means $A$ and $B$ must have been constructed through totally independent mechanisms.

  • With $A \par B$, the fulfillment of promises in $A$ may be contingent on certain obligations in $B$ and vice versa. From a software perspective, one could think of $A$ and $B$ as independent threads of execution, with obligations acting as the sending end of channels and promises as the receiving end.

Dually, to prevent cycles of mutually dependent promises, which could cause a deadlock, operations on $A \par B$ must be disjoint: they are limited to those that can act on $A$ independently of $B$ and vice versa. For example, you can transform $A$ into $C$ and $B$ into $D$:

$$A \par B, A \multimap C, B \multimap D \vdash C \par D$$

However, you cannot in general act on both $A$ and $B$ at the same time:

$$A \par B, A \otimes B \multimap C \nvdash C$$
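The channel picture above can be made concrete with a small TypeScript sketch (my own illustration; `oneShot`, `chef`, and the flour/cake names are made up for this example): one side of the ⅋ is an obligation (the sending end of a one-shot channel), the other is a promise (the receiving end), and the promise is contingent on the obligation.

```typescript
// A one-shot channel: the resolver is the "obligation" end, the Promise the "promise" end.
function oneShot<T>(): { send: (t: T) => void; recv: Promise<T> } {
  let send!: (t: T) => void;
  const recv = new Promise<T>((resolve) => { send = resolve; });
  return { send, recv };
}

// The environment (the chef) holds the other ends -- Flour ⊗ Cake^⊥ from its point of
// view -- and only serves the cake after receiving the flour.
function chef(flour: Promise<string>, serveCake: (c: string) => void): void {
  flour.then((f) => serveCake(`a cake baked with ${f}`));
}

const flourChan = oneShot<string>();
const cakeChan = oneShot<string>();
chef(flourChan.recv, cakeChan.send);

// We hold Flour^⊥ ⅋ Cake as the pair (flourChan.send, cakeChan.recv).
// Acting on each side disjointly is fine:
flourChan.send("1 kg of flour");                    // fulfil the obligation on the left...
cakeChan.recv.then((c) => console.log(`got ${c}`)); // ...and the promise on the right is met.

// Treating the two sides as if they formed a ⊗ -- e.g. demanding the cake *before*
// sending the flour -- deadlocks, because the cake is contingent on the flour:
//   await cakeChan.recv;              // never resolves: the chef is still waiting for flour
//   flourChan.send("1 kg of flour");  // unreachable
```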

Introduction

As noted previously, $\par$ is closely related to obligations (negation). In fact, the only way to create a nontrivial $\par$ out of thin air is by creating an obligation:

$$\vdash A^\bot \par A$$

Once created, you are free to transform either side of the $\par$, as long as you do so disjointly from the other side.

Note that the contingency does not have to be one-sided; it is totally possible to have mutual contingency as long as there is no risk of cycles:

$$\vdash (\mathit{Flour}^\bot \par \mathit{Cake}) \par (\mathit{Flour} \otimes \mathit{Cake}^\bot)$$

Elimination

To get rid of a $\par$, you have to fulfill all the obligations (and use up all the promises) on either the left or right side of $\par$:

$$A \par B, B \multimap \bot \vdash A$$

Here, $\bot$ is the identity element of $\par$ and signifies the state of having no obligations remaining so you can slack off indefinitely. It can be created by connecting a promise with an obligation:

$$A, A^\bot \vdash \bot$$

Contrast with $\mathbf{1}$, which signifies a promise of nothing, but you still need to continue working to fulfill your obligations.

Solution 5:

The vending machine analogy by Tanner Swett has its beauty, but that answer contains some errors. Let me correct them.

First of all, let me say that the propositions in Linear Logic are NOT vending machines at all; they are resources, or tokens for the vending machine. So let me say that $Y$ stands for 1 Yan, and $D$ for 1 Dollar.

For any token we can define a "bill" token, which means you owe someone that resource. So $D^\bot$ is a token that says you owe someone 1 Dollar.

Of course, it is common to combine tokens, so $D ⊗ Y$ is one Dollar and one Yan. This is the easiest kind of complicated token to understand.

In some shops you will get vouchers that say you can choose to get a lollipop or a chocolate. Similarly, we can say $Y \& D$ is a voucher with which you can claim a Yan or a Dollar, but not both (of course, many people prefer a Dollar, but in some cases the people you want to deal with accept only Yan...).

The other example I want to show is a lottery ticket: with one you may get a TV set, or a toy car, or nothing. So $Y ⊕ D$ is like a lottery ticket: it can be a Yan or a Dollar, but you have to be lucky to get the Dollar.

Now, where is the vending machine, then? It is the sequents. So $Y \vdash D$ is a vending machine that takes a Yan and gives you a Dollar back. The main point here is that once we have proved/established a sequent/vending machine, we can use it as many times as we like, as long as we have enough tokens to feed it.

So far so good. But we have not touched $Y ⅋ D$ yet. Of course this is some kind of token. Can we claim a Yan from it? I am not quite sure, but let's try.

$$\frac{\displaystyle\frac{\displaystyle\frac{}{D \vdash D}{\;init}}{D^\bot, D \vdash}{\;(\cdot)^\bot L}\ \ \ \frac{}{Y \vdash Y}{\;init}}{D^\bot,\ Y ⅋ D\ \vdash\ Y}{\;⅋ L}$$

By using some existing vending machines ($init$) and wiring them up using the rules, we have created a vending machine that accepts a bill for a Dollar together with a mysterious $Y ⅋ D$ token, and successfully claims a Yan.

Hmm, looks strange, right? Does anything like this happen in real life? Let's say $Y$ now represents your house and $D$ your mortgage bank account; then the vending machine will happily pay off your mortgage loan and hand you the house directly!

So what does $Y ⅋ D$ mean? In real life, it looks like a trust fund contract that manages some assets. Without your agreement, the fund cannot touch the assets; but you are also unable to use any of them yourself unless you let the fund handle the rest on its own.

Further thoughts

In the above I have limited myself to one proposition on the right-hand side, to simplify the meaning of the vending machine. But to understand Linear Logic better, I need to explain what multiple (or zero) propositions on the right-hand side mean.

One further step of the above derivation leads to a vending machine $Y ⅋ D \vdash Y, D$. From the explanation above we see that the key point is that $Y ⅋ D$ does manage both of the resources represented by $Y$ and $D$, but they have to be used separately. So $Y ⅋ D \vdash Y, D$ has to mean that it hands out the resources $Y$ and $D$ to **different** individuals.
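For the record, here is one way to derive that machine, wiring two $init$ machines together with the ⅋ rule on the left:

$$\frac{\frac{}{Y \vdash Y}{\;init}\ \ \ \frac{}{D \vdash D}{\;init}}{Y ⅋ D\ \vdash\ Y, D}{\;⅋ L}$$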