# linear dependent types

Just as the Deutsche Demokratische Republik was named after exactly what it was not, so there are a number of papers whose titles emphasize the combination of linear types and dependent types which are actually about keeping the two very carefully separated, in spite of their presence within the same system. Let me be at pains to approve of the motivations behind that work and the systems developed therein, which are entirely fit for their intended purposes. But my purposes are different.

In particular, I wish to develop a linear dependent type system in which the use-exactly-once policy is guaranteed for the run-time behaviour, but the types are still free to mention linear values arbitrarily. Types aren’t around at run-time. Usage in types doesn’t count as *consumption*: rather, it is *contemplation*. I can contemplate the pint of Brixton Porter I enjoyed last night and tell you how much I enjoyed that pint, even though I cannot drink that pint again. This is nothing new: researchers in program logics have traded in named quantities which are not the current values of actual program variables for decades. Let’s get us some of that action. And as Neel wisely suggested, the key is to use monoidally structured worlds. I doubt Dominic Orchard, Tomas Petricek and Alan Mycroft will be fainting with amazement, either. Nor will Marco Gaboardi.

Where I do need to be a bit of a religious dependent type theorist is in never divvying up the variables in the context: you just can’t presume to do that sort of thing when the types of more local variables depend on more global variables. However, what you’re (I hope I don’t assume too much) used to calling a context, I now call a *pre*-context.

G ::= . | G, x : T

Now, each precontext G has an associated set of G-contexts, annotating each variable with some resource w (because a resource is sort of a world).

.          is a .-context
W, wx : T  is a (G, x : T)-context   if W is a G-context

Now, w inhabits the set {0, 1, *} where * means “lots”, or “hrair” if you speak rabbit (for once the reference is to *Richard* Adams), and they’re monoidal with rabbit addition.

 + | 0 1 *
---+-------
 0 | 0 1 *
 1 | 1 * *
 * | * * *

Pre-contexts G tell you which things may be contemplated. G-contexts tell you how many of each G-thing is available to be consumed. Note that G-contexts are monoidal pointwise, and we may write 0G for the neutral element.
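The resource monoid and the pointwise structure on G-contexts can be sketched in Haskell (the names `Resource`, `rplus`, `Ctx` and so on are mine, not the post's):

```haskell
-- Rabbit arithmetic: the resources {0, 1, *}, where anything past one is "lots".
data Resource = Zero | One | Lots
  deriving (Show, Eq)

-- Rabbit addition, as in the table above.
rplus :: Resource -> Resource -> Resource
rplus Zero w    = w
rplus w    Zero = w
rplus _    _    = Lots

-- A G-context assigns a resource to each variable of the pre-context G;
-- here, crudely, a list of annotations in context order.
type Ctx = [Resource]

-- G-contexts are monoidal pointwise; zeroCtx n is the neutral element 0G.
addCtx :: Ctx -> Ctx -> Ctx
addCtx = zipWith rplus

zeroCtx :: Int -> Ctx
zeroCtx n = replicate n Zero
```

So `addCtx [One, Zero] [One, One]` yields `[Lots, One]`: spending one more of something you have already spent once leaves you owing lots.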

Typing judgments are also marked with a resource, saying how many of the terms we’re consuming.

Here’s pre-context validity. I write :> and <: respectively for reverse and forward membership epsilons.

                     0G |- 0 Type :> S
------------        ---------------------
 . |- valid          G, x : S |- valid

Any resource marking of a valid pre-context G yields a valid G-context.

Now, guess what! There’s a set of quantifiers, which today I’ll write as funny infix arrows -q.

- **-x** intersection, or ‘for an unknown…’
- **-o** lollipop, or ‘given just one…’
- **->** dependent function (a.k.a. Π), or ‘given plenty…’

And, following my modus operandi from *worlds*, we have an action which I’ll write prefix -qw

 -qw | 0 1 *
-----+-------
 -x  | 0 0 0
 -o  | 0 1 *
 ->  | 0 * *

which tells you how many copies of the inputs you need to make however many of the outputs you want. I know, it looks a bit like multiplication.
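The action reads straight off the table. A Haskell sketch (again, the names are my invention; `Resource` is repeated so the fragment stands alone):

```haskell
data Resource = Zero | One | Lots
  deriving (Show, Eq)

-- The three quantifiers of the post.
data Quant = Intersection  -- -x, 'for an unknown…'
           | Lollipop      -- -o, 'given just one…'
           | Arrow         -- ->, 'given plenty…'
  deriving (Show, Eq)

-- The prefix action -qw: how many copies of the input you need
-- to make w copies of the output.
act :: Quant -> Resource -> Resource
act _            Zero = Zero   -- making nothing costs nothing
act Intersection _    = Zero   -- -x never consumes its input
act Lollipop     w    = w      -- -o passes the demand straight through
act Arrow        _    = Lots   -- -> demands plenty, however few you make
</act>
```

The first clause is the column of zeroes: contemplation is free, whichever quantifier you are under.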

Now we can give the (deliberately 1971-inconsistent, to be repaired in some other post) rules. As ever, I work bidirectionally. Today, for convenience, I don’t bother with beta-redexes and presume substitution acts hereditarily. S <= T is a placeholder for the notion of subtyping induced by cumulativity, but for now it should just amount to beta-eta-equality.

G, x : S, G' |- valid
-----------------------------
0G, wx : S, 0G' |- w x <: S

W |- w f <: (x : S) -q T      W' |- -qw S :> s
-----------------------------------------------
W + W' |- w f s <: T[s/x]

W, -qwx : S |- w T :> t
---------------------------------
W |- w (x : S) -q T :> \ x -> t

W |- w e <: S      S <= T
--------------------------
W |- w T :> e

G |- valid
----------------------
0G |- 0 Type :> Type

0G, 0x : S |- 0 Type :> T
------------------------------
0G |- 0 Type :> (x : S) -q T

Key observations:

- Type construction and related contemplative acts require no run-time resource.
- The only two-premise rule requires you to split your resources two ways, but all the variables remain in the context, hence available for contemplation (which costs nothing).
- If W is a G-context and W |- w T :> t, then 0G |- 0 T :> t, and similarly with e <: S.

Think of 0 as the ethereal world where the types dwell in eternity. All constructions, however constrained on this earth, have intuitionistic λ-calculus souls which dwell amongst the types, where the police dogs all have rubber teeth and the jails are all made of tin, and you can bust right out again just as soon as they throw you in. And when the application rule substitutes T[s/x], it is the *soul* of s which replaces the bound x in the type T, even if s lives in world 1.

There’s plenty more work to do, of course. I need to think about datatypes, too. I’m fond of the linear paramorphism (the tool of choice for sorting algorithms)

para : (F (mu F & T) -o T) -> mu F -o T

which says that you may either leave each child untouched or turn it into a T, but not both, when you are turning a whole node into a T. What the hell does that become in the dependent case? We’ll have a dependent version of &, so we can write

indn : ((fd : F ((d : mu F) & P d)) -o P (in (F fst fd))) -> (d : mu F) -o P d

It’s ok for the type of the snd projection to depend on the fst, because even when you choose to take snd on earth, the fst that could have been still lives ethereally.
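Forgetting linearity for a moment, para has a familiar intuitionistic shadow in Haskell, with & collapsed to an ordinary pair and -o to an ordinary arrow. A hedged sketch, not the linear system itself:

```haskell
-- The intuitionistic shadow of the linear paramorphism:
-- & becomes (,), -o becomes ->.
newtype Mu f = In (f (Mu f))

para :: Functor f => (f (Mu f, t) -> t) -> Mu f -> t
para phi (In fd) = phi (fmap (\d -> (d, para phi d)) fd)

-- Example: naturals as Mu Maybe.
type Nat = Mu Maybe

zero :: Nat
zero = In Nothing

suc :: Nat -> Nat
suc n = In (Just n)

-- Here the algebra ignores the untouched child and uses only its T;
-- in the linear system that choice is exactly what & offers.
toInt :: Nat -> Int
toInt = para (\fd -> case fd of
  Nothing     -> 0
  Just (_, n) -> 1 + n)
```

Of course, Haskell happily lets the algebra use both components of the pair; the whole point of & is that the linear system does not.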

Interesting times. Oh, is that my dinner? I’d better stop contemplating this and consume that.

Edit: hopefully, repaired typing rules mangled by wordpress seeing tags where there weren’t any

Is it me or did wordpress mangle the typing rules?

The ones for variables, Type and -q formation are ok, but not the rest.

I see what you mean. The pointy signs in the premises of the application rule were mistaken for a tag, and after that, all was lost. Hopefully fixed now.

Same problem here as saizan 😦

It seems that you’re discarding the presence of _inherently_ linear types.

The world I would like to see is the universe of linear types with a subuniverse of types equipped with “intuitionistic structure”, allowing one to discard or clone their inhabitants freely. It is the structure of a symmetric comonoid T with counit [discard : T -o Unit] and comultiplication [clone : T -o (T, T)], with symmetry guaranteeing that any translation of an intuitionistic lambda term T -> Y into a linear lambda term T -o Y by means of clone and discard yields the same result (perhaps up to coherent equivalence in the sense of HoTT) whatever wiring of clones and discards one uses. One can easily check that for types A and B equipped with intuitionistic structure, their multiplicative product (A, B) and additive sum Either(A, B) carry natural intuitionistic structures themselves and act as the usual pair and disjoint sum. So I would like the subuniverse of intuitionistic types to be the usual universe of (one’s favourite flavour of) MLTT. But by using the nonstandard connectives & and ⅋ one can get out of this subuniverse.
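The comonoid structure described here can be recorded in Haskell as a type class, though plain Haskell cannot enforce linearity, so this only exhibits the interface and the closure under products and sums (the class name `Intuitionistic` is my invention):

```haskell
-- A symmetric comonoid interface: counit (discard) and
-- comultiplication (clone). Linearity is NOT enforced here.
class Intuitionistic t where
  discard :: t -> ()
  clone   :: t -> (t, t)

instance Intuitionistic () where
  discard _ = ()
  clone u   = (u, u)

-- Multiplicative products of intuitionistic types are intuitionistic...
instance (Intuitionistic a, Intuitionistic b) => Intuitionistic (a, b) where
  discard (a, b) = discard a `seq` discard b
  clone (a, b)   = let (a1, a2) = clone a
                       (b1, b2) = clone b
                   in ((a1, b1), (a2, b2))

-- ...and so are additive sums.
instance (Intuitionistic a, Intuitionistic b) => Intuitionistic (Either a b) where
  discard = either discard discard
  clone   = either (\a -> let (a1, a2) = clone a in (Left a1,  Left a2))
                   (\b -> let (b1, b2) = clone b in (Right b1, Right b2))
```

The coherence demanded above (any wiring of clones and discards agrees) is precisely what the class cannot state; it is a property of the instances, not of the interface.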

For closed symmetric monoidal categories (which have to be used to model linear type theories) there is a natural equivalent of being locally closed (http://mathoverflow.net/questions/205902/), which essentially means that for all its types with “intuitionistic structure” we may form dependent products, so the approach described above seems to have some reasonable categorical semantics.

The fine idea of your post, for me, is that types without eliminators should automatically belong to the intuitionistic subuniverse because they are erasable (they don’t carry any runtime meaning). These automatically include universes of types, but it should also be possible to form such types Ghost(T) for all linear types T. These should be types with one non-consuming constructor ⧘_⧙ : T -x Ghost(T) and no eliminators, which guarantees that the values of the type Ghost(T) can only be used for indexing and the like. Unfortunately I don’t yet have any idea how to express this notion categorically.

This gives an insight into how the Id-types of such a type theory should work. There should be no separate “multiplicative” and “additive” Id-types, but one single Id-type _=_ : (Ghost(T), Ghost(T)) -> Type. These Id-types won’t be automatically equipped with intuitionistic structure, but with the structure of a groupoid-cogroupoid (a weak partial Hopf algebra or something like that), allowing one to discard isomorphisms and fork any inhabitants, but not guaranteeing cocommutativity of forks.