Ming — a typecheckerchecker (episode 0)

December 10, 2018

Over the festive period, largely to celebrate even having time to, I’m minded to write a typecheckerchecker, with which I’ll then write a typechecker or two. The plan is to operationalise ideas about how bidirectional type systems work to guide their construction in a way which guarantees their metatheoretic properties.

It’s called Ming, because it wants to rule the universe, because I don’t mind the implementation being a bit smelly, and because the holidays have historically been a time for old serials.

The central observation is that one cannot write down a rule without incurring a proof obligation, and that there is some structure as to the appropriate means of discharging that proof obligation. We don’t write down any old rubbish, you know.

syntax

When you haven’t decided what the object language is, it’s probably a good plan not to design a fancy syntax for it in advance. Now, I have been working on ways to specify fancy grammars with funky scoping rules, by way of exploring possibilities for my new PL class, but I don’t want to get waylaid by such considerations. I can cope with the rustic charms of something distinctly Lisp-like, albeit with binding managed more carefully.

Syntaxes are built from

  • atoms, which are just identifiers, in their role as nullary canonical forms
  • cons cells, which are [term, term], being binary canonical forms
  • bindings, which are identifier.term
  • parentheses, which are (paren-stuff), embedding object-level elimination forms
  • braces, which are {brace-stuff}, embedding meta-level forms

There are several classes of formula we encounter, giving rise to different sorts of paren-stuff and brace-stuff.

Let us adopt the usual Lisp conventions (even if the punctuation is weird) that [t, [blah]] may be written [t blah], and that [] is the nil atom. There’s a cheap and nasty parser which allows [,term] as an ugly way of writing term, and I’m totally up for that.
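
To fix ideas, here is a minimal Haskell sketch of that syntax, parametrised by the paren-stuff and brace-stuff. The type and constructor names are my guesses for exposition, not commitments.

    -- a sketch of the raw syntax, parametrised by what may appear in
    -- parentheses (p) and braces (b)
    data Syn p b
      = Atom String               -- identifiers as nullary canonical forms
      | Cons (Syn p b) (Syn p b)  -- [s, t], binary canonical forms
      | Bind String (Syn p b)     -- x.t
      | Paren p                   -- (paren-stuff)
      | Brace b                   -- {brace-stuff}
      deriving Show

    -- [] is the nil atom; encoding it as the empty name is one choice
    nil :: Syn p b
    nil = Atom ""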

terms

For your basic object language terms, you take brace-stuff to be empty, and paren-stuff to be given by eliminations:

  • variable uses: identifier
  • actually eliminating: elim term
  • radicals: [term : term]

We might thus write the type of the polymorphic identity function as something like [Pi Set X.[Pi (X) x.(X)]] so that Pi and Set are atoms, but (X) is a use of the bound variable X.
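
Concretely, we can instantiate the sketch above with eliminations as the paren-stuff and nothing as the brace-stuff, and then encode that type. Again, the names are mine.

    -- terms: brace-stuff empty, paren-stuff an elimination
    data Term = TAtom String | TCons Term Term | TBind String Term
              | TElim Elim
      deriving Show
    data Elim = Var String      -- variable use
              | App Elim Term   -- actually eliminating
              | Rad Term Term   -- radical [term : term]
      deriving Show

    tnil :: Term
    tnil = TAtom ""   -- [] as the nil atom

    -- [Pi Set X.[Pi (X) x.(X)]], desugaring [a b c] to [a, [b, [c, []]]]
    idType :: Term
    idType = TCons (TAtom "Pi") (TCons (TAtom "Set")
               (TCons (TBind "X" inner) tnil))
      where
        inner = TCons (TAtom "Pi") (TCons (TElim (Var "X"))
                  (TCons (TBind "x" (TElim (Var "X"))) tnil))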

patterns and thinnings

For the meta-language of patterns, you take paren-stuff to be empty, but you allow brace-stuff to consist of the binding-sites of metavariables: meta thinning.

The job of the thinning is to say which of the bound variables lexically in scope a given meta may actually depend on. Thinnings may be given by inclusion: you either give a nonempty sequence of the variables in scope for the meta, or say nothing to allow them all. Thinnings may be given by exclusion: you write - followed by a nonempty sequence of the variables excluded from scope for the meta, or - unaccompanied to exclude them all.

The pattern for a general dependent function type might then be [Pi {S} x.{T}], or [Pi {S} x.{T x}] if you want to be super explicit about the permitted dependency. The special case for nondependent functions is then [Pi {S} x.{T - x}], or just [Pi {S} x.{T -}] if you feel casual.
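
Reading a thinning as a bit-vector over the ambient scope (see the aside below), the inclusion and exclusion forms might be computed as follows; this is a sketch with assumed names.

    type Thinning = [Bool]   -- one bit per variable lexically in scope

    -- inclusion: permit exactly the listed variables;
    -- saying nothing permits them all
    includeVars :: [String] -> [String] -> Thinning
    includeVars scope xs = [ x `elem` xs | x <- scope ]

    allVars :: [String] -> Thinning
    allVars scope = [ True | _ <- scope ]

    -- exclusion: permit everything but the listed variables;
    -- - unaccompanied excludes them all
    excludeVars :: [String] -> [String] -> Thinning
    excludeVars scope xs = [ not (x `elem` xs) | x <- scope ]

    noVars :: [String] -> Thinning
    noVars scope = [ False | _ <- scope ]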

The names of the metas in a pattern should be distinct. From a given pattern, we can compute the names and arities of its metas. E.g., in [Pi {S} x.{T}], S is a meta with arity 0 and T is a meta with arity 1. Meanwhile, in [Pi {S} x.{T - x}], S and T both have arity 0.

Pattern matching is easy: atom patterns match atoms, cons patterns match cons-cells componentwise, and pattern metas match anything in the image of their associated thinning (i.e., anything which uses only the variables permitted).
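
Here is a sketch of patterns, their meta arities, and that matching algorithm, with the term representation repeated from above to keep it self-contained. I crudely demand that binder names coincide rather than α-renaming, and the names remain assumptions.

    type Thinning = [Bool]

    data Term = TAtom String | TCons Term Term | TBind String Term
              | TElim Elim
    data Elim = Var String | App Elim Term | Rad Term Term

    data Pat = PAtom String | PCons Pat Pat | PBind String Pat
             | PMeta String Thinning   -- {meta thinning}

    -- names and arities of a pattern's metas
    arities :: Pat -> [(String, Int)]
    arities (PAtom _)    = []
    arities (PCons p q)  = arities p ++ arities q
    arities (PBind _ p)  = arities p
    arities (PMeta m th) = [(m, length (filter id th))]

    -- the variables a thinning permits, given the ambient scope
    permitted :: [String] -> Thinning -> [String]
    permitted scope th = [ x | (x, True) <- zip scope th ]

    -- free variables of a term
    free :: Term -> [String]
    free (TAtom _)   = []
    free (TCons s t) = free s ++ free t
    free (TBind x t) = filter (/= x) (free t)
    free (TElim e)   = freeE e

    freeE :: Elim -> [String]
    freeE (Var x)   = [x]
    freeE (App e t) = freeE e ++ free t
    freeE (Rad t u) = free t ++ free u

    -- matching: componentwise on canonical forms; a meta matches any
    -- term using only the variables its thinning permits
    match :: [String] -> Pat -> Term -> Maybe [(String, Term)]
    match _  (PAtom a)    (TAtom b) | a == b = Just []
    match sc (PCons p q)  (TCons s t) =
      (++) <$> match sc p s <*> match sc q t
    match sc (PBind x p)  (TBind y t) | x == y = match (sc ++ [x]) p t
    match sc (PMeta m th) t
      | all (`elem` permitted sc th) (free t) = Just [(m, t)]
    match _ _ _ = Nothing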

Thinnings have pullbacks…

[Hang on a minute, I shouldn’t be casually categorical. A thinning is an order-preserving embedding of one variable context into a non-strictly larger variable context. Dually, you take the larger context and you decide which of those are actually available. Think of it as a bit-vector, if you like: I do. Put differently, a thinning witnesses inclusion. If you know that variable sets Q and R can both be embedded in S (witnessed by thinnings), then you can find a set P embedded in both Q and R such that any other set O which is also embedded in both Q and R is embedded in P. That’s the pointwise conjunction of the bit-vectors.]

…which tells us how to unify two patterns, p0 and p1, to compute the pattern p such that a term matches both p0 and p1 if and only if it matches p, should such a p exist. The pullbacks do the job that Miller calls ‘pruning’: if you have to satisfy ‘may depend only on these’ and ‘may depend only on those’, your only chance is to depend only on the intersection.
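
A sketch of that unification, pruning by pullback. I ignore the bookkeeping of which meta gets bound to which piece, which a real implementation would keep; the point is the characterisation of what matches.

    type Thinning = [Bool]
    data Pat = PAtom String | PCons Pat Pat | PBind String Pat
             | PMeta String Thinning

    -- pullback of two thinnings out of the same scope: pointwise
    -- conjunction of the bit-vectors
    pullback :: Thinning -> Thinning -> Thinning
    pullback = zipWith (&&)

    -- prune a pattern so it may use only the variables a thinning
    -- permits: intersect every meta's thinning with it; a variable
    -- bound inside the pattern stays available (extend with True)
    prune :: Thinning -> Pat -> Pat
    prune _  (PAtom a)     = PAtom a
    prune th (PCons p q)   = PCons (prune th p) (prune th q)
    prune th (PBind x p)   = PBind x (prune (th ++ [True]) p)
    prune th (PMeta m th') = PMeta m (pullback th th')

    -- unify two patterns: a term matches both inputs iff it matches
    -- the output, should one exist
    unify :: Pat -> Pat -> Maybe Pat
    unify (PAtom a)    (PAtom b)     | a == b = Just (PAtom a)
    unify (PCons p q)  (PCons p' q') =
      PCons <$> unify p p' <*> unify q q'
    unify (PBind x p)  (PBind y p')  | x == y = PBind x <$> unify p p'
    unify (PMeta _ th) p = Just (prune th p)
    unify p (PMeta _ th) = Just (prune th p)
    unify _ _ = Nothing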

The fact that patterns have a sensible unification algorithm gives us a fighting chance of inverting typing rules, which is crucial to seeing why they make sense.

expressions and candidates

An expression has paren-stuff given by candidates, which are what you get when you replace term by expr in the elim grammar. But there is also nontrivial brace-stuff, allowing you to instantiate a meta with a sequence of cands commensurate with its arity, written meta/cand…/cand. I.e., expr is the grammar of formulae talking about terms and cand is the grammar of formulae talking about elims.

The β-reduct of [x.{t} : [Pi {S} x.{T}]] {s} can be given by the candidate [{t/[{s} : {S}]} : {T/[{s} : {S}]}].
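
In the same style, a sketch of expressions and candidates, with that β-reduct as the worked example.

    -- expressions: paren-stuff is a candidate, and brace-stuff is a
    -- meta instantiated with one candidate per variable in its arity
    data Expr = EAtom String | ECons Expr Expr | EBind String Expr
              | ECand Cand
              | EMeta String [Cand]   -- {m/cand.../cand}
    data Cand = CVar String           -- the elim grammar, with expr
              | CApp Cand Expr        --   in place of term
              | CRad Expr Expr        -- [expr : expr]

    -- [{t/[{s} : {S}]} : {T/[{s} : {S}]}]
    betaReduct :: Cand
    betaReduct = CRad (EMeta "t" [sub]) (EMeta "T" [sub])
      where sub = CRad (EMeta "s" []) (EMeta "S" [])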

The bidirectional discipline is very particular about where in rules we shall write patterns and where we shall write expressions. Which is just as well, because we have not a cat in Hell’s chance of a sensible unification algorithm for expressions. We shall always find out the meanings of metas in expressions by matching their binding occurrences in patterns.

there is a sanity clause

Every judgement form classifies its places as

  • input (about which some promise must be made by those who would seek to know the judgement)
  • subject (about which no promise can be made, because that is why we seek to know the judgement)
  • output (about which some promise must be made by those who claim to have established the judgement)

E.g., type checking asks whether a given input type accepts a given subject term, but type synthesis asks whether a given subject term can yield its type as an output.

I repeat

A rule is a server for its conclusion and a client for its premises.

and as such, inputs create obligations for clients and outputs create obligations for servers.

In a rule conclusion:

  • Inputs are patterns. When we give a judgement form, we should say what the client must promise about each input by specifying a judgement where that input is the subject. The metas in the input patterns are considered validated, in the sense given by this promise. (E.g., when you are checking a type accepts a term, the type is an input, and you should be sure that it really is a type. If it’s not a type and the checker says yes, that’s your fault and your problem.)
  • Subjects are patterns. For how else can we check things compositionally, except by matching to find their pieces? Subject metas are not considered validated at first.
  • Outputs are expressions. When we give a judgement form, we should say what the server must promise about each output by specifying a judgement where that output is the subject. (E.g., type synthesis must promise to synthesize types and not just any old rubbish.) The metas in scope for these outputs are all the metas arising from the conclusion inputs, the validated conclusion subject metas, and the premise output metas.

In a rule premise:

  • Inputs are expressions. We must establish the client promises for those expressions. Only validated metas are available for these expressions.
  • Subjects are metas occurring in the subject patterns of the rule conclusion. Each such subject meta is subsequently considered validated, in the sense given by its premise. We may validate only the pieces of the main thing we are validating. We must never, for example, revalidate something given by a conclusion input, and we should never have to: the client’s promise should be enough.
  • Outputs are patterns. We may analyse what a premise returns to us and consider the metas in those patterns suitably validated.
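
This discipline suggests a shape for rules. A hypothetical sketch, with Pat and Expr as in the earlier sketches, here left abstract:

    data Pat    -- patterns, as sketched earlier
    data Expr   -- expressions, as sketched earlier

    -- conclusions: inputs and subjects are patterns, outputs are
    -- expressions
    data Rule = Rule
      { concInputs   :: [Pat]
      , concSubjects :: [Pat]
      , concOutputs  :: [Expr]
      , rulePremises :: [Premise]
      }

    -- premises: inputs are expressions over validated metas only, the
    -- subject is a meta from a conclusion subject pattern, and outputs
    -- are patterns whose metas become validated
    data Premise = Premise
      { premInputs  :: [Expr]
      , premSubject :: String
      , premOutputs :: [Pat]
      }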

type formation and context extension

Judgements are expressible in our Lisp-like syntax, with suitable variations for the places as specified above. E.g., we might have [type T] where T is a subject. Whenever J is a judgement, so is [bind S x.J], where S is an input validated by [type S]. I’m now struggling a little with two layers of meta: J may mention x.

It is thus reasonable to give [type [pi {S} x.{T}]] as a rule conclusion, with no obligation. The premise [type {S}] validates S in a sense which allows us to propose [bind {S} x.[type {T}]] as the second premise.

I should add that it is compulsory, when validating a subject pattern meta, to extend the context with variables for that meta to capture. Again, we have a name and a thinning, but the thinning points the other way: the variables demanded must be amongst those in scope.
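
Putting the pieces together, here is a sketch of that formation rule as data, in the representations from the earlier sketches; note that {T}’s implicit capture of x comes out explicitly as T/x on the expression side.

    type Thinning = [Bool]
    data Pat  = PAtom String | PCons Pat Pat | PBind String Pat
              | PMeta String Thinning
    data Expr = EAtom String | ECons Expr Expr | EBind String Expr
              | ECand Cand | EMeta String [Cand]
    data Cand = CVar String | CApp Cand Expr | CRad Expr Expr

    -- judgement forms over whatever sits in their places: patterns in
    -- conclusions, expressions in premises
    data Judgement t
      = TypeJ t                       -- [type T]
      | BindJ t String (Judgement t)  -- [bind S x.J]

    -- [type [pi {S} x.{T}]] with premises [type {S}] and
    -- [bind {S} x.[type {T}]]
    piFormation :: (Judgement Pat, [Judgement Expr])
    piFormation =
      ( TypeJ (PCons (PAtom "pi") (PCons (PMeta "S" [])
                (PCons (PBind "x" (PMeta "T" [True])) (PAtom ""))))
      , [ TypeJ (EMeta "S" [])
        , BindJ (EMeta "S" []) "x" (TypeJ (EMeta "T" [CVar "x"]))
        ]
      )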

type checking

We might also have [check T t] as a judgement, where T is an input validated by [type T] and t is the subject.

It becomes reasonable to give [check [pi {S} x.{T}] y.{t}] as a rule conclusion and give [bind {S} y.[check {T/y} {t}]] as the premise, carefully capturing y. Why? The client has promised us [type [pi {S} x.{T}]], from which inversion yields [type {S}] and [bind {S} x.[type {T}]]. We are thus entitled to bind an {S}, enabling us to validate that {T/y} is a type by stability under substitution. (I jump ahead, but that will hold, in an appropriate sense, by construction.)
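
In the same style as the previous sketch, the careful capture of y by {t} comes out explicitly as t/y.

    type Thinning = [Bool]
    data Pat  = PAtom String | PCons Pat Pat | PBind String Pat
              | PMeta String Thinning
    data Expr = EAtom String | ECons Expr Expr | EBind String Expr
              | ECand Cand | EMeta String [Cand]
    data Cand = CVar String | CApp Cand Expr | CRad Expr Expr
    data Judgement t
      = CheckJ t t                    -- [check T t]
      | BindJ t String (Judgement t)  -- [bind S x.J]

    -- conclusion [check [pi {S} x.{T}] y.{t}], one premise
    -- [bind {S} y.[check {T/y} {t}]]
    checkLambda :: (Judgement Pat, [Judgement Expr])
    checkLambda =
      ( CheckJ (PCons (PAtom "pi") (PCons (PMeta "S" [])
                 (PCons (PBind "x" (PMeta "T" [True])) (PAtom ""))))
               (PBind "y" (PMeta "t" [True]))
      , [ BindJ (EMeta "S" []) "y"
            (CheckJ (EMeta "T" [CVar "y"]) (EMeta "t" [CVar "y"])) ]
      )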

type synthesis

Type synthesis rules have a very particular form. For free, we can synthesize the types of variables (by looking them up) and of radicals (by checking that the given type really is a type which really accepts the given term, then returning the given type). The rules we have to specify deal with the genuine elim forms.

All of those rules look the same:

  • We give a pattern for the type of the thing being eliminated, aka the head. That’s an output from the main premise, notionally.
  • We give a pattern for the eliminator. That’s the subject.
  • We give a bunch of premises.
  • We give an expression for the type of the whole elimination, depending on the value of the head.

For functions, that’s

  • [Pi {S} x.{T}]
  • {s}
  • [check {S} {s}]
  • f.{T/[{s}:{S}]}

For pairs, we have

  • [Sg {S} x.{T}]
  • car
  • (no premises)
  • p.{S}

which then allows us

  • [Sg {S} x.{T}]
  • cdr
  • (no premises)
  • p.{T/p car}

showing why dependency on the head value is necessary.
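
That uniform shape is begging to be a record. A hedged sketch, with the four fields lining up with the four bullets above:

    data Pat      -- patterns, as sketched earlier
    data Expr     -- expressions, as sketched earlier
    data Premise  -- premises, as sketched earlier

    -- one record per genuine elim form; e.g., for cdr we take
    -- [Sg {S} x.{T}], cdr, no premises, and p.{T/p car}
    data ElimRule = ElimRule
      { headTypePat  :: Pat             -- type of the head
      , elimPat      :: Pat             -- the eliminator (the subject)
      , elimPremises :: [Premise]
      , resultType   :: (String, Expr)  -- binder for the head value,
      }                                 --   and the synthesized type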

Note that we have not been forced to allow metas to stand for anything other than terms.

β-contraction scheme

A β-contraction scheme is specified by four things:

  • a pattern for the type of the eliminated term, e.g., [Pi {S} x.{T}]
  • a pattern for the eliminated term, e.g., x.{t}
  • a pattern for its eliminator, e.g., {s}
  • an expression for the reduct, e.g., {t/[{s}:{S}]}

There must be a synthesis rule we can invert to find the types of the radical (which we may further invert to find our assumptions about its metas), the reduct (e.g., {T/[{s}:{S}]}), and the metas in the eliminator. We acquire the rewriting behaviour given informally as [x.{t} : [Pi {S} x.{T}]] {s} ~> [{t/[{s}:{S}]} : {T/[{s}:{S}]}]. We must discharge the obligation that the reduct has the demanded type, which follows in our example by stability under substitution.
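
Again as a record, under the same assumptions:

    data Pat   -- patterns, as sketched earlier
    data Expr  -- expressions, as sketched earlier

    -- a β-contraction scheme: the four pieces just listed
    data BetaScheme = BetaScheme
      { redexType :: Pat    -- e.g., [Pi {S} x.{T}]
      , redexTerm :: Pat    -- e.g., x.{t}
      , redexElim :: Pat    -- e.g., {s}
      , reduct    :: Expr   -- e.g., {t/[{s}:{S}]}
      }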

If all β-rules are given this way, then the fact that patterns have no paren-stuff means that the only way we can lose confluence is if the same term is a redex in two incompatible ways. There is no chance that contracting one redex will destroy any proper sub-redex. It’s clear that we can generate the obligations for the resolution of critical pairs by pairwise unification of β-rules, which may very well yield no obligations.
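
A sketch of generating those obligations, taking the pattern unifier from the earlier sketch as a parameter to stay self-contained; each successful triple is a critical pair whose two reducts must be shown to agree.

    -- unify the types, terms, and eliminators of two schemes pairwise
    criticalPair :: (pat -> pat -> Maybe pat)  -- a pattern unifier
                 -> (pat, pat, pat)  -- type, term, eliminator of rule 0
                 -> (pat, pat, pat)  -- type, term, eliminator of rule 1
                 -> Maybe (pat, pat, pat)
    criticalPair unify (ty0, tm0, el0) (ty1, tm1, el1) =
      (,,) <$> unify ty0 ty1 <*> unify tm0 tm1 <*> unify el0 el1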

more to come

It’s late and I’m both rambling and starving, so I’d better stop here. There’s plenty to drill down into, later. The repo for this hacking project is https://github.com/pigworker/Ming/.
