observational type theory (delivery)

January 8, 2015

There is quite some diversity in views about the purposes that types serve. At some level, they describe the layout of data in memory. At another level, they describe mathematical notions of stuff. Sometimes, they describe a representation of data (which might serve multiple purposes) and document the particular purpose for which they are intended. E.g., Haskell’s (Maybe x) type extends x with an extra value whose purpose is to indicate failure to compute an x; to extend some semigroup x with a neutral element thus forming a monoid is a different purpose, and thus it is represented in Haskell by a different type. The essential tension is that it is valuable to be able to work up to isomorphism, but at the same time to make administrative distinctions between isomorphic types, in order to document intended purpose or to give a key to particular associated structure. Is the purpose of types to determine structure (up to structure preserving maps) or to give a concrete taxonomy of structure?

Homotopy Type Theory takes the former point of view, but Observational Type Theory takes the latter. HoTT equality characterises the structure preserving maps which translate between views of the same structure, requiring nontrivial computation, even when the underlying data have the same representation: as Augustus de Morgan would have told you, negation is a structure preserving map from the Booleans to the Booleans. When OTT makes two types equal, their closed values are already in some “run-time” sense compatible: there is no information required to explain how to transport values between equal types because equality enforces coincidence of type description.

The essence of observational type theory is to define compatibility of types, and the associated notion of coercion.

(S : Type) <-> (T : Type) : Prop
coe (S : Type) (T : Type) [S <-> T] (s : S) : T

There’s more to it than that, of course, but that’s the intersection between the ‘theorem-proving’ world of equality and the ‘programming’ world of types. The OTT interpretation of types makes proofs irrelevant in a strong and lazy sense. In a moment, I’ll tell you what this Prop business and these square brackets are about. But let me at least document the intention that at run-time, when computing with closed values only, provably equal types should be sufficiently compatible that every coercion can just be treated as the identity. (HoTT equality sometimes contains genuinely useful information about how to compute structure preserving maps, and thus cannot be so readily disposed of. That’s not to say we could never detect and exploit dullness of coercion in HoTT. In our haste towards the univalent future, I am anxious not to give any of the ground that I have gained.)

What is Prop? Is it the subuniverse of Type consisting of what HoTT people call “mere propositions”? No. It’s a strictly (or rather, lazily) smaller subuniverse than that. You are never permitted to ask if an inhabitant of a Prop is canonical. That’s how we get to treat all open values in Props as judgmentally equal. We don’t just declare that proofs are irrelevant: we make damn sure that they actually are. It is never necessary to normalize proofs. Think about how good old eqJ worked in the previous post: operationally, it fires only if its proof argument normalizes to refl, and that’s too strict for the new regime. (One could (and once upon a time, I did) make a proof-irrelevant but intensional version of eqJ which tests whether the proposed equation holds judgmentally, rather than whether its proof is specifically refl. But the whole point is to escape the judgmental equality.)

Which things are Props? Fundamentally, these things

                                             Type :> S     x : S |- Prop :> P
  ----------------     ---------------     ------------------------------------
    Prop :> Zero         Prop :> One               Prop :> (x : S) -> P

We can, of course, construct a binary product,

  P * Q  =  (x : Two) -> if x then P else Q

and pragmatically, it might be nice to make it primitive and use tuples rather than functions. But that’s irrelevant. We have 0 and 1 closed under arbitrary products, so we’re never going to scale the dizzy heights of having a bit. And we’re never going to have any operations which compute strictly. We do, however, have absurdity elimination

  naughtE [Zero] : T

which strictly refuses to compute. Proofs thus play the role of τὸ δαιμόνιον of Socrates, the ‘divine sign’ which never told him what to do, but intervened to advise him against action he might otherwise have undertaken (notably, and ineffectually, the execution of the ten generals after the Battle of Arginusae, a tragic tale of tabloidesque kneejerkery which rings painfully true today, but I digress).
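
By way of illustration, here is a sketch (my example, standard but not established above) of a Prop-valued ‘less or equal’ on the usual natural numbers, built by recursion from just these connectives:

  Le : Nat -> Nat -> Prop
  Le zero     n        = One      -- zero is below everything, trivially
  Le (suc m)  zero     = Zero     -- a successor is never below zero
  Le (suc m)  (suc n)  = Le m n   -- strip the sucs and carry on

Computation happens at the level of which Prop we mean, never at the level of its proofs.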

What about these square brackets? You’ll note that it is my habit to name value arguments in round brackets. Well, in the declarations of functions, I prefer to write propositions in square brackets without naming the proofs, and I prefer not to give those proofs explicitly. I feel entitled to let proofs of One remain trivial and proofs of P * Q be found componentwise (another reason for making conjunction primitive). I expect a machine to discover the correct way to instantiate universal quantifiers with computationally inert terms by first matching propositions, then matching variables with other inert things. (It’s not reasonable to guess concrete values for variables, as that requires running computation backwards: in the inert case, the computation didn’t run forwards in the first place.) In effect, I expect elaboration to do simple finitary proof search, as sketched below.
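
To give the flavour, here is a sketch of the sort of search I have in mind, writing <> for the trivial proof of One; prove and hyp are hypothetical names, not part of the theory:

  prove One             = <>                    -- trivially
  prove (P * Q)         = (prove P , prove Q)   -- componentwise
  prove ((x : S) -> P)  = \ x -> prove P        -- go under the binder
  prove P               = hyp P                 -- otherwise, match P against the hypotheses,
                                                -- instantiating variables with inert terms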

Now, what is this <-> relation? Let’s figure it out a bit at a time. We’ll see what we need it to be when we try to implement coe.

  ((x : S) -> T) <-> ((x' : S') -> T')  ==  ?
  coe ((x : S) -> T) ((x' : S') -> T') f  =  \ x' -> ?

We’re going to need to ship x’ from S’ to S, apply f, then ship the result back to T’. That’s a clue.

  ((x : S) -> T) <-> ((x' : S') -> T')  ==  (S' <-> S) * ?T
  coe ((x : S) -> T) ((x' : S') -> T') f  =  \ x' -> let x = coe S' S x' in coe T T' (f x)

However, we need to make sure that ?T justifies the second coercion. That is, we need to establish that x : S and x’ : S’ are basically the same value, even though they have different types. It is not inconsiderably unexpected, and yet entirely consistent with our prospectus, that we did not need a notion of value equality at all until the point at which it became clear that we needed a heterogeneous notion of value equality. (Spelling note: ‘heterogenous’ is the straight special case of ‘erogenous’.) We shall need

  (a : A) == (b : B) : Prop
  [coe A B a == a]             -- it doesn't matter what the proof of this is

and then we can take

  ((x : S) -> T) <-> ((x' : S') -> T')
    = (S' <-> S) * (x : S) -> (x' : S') -> [x == x'] -> (T <-> T')
  coe ((x : S) -> T) ((x' : S') -> T') f  =  \ x' -> let x = coe S' S x' in coe T T' (f x)
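
To spell out how the evidence gets used, here is the same clause again, with the bookkeeping as comments (the bracketed proofs are found by the elaboration machinery above):

  coe ((x : S) -> T) ((x' : S') -> T') f  =  \ x' ->
    let x = coe S' S x'     -- justified by the first component, S' <-> S;
                            -- coherence then gives [x == x'] for free
    in  coe T T' (f x)      -- the second component, at x, x', [x == x'], gives T <-> T'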

Of course, we must now define a == b. We have some choices. Does it mean

a and b have equal types and equal values

? That’s what I said in my thesis. And that’s what you get if you define heterogeneous equality ‘inductively’. And it does hang together on that basis. But I’ve changed my mind about equality (see what I did there?). I now think a == b means

as soon as you learn that a and b have equal types, you will know they have equal values

That means we don’t have to make == pack up the evidence for <->.

There is a third alternative. Take

  [A <-> B] (a : A) == (b : B) : Prop

That is, we could make the formation of a heterogeneous equation require some a priori explanation of value compatibility. That’s the dependent version of the second, conditional interpretation given above. And it’s the only thing to do for HoTT, where the ‘basis for comparison’ itself contains nontrivial information about how to bring two values into a comparable representation. Why not do this for OTT? Proof-irrelevance. What? Proof-irrelevance allows non-dependent constructions the same effectiveness as their dependent counterparts. It’s enough to be logically consistent. Who cares which proof justifies what as long as there are enough of them about? But it’s good to be aware that the proof of a == b will never do you any good unless you know that A <-> B; HoTT strengthens that ‘that’ to a ‘why’ and necessitates the bookkeeping of the evidence.

(Historical note about my 1998 implementation of dependent pattern matching. Back then, I used a homogeneous intensional equality, equipped with both J and K rules. Heterogeneous equality was implemented as homogeneous equality after coercion. The unification step for reflexive equations required K, as the dependent equations needed those proofs to be refl. I demonstrated it at the TYPES98 meeting, organised by Thorsten and friends at Kloster Irsee (clear evidence that Thorsten can organise a pissup in a brewery — I was there and it was good), with all our favourite vector-zip examples working…eventually. I never wrote it up. When I started writing it up, I found myself trying to justify why comparability should be an accident of judgmental equality in the first place, and I just couldn’t. My conscience compelled me to invent John Major (recently defeated) equality, which allowed me to give a much simpler non-proof-dependent presentation of dependent pattern matching. Note to self: not all of the acetates for that talk were cut up to glaze a tomato-growing indoor greenhouse; maybe scan the survivors (the graphical shtick was based on the Saul Bass credits for Otto Preminger’s Anatomy of a Murder (James Stewart, Lee Remick, Ben Gazzara); Bass’s cover art for the Duke Ellington soundtrack LP of that movie has influenced my walls ever since)?)

So, going with the easy option, it’s

  (f : (x : S) -> T) == (f' : (x' : S') -> T')
    = (x : S) -> (x' : S') -> [x == x'] -> f x == f' x'

That is, functions are equal if they yield equal outputs from equal inputs. Extensionality!
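
For instance (a sketch: plus and plusZero are hypothetical, not established above, and I help myself to transitivity of ==, which is provable in this setting):

  plusZero : (n : Nat) -> [(plus n zero : Nat) == (n : Nat)]   -- by induction, say
  -- then  ((\ n -> plus n zero) : (n : Nat) -> Nat) == ((\ n -> n) : (n : Nat) -> Nat)
  -- unfolds to  (n : Nat) -> (n' : Nat) -> [n == n'] -> (plus n zero : Nat) == (n' : Nat)
  -- which follows from plusZero n and [n == n'], by transitivity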

And you’ll notice that, as long as we’re shipping functions from one function type to another, it’s obvious how to get going with a λ. The moment it’s clear from the types that we’re in trouble, we arrange to give up:

  ((x : S) -> T) <-> Type  =  Zero

etc.

We do, of course, take

  (A : Type) == (B : Type)  =  (A <-> B)

but we’re quite free to allow

  (P : Prop) == (Q : Prop)  =  ([P] -> Q) * ([Q] -> P)

as the (nonexistent) run-time representations of P-proofs and Q-proofs are trivially compatible.
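
For example, taking conjunction primitive with pairing and (hypothetical) projections fst and snd, commuting a conjunction is just such a Prop equation:

  comm : [(P * Q : Prop) == (Q * P : Prop)]
  comm = ( \ pq -> (snd pq , fst pq)     -- [P * Q] -> Q * P
         , \ qp -> (snd qp , fst qp) )   -- [Q * P] -> P * Q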

In order to explain the ‘run-time’ compatibility intuition behind observational equality, it may be necessary to make the what-time distinction clearer, e.g., via world systems. Have I done these things in the wrong order? Is that the problem?

Of course, functions (with only non-dependent elimination, but hence extensionality) and types (with no typecase) do not put any grit in the oyster. Throw in the Booleans.

  Two <-> Two  =  One
  coe Two Two b = b
  (tt : Two) == (tt : Two) = One
  (tt : Two) == (ff : Two) = Zero
  (ff : Two) == (tt : Two) = Zero
  (ff : Two) == (ff : Two) = One

But that’s not awfully exciting. Throw in the naturals.

  Nat <-> Nat  =  One
  coe Nat Nat n = n
  (zero : Nat)  == (zero : Nat)   = One
  (zero : Nat)  == (suc _ : Nat)  = Zero
  (suc _ : Nat) == (zero : Nat)   = Zero
  (suc m : Nat) == (suc n : Nat)  = (m : Nat) == (n : Nat)
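
So a closed numerical equation computes away to a triviality:

  (suc (suc zero) : Nat) == (suc (suc zero) : Nat)
    = (suc zero : Nat) == (suc zero : Nat)
    = (zero : Nat) == (zero : Nat)
    = One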

The Leibniz property of equality that you know and love (if x equals y, replacing x by y preserves constructability) is a matter of function extensionality. If P : X -> Type, then equal elements x and x’ of X give us equal types P x and P x’, hence transportability between the two.
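
Concretely, here is a sketch of the resulting transport operator (subst is my name for the derived gadget, echoing the style of coe above):

  subst (X : Type) (P : X -> Type) (x : X) (x' : X) [x == x'] (p : P x) : P x'
  subst X P x x' p = coe (P x) (P x') p
    -- the [P x <-> P x'] which coe needs comes from applying P's own
    -- (extensional) equality to x, x', [x == x'], since == at Type is <->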

But where are the dependent datatypes? Well, there’s a problem if you define overly intensional predicates like

  data Favourite : (Nat -> Nat) -> Set where
    fave : Favourite (const 42)

in that we might define

  deep : Nat -> Nat
  deep zero     = 42
  deep (suc n)  = deep n

so that deep is not judgmentally (const 42), although the two can be proven to agree on all inputs, by induction. There is a canonical value of type Favourite (const 42) but not one of type Favourite deep; yet the pointwise agreement makes the two types provably equal, so coercion would deliver a closed inhabitant of Favourite deep with no canonical form. There is no way that an extensional theory can have the canonicity property.
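
The agreement proof is the obvious induction (a sketch, writing <> for the trivial proof):

  deepLemma : (n : Nat) -> [(deep n : Nat) == (42 : Nat)]
  deepLemma zero     = <>            -- deep zero = 42, and 42 == 42 computes to One
  deepLemma (suc n)  = deepLemma n   -- deep (suc n) = deep n, so the goal is unchanged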

The fix, of course, is to do what the GADT people did, and define inductive families explicitly up to propositional equality. And then, of course, you can transport them, by transitivity.
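
In this instance, that means the constructor targets an arbitrary index and stores the equation, rather than demanding (const 42) on the nose (a sketch):

  data Favourite (f : Nat -> Nat) : Set where
    fave : [(f : Nat -> Nat) == (const 42 : Nat -> Nat)] -> Favourite f

Now Favourite deep is inhabited, via extensionality and deepLemma, and given Favourite f and [f == g], we can rebuild a Favourite g by composing the stored equation with the new one.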

But it’s tomorrow, already, and this was supposed to be today’s blogpost. It’s so over.

