traffic-dependent session types

January 4, 2015

This lump of Agda is a talk I gave at the Scottish Programming Languages Seminar, some Wednesday last term at Heriot-Watt, in a brief oasis of research activity. I originally wrote it in my notebook coming back from Arran on the ferry, sometime in 2013 (or was it earlier?), chatting with Peter Hancock. As for how to talk about the idea, that took a little longer. It crystallised during Phil Wadler’s lecture on session types at last summer’s Advances in Programming Languages summer school, also at Heriot-Watt. It’s often a matter of the right question, and as so often in my professional life, the right question was asked by James McKinna. James asked something like

Given that types are used to classify things, what are the things that session types classify?

If you’ve heard Phil give his Propositions-as-Sessions talk (or if you are Phil, and you’ve been in the habit of giving it), you may have noticed that at the level of the grammar in Phil’s chat (e.g., “A (x) B means send A, then behave like B”) the answer to James’s question isn’t thunderingly concrete (is it the things which do the behaving that are the things which can be sent?). I don’t want to get at Phil: I can hardly throw stones at informality in communication style. Formally (and it wasn’t hard to cajole this from Phil), the session types are classifying channels: “send A” means send (on my channel) the identity of another channel which will act with type A; “then behave like B” means that after we send the channel, we should act, on our own channel, like type B. Although they look as if they should be symmetrical, tensor (asciified (x)) and its dual, par (asciified >8), are askew under Phil’s interpretation, in a way which makes a lot of sense when you think about the communication involved.

To misquote Robert Wyatt, ‘Well I askew…’ in that, as a dependent type theorist, I’m used to dependency pushing rightward in the sigma-types and pi-types we use for dependent pairs and functions. I write :> to mean ‘admits’ (I’d use \ni in LaTeX).

    A :> a    B[a/x] :> b                 x : A  |-  B :> b
  -------------------------         ----------------------------
     (x : A) * B :> a, b              (x : A) -> B :> \ x -> b

To give a value in (x : A) * B, you give a value in A, then you give a value in the appropriate instance of B (as chosen by yourself). To give a value in (x : A) -> B, you wait to receive a value in A, then you reply with a value in the appropriate instance of B (as chosen by your caller). Sounds a lot like session types to me. Moreover, the skew of dependency chimes with the idea that the structure of a communication depends in some way on the choices which get made as it progresses. Are Phil’s tensor and par the degenerate non-dependent special cases of some sigma and pi? If so, that might be good, because we could recover the other two operators, + and &, in the usual (if you’re a dependently typed programmer) way, namely

  A + B  =  (x : Two) *   if x then A else B
  A & B  =  (x : Two) ->  if x then A else B

That is, respectively, “I send a bit, telling you which of A or B I have chosen to follow.” and “You send a bit, telling me which of A or B you have decided that I must follow.” Foundationally, “transmit a bit and fork on it” is clearly minimal and generic; pragmatically, the requirement to fork on data one bit at a time and at the moment of transmission is a bit steep. I might like to send, e.g., the dimensions of a bitmap and then the correctly-sized matrix of the bits. Dependent types and session types seem made for each other. I have in the past been so bold as to say
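One might hope, for instance, to write that bitmap exchange as a dependent session along these lines (a sketch in aspirational notation; Nat and Matrix are assumed transmissible types):

  (w : Nat) * (h : Nat) * Matrix w h

The type of the final transmission depends on the two values sent before it.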

If you’re not doing session types with dependent types, then you’re doing them wrong.

and these days, I have even more of an idea what I mean by that. Session types are supposed to capture some patterns of dependency. There has to be a connection. But what?

The trouble is, if you just fiddle about on the whiteboard trying to ‘dependent-up’ the non-dependent rules, it’s not so easy. Look at the rule for forming a pair: we need to understand what B[a/x] means. The same thing will happen when we apply a function. Now we really need the answer to James’s question. When a dependent type theorist asks “what are the things that session types classify?”, they mean ‘what are the a’s which get substituted for the x in B?’.

One way to not answer this question is to decide that, because we know how to make types depend on values, we will restrict the domains of dependency to value types. In linear logic terms, that requires some sort of bangability restriction. In some dependently typed models of sessions (e.g., Edwin Brady’s), the domains of the sigma and pi operators are not session types. That’s a pragmatic way to avoid thinking, but it doesn’t get us tensor and par, and it makes the technology more complicated, not less. (People who expect dependent types to be a source of complexity have not understood them: we introduce dependent types to simplify the task of coping with complexity.) Let’s answer the damn question.

What does x stand for, in B? Is it…

  • a channel for communicating according to A?
  • a process which communicates according to A?

or what? What should session types depend on?

It’s not clear how to substitute a channel for a variable, or how that might give us dependency on values in the special cases where it’s clear that values are what’s transmitted. It’s also a bit worrying to consider session types depending on the parties to the communication. The key thing is that the rest of the protocol can depend on what has been transmitted so far. So it’s neither of the above. It is

  • the traffic of a communication according to A?

I’ve told you a bit about induction-recursion, so I can cut to the chase. The traffic of a session is exactly the record of its transmissions. That is, we need at least a universe of types which closes up the type of signals over the formation of dependent pairs. But we don’t just need to know what the traffic consists of — it’s not just a dependent record type — we need to know which party is responsible for which bits of the traffic. In a two-party protocol, we just need to know when the roles swap. I presume that (Signal, [_]s) is a universe of types with transmissible values and define Sessions and their traffic as follows.

  Session : Set                                   [(S : Session)]t : Set
  ::= ! (X : Signal)                              [ ! X    ]t = [ X ]s
    | sg (S : Session) (T : [ S ]t -> Session)    [ sg S T ]t = (s : [ S ]t) * [ T s ]t
    | op (S : Session)                            [ op S   ]t = [ S ]t

The session type is a record type marked up with role-swapping; the traffic consists of records, ignoring the role-swapping.
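For example (a sketch, assuming the Signal universe offers codes nat and matrix w h, with [ nat ]s = Nat and [ matrix w h ]s = Matrix w h), the bitmap exchange from earlier becomes a Session:

  matrixSession : Session
  matrixSession =
    sg (! nat)     \ w ->   -- transmit the width
    sg (! nat)     \ h ->   -- transmit the height
    ! (matrix w h)          -- transmit a matrix of exactly that size

Its traffic type, [ matrixSession ]t, computes to (w : Nat) * (h : Nat) * Matrix w h, just the dependent record we were hoping for.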

Now we need to say what it is to communicate in accordance with such a session type. I define the two roles.

Play (S : Session) (G (s : [ S ]t) : Set) : Set
Oppo (S : Session) (H (s : [ S ]t) : Set) : Set

Crucially, I cut myself some slack. The above G and H are traffic-dependent postconditions which the player and opponent must achieve. My definition calculates the precondition for achieving those postconditions as a result of the communication.

Play (! X)    G = (x : [ X ]s) * G x  -- choose a happy signal
Play (sg S T) G = Play S \ s -> Play (T s) \ t -> G (s , t)
Play (op S)   G = Oppo S G

Oppo (! X)    H = (x : [ X ]s) -> H x  -- put up with whatever signal
Oppo (sg S T) H = Oppo S \ s -> Oppo (T s) \ t -> H (s , t)
Oppo (op S)   H = Play S H
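Unfolding these computations on the matrixSession sketched above (still assuming [ nat ]s = Nat and [ matrix w h ]s = Matrix w h) shows the skewed quantifiers doing their job:

  Play matrixSession G = (w : Nat) *  (h : Nat) *  (m : Matrix w h) *  G (w , h , m)
  Oppo matrixSession H = (w : Nat) -> (h : Nat) -> (m : Matrix w h) -> H (w , h , m)

The player chooses dimensions and a matrix of exactly that size; the opponent must be ready for whatever dimensions and matrix arrive.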

Traffic-dependent postconditions are just enough to let us say that our goal for the first component of a pair is to be able to carry on with the second (with the goal of achieving our original postcondition). I’ve had my basic training in Bundy-jumping, and I know how to get an induction to happen by generalizing a goal with “sink”-variables (G and H, respectively), whose job is to absorb the difference between the induction hypotheses and the induction conclusion.

Let there be communication and happiness!

traffic (S : Session) (p : Play S G) (o : Oppo S H) : (s : [ S ]t) * G s * H s
traffic (! X)     (x , g) h = x , g , h x
traffic (sg S T) p       o with traffic S p o
... | s , p' , o' with traffic (T s) p' o'
... | t , g , h = (s , t) , g , h
traffic (op S) p o with traffic S o p
... | s , h , g = s , g , h
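As a sanity check (a sketch: One is assumed to be the unit type, so the postconditions below are trivially achievable), a complete exchange really does deliver its traffic:

  run : (S : Session) -> Play S (\ _ -> One) -> Oppo S (\ _ -> One) -> [ S ]t
  run S p o with traffic S p o
  ... | s , _ , _ = s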

The file I linked at the top gives an example development of a matrix-transmitting session. There’s plenty of head scratching still to do.

If we’re interested in multi-party sessions, the idea that traffic is a record does not change, but the types of the parties do. As well as sigma for sending and pi for receiving, we need an intersection type to quantify over signals which we neither send nor receive. So we need something like worlds.

But where have we reached? We can indeed define

  A (x) B  =  sg A \ _ -> B
  A  +  B  =  sg (! Two) \ x -> if x then A else B   -- assuming Two : Signal codes the bits
  pi A B   =  sg (op A) B
  A  >8 B  =  pi A \ _ -> B
  A  &  B  =  pi (! Two) \ x -> if x then A else B
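A quick unfolding, using only the defining equations above, justifies reading pi as the receiving quantifier:

  Play (pi A B) G = Play (sg (op A) B) G
                  = Play (op A) \ a -> Play (B a) \ b -> G (a , b)
                  = Oppo A \ a -> Play (B a) \ b -> G (a , b)

To play a pi, you put up with whatever a the other side chooses on A, then play the continuation B a: exactly the “receive, then behave like” reading we started from.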

The linearity has vanished into the characterization of processes as tree-like strategies for communication: paths in trees are linear. We’ve doubled down on Phil’s askew interpretation of the linear operators by making them dependent quantifiers.

But the processes classified by the session types are not the things which the variables stand for. This is most certainly not a linear dependent type theory. I’ve also been thinking about what that might mean, with dependency on the linearly typed things (like, how would you use types as a language of specification for linearly typed functions?). Session types may correspond to linear types, but dependent session types (dependency on traffic) are not the same thing as linear dependent types (dependency on the programs themselves). Realising that these concepts are potentially distinct is crucial to figuring out what they might entail.

Comments
  1. January 4, 2015 1:30 pm

    Conor,

    This is an area I’ve been very interested in for a little while now. A few questions (apologies for my ignorance – I’m still getting up to speed on a lot of the recent work). My perspective is very much from the actor-side of the fence.

    The basic stated goal of session types is to allow verification that some process corresponds to the session type / protocol. But that leaves many other aspects open. For example: does communication have to be sync or can it be async too? Can you extend this to give freedom from deadlocks – a single process taking part in multiple sessions with various interleavings can normally still deadlock. If you require processing of each message received to be total, can you then reason globally / transitively about the number of messages generated as a result of the receipt of a single message by a process and thus make arguments about progress and liveness?

    Whilst all the session types themselves are likely to have to be defined statically, does the above treatment also require the processes / channels to be defined statically too? Many distributed systems only get interesting when the number of processes / channels is determined dynamically by user actions.

  2. January 5, 2015 12:17 am

    I have to say I’m really impressed (and slightly intimidated!) by your recent blogging rate, but I still have comments, speaking as a producer of the kind of dependent linear type system you don’t like. 🙂

    From my pov, if you have some resourceful notion of programming (memory, time, communication, protocols, whatever) in mind, you should design a linear type theory for it, even though it will basically never be the fundamental type theory underpinning that resourceful notion of computation. This is because linearity gifts you a tremendous amount of plumbing for free in its syntax, which means that any program which fits the linear discipline (in practice, most of them) will typecheck without any horrible proof burden at all. Basically, linear lambda calculus is possibly the very best DSL syntax I know, and I want easy access to it when doing dependent programming.

    If you want to write programs which are well-behaved with respect to the underlying resource, but are not naively linearly typed, then you need worlds. In particular, you want a Kripke structure whose worlds form a monoid of some sort, and IMO the right approach to this is basically some version of hybrid logic. In his PhD thesis, Alex Simpson laid out the general plan of attack — i.e., have ordinary contractible hypotheses, but label each with a world, which controls when you can use the variable. Jason Reed, in his PhD thesis, applied this to LF to construct Hybrid LF, giving encodings of linear logic and BI as examples.

    IMO, the very feature which makes hybrid logic weird and unaesthetic to proof theorists (i.e., reifying the Kripke structure into the typing judgements) is just what makes it appealing for dependent programming (now we can program up any Kripke structure we like and get DSL syntax for it for free).
