# irish induction-recursion

This is a bit of a shaggy dog story. Where to begin? I suppose I should remind you what definition by induction-recursion (IR) lets you accomplish. I’ll show you the usual coding scheme for IR definitions, and then I’ll show you the ‘Irish’ coding scheme which I came up with, not realising at the time that I might be doing more than bureaucratic shuffling of parts. It turns out that the status of Irish IR is not obvious.

The idea behind IR is that we can make an inductive definition of a datatype at the same time as we define a function which computes by recursion over that type: the recursive function can be involved in the types of the data constructors. For example, we can define contexts and environments as follows:

```
Context : Set1
  ::= nil
    | snoc (G : Context) (S : Env G -> Set)

Env (G : Context) : Set
Env nil        = One
Env (snoc G S) = (g : Env G) * S g
```

Each successive context entry is a type depending on the prior context, given as a function from environments. The notion of environment is essential to the datatype declaration, even though it is being defined just in time. An environment for the empty context is trivial. An environment for an extended context is a dependent pair: an environment for the original context, together with a value in the type that extends the context ((s , t) : (x : S) * T whenever s : S and t : T[s/x]).
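To make this concrete, here is a small runnable sketch in Python, my own rendering rather than anything from the formal development: "sets" are modelled as finite Python lists, environments as tuples, and `envs` enumerates every environment of a context.

```python
# A finite toy model of Context/Env (all names and representations are mine):
# a "set" is a finite Python list of values; an environment is a tuple.
NIL = ("nil",)

def snoc(ctx, S):
    # S : Env ctx -> "Set", i.e. a function from environments to lists
    return ("snoc", ctx, S)

def envs(ctx):
    """Enumerate Env ctx: trivially for nil, dependent pairs for snoc."""
    if ctx[0] == "nil":
        return [()]
    _, G, S = ctx
    # an environment for snoc G S pairs an environment g for G
    # with a value drawn from the type S g, which may depend on g
    return [g + (s,) for g in envs(G) for s in S(g)]

# a context whose second entry's type depends on the first entry's value
ctx = snoc(snoc(NIL, lambda g: [False, True]),
           lambda g: [0, 1] if g[0] else [0])
```

Running `envs(ctx)` yields three environments, because the second entry has two possible values only when the first entry is True.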

The Context type above is large, in that its values pack up sets, but the real power of IR lies in its ability to make *small* encodings of large things. Here, for example, is the universe closed under the Boolean type and the dependent function space:

```
U : Set
  ::= two
    | pi (S : U) (T : El S -> U)

El (T : U) : Set
El two      = Two
El (pi S T) = (s : El S) -> El (T s)
```

Here, U is a small set, but its values encode sets, with El telling you of what the elements of U’s encoded sets consist. We need to use El when accounting for the value-dependency in pi-types. Without IR, you can still give an encoding of such a universe, but it’ll be large.
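Since Two is finite and a pi over a finite domain with finite codomains is again finite, every set this universe encodes can be enumerated. Here is a hedged Python sketch of that observation (representation choices mine: a function is encoded as the tuple of its outputs, indexed by the position of each argument in the enumeration of its domain, so the encoding is itself enumerable and hashable):

```python
# Toy enumeration of the universe closed under two and pi (names mine).
TWO = ("two",)

def pi(S, T):
    # S : U,  T : El S -> U
    return ("pi", S, T)

def El(u):
    """List the elements of the set that the code u encodes."""
    if u[0] == "two":
        return [False, True]
    _, S, T = u
    # a dependent function is the tuple of its outputs, one per domain
    # element, where the output type El (T s) may vary with s
    vals = [()]
    for s in El(S):
        vals = [f + (t,) for f in vals for t in El(T(s))]
    return vals
```

For instance, `El(pi(TWO, lambda s: TWO))` lists the four functions from Two to Two, and the dependency of the range on the argument is plainly visible in the loop.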

In fact, if you know what you’re doing, you can give a small encoding of an entire predicative hierarchy using IR: it’s an astonishingly powerful means of definition. But what is it? Which definitions are allowed?

As ever, the constructors of an IR type amount to the initial algebra for an endofunctor, but not in Set. Borrowing Peter Hancock’s terminology, let Fam X = (I : Set) * I -> X. Fam X is the type of small coding schemes for Xs. Above, (U, El) : Fam Set. IR types are given as initial algebras for an endofunctor on some Fam X. For that to make sense, I have to equip Fam X with morphisms. A morphism from (I, f) to (J, g) is some h : I -> J such that f = g . h. So, if X is small, then Fam X amounts to the slice category, Set/X, but the whole sodding point is that X usually isn’t small. I’ll make use of

```
Code (F : Fam X) : Set
Code (I, f) = I

decode (F : Fam X) (i : Code F) : X
decode (I, f) i = f i
```
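In the same toy Python modelling (mine, not the post's formal notation), a Fam X is literally a pair of a finite list of codes and a decoder:

```python
# Fam X as a pair: a finite list of codes I, and a decode function I -> X.
def Code(F):
    return F[0]

def decode(F, i):
    return F[1](i)

# a hypothetical family coding two "sets" by name
F = (["bool", "unit"],
     lambda i: [False, True] if i == "bool" else [()])
```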

Now, the question is *which* functors will give rise to IR definitions. In 1999, Peter Dybjer and Anton Setzer gave a coding scheme for a collection of functors which do just that. Below, DS X Y is an encoding of functors from Fam X to Fam Y. (It’s a little easier to see what’s going on if we consider Fam-functors in general, not just endofunctors.)

```
DS (X : Set1) (Y : Set1) : Set1
  ::= iota  (y : Y)                             -- end of data, deliver y
    | sigma (S : Set) (T : S -> DS X Y)         -- non-recursive field of type S
    | delta (H : Set) (T : (H -> X) -> DS X Y)  -- substructures branching over H
```

Let me encode the above example universe:

```
UEl : DS Set Set
UEl = sigma {two, pi} \
        two -> iota Two
      | pi  -> delta One \ S -> delta (S ()) \ T -> iota ((s : S ()) -> T s)
```

That is, we first offer the choice of “two” or “pi”. If “two” is chosen, the encoded type is Two. If “pi” is chosen, we ask for one recursively chosen code for the pi-type domain, and we get S : One -> Set as its decoding, so we can ask for an (S ())-indexed family of codes to give the dependent range of the pi-type, then deliver the set (s : S ()) -> T s. Or so I say…

We need to say which functors are encoded and make sure the above code really gets the intended interpretation. To do so, it will help to define the corresponding operations on Fam Y.

The constant family encodes just one element of Y with a trivial code.

```
ONE (y : Y) : Fam Y
ONE y = One, \ _ -> y
```

We can build up right-nested Fam Y encodings by dependent pairing over some set S. Once you have chosen an S, you can see the structure of the rest of the codes and how to decode them.

```
SIGMA (S : Set) (T : S -> Fam Y) : Fam Y
SIGMA S T = ((s : S) * Code (T s)), \ (s, t) -> decode (T s) t
```

Now, if you have some (I, f) : Fam X, you can use values in I as codes for values in X. Any old function g : H -> I gives us a bunch of codes, and the composition f . g : H -> X gives us the corresponding bunch of X-values.

```
DELTA (F : Fam X) (H : Set) (T : (H -> X) -> Fam Y) : Fam Y
DELTA (I, f) H T = ((g : H -> I) * Code (T (f . g))), \ (g, t) -> decode (T (f . g)) t
```

Crucially, whatever Fam X we plug in, whatever its type of codes, we know we’ll be able to decode them to X.

Now we can give the action on objects.

```
[(T : DS X Y)] : Fam X -> Fam Y
[iota y]    F = ONE y
[sigma S T] F = SIGMA S \ s -> [T s] F
[delta H T] F = DELTA F H \ k -> [T k] F
```

The action on morphisms is dictated by the structures thus given: given a translation between two encodings of X, we can lift it to a translation between the corresponding encodings of Y.

Cranking through our example, we get

```
Code ([UEl] (U, El)) = (x : {two, pi}) * case x of
    two -> One
    pi  -> (S : One -> U) * (T : El (S ()) -> U) * One

decode ([UEl] (U, El)) (two, ())      = Two
decode ([UEl] (U, El)) (pi, S, T, ()) = (s : El (S ())) -> El (T s)
```

so our definition has worked out the way we planned.
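The whole interpretation can be simulated at the value level. The following Python sketch is my own rendering, with small sets as finite lists, a Fam as a pair of code list and decoder, and functions H -> I as dicts; it interprets the UEl code at a one-set base family and checks that the two resulting codes decode to Two and to a four-element function space.

```python
# DS codes as tagged tuples (my rendering of iota/sigma/delta)
def iota(y):     return ("iota", y)
def sigma(S, T): return ("sigma", S, T)     # S a finite list
def delta(H, T): return ("delta", H, T)     # H a finite list

def funs(dom, cod_of):
    # all dependent functions from a finite dom, represented as dicts
    fs = [{}]
    for v in dom:
        fs = [{**f, v: c} for f in fs for c in cod_of(v)]
    return fs

def ONE(y):
    return ([()], lambda _: y)

def SIGMA(S, T):
    return ([(s, c) for s in S for c in T(s)[0]],
            lambda sc: T(sc[0])[1](sc[1]))

def DELTA(F, H, T):
    I, f = F
    gs = funs(H, lambda _: I)               # g : H -> I chooses codes
    return ([(g, c) for g in gs for c in T(lambda h: f(g[h]))[0]],
            lambda gc: T(lambda h: f(gc[0][h]))[1](gc[1]))

def interp(D, F):                           # the action [D] on objects
    tag = D[0]
    if tag == "iota":
        return ONE(D[1])
    if tag == "sigma":
        return SIGMA(D[1], lambda s: interp(D[2](s), F))
    return DELTA(F, D[1], lambda k: interp(D[2](k), F))

# the universe code: "sets" are lists, pi-sets are enumerated by funs
UEl = sigma(["two", "pi"],
            lambda c: iota([False, True]) if c == "two" else
                      delta([()], lambda S:
                            delta(S(()), lambda T:
                                  iota(funs(S(()), T)))))
```

Plugging in the base family `(["b"], lambda _: [False, True])` gives exactly two codes, one per constructor choice, decoding respectively to Two and to the four functions from Two to Two.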

The eagle-eyed will have spotted that DS X is a monad; indeed, it is a free monad, with iota as return and a >>= operator which grafts more node structure on at each iota.
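In the same Python rendering (constructors re-declared so the sketch stands alone), the grafting >>= looks like this:

```python
# DS's free-monad structure: bind grafts a continuation at every iota leaf.
def iota(y):     return ("iota", y)
def sigma(S, T): return ("sigma", S, T)
def delta(H, T): return ("delta", H, T)

def bind(D, k):
    # k plays Y -> DS X Z: iota y becomes k y, other nodes pass through
    tag = D[0]
    if tag == "iota":
        return k(D[1])
    if tag == "sigma":
        return sigma(D[1], lambda s: bind(D[2](s), k))
    return delta(D[1], lambda f: bind(D[2](f), k))
```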

Moreover, we can define

```
dsI : DS X X
dsI = delta One \ f -> iota (f ())
```

such that

```
Code ([dsI] (I, f))           = (i : I) * One
decode ([dsI] (I, f)) (i, ()) = f i
```

i.e., [dsI] is isomorphic to the identity functor on Fam X.

But here’s the biggie. Is there a composition operator on DS

```
dsC (F : DS Y Z) (G : DS X Y) : DS X Z
```

such that [dsC F G] is isomorphic to the functor composition [F].[G]? The usual suspects (Ghani, Hancock, Altenkirch, me…) have been banging our heads off this for a few years now. We just don’t know. We suspect that there isn’t such a thing, but we haven’t been able to find a concrete example of codes F and G whose composite definitely cannot be encoded.

Now, once upon a time, I was trying to make the presentation of DS X Y more like the universe of functors Chapman, Dagand, Morris and I built in *The Gentle Art of Levitation*. I started out like this — it’s an IR-definition itself.

```
Irish (X : Set1) : Set1
  ::= idi
    | con (A : Set)
    | sigma (S : Irish X) (T : Info S -> Irish X)
    | pi (S : Set) (T : S -> Irish X)

Info : Irish X -> Set1
Info idi         = X
Info (con A)     = A
Info (sigma S T) = (s : Info S) * Info (T s)
Info (pi S T)    = (s : S) -> Info (T s)
```

Irish is a set of codes for structures involving a Fam X. Info tells us the information we are sure to have in those structures, whatever Fam X we plug in: we know more than nothing about them, because we know they decode to X. Info has to be large, because X is large.

Of course, I have to tell you what structure you get when you plug in a Fam X. We get node structures, small sets, from which we can compute Info.

```
Node (T : Irish X) (F : Fam X) : Set
Node idi         F = Code F
Node (con A)     _ = A
Node (sigma S T) F = (s : Node S F) * Node (T (info S F s)) F
Node (pi S T)    F = (s : S) -> Node (T s) F

info (T : Irish X) (F : Fam X) (t : Node T F) : Info T
info idi         F i      = decode F i
info (con A)     _ a      = a
info (sigma S T) F (s, t) = s', info (T s') F t  where s' = info S F s
info (pi S T)    F f      = \ s -> info (T s) F (f s)
```

Node can be small, because we now use our Fam X to give us a small encoding for X-values. We compute Info from Node by deploying the decoder which comes with the Fam X. We now have that

```
F : Fam X  =>  (Node T F, info T F) : Fam (Info T)
```

But we want a Fam Y. All we need for that is a function from Info T to Y. That is

```
irish (T : Irish X) (d : Info T -> Y) (F : Fam X) : Fam Y
irish T d F = Node T F, (d . info T F)
```

In fact, it’s quite a lot like the way we write IR definitions informally. We give a description of the data, and we write a decoder for it. I should, of course, give the action on morphisms (which is postcomposition in the relevant places), and check that we actually get a functor (which of course we do, because postcomposition respects identity and composition).

Let me work through the running example again

```
U : Irish Set
U = sigma (con {two, pi}) \
      two -> con One
    | pi  -> sigma idi \ S -> pi S \ _ -> idi

El : Info U -> Set
El (two, ())  = Two
El (pi, S, T) = (s : S) -> T s
```

and irish U El is the endofunctor we need.
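The same toy Python modelling covers Irish codes too (all names mine; pi-structures appear as dicts): Node computes the small node structures, info decodes them, and irish packages the functor, so the running example can be checked at a concrete base family.

```python
# Irish codes as tagged tuples; idi marks a recursive position (names mine).
IDI = ("idi",)
def con(A):   return ("con", A)           # A a finite list
def sg(S, T): return ("sg", S, T)         # T : Info S -> Irish
def pp(S, T): return ("pp", S, T)         # S a finite list, T : s -> Irish

def funs(dom, cod_of):
    # all dependent functions from a finite dom, as dicts
    fs = [{}]
    for v in dom:
        fs = [{**f, v: c} for f in fs for c in cod_of(v)]
    return fs

def Node(T, F):                           # the small node structures
    tag = T[0]
    if tag == "idi":
        return F[0]                       # codes of the plugged-in Fam
    if tag == "con":
        return T[1]
    if tag == "sg":
        return [(s, t) for s in Node(T[1], F)
                       for t in Node(T[2](info(T[1], F, s)), F)]
    return funs(T[1], lambda s: Node(T[2](s), F))

def info(T, F, t):                        # decode a node to its Info
    tag = T[0]
    if tag == "idi":
        return F[1](t)
    if tag == "con":
        return t
    if tag == "sg":
        s0 = info(T[1], F, t[0])
        return (s0, info(T[2](s0), F, t[1]))
    return {s: info(T[2](s), F, t[s]) for s in T[1]}

def irish(T, d, F):
    return (Node(T, F), lambda t: d(info(T, F, t)))

# the running example: the universe code and its decoder
UIr = sg(con(["two", "pi"]),
         lambda c: con([()]) if c == "two" else
                   sg(IDI, lambda S: pp(S, lambda _: IDI)))

def ElIr(v):
    c, rest = v
    if c == "two":
        return [False, True]
    S, Td = rest                          # S a list, Td a dict s -> set
    return funs(S, lambda s: Td[s])
```

At the base family `(["b"], lambda _: [False, True])`, irish UIr ElIr produces two nodes, decoding to Two and to a four-element function space, matching the DS interpretation above.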

It’s not hard to see that irish idi id is exactly the identity functor on Fam X. But there’s more.

If we have F : Irish Y, that’s a description of node structures that a Fam Y plugs into; but if we have the bits for a functor from Fam X to Fam Y, we should be able to get a structure that a Fam X plugs into, just by grafting stuff in place of F’s idi markers. At the same time, we need the forgetful map which discards the extra detail from the refined structure, giving us just the Info for the outer layer that corresponds to F.

```
subC (F : Irish Y) (G : Irish X) (d : Info G -> Y) : Irish X
subC idi         G d = G
subC (con A)     G d = con A
subC (sigma S T) G d = sigma (subC S G d) \ s -> subC (T (fogC S G d s)) G d
subC (pi S T)    G d = pi S \ s -> subC (T s) G d

fogC (F : Irish Y) (G : Irish X) (d : Info G -> Y) : Info (subC F G d) -> Info F
fogC idi         G d g      = d g
fogC (con A)     G d a      = a
fogC (sigma S T) G d (s, t) = s', fogC (T s') G d t  where s' = fogC S G d s
fogC (pi S T)    G d f      = \ s -> fogC (T s) G d (f s)
```

Putting these pieces together, we see that

```
irC0 (F : Irish Y) (f : Info F -> Z) (G : Irish X) (g : Info G -> Y) : Irish X
irC1 (F : Irish Y) (f : Info F -> Z) (G : Irish X) (g : Info G -> Y) : Info (irC0 F f G g) -> Z

irC0 F f G g = subC F G g
irC1 F f G g = f . fogC F G g
```

gives us the tools to compose our functors. Of course, one must do the work to show that we don’t just get something with the right type to be composition. It’s not that difficult. Intuitively, substituting the placeholder for substructures in the node description is exactly how to design a composite structure.
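Here are the substitution and its forgetful map in the same Python rendering (self-contained, names mine), with a tiny check that fogC forgets exactly the grafted detail:

```python
# subC grafts G in place of F's idi markers; fogC recovers F's outer Info.
IDI = ("idi",)
def con(A):   return ("con", A)
def sg(S, T): return ("sg", S, T)
def pp(S, T): return ("pp", S, T)

def subC(F, G, d):
    tag = F[0]
    if tag == "idi":
        return G                            # graft G at recursive positions
    if tag == "con":
        return F
    if tag == "sg":
        return sg(subC(F[1], G, d),
                  lambda s: subC(F[2](fogC(F[1], G, d, s)), G, d))
    return pp(F[1], lambda s: subC(F[2](s), G, d))

def fogC(F, G, d, x):
    tag = F[0]
    if tag == "idi":
        return d(x)                         # decode the grafted G-info to Y
    if tag == "con":
        return x
    if tag == "sg":
        s0 = fogC(F[1], G, d, x[0])
        return (s0, fogC(F[2](s0), G, d, x[1]))
    return {s: fogC(F[2](s), G, d, x[s]) for s in F[1]}
```

For instance, with F = sg(IDI, lambda y: con([y])), G = con([1, 2]) and a decoder multiplying by 10, the substituted code stores G's raw values, and fogC maps the composite value (2, 20) back to the outer Info (20, 20).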

So, Irish IR corresponds quite neatly to the way we write IR definitions anyway, and it describes a class of functors between Fams which is readily closed under composition. It’s very easy to check that every DS X Y gives rise to an (F : Irish X) * Info F -> Y. Dybjer-Setzer iota gives Irish con One, Dybjer-Setzer sigma gives Irish sigma-con, Dybjer-Setzer delta gives Irish sigma-pi-idi. What’s the problem?

Well, when you try to translate Irish IR codes to Dybjer-Setzer codes, it’s far from obvious how to do it. If we could do it, we’d be able to compose DS functors, via the Irish composition. But there’s an alternative possibility, which seems more likely: it may well be the case that Irish IR codes capture a larger class of functor than Dybjer-Setzer codes. That might even be a good thing, but it would certainly prevent us appealing to Dybjer and Setzer’s set-theoretic model to show consistency. We need a pile of fresh work to show the more generous notion makes sense. It could be too good to be true!

Although I’ve been using an informal notation in this blogpost, I’ve managed to do the whole construction in Agda. Can’t for the life of me remember where I put it. But it raises the question of what notion of IR Agda actually implements. Entertainingly, Irish/Info has a perfectly sensible Dybjer-Setzer encoding as an inductive-recursive definition in Set1 (which means it also has an Irish IR encoding, so we can play the levitation game). Agda’s positivity checker doesn’t complain when we tie the knot. I don’t think we can really deduce much about the consistency of such constructions from the fact that Agda fails to forbid them.

Zooming out a bit, let’s see what we’ve got. We have a compositional first-class presentation of inductive-recursive definition. It would fit neatly in a closed kernel for some dependently typed programming language. But we still have nagging questions about its status. And, of course, there are other things we might like to do with datatypes.
