hasochistic containers (a first attempt)

June 6, 2015

There was a bit of chat on Twitter about polynomials and strictly positive functors, which provoked me to think a bit about how much of the theory of containers (in the sense of Abbott, Altenkirch and Ghani) I could cook up in modern day Haskell. It turns out that we can make a plausible stab at the basics, but the wheel falls off when we try to get to more advanced things.

What is a container?

Informally, a container is a functor with a “shapes and positions” presentation: the contained values are given as the image of a function from positions, but the type of positions depends on the choice of the shape. Finite lists of things, for example, can be seen as functions from an n-element set to things, once you’ve chosen the shape n, otherwise known as the length of the list. If a functor f is a container, then its shape set will be isomorphic to f (), or what you get when you choose boring elements that just mark their position. It’s the dependency of the position set on the shape that makes the concept a little tricky to express in Haskell, but if we turn on {-# LANGUAGE KitchenSink #-}, we can get some way, at least.

I define a datatype whose only purpose is to pack up the type-level components of a container.

data (<|) (s :: i -> *) (p :: i -> *) = Dull

where the existential i is the type-level version of shapes, implicitly chosen in each value. Now, s gives the value-level presentation of the type-level shape: it could be a singleton type, giving no more or less information than the choice of i, but it’s ok if some type-level shapes are not represented, or if shapes contain some extra information upon which positions do not depend. What’s important is that indexing enforces compatibility between the shape choice (made by the producer of the container) and the position choices (made by the consumer of the container when elements get projected) in p.

It’s a bit of a hack. I’d like to pack up those pieces as a kind of containers, but I can’t do it yet, because new kinds that don’t arise from datatype promotion haven’t happened yet. I’ll have to work with types of kind * which happen to be given by <|, saying what components specify a container. Let us now say which container is thus specified.

data family Con (c :: *) :: * -> *
data instance Con (s <| p) x = forall i. s i :<: (p i -> x)

It’s not hard to see why these things are functors. If the container’s element-projector gives one sort of thing, you can make them another sort of thing by postcomposing a one-to-another function.

instance Functor (Con (s <| p)) where
  fmap h (s :<: e) = s :<: (h . e)

Given that fmap acts by composition, it’s easy to see that it respects identity and composition.

Pause a moment and think what Con (s <| p) is giving you. Informally, we get ∃i. (s i) × x^(p i), writing the GADT’s lurking existential explicitly and writing the function type in exponential notation. Reading ∃ as summation, shapes as coefficients and positions as exponents, we see that containers are just power series, generalized to sum over some kind i of type-level things. Polynomials are just boring power series.
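For instance, instantiating to the ListC container defined below, where shapes are numbers and the positions for shape n form an n-element set, the formula becomes ∃n. Natty n × x^(Fin n), i.e., Σ_{n ∈ ℕ} x^n: the power series with every coefficient 1, otherwise known as lists.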

Nat, Fin and the ListC container

Let’s just make sure of the list example. We’ll need natural numbers and their singletons to make the shapes…

data Nat = Z | S Nat
data Natty :: Nat -> * where
  Zy :: Natty Z
  Sy :: Natty n -> Natty (S n)

…and the finite set family to make the positions.

data Fin :: Nat -> * where
  Fz :: Fin (S n)
  Fs :: Fin n -> Fin (S n)

The idea is that Fin n is a type with n values. A function in Fin n -> x is like an n-tuple of x-values. Think of it as a flat way of writing the exponential x^n. We have an empty tuple

void :: Fin Z -> x
void z = z `seq` error "so sue me!"

and a way to grow tuples

($:) :: x -> (Fin n -> x) -> Fin (S n) -> x
(x $: xs) Fz      = x
(x $: xs) (Fs n)  = xs n
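For example (a little illustration of my own), here is a three-tuple of characters, built one element at a time:

abc :: Fin (S (S (S Z))) -> Char
abc = 'a' $: ('b' $: ('c' $: void))
-- abc Fz = 'a', abc (Fs Fz) = 'b', abc (Fs (Fs Fz)) = 'c'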

And now we’re ready to go with our list container.

type ListC = Natty <| Fin

Let’s show how to get between these lists and traditional lists. When I’m working at the functor level, I like to be explicit about constructing natural transformations.

type f :-> g = forall x. f x -> g x

Now we can define, recursively,

listIsoOut :: Con ListC :-> []
listIsoOut (Zy   :<: _) = []
listIsoOut (Sy n :<: e) = e Fz : listIsoOut (n :<: (e . Fs))

If the length is zero, the list must be empty. Otherwise, separate the element in position 0 from the function which gives all the elements in positive positions. To go the other way, give a fold which makes use of our functions-as-tuples kit.

listIsoIn :: [] :-> Con ListC
listIsoIn = foldr cons nil where
  nil               = Zy   :<: void
  cons x (n :<: e)  = Sy n :<: (x $: e)
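A quick sanity check (an illustrative example of my own): the two directions compose to the identity.

roundTrip :: Bool
roundTrip = listIsoOut (listIsoIn "abc") == "abc"  -- True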

Container Morphisms

A polymorphic function between containers has to work with an arbitrary element type, so there’s nowhere the output container can get its elements from except the input container. What can such a function do? Firstly, it can look at the input shape in order to choose the output shape; secondly, it should say where in the input container each output position will find its element. We obtain a representation of these polymorphic functions in terms of shapes and positions, without trading in elements at all.

data family Morph (f :: *) (g :: *) :: *
data instance Morph (s <| p) (s' <| p')
  = Morph (forall i. s i -> Con (s' <| p') (p i))

That is, each input shape maps to an output container whose elements are input positions, like a kind of plan for how to build some output given some input. To deploy such a morphism, we need only map input positions to input elements.

($<$) :: Morph (s <| p) (s' <| p') ->
         Con (s <| p) :-> Con (s' <| p')
Morph m $<$ (s :<: e) = fmap e (m s)

The representation theorem for container morphisms asserts that the polymorphic functions between containers are given exactly by the container morphisms. That is, the above has an inverse.

morph :: (Con (s <| p) :-> Con (s' <| p')) -> Morph (s <| p) (s' <| p')
morph f = Morph $ \ s -> f (s :<: id)
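For instance (a tiny example of my own), the identity natural transformation is represented by

idC :: Morph (s <| p) (s <| p)
idC = Morph $ \ s -> s :<: id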

Note that if s :: s i, then s :<: id :: Con (s <| p) (p i) is the container storing in every position exactly that position. You can check…

  morph f $<$ (s :<: e)
= {- definition of morph and ($<$) -}
  fmap e ((\ s -> f (s :<: id)) s)
= {- beta reduction -}
  fmap e (f (s :<: id))
= {- naturality -}
  f (fmap e (s :<: id))
= {- definition -}
  f (s :<: (e . id))
= {- right identity -}
  f (s :<: e)

…and…

  morph (Morph m $<$)
= {- definition -}
  Morph $ \ s -> Morph m $<$ (s :<: id)
= {- definition -}
  Morph $ \ s -> fmap id (m s)
= {- functor preserves identity -}
  Morph $ \ s -> m s
= {- eta contraction -}
  Morph m

…or you can deploy the Yoneda lemma.

  Con (s <| p) :-> Con (s' <| p')
= {- type synonym -}
  forall x. Con (s <| p) x -> Con (s' <| p') x
~= {- data definition -}
  forall x. (exists i. (s i, p i -> x)) -> Con (s' <| p') x
~= {- curry -}
  forall x. forall i. s i -> (p i -> x) -> Con (s' <| p') x
~= {- reorder arguments -}
  forall i. s i -> forall x. (p i -> x) -> Con (s' <| p') x
~= {- Yoneda -}
  forall i. s i -> Con (s' <| p') (p i)
= {- data family -}
  Morph (s <| p) (s' <| p')

It’s a fun exercise to show that reverse can be expressed as a Morph ListC ListC without going via the representation theorem.
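(Spoiler: here is one possible sketch, with helper names flipF, top and inj of my own devising. The plan: at length n, send position k to position n - 1 - k.)

revC :: Morph ListC ListC
revC = Morph $ \ n -> n :<: flipF n where
  -- flipF n sends position k to position n - 1 - k
  flipF :: Natty n -> Fin n -> Fin n
  flipF (Sy n) Fz     = top n
  flipF (Sy n) (Fs i) = inj (flipF n i)
  -- top n is the greatest element of Fin (S n)
  top :: Natty n -> Fin (S n)
  top Zy     = Fz
  top (Sy n) = Fs (top n)
  -- inj embeds Fin n into Fin (S n), preserving the numeric value
  inj :: Fin n -> Fin (S n)
  inj Fz     = Fz
  inj (Fs i) = Fs (inj i)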

Closure Under the Polynomial Kit

We can define the kit of polynomial functor constructors as follows.

newtype I         x = I {unI :: x}
newtype K a       x = K {unK :: a}
newtype (:+:) f g x = Sum {muS :: Either (f x) (g x)}
newtype (:*:) f g x = Prod {dorP :: (f x , g x)}

They are functorial in the only sensible way.

instance Functor I where
  fmap h (I x) = I (h x)
instance Functor (K a) where
  fmap h (K a) = K a
instance (Functor f, Functor g) => Functor (f :+: g) where
  fmap h = Sum . either (Left . fmap h) (Right . fmap h) . muS
instance (Functor f, Functor g) => Functor (f :*: g) where
  fmap h = Prod . (fmap h *** fmap h) . dorP  -- (***) from Control.Arrow

But we can also show that containers are closed under the same operations.

For the identity, there is one shape and one position, so we need the unit singleton family.

data US :: () -> * where
  VV :: US '()
type IC = US <| US

Wrapping up an element in a container can happen in just one way.

iWrap :: x -> Con IC x
iWrap x = VV :<: const x

It is now easy to show that Con IC is isomorphic to I.

iIsoIn :: I :-> Con IC
iIsoIn (I x) = iWrap x

iIsoOut :: Con IC :-> I
iIsoOut (VV :<: e) = I (e VV)

For constant polynomials, there are no positions for elements, but there is useful information in the shape. Abbott, Altenkirch and Ghani take the shape type to be the constant and the position set to be everywhere empty. To follow suit, we’d need to use the singleton type for the constant, but that’s more Haskell work than necessary (unless you import Richard Eisenberg’s excellent library for that purpose). We can use the unit type () as the type-level shape and store the constant only at the value level.

data KS :: * -> () -> * where
  KS :: a -> KS a '()

Again, the position set must be empty.

data KP :: u -> * where
kapow :: KP u -> b
kapow z = z `seq` error "so sue me!"
type KC a = KS a <| KP

We can put an element of the constant type into its container.

kon :: a -> Con (KC a) x
kon a = KS a :<: kapow

We thus obtain the isomorphism.

kIsoIn :: K a :-> Con (KC a)
kIsoIn (K a) = kon a

kIsoOut :: Con (KC a) :-> K a
kIsoOut (KS a :<: _) = K a

For sums, you pick a branch of the sum and give a shape for that branch. The positions must then come from the same branch and fit with the shape. So we need the type-level shape information to be an Either and make value-level things consistent with the type-level choice. That’s a job for this GADT.

data Case :: (i -> *) -> (j -> *) -> (Either i j) -> * where
  LL :: ls i -> Case ls rs (Left i)
  RR :: rs j -> Case ls rs (Right j)

Now, the sum of containers is given by consistent choices of shape and position.

type family SumC c c' :: * where
  SumC (s <| p) (s' <| p') = Case s s' <| Case p p'

That is, the choice of value-level shape fixes the type-level shape, and then the positions have to follow suit. If you know which choice has been made at the type level, you can project safely.

unLL :: Case s s' (Left i) -> s i
unLL (LL s) = s
unRR :: Case s s' (Right j) -> s' j
unRR (RR s') = s'

In turn, that allows us to define the injections of the sum as container morphisms.

inlC :: Morph (s <| p) (SumC (s <| p) (s' <| p'))
inlC = Morph $ \ s -> LL s :<: unLL
inrC :: Morph (s' <| p') (SumC (s <| p) (s' <| p'))
inrC = Morph $ \ s' -> RR s' :<: unRR

Now we’re ready to show that the container sum is isomorphic to the functorial sum of the two containers.

sumIsoIn :: (Con (s <| p) :+: Con (s' <| p')) :-> Con (SumC (s <| p) (s' <| p'))
sumIsoIn = either (inlC $<$) (inrC $<$) . muS

sumIsoOut :: Con (SumC (s <| p) (s' <| p')) :-> (Con (s <| p) :+: Con (s' <| p'))
sumIsoOut (LL s  :<: e) = Sum (Left (s :<: (e . LL)))
sumIsoOut (RR s' :<: e) = Sum (Right (s' :<: (e . RR)))

Now, for products of containers, you need a pair of shapes, one for each component, so the type-level shape also needs to be a pair.

data ProdS :: (i -> *) -> (j -> *) -> (i, j) -> * where
  (:&:) :: ls i -> rs j -> ProdS ls rs '(i, j)

An element position in such a container is either on the left or on the right, and then you need to know the position within that component.

data ProdP :: (i -> *) -> (j -> *) -> (i, j) -> * where
  PP :: Either (lp i) (rp j) -> ProdP lp rp '(i , j)
unPP :: ProdP lp rp '(i , j) -> Either (lp i) (rp j)
unPP (PP e) = e

The product is then given by those pieces, and the projections are container morphisms.

type family ProdC c c' :: * where
  ProdC (s <| p) (s' <| p') = ProdS s s' <| ProdP p p'
outlC :: Morph (ProdC (s <| p) (s' <| p')) (s <| p)
outlC = Morph $ \ (s :&: _) -> s :<: (PP . Left)
outrC :: Morph (ProdC (s <| p) (s' <| p')) (s' <| p')
outrC = Morph $ \ (_ :&: s') -> s' :<: (PP . Right)

Pairing is implemented by either on positions.

pairC :: Con (s <| p) x -> Con (s' <| p') x -> Con (ProdC (s <| p) (s' <| p')) x
pairC (s :<: e) (s' :<: e') = (s :&: s') :<: (either e e' . unPP)

Again, we get an isomorphism with functorial products.

prodIsoIn :: (Con (s <| p) :*: Con (s' <| p')) :-> Con (ProdC (s <| p) (s' <| p'))
prodIsoIn (Prod (c, c')) = pairC c c'

prodIsoOut :: Con (ProdC (s <| p) (s' <| p')) :-> (Con (s <| p) :*: Con (s' <| p'))
prodIsoOut c = Prod (outlC $<$ c, outrC $<$ c)

So, the polynomials are, as expected, containers.

W-types

The least fixpoint of a container is what Per Martin-Löf calls a W-type.

newtype W c = In (Con c (W c))

Lots of our favourite datatypes are W-types. E.g., unlabelled binary trees:

type Tree = W (SumC (KC ()) (ProdC IC IC))

Define the constructors like this.

leaf :: Tree
leaf = In (inlC $<$ kon ())
node :: Tree -> Tree -> Tree
node l r = In (inrC $<$ pairC (iWrap l) (iWrap r))
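By way of example, here is a three-leaf tree, together with a leaf-counting function of my own devising, written by unpacking the isomorphisms we built:

example :: Tree
example = node leaf (node leaf leaf)

size :: Tree -> Int
size (In c) = case sumIsoOut c of
  Sum (Left _)  -> 1                     -- a leaf counts for one
  Sum (Right p) -> case prodIsoOut p of  -- a node adds up its subtrees
    Prod (l, r) -> size (unI (iIsoOut l)) + size (unI (iIsoOut r))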

But there are functors which are not containers: the continuation monad is the classic example. The element type always stays right of the arrow. Some people like to classify the polarity of parameter occurrences in a type operator as “positive” or “negative”. A top level occurrence is positive. Sum and product preserve polarity. Function types preserve polarity in the target but flip polarity in the domain. A type operator whose parameter occurs only positively will be a covariant functor; if the parameter occurs only negatively, it will be a contravariant functor. A “strictly positive” occurrence is not merely positive: the number of times its polarity has been flipped is not just even but zero. A type operator whose parameter occurs only strictly positively will be a container. Least fixpoints of functors have recursive “fold operators”, but least fixpoints of containers guarantee the existence of induction principles: the difference between the two matters when you’re dependently typed.

Hancock’s Tensor

Here’s an operation you can define on containers, but not on Haskell functors more generally. Peter Hancock defines the tensor of two containers thus

type family TensorC c c' :: * where
  TensorC (s <| p) (s' <| p') = ProdS s s' <| ProdS p p'

It’s a bit like a product, in that shapes pair up, but when we look at the positions, we don’t make a choice, we pick a pair. Think of the two components as coordinates in some sort of grid. Indeed, consider what TensorC ListC ListC might be. It’s the container which gives you the type of rectangular matrices: “lists of lists-all-the-same-length”.
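To make that concrete, here is a hypothetical matrix builder (the names Matrix and mkMatrix are mine): pick a height and a width, then say what lives at each (row, column) pair.

type Matrix = Con (TensorC ListC ListC)

mkMatrix :: Natty m -> Natty n -> (Fin m -> Fin n -> x) -> Matrix x
mkMatrix m n f = (m :&: n) :<: \ (i :&: j) -> f i j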

Roland Backhouse wrote a paper a while back deriving properties of natural transformations on “F-structures of G-structures-all-the-same-shape”, but he couldn’t give a direct mathematical translation of that idea as an operation on functors, only by restricting the composition F.G to the unraggedy case. Hancock’s tensor gives us exactly that notion for containers.

You can degenerate tensor into functor composition…

newtype (f :.: g) x = C {unC :: f (g x)}

layers :: Con (TensorC (s <| p) (s' <| p')) :-> (Con (s <| p) :.: Con (s' <| p'))
layers ((s :&: s') :<: e) = C (s :<: \ p -> s' :<: \ p' -> e (p :&: p'))

…but you don’t have to do it that way around, because you can transpose a tensor, thanks to its regularity:

xpose :: Morph (TensorC (s <| p) (s' <| p')) (TensorC (s' <| p') (s <| p))
xpose = Morph $ \ (s :&: s') -> (s' :&: s) :<: \ (p' :&: p) -> p :&: p'

Fans of free monads may enjoy thinking of them as the least fixpoint of the functorial equation

Free f = I :+: (f :.: Free f)

If f is a container Con (s <| p), you can think of s as describing the commands you can issue and p as the responses appropriate to a given command. The free monad thus represents an interactive mode session where at each step you decide whether to stop and report your result or to issue another command, then continue with your session once you have the response.
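Concretely, a standard rendering of that fixpoint as a Haskell datatype (a sketch of my own, not part of the kit above) looks like this:

data Free c x
  = Ret x                   -- stop and report a result
  | Do (Con c (Free c x))   -- issue a command, await the response

-- For c = s <| p, the step Do (s :<: k) issues command s, and k maps
-- each response in p to the rest of the session.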

What’s not so well known is that the free applicative is given exactly by replacing composition with tensor. The free applicative gives you a batch mode session, where your commands are like a deck of punch cards: the sequence is fixed in advance, and you report your result once you have collected your lineprinter output, consisting of all the responses to the commands.

Container Composition?

We have tensor for containers, but what about composition? Abbott, Altenkirch and Ghani have no difficulty defining it. The shape of a composite container is given exactly by an “outer” container whose elements are “inner” shapes. That way, we know the shape of the outer structure, and also the shape of each inner structure sitting at a given position in the outer structure. A composite position is a dependent pair: we have to find our way to an inner element, so we first pick an outer position, where we will find an inner structure (whose shape we know), and then we pick an inner position in that structure.

So now, we’re Haskelly stuffed. We need to promote Con itself (functions inside!). And we need its singletons. GHC stops playing.

How will the situation look when we have Π-types (eliminating the need for singletons) and the ability to promote GADTs? I don’t know. We’ll still need some higher-order functions at the type level.

Winding Up

Containers are an abstraction of a particularly well behaved class of functors, characterized in a way which is very flexible, but makes essential use of dependent types. They’re a rubbish representation of actual data, but they allow us to specify many generic operations in a parametric way. Rather than working by recursion over the sum-of-products structure of a datatype, we need only abstract over “shapes” and “positions”.

E.g., when positions have decidable equality, a container is (infinitely) differentiable (smooth?): you just use the usual rule for differentiating a power series, so that the shape of the derivative is a shape paired with a position for the “hole”, and the positions in the derivative are the positions apart from that of the hole. When you push that definition through our various formulae for sums and products, etc., the traditional rules of the calculus appear before your eyes.
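For instance, differentiating ListC: term by term, the rule sends x^n to n · x^(n-1), and sure enough a one-hole context in a list is a choice of one of the n positions for the hole, leaving n - 1 positions holding elements, i.e., the pair of the prefix before the hole and the suffix after it.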

Similarly, a traversable container is one whose position sets are always finite, and hence linearly orderable. One way to achieve that is to factor positions through Fin: effectively, shape determines size, and you can swap out the functional storage of elements for a vector.

I was quite surprised at how far I got turning the theory of containers into somewhat clunky Haskell, before the limits of our current dependently typed capabilities defeated me. I hope it’s been of some use in helping you see the shapes-and-positions structure of the data you’re used to.
