
online trait-based marking

August 12, 2015

In order to provide useful feedback rapidly on those assessment items whose motivation is at least partly formative, I’m thinking about a variety of new dodges to try next year.

I expect I’ll have about 100 first year students taking the class I pretend is an introduction to hardware but which is actually an introduction to functional programming and semantics. There’s a lab session each week in which I experiment on them, and processing the data resulting from those experiments is a major headache. Feedback needs to be rapid if it’s to inform remedial action.

My workflow is pretty poor. So far, the test questions have been done on paper, even though each student is at a computer. My comrades and I mark those pieces of paper, but then it’s not over, because I have to type in all the scores. It takes too long. It often goes wrong.

solution traits

One method I adopted last year was to identify solution traits. Few mistakes are peculiar to one individual. Many good or bad properties of solutions, or traits, as I call them, are shared between many submissions. I learned to save time by giving traits a coding system. Markers just write trait codes on the scripts; the meaning of those codes is given once, on the web page associated with the assessment item. Markers also delivered a score for each part of the exercise and a total score. We tried to map traits to scores consistently, but that’s not always easy to do in one pass. Backtracking was sometimes required. If we had multiple markers, we’d share an office, and the shared trait description list would grow on the whiteboard as new things came up. I discovered that I could be bothered to type in both the score and the trait list for each student, but it was quite a bit of work.

The idea of solution traits as salient aspects of student submissions struck me as something from which we could extract more work. I ought to be mining trait data. Maybe later.

from traits to scores

Markers should not score solutions directly. If we can be bothered to classify traits, we should merely identify the subset of recognized traits exhibited by each question part. Separately, we should give the algorithm for computing a score from that trait subset. That way, we apply a consistent scoring system, and we can tune it and see what happens. Here’s the plan. Each trait is a propositional variable. A scoring scheme is given by a maximum score and a list of propositional formulae each paired with a tariff (which may be positive or negative). The score is the total of the tariffs associated with formulae satisfied by the trait set, capped below by 0 and above by the indicated maximum. We should have some way to see which solutions gain or lose as we fiddle with the scheme.
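To make that concrete, here is a rough Haskell sketch of such a scheme (names plucked from thin air, for illustration only): traits are propositional variables, a scheme is a cap plus a list of tariffed formulae, and the score is the capped total of the tariffs of the satisfied formulae.

import qualified Data.Set as Set

type Trait = String

data Formula
  = Var Trait
  | Not Formula
  | And Formula Formula
  | Or  Formula Formula

-- does a trait set satisfy a formula?
satisfies :: Set.Set Trait -> Formula -> Bool
satisfies ts (Var t)   = t `Set.member` ts
satisfies ts (Not f)   = not (satisfies ts f)
satisfies ts (And f g) = satisfies ts f && satisfies ts g
satisfies ts (Or  f g) = satisfies ts f || satisfies ts g

-- a maximum score and a list of formulae paired with tariffs
data Scheme = Scheme { maxScore :: Int, tariffs :: [(Formula, Int)] }

-- total the tariffs of the satisfied formulae, capped below by 0 and above by the maximum
score :: Scheme -> Set.Set Trait -> Int
score (Scheme cap fs) traits =
  min cap (max 0 (sum [ k | (f, k) <- fs, satisfies traits f ]))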

hybrid online marking jobs

Each marking job is presented to the marker as a web page showing the question, the candidate solution, the specimen solution, and the list of checkboxes corresponding to the traits identified so far. It may be that some magic pixie helpers (be they electronic or undergraduate) have precooked the initial trait assignment to something plausible. That is, online marking jobs have the advantage that we can throw compute power at them whenever solutions can be algorithmically assessed, but we don’t have to construct exercises entirely on that basis. If all is well, the marker need merely make any necessary adjustments to the trait assignment and ship it. Problems may include the need to add traits, so that should be an option, or to request a discussion with the question setter and flag the job for revisiting after that discussion. Concurrent trait addition may result in the need to merge traits: i.e., to define one trait as a propositional formula in terms of the others, with conflicting or cyclic definitions demanding discussion. Oh transactions.

making marking jobs online

How do I get these marking jobs online in the first place? Well, the fact that the students do the problems in labs where each has a computer is some help. Each exercise has a web page, and whenever it makes sense to request a solution which fits easily into a box in an HTML form, we can do that, whether it’s to be marked by machines or by people. But there may be components of solutions which are not so easily mediated: diagrams, mainly. I have previous at forcing students to type ASCII art diagrams in parsable formats, much to their irritation, but I would never dream of making such a requirement under time pressure. I need a way to get 100 students to draw part of an assignment on paper, then make that drawing appear as part of the online marking job with the minimum of fuss.

banners and scanners

I prepare the paper on which they will draw. It has a printed banner across the top, which consists of three lines of text, each with one long word and sixteen three-letter words, lined up in seventeen columns of fixed-pitch text. The three long words in the left column vary with and identify the task. The 48 three-letter words are fixed for the whole year. Each student has an identity code given by one three-letter word from each line, and the web page for the exercise reminds them what this code is. Each position in each line stands for a distinct number in 0..15, and the sum of the three positions is 0 (mod 16), so a clear indication of the chosen word in just two of the lines is sufficient to identify the student. I can print out a master copy of the page, then photocopy it 100 times and hand the pages out. The student individuates their copy by obliterating their assigned words, e.g., by crossing them out (more than once, please).
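As a tiny illustration of the arithmetic (a sketch, not part of the printed banner), the mod-16 condition is what lets any two clearly obliterated lines determine the third:

-- positions on the three banner lines sum to 0 (mod 16),
-- so the third position is determined by the other two
thirdPosition :: Int -> Int -> Int
thirdPosition p q = negate (p + q) `mod` 16

-- e.g. obliterated words at positions 5 and 14 force the remaining
-- line's word to sit at position 13, since 5 + 14 + 13 = 32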

I collect in the pages at the end and I take them to the photocopy room, where the big fast photocopier can deploy its hidden talent: scanning. I get one enormous pdf of the lot. Unpacking the embedded images with pdfimages, I get a bitmap for each page. Using imagemagick, I turn each bitmap into both a jpeg (for stuffing into the marking job) and a tiff (cropped to the banner) which I then shove through tesseract, a rather good and totally free piece of optical character recognition software, well trained at detecting printed words in tiffs. The long words present in the scanned text tell me which exercise the jpeg belongs to; the short words missing from the scanned text tell me (with high probability) whose solution it is. Solutions not individuated by this machinery are queued for humans to assess, but experiments so far leave me hopeful. The workflow should be: stack paper in the document feeder, select scan-to-email, select the relevant daemon’s email address, press go. And we’re ready to bash through the marking jobs online.

mark retrieval by students

Once we markers have done all we can and it’s time to give the students their feedback, we push the release button which bangs the associated dinner gong. Online, the students are faced with a marking job, showing the question, their solution, the specimen solution, and an empty checkbox trait list, all in a form with a submit button. We oblige them to mark their own work in order to be told their given score. When they hit the submit button, they get to see their score, and on which traits their marker’s opinion diverges from their own. If they are at least two-thirds in agreement, a small donation is made to their reputation score (as distinct from their score in the topic of the assessment).

pixie helpers of the carbon kind

Tutorial homeworks can and should also give rise to online marking jobs in just the same way. Part of the exercise can be a web form and the rest done by uploading an image. There are scanners in the lab, but we can also arrange to allow submission of images by email from a phone or a tablet. Once a student has submitted a solution, marking jobs on the exercise become available to them, starting with their own, but also other people’s. After the tutorial, each student should certainly confirm their self-assessment, but preferably also revisit any other marking they did prior to the tutorial. Reflection is reputation-enhancing. Peer-to-peer marking is reputation-enhancing. Note that tutors will have the ability to eyeball homework submissions (if only to detect their absence) but are not paid to spend time marking them.

Expert students who have nothing to gain by taking a test (because they already have full marks in the test’s topic) may, if they have free time at the right time, collect reputation by marking their colleagues’ test submissions. The results they deliver must be moderated, but they do at least help to precook marking jobs for the official markers. Of course, reasonable accuracy is necessary for payment.

traits in relationship to the curriculum graph

An exercise (or parts thereof) should be associated with a topic and constitute evidence towards mastery of said topic. If some parts of an exercise conform to a standard scheme, that’s also useful information. Some traits will relate to the topic (e.g., typical misunderstandings) and some to the scheme, so we gain a good contribution to the trait set for a brand new question just by bothering to make those links. We might also seek to identify regions in the teaching materials for the relevant topic which either reinforce positive traits or help to counteract negative ones: crowdsourcing such associations might indeed be useful.

the workflow is a lot, but it isn’t everything

My main motivation is to try to improve the efficiency of the marking process in order to give feedback more rapidly with less effort but undiminished quality. Thinking about technological approaches to the management of marking is something I enjoy a great deal more than marking itself, so I often play the game despite the little that’s left of the candle. I should watch that. But the shift to marking as an online activity also opens up all sorts of other possibilities to generate useful data and involve students constructively in the process. It’s a bit of an adventure.

curriculum higraphs

August 12, 2015

I’m a teacher, too, remember? If you’re here for the research, this post might fill a much needed gap in your life.

In a typical university curriculum, we arrange the classes (or modules or courses or whatever you call them) in a dependency graph: to do this class, you should already have done that class, and knowledge from that class will be assumed, etc. I find it useful to push the same sort of structure down a level, the better to organise the broad topics within a class, and even sometimes to structure the process of learning a topic. “Where am I in this picture?” is the question the students should ask themselves: we should make it easy for them to find the answer.

what’s the picture?

Suppose the hierarchical structure is given like a file system where each node is a directory. In a real file system, we might indeed represent a node as a directory, but not everything which is a directory will necessarily correspond to a node: each node will need a file which maps its internal curriculum, indicating which subdirectories are subnodes and, for each subnode, which other subnodes are immediate prerequisites. What do I mean by “A is a prerequisite of B”? I mean “Mastery of A is necessary for study of B to be sensible.” There should be no cycles in that graph: mutual relevance does not imply any prerequisite status.

Every such hierarchy can be flattened by giving each node an entry point (a prerequisite of all subnodes) and an exit point (whose prerequisites are the entry point and all subnodes), then linking a node’s entry point from the exit points of its prerequisites. It may also be helpful (but it’s certainly not vital) to indicate for each subnode its external prerequisites, being those nodes elsewhere (neither a sibling nor an ancestor) on which it depends. These should be consistent with the internal curricula, in the sense that the flattening must remain acyclic. Note that we can have D/A/X -> D/B/Y and D/B/W -> D/A/Z, but if so neither A nor B may be considered a prerequisite of the other.
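By way of illustration, here is a rough Haskell sketch of that entry-and-exit flattening (types plucked from thin air; edges point from prerequisite to dependent):

data Vertex k = Entry k | Exit k deriving (Eq, Show)

data Node k = Node
  { key      :: k
  , subnodes :: [Node k]
  , internal :: [(k, k)]   -- (prerequisite subnode, dependent subnode), by key
  }

flatten :: Node k -> [(Vertex k, Vertex k)]
flatten (Node k subs reqs) = concat
  [ [ (Entry k, Entry (key s)) | s <- subs ]   -- the entry point is a prerequisite of every subnode
  , [ (Exit (key s), Exit k)   | s <- subs ]   -- every subnode is a prerequisite of the exit point
  , [ (Entry k, Exit k) ]                      -- as is the entry point itself
  , [ (Exit a, Entry b) | (a, b) <- reqs ]     -- link entry points from the exit points of prerequisites
  , concatMap flatten subs
  ]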

Colour schemes: if you have mastered a node, it’s safety-blue; if you have mastered the prerequisites for a node but not the node itself, it’s activity-green; if you haven’t yet mastered the prerequisites for a node, it’s danger-red. Red/green distinctions are not ideal for colourblind people, so we should write the captions on green nodes in a more emphatic font: they constitute the frontier where progress can be made.
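That rule is simple enough to write down (a minimal sketch, types invented for illustration):

data Status = Blue | Green | Red deriving (Eq, Show)

-- blue: mastered; green: prerequisites mastered but not the node itself; red: otherwise
status :: (node -> Bool) -> (node -> [node]) -> node -> Status
status mastered prereqs n
  | mastered n               = Blue
  | all mastered (prereqs n) = Green
  | otherwise                = Red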

why bother?

If we plot the curriculum graph, to whatever level of detail we can locally muster, we give some structure to the businesses of teaching, learning and assessment. Learning is what the students do: we can have no direct effect on learning, except perhaps surgery, a fact which sometimes goes astray in discussions of ‘learning enhancement’. Teachers can have an impact on the environment in which learners learn, but ultimately we’re passive in their learning process, and they’ll take whatever they take from the experiences we give them. Learners are going to make a journey through the curriculum. Teaching is how we try to propel them on that journey. Assessment is how we and they determine where they have reached on that journey. If we associate teaching and assessment activities with nodes in the curriculum graph, we make clearer their specific utility in the learning process.

Crucially, by identifying the prerequisition structure, we focus attention in from the whole picture to the student’s frontier of green nodes where they can sensibly study but have not yet achieved mastery. Now, by aligning teaching materials to nodes, the students can identify those which are immediately relevant to their progress, and by aligning assessment to nodes, the students can tell whether progress is happening. My purpose is simply to make it as clear as possible where each student is stuck and what they can do about it.

Teaching takes a variety of forms. We might write notes. We might give lectures. We might offer lab sessions or tutorials. We can arrange ad hoc help sessions. We should expect the delivery of lectures to be an admissible linearisation of the prerequisition structure: that is, lectures should also have a frontier. The students should be able to compare their own frontier with the lecture frontier, and with a fuzzed out version of the cohort’s frontiers. It’s a danger sign if a student’s frontier is strictly behind a lecture: they aren’t best placed to get much from it, at least not at first. If notes are available online in advance of lectures, we acquire the means to cue advance reading and to detect it. The more tightly aligned assessment is to the curriculum, the easier it is to prescribe remedial activities.

Moreover, at the beginning of a lecture, I should very much like to know how the class divides as red/green/blue for that topic. Too much red (or, less likely, overwhelming blue) and I should maybe reconsider giving that lecture. Of course, you only get useful tips like that if you’re doing enough assessment (little and often is better than in overwhelming clumps) to gauge the frontier accurately.

what’s a student’s online view?

When a student visits a node, they should see its hierarchy of ancestors and its immediate subnode structure. The status of all of these nodes should be clear. The rest of the information before them should answer as readily as possible the question ‘What can I usefully do?’. They should see a chronological programme of activities for that node, first into the future and then into the past: lectures may have reading associated; tutorials may have homework handins associated. A node may have a number of assessment items associated, some of which may be active: the list of all assessment items at or below the current node should be readily accessible. Every piece of teaching material should be accompanied by the means to record engagement and comprehension, and to leave a note for the attention of staff.

more later

I want to stop writing for now and publish the story so far. I haven’t said much about assessment mechanisms or what mastery might consist of: these issues I have also been thinking about. But I hope I’ve at least given some reason to believe that there is something to be gained by refining the graph structure of the curriculum when it comes to modelling the progress made by learners and promoting useful activity, for us and for them.

Hannah’s sweets and Archie’s socks

June 7, 2015

This question, from a GCSE maths exam, has been causing a stir. There’s been quite a bit about it in the media, although you’ll struggle to find anyone who can be arsed to print part (b). Why is that?

There are n sweets in a bag.
6 of the sweets are orange.
The rest of the sweets are yellow.

Hannah takes at random a sweet from the bag.
She eats the sweet.

Hannah then takes at random another sweet from the bag.
She eats the sweet.

The probability that Hannah eats two orange sweets is 1/3.

(a) Show that n² – n – 90 = 0

(b) Solve n² – n – 90 = 0 to find the value of n

What makes this question so hard? Might we ask it differently? What’s going on? At undergraduate level (first years, mostly) I’ve been setting and marking exams: my lot are not much older than the pupils puzzling with Hannah. It’s good to try to think about these things. (It’s just possible my niece sat that exam, which is an extra cue to be less reflexive and more reflective.)

Correspondingly, I have a variety of crackpot theories, but before I waste your time with them, let me trouble you with some more respectable theory from David Perkins, specifically his paper Beyond Understanding.

It’s not just a matter of what you know. What does what you know enable you to do? Perkins characterises an escalating scale of what I might call knowledge weaponisation (which reminds me of the old joke about solving, stating or colouring in the Navier-Stokes equations):

  1. possessive knowledge can be recited and applied to routine calculations
  2. performative knowledge can be called upon more flexibly to solve problems
  3. proactive knowledge can be deployed outside its domain of obvious applicability

The “Hannah’s sweets” puzzle clearly demands more than possessive knowledge of basic probability, algebraic manipulations of fractions, and solving quadratics. Given part (a), part (b) should be a routine problem. It’s part (a) that sticks out as a bit peculiar, pulling an equation like a rabbit from a hat then asking the students to find a hatful of rabbit shit. It doesn’t demand anything they don’t know, but it isn’t a routine calculation.

If I take my specs off, metaphorically I mean, the detail becomes indistinct but the general story arc remains perceptible. The question goes like this:

  1. There are some unknown quantities, described in words. If they are not represented by named variables, then introduce named variables for them.
  2. There are some mathematical facts about those quantities trapped in a slab of prose. Scan the prose for quantities you can represent as formulae. Extract constraints on those formulae.
  3. Deduce the values of the unknown quantities. Solve the constraints by whatever means is appropriate.

That’s like lots of questions. Once upon a time, there might have been a question like this.

Archie owns six black socks, some number of pink socks and no other socks. He is too lazy to pair them up before he puts them in his sock drawer all in a jumble. He always gets dressed in the dark, picking two socks at random. The day after laundry day, when all the socks are available, he typically wears a pair of black socks one third of the time. How many pink socks does Archie have?

My question, with its stereotyped male protagonist and bias against pink, probably needs a bit of rethinking before we can issue it to the youth of today. But it starts by introducing a mystery quantity, gives us some chat which constrains that quantity, then finishes by asking us to find that quantity. That is, it’s coded as an instance of that standard problem type. Moreover, it doesn’t name a variable, let alone confront us with a quadratic formula, and it doesn’t specifically invoke the concept of probability. Socks motivate the interest in “two the same” better than sweets, but they might predispose us to guess that they are found in even numbers. The latter is a good red herring, and besides, Archie has probably lost the odd sock here and there. But I digress. The issue I’d like to open is the relative difficulty, from a human perspective, of “Hannah’s sweets” and “Archie’s socks”.

You see, my crackpot theory is that “Hannah’s sweets” is knocked off an older question uncannily like “Archie’s socks”, but when revising it, the examiners named the number of sweets and added the intermediate goal to establish the quadratic constraint in an attempt to make the problem require less initiative. If so, I think they also made it more intimidating. However, they also made the last phase of the problem, part (b), completely routine, in the hope that people without the initiative for part (a) would still collect some marks. Whether students notice that they can do part (b) without a clue for part (a) is another matter. Many students stop doing a question at the first part which causes trouble and read only on demand, which means that in “problem” questions, they miss the clues in the later question parts for what might constitute useful information in the prose.

The media have, by and large, not printed part (b) of the question. They make it look like the question is “there are some sweets; prove an equation”, rather than “there are some sweets; find out how many”. See, for example, Alex Bellos’s piece in the Grauniad. Why is this? Crackpot theory time again.

  1. There’s one photo of the question paper which seems to show up a lot. The Guardian credits its source as Twitter. It cuts off after part (a). The story is clearly less time-consuming to deliver just as Twitter churnalism: bothering to find out what the whole question might have been is extra work and who likes doing that? (I got part (b) from the BBC, which shows what you can achieve with a mandatory licence fee.)
  2. The impact is to make a dishonest protest about the extent to which the problem is undermotivated. That shows good politician skills on the part of the original poster, and is a good way to increase the sensation-value of the story. Many secondhand reports compound the problem by publishing not this image but its transcription in plain text, dropping the (a) label to make it look like the question stops at the quadratic equation. They also write n^2 rather than n², necessitating a comment on notation, making it look as if the original question introduced weird notation on the fly, when in fact the extra bend is introduced in quotation.

Especially in its curtailed form, “Hannah’s sweets” looks superficially weirder than “Archie’s socks” because some al-gebra terrorism has been added and the purpose has been taken away.

What both questions have in common is that they are in code. They must be decoded before the algebra can begin. “Hannah’s sweets” uses weaker encryption than “Archie’s socks”. I’m afraid that’s because I wrote “Archie’s socks”: here are some more excerpts from my archive. I quote them selectively, not to give you whole questions, just a sense of my style of distraction.

Disco Mary is rummaging through a collection of old electronic spare parts. She finds a multicolour lamp which has three Boolean inputs, labelled red, green and blue. … The green input signal is connected to the output of a T flip-flop. The blue input signal is connected to the output of another T flip-flop. … Mary’s favourite song is “What A Blue Sunset” by Ray Dayglo and the Thunderclaps, so she decides to wire the lamp and flip-flops to make a repeating sequence of colours, changing with each clock cycle: cyan, blue, yellow, red, then back to cyan and round again.

Madame Arlene teaches the Viennese Waltz. When she is teaching beginners, she finds that she has to shout “1, 2, 3, 1, 2, 3,. . . ” repeatedly for ever, to keep her pupils in time. When she teaches advanced classes, she doesn’t need to shout, because they listen to the music. … Madame Arlene builds a shouting machine and wires it into her music player: it gets a 2-bit unsigned binary number as its input and a clock signal generated by the player in time with the music. At each tick, the shouting machine checks its input: if it gets 0, it shouts nothing; otherwise it shouts the number it gets. … The challenge is to generate the input signal for the shouting machine, so that both counting and silent behaviours can happen.

A control panel has four switches on it, named S, T, U and V, respectively. Each switch sends a 0 signal when its handle points downward and a 1 signal when its handle points upward. This is the current setting: [S down, T up, U down, V up] The control panel is wired both to the lock of a safe and to a burglar alarm. The setting on the control panel represents a number in 4-bit two’s complement binary notation. … To open the safe, you need to construct a circuit which connects the switches S, T, U, V to the release control R which will set R = 1 if and only if the correct combination is entered. An informant has discovered that the correct combination is -6. … [alarm circuit diagram] … You can flick only one switch at a time. You need to flick switches in sequence to change from the current setting to the setting with the correct combination. You must not set off the alarm.

Professor Garble is a researcher in multicore programming techniques, attempting to explain a recent trend in processor performance. ‘Moore’s Law is finished! That’s why processor clock speed has levelled off. And that’s why processors have exponentially increasing numbers of cores.’ Tick the box or boxes for whichever of the following is true. … [] He is correct in none of the above ways.

That’s just the sort of thing that occurs to me in the bath.

All of these questions require you to extract a model of what is going on from some chat. That’s the skill I am trying to test. I make the chat blatantly spurious partly to be clear that it is a “decode the puzzle” question, but mostly because I am habitually facetious. It occurs to me that maybe exams should be no place for facetiousness from them or from me: why should I have a laugh when they’re not having quite so much fun? This question style is at least routinely visited upon them in the course of the class: if you have paid attention to past papers, it is exactly what you expect. Still, I can see how it could be exclusionary, like an in-joke that you detect but don’t get. I think perhaps that I should do less of that stuff in exams and more in class, where there’s less pressure and we can afford a bit of a laugh while learning to decode problems.

But I really do digress. The point about encoded problem questions is that you need to recognize when you are being told Something Important, and what that something is. In that respect, it’s a lot like doing a cryptic crossword: crossword clues use a stylised language that takes time and practice to acquire. I was taught to do crosswords by my father’s colleagues, who always appointed me the writer-inner for the lunchtime crossword and were happy to indulge my queries: what signalled an anagram, an inclusion, a pun, and so on. In the same way, I instinctively decode the declaration “The probability that Hannah eats two orange sweets is 1/3.” as the instruction “Write a formula for the probability that Hannah eats two orange sweets and set it equal to 1/3.”. It’s familiarity with this sort of code which pushes puzzles like “Hannah’s sweets” back down Perkins’s scale of Ps. And that’s teachable. When I see part (a), I’m a bit spooked, and I think “Why are they asking me to deduce this equation? I already have an equation? Why are they not just asking me what n is? Ah, that’s part (b). Oh well, I expect I should be able to deduce the part (a) equation from mine by doing a wee bit of algebra.”. I’m already on course, and part (a) threatens to throw me off it: that’s why I think “Archie’s socks” is easier. I’m reminded of Whitehead’s remark:

It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them. Operations of thought are like cavalry charges in a battle — they are strictly limited in number, they require fresh horses, and must only be made at decisive moments.

I agree, sending for the thought-cavalry is a desperate measure, but it would be a shame if an examination intended to assess knowledge and reward the performative (or even proactive) were entirely devoid of decisive moments. For “Hannah’s sweets”, the thought-cavalry can be avoided if you recognize the way in which you are being instructed. Moreover, thought-cavalry tactics, when needed, are greatly assisted by the key extra exam-puzzle knowledge that the question contains a sufficiency of clues: we don’t get that luxury in real problem-solving. I often tell students that my role is to be simultaneously Blofeld and Q: the fact that they are in a James Bond movie means that there is necessarily a strategy to escape Blofeld’s menaces with Q’s gadgets. I do not expect them to die. In fact, I’m trying to arrange their survival. In that sense, “Hannah’s sweets” already comes with the expectation that whatever information is packed in the opening prose must be sufficient to ensure the equation demanded of us: the game is to unpack it.

We could present “Hannah’s sweets” in a more decoded form. Here’s a kind of stream-of-consciousness translation.

Hannah has a number of sweets. It doesn’t matter that they are sweets or that Hannah is called Hannah. What matters is that there are things and we are going to find out how many: call that n. 6 of the sweets are orange and the rest are yellow. There are two different sorts of thing: “orange” and “yellow” are arbitrary labels whose only role is to be distinct. You are told that there are 6 orange sweets, but not how many yellow sweets (perhaps call that y, so n = 6 + y). Hannah selects two sweets at random without replacement. It doesn’t matter whether she eats them or throws them at pigeons. It does matter that the two selections are random, and that the second selection is made from one fewer than the first. Their randomness tells you that you can base probability on proportion, so you can compute the probability of a particular pair selection by multiplying the probability of the first selection by the probability of the second, given the first. You are told the probability of a particular outcome: she selects two orange sweets with probability 1/3. The probability of getting two oranges clearly depends on n: write down a formula for that probability and set it equal to 1/3. Rearrange that equation (by clearing fractions) to obtain the quadratic equation n² – n – 90 = 0, then factorize the equation to obtain two candidate solutions, only one of which makes sense.
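For the record, the calculation that translation is pointing at runs:

$$\frac{6}{n}\cdot\frac{5}{n-1}=\frac{1}{3}\;\Longrightarrow\;n(n-1)=90\;\Longrightarrow\;n^2-n-90=0\;\Longrightarrow\;(n-10)(n+9)=0\;\Longrightarrow\;n=10\ \text{(rejecting }n=-9\text{)}.$$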

I think it’s reasonable to teach that decoding skill and to expect school pupils to acquire it. The persistent complaint that they haven’t seen anything like it on a past paper seems wide of the mark and, more worryingly, reads as a perceived entitlement to be tested only on possessive knowledge. We should be clear in our rejection of that entitlement. But we should also acknowledge that the boundaries between Perkins’s kinds of knowledge are fluid, and that our responsibility as teachers is to rearrange their positions relative to the student by acting on both, in the direction marked “progress” by Whitehead.

hasochistic containers (a first attempt)

June 6, 2015

There was a bit of chat on Twitter about polynomials and strictly positive functors, which provoked me to think a bit about how much of the theory of containers (in the sense of Abbott, Altenkirch and Ghani) I could cook up in modern day Haskell. It turns out that we can make a plausible stab at the basics, but the wheel falls off when we try to get to more advanced things.

What is a container?

Informally, a container is a functor with a “shapes and positions” presentation: the contained values are given as the image of a function from positions, but the type of positions depends on the choice of the shape. Finite lists of things, for example, can be seen as functions from an n-element set to things, once you’ve chosen the shape n, otherwise known as the length of the list. If a functor f is a container, then its shape set will be isomorphic to f (), or what you get when you choose boring elements that just mark their position. It’s the dependency of the position set on the shape that makes the concept a little tricky to express in Haskell, but if we turn on {-# LANGUAGE KitchenSink #-}, we can get some way, at least.

I define a datatype whose only purpose is to pack up the type-level components of a container.

data (<|) (s :: i -> *) (p :: i -> *) = Dull

where the existential i is the type-level version of shapes, implicitly chosen in each value. Now, s gives the value-level presentation of the type-level shape: it could be a singleton type, giving no more or less information than the choice of i, but it’s ok if some type-level shapes are not represented, or if shapes contain some extra information upon which positions do not depend. What’s important is that indexing enforces compatibility between the shape choice (made by the producer of the container) and the position choices (made by the consumer of the container when elements get projected) in p.

It’s a bit of a hack. I’d like to pack up those pieces as a kind of containers, but I can’t do it yet, because new kinds without promotion hasn’t happened yet. I’ll have to work with types of kind * which happen to be given by <|, saying what components specify a container. Let us now say which container is thus specified.

data family Con (c :: *) :: * -> *
data instance Con (s <| p) x = forall i. s i :<: (p i -> x)

It’s not hard to see why these things are functors. If the container’s element-projector gives one sort of thing, you can make them another sort of thing by postcomposing a one-to-another function.

instance Functor (Con (s <| p)) where
  fmap h (s :<: e) = s :<: (h . e)

Given that fmap acts by composition, it’s easy to see that it respects identity and composition.

Pause a moment and think what Con (s <| p) is giving you. Informally, we get ∃i. (s i) × x^(p i), writing the GADT’s lurking existential explicitly and writing the function type in exponential notation. Reading ∃ as summation, shapes as coefficients and positions as exponents, we see that containers are just power series, generalized to sum over some kind i of type-level things. Polynomials are just boring power series.

Nat, Fin and the ListC container

Let’s just make sure of the list example. We’ll need natural numbers and their singletons to make the shapes…

data Nat = Z | S Nat
data Natty :: Nat -> * where
  Zy :: Natty Z
  Sy :: Natty n -> Natty (S n)

…and the finite set family to make the positions.

data Fin :: Nat -> * where
  Fz :: Fin (S n)
  Fs :: Fin n -> Fin (S n)

The idea is that Fin n is a type with n values. A function in Fin n -> x is like an n-tuple of x-values. Think of it as a flat way of writing the exponential x^n. We have an empty tuple

void :: Fin Z -> x
void z = z `seq` error "so sue me!"

and a way to grow tuples

($:) :: x -> (Fin n -> x) -> Fin (S n) -> x
(x $: xs) Fz      = x
(x $: xs) (Fs n)  = xs n

And now we’re ready to go with our list container.

type ListC = Natty <| Fin

Let’s show how to get between these lists and traditional lists. When I’m working at the functor level, I like to be explicit about constructing natural transformations.

type f :-> g = forall x. f x -> g x

Now we can define, recursively,

listIsoOut :: Con ListC :-> []
listIsoOut (Zy   :<: _) = []
listIsoOut (Sy n :<: e) = e Fz : listIsoOut (n :<: (e . Fs))

If the length is zero, the list must be empty. Otherwise, separate the element in position 0 from the function which gives all the elements in positive positions. To go the other way, give a fold which makes use of our functions-as-tuples kit.

listIsoIn :: [] :-> Con ListC
listIsoIn = foldr cons nil where
  nil               = Zy   :<: void
  cons x (n :<: e)  = Sy n :<: (x $: e)

Container Morphisms

A polymorphic function between containers has to work with an arbitrary element type, so there’s nowhere the output container can get its elements from except the input container. What can such a function do? Firstly, it can look at the input shape in order to choose the output shape; secondly, it should say where in the input container each output position will find its element. We obtain a representation of these polymorphic functions in terms of shapes and positions, without trading in elements at all.

data family Morph (f :: *) (g :: *) :: *
data instance Morph (s <| p) (s' <| p')
  = Morph (forall i. s i -> Con (s' <| p') (p i))

That is, each input shape maps to an output container whose elements are input positions, like a kind of plan for how to build some output given some input. To deploy such a morphism, we need only map input positions to input elements.

($<$) :: Morph (s <| p) (s' <| p') ->
         Con (s <| p) :-> Con (s' <| p')
Morph m $<$ (s :<: e) = fmap e (m s)

The representation theorem for container morphisms asserts that the polymorphic functions between containers are given exactly by the container morphisms. That is, the above has an inverse.

morph :: (Con (s <| p) :-> Con (s' <| p')) -> Morph (s <| p) (s' <| p')
morph f = Morph $ \ s -> f (s :<: id)

Note that if s :: s i, then s :<: id :: Con (s <| p) (p i) is the container storing in every position exactly that position. You can check…

  morph f $<$ (s :<: e)
= {- definition -}
  fmap e ((\ s -> f (s :<: id)) s)
= {- beta reduction -}
  fmap e (f (s :<: id))
= {- naturality -}
  f (fmap e (s :<: id))
= {- definition -}
  f (s :<: (e . id))
= {- right identity -}
  f (s :<: e)

…and…

  morph (Morph m $<$)
= {- definition -}
  Morph $ \ s -> Morph m $<$ (s :<: id)
= {- definition -}
  Morph $ \ s -> fmap id (m s)
= {- functor preserves identity -}
  Morph $ \ s -> m s
= {- eta contraction -}
  Morph m

…or you can deploy the Yoneda lemma.

  Con (s <| p) :-> Con (s' <| p')
= {- type synonym -}
  forall x. Con (s <| p) x -> Con (s' <| p') x
~= {- data definition -}
  forall x. (exists i. (s i, p i -> x)) -> Con (s' <| p') x
~= {- curry -}
  forall x. forall i. s i -> (p i -> x) -> Con (s' <| p') x
~= {- reorder arguments -}
  forall i. s i -> forall x. (p i -> x) -> Con (s' <| p') x
~= {- Yoneda -}
  forall i. s i -> Con (s' <| p') (p i)
= {- data family -}
  Morph (s <| p) (s' <| p')

It’s a fun exercise to show that reverse can be expressed as a Morph ListC ListC without going via the representation theorem.

Closure Under the Polynomial Kit

We can define the kit of polynomial functor constructors as follows.

newtype I         x = I {unI :: x}
newtype K a       x = K {unK :: a}
newtype (:+:) f g x = Sum {muS :: Either (f x) (g x)}
newtype (:*:) f g x = Prod {dorP :: (f x , g x)}

They are Functor-preserving in the only sensible way.

instance Functor I where
  fmap h (I x) = I (h x)
instance Functor (K a) where
  fmap h (K a) = K a
instance (Functor f, Functor g) => Functor (f :+: g) where
  fmap h = Sum . either (Left . fmap h) (Right . fmap h) . muS
instance (Functor f, Functor g) => Functor (f :*: g) where
  fmap h = Prod . (fmap h *** fmap h) . dorP

But we can also show that containers are closed under the same operations.

For the identity, there is one shape and one position, so we need the unit singleton family.

data US :: () -> * where
  VV :: US '()
type IC = US <| US

Wrapping up an element in a container can happen in just one way.

iWrap :: x -> Con IC x
iWrap x = VV :<: const x

It is now easy to show that Con IC is isomorphic to I

iIsoIn :: I :-> Con IC
iIsoIn (I x) = iWrap x

iIsoOut :: Con IC :-> I
iIsoOut (VV :<: e) = I (e VV)

For constant polynomials, there are no positions for elements, but there is useful information in the shape. Abbott, Altenkirch and Ghani take the shape type to be the constant and the position set to be everywhere empty. To follow suit, we’d need to use the singleton type for the constant, but that’s more Haskell work than necessary (unless you import Richard Eisenberg’s excellent library for that purpose). We can use the unit type () as the type-level shape and store the constant only at the value level.

data KS :: * -> () -> * where
  KS :: a -> KS a '()

Again, the position set must be empty

data KP :: u -> * where
kapow :: KP u -> b
kapow z = z `seq` error "so sue me!"
type KC a = KS a <| KP

We can put an element of the constant type into its container.

kon :: a -> Con (KC a) x
kon a = KS a :<: kapow

We thus obtain the isomorphism.

kIsoIn :: K a :-> Con (KC a)
kIsoIn (K a) = kon a

kIsoOut :: Con (KC a) :-> K a
kIsoOut (KS a :<: _) = K a

For sums, you pick a branch of the sum and give a shape for that branch. The positions must then come from the same branch and fit with the shape. So we need the type-level shape information to be an Either and make value-level things consistent with the type-level choice. That’s a job for this GADT.

data Case :: (i -> *) -> (j -> *) -> (Either i j) -> * where
  LL :: ls i -> Case ls rs (Left i)
  RR :: rs j -> Case ls rs (Right j)

Now, the sum of containers is given by consistent choices of shape and position.

type family SumC c c' :: * where
  SumC (s <| p) (s' <| p') = Case s s' <| Case p p'

That is, the choice of value-level shape fixes the type-level shape, and then the positions have to follow suit. If you know which choice has been made at the type level, you can project safely.

unLL :: Case s s' (Left i) -> s i
unLL (LL s) = s
unRR :: Case s s' (Right j) -> s' j
unRR (RR s') = s'

In turn, that allows us to define the injections of the sum as container morphisms.

inlC :: Morph (s <| p) (SumC (s <| p) (s' <| p'))
inlC = Morph $ \ s -> LL s :<: unLL
inrC :: Morph (s' <| p') (SumC (s <| p) (s' <| p'))
inrC = Morph $ \ s' -> RR s' :<: unRR

Now we’re ready to show that the container sum is isomorphic to the functorial sum of the two containers.

sumIsoIn :: (Con (s <| p) :+: Con (s' <| p')) :-> Con (SumC (s <| p) (s' <| p'))
sumIsoIn = either (inlC $<$) (inrC $<$) . muS

sumIsoOut :: Con (SumC (s <| p) (s' <| p')) :-> (Con (s <| p) :+: Con (s' <| p'))
sumIsoOut (LL s  :<: e) = Sum (Left (s :<: (e . LL)))
sumIsoOut (RR s' :<: e) = Sum (Right (s' :<: (e . RR)))

Now, for products of containers, you need a pair of shapes, one for each component, so the type-level shape also needs to be a pair.

data ProdS :: (i -> *) -> (j -> *) -> (i, j) -> * where
  (:&:) :: ls i -> rs j -> ProdS ls rs '(i, j)

An element position in such a container is either on the left or on the right, and then you need to know the position within that component.

data ProdP :: (i -> *) -> (j -> *) -> (i, j) -> * where
  PP :: Either (lp i) (rp j) -> ProdP lp rp '(i , j)
unPP :: ProdP lp rp '(i , j) -> Either (lp i) (rp j)
unPP (PP e) = e

The product is then given by those pieces, and the projections are container morphisms.

type family ProdC c c' :: * where
  ProdC (s <| p) (s' <| p') = ProdS s s' <| ProdP p p'
outlC :: Morph (ProdC (s <| p) (s' <| p')) (s <| p)
outlC = Morph $ \ (s :&: _) -> s :<: (PP . Left)
outrC :: Morph (ProdC (s <| p) (s' <| p')) (s' <| p')
outrC = Morph $ \ (_ :&: s') -> s' :<: (PP . Right)

Pairing is implemented by either on positions.

pairC :: Con (s <| p) x -> Con (s' <| p') x -> Con (ProdC (s <| p) (s' <| p')) x
pairC (s :<: e) (s' :<: e') = (s :&: s') :<: (either e e' . unPP)

Again, we get an isomorphism with functorial products.

prodIsoIn :: (Con (s <| p) :*: Con (s' <| p')) :-> Con (ProdC (s <| p) (s' <| p'))
prodIsoIn (Prod (c, c')) = pairC c c'

prodIsoOut :: Con (ProdC (s <| p) (s' <| p')) :-> (Con (s <| p) :*: Con (s' <| p'))
prodIsoOut c = Prod (outlC $<$ c, outrC $<$ c)

So, the polynomials are, as expected, containers.

W-types

The least fixpoint of a container is what Per Martin-Löf calls a W-type.

newtype W c = In (Con c (W c))

Lots of our favourite datatypes are W-types. E.g., unlabelled binary trees:

 
type Tree = W (SumC (KC ()) (ProdC IC IC))

Define the constructors like this.

leaf :: Tree
leaf = In (inlC $<$ kon ())
node :: Tree -> Tree -> Tree
node l r = In (inrC $<$ pairC (iWrap l) (iWrap r))

But there are functors which are not containers: the continuation monad is the classic example. In a container, the element type always stays right of the arrow. Some people like to classify the polarity of parameter occurrences in a type operator as “positive” or “negative”. A top level occurrence is positive. Sum and product preserve polarity. Function types preserve polarity in the target but flip polarity in the domain. A type operator whose parameter occurs only positively will be a covariant functor; if the parameter occurs only negatively, it will be a contravariant functor. A “strictly positive” occurrence is not only positive: the even number of times its polarity has been flipped is zero. A type operator whose parameter occurs only strictly positively will be a container. Least fixpoints of functors have recursive “fold operators”, but least fixpoints of containers guarantee the existence of induction principles: the difference between the two matters when you’re dependently typed.
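For instance, the usual continuation monad puts its parameter to the left of an arrow, twice over: positive, but not strictly positive, so Cont r is a covariant functor without being a container.

-- x occurs under two arrows to the left: positive, but not strictly positive
newtype Cont r x = Cont {runCont :: (x -> r) -> r}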

Hancock’s Tensor

Here’s an operation you can define on containers, but not on Haskell functors more generally. Peter Hancock defines the tensor of two containers thus

type family TensorC c c' :: * where
  TensorC (s <| p) (s' <| p') = ProdS s s' <| ProdS p p'

It’s a bit like a product, in that shapes pair up, but when we look at the positions, we don’t make a choice, we pick a pair. Think of the two components as coordinates in some sort of grid. Indeed, consider what TensorC ListC ListC might be. It’s the container which gives you the type of rectangular matrices: “lists of lists-all-the-same-length”.
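We can write that down directly (choosing a name out of thin air):

-- a pair of lengths for the shape, a (row, column) coordinate for each position
type MatrixC = TensorC ListC ListC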

Roland Backhouse wrote a paper a while back deriving properties of natural transformations on “F-structures of G-structures-all-the-same-shape”, but he couldn’t give a direct mathematical translation of that idea as an operation on functors, only by restricting the composition F.G to the unraggedy case. Hancock’s tensor gives us exactly that notion for containers.

You can degenerate tensor into functor composition…

newtype (f :.: g) x = C {unC :: f (g x)}

layers :: Con (TensorC (s <| p) (s' <| p')) :-> (Con (s <| p) :.: Con (s' <| p'))
layers ((s :&: s') :<: e) = C (s :<: \ p -> s' :<: \ p' -> e (p :&: p'))

…but you don’t have to do it that way around, because you can transpose a tensor, thanks to its regularity:

xpose :: Morph (TensorC (s <| p) (s' <| p')) (TensorC (s' <| p') (s <| p))
xpose = Morph $ \ (s :&: s') -> (s' :&: s) :<: \ (p' :&: p) -> (p :&: p')

Fans of free monads may enjoy thinking of them as the least fixpoint of the functorial equation

Free f = I :+: (f :.: Free f)

If f is a container Con (s <| p), you can think of s as describing the commands you can issue and p as the responses appropriate to a given command. The free monad thus represents an interactive mode session where at each step you decide whether to stop and report your result or to issue another command, then continue with your session once you have the response.
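As a rough sketch of that reading (the names FreeC, Stop, Step and bindC are plucked from thin air):

data FreeC c x
  = Stop x                      -- stop and report a result
  | Step (Con c (FreeC c x))    -- issue a command (a shape), continue according to the response (a position)

bindC :: FreeC (s <| p) x -> (x -> FreeC (s <| p) y) -> FreeC (s <| p) y
bindC (Stop x)         k = k x
bindC (Step (c :<: e)) k = Step (c :<: \ r -> bindC (e r) k)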

What’s not so well known is that the free applicative is given exactly by replacing composition with tensor. The free applicative gives you a batch mode session, where your commands are like a deck of punch cards: the sequence is fixed in advance, and you report your result once you have collected your lineprinter output, consisting of all the responses to the commands.

Container Composition?

We have tensor for containers, but what about composition? Abbott, Altenkirch and Ghani have no difficulty defining it. The shape of a composite container is given exactly by an “outer” container whose elements are “inner” shapes. That way, we know the shape of the outer structure, and also the shape of each inner structure sitting at a given position in the outer structure. A composite position is a dependent pair: we have to find our way to an inner element, so we first pick an outer position, where we will find an inner structure (whose shape we know), and then we pick an inner position in that structure.

So now, we’re Haskelly stuffed. We need to promote Con itself (functions inside!). And we need its singletons. GHC stops playing.

How will the situation look when we have Π-types (eliminating the need for singletons) and the ability to promote GADTs? I don’t know. We’ll still need some higher-order functions at the type level.

Winding Up

Containers are an abstraction of a particularly well behaved class of functors, characterized in a way which is very flexible, but makes essential use of dependent types. They’re a rubbish representation of actual data, but they allow us to specify many generic operations in a parametric way. Rather than working by recursion over the sum-of-products structure of a datatype, we need only abstract over “shapes” and “positions”.

E.g., when positions have decidable equality, a container is (infinitely) differentiable (smooth?): you just use the usual rule for differentiating a power series, so that the shape of the derivative is a shape paired with a position for the “hole”, and the positions in the derivative are the positions apart from that of the hole. When you push that definition through our various formulae for sums and products, etc, the traditional rules of the calculus appear before your eyes.
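In shapes-and-positions notation, writing S ◁ P for the container with shapes S and positions P, that rule reads roughly:

$$\partial\,(S \lhd P) \;=\; \Bigl(\textstyle\sum_{s\,:\,S} P\,s\Bigr) \;\lhd\; \lambda (s,h).\; P\,s \setminus \{h\}$$

that is, a derivative shape is a shape s together with a hole position h, and the positions over it are the positions of s other than h.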

Similarly, a traversable container is one whose position sets are always finite, and hence linearly orderable. One way to achieve that is to factor positions through Fin: effectively, shape determines size, and you can swap out the functional storage of elements for a vector.

I was quite surprised at how far I got turning the theory of containers into somewhat clunky Haskell, before the limits of our current dependently typed capabilities defeated me. I hope it’s been of some use in helping you see the shapes-and-positions structure of the data you’re used to.

One Herald Layout

May 16, 2015

Layout is a source of violent disagreement in programming languages. I’ve written about it before, in the context of Epigram. But now I’m even more overwhelmed than I was then, and I’m thinking about working on several languages, which makes me less inclined to dwell on their individual properties and more inclined to concentrate on what I need. I’m certainly not pitching to solve everybody’s layout problems once and for all: I’ll be lucky if I can even manage my own. Let’s try to boil the issues down.

Some lines are long. I grew up amongst the paraphernalia of the punchcard era, and for the most part, I used 80-column displays. To this day, when I’m hacking, I get uncomfortable if a line of code is longer than 78 characters, and I enjoy the way keeping my code narrow allows me to put more buffers of it on my screen. But however you play it, it’s far from odd to find that a logical line of code stretches wider than your window, so that it might be visually more helpful if it made more use of the vertical dimension. Indenting ‘continuation’ lines more than the ‘header’ line is a standard way to break the latter into pieces which fit.

Some lines are subordinate. Whether they are sublists of a list, or the equations of a locally defined function, or whatever, a textual construct sometimes requires a subordinate block of lines. It’s kind of usual to indent the lines which make up a subordinate block.

How do you tell whether an indented line is a continuation line or a header line within a subordinate block?

I’m trying to find a simple way to answer that question, and what I’m thinking is that I’d like a symbol which marks the end of ‘horizontal mode’, where indented lines continue the header, and the beginning of ‘vertical mode’, where indented lines (each in their own horizontal mode) belong to a subordinate block. My candidate for this symbol is -: just because it looks like a horizontal thing then some vertical things. I’m going to try to formulate sensible rules to identify the continuation and subordination structure.

An indentation level, or Dent, is an element of the set of natural numbers extended by bottom and top, with bottom < 0 < 1 < 2 < … < top. An i-Block is a possibly empty sequence of j-Chunks, each for some j > i. A j-Chunk starts with a line indented by j (its header) and extends over the following lines which are indented more deeply than j. Within a given j-Chunk, each line is considered a continuation of the header until the first occurrence of -:, at which point the remainder of the j-Chunk is interpreted as a subordinated j-Block, with any text to the right of -: treated as a top-Line. A document is a bottom-Chunk.

And, er, that’s it. At least for the basic picture.

Higgledy piggledy
  boggle bump splat
Most of the post
  clusters close on the mat -:
  the phone bill
  the gas bill
  the lecce
  the junk
  the bags to dispose of
    old clothes from your trunk
The tide you divide
  to get into your flat
Will just gather dust
  if you leave it like that.

means

{Higgledy piggledy boggle bump splat; Most of the post clusters close on the mat {the phone bill; the gas bill; the lecce; the junk; the bags to dispose of old clothes from your trunk}; The tide you divide to get into your flat; Will just gather dust if you leave it like that.}
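For what it’s worth, here is a rough Haskell sketch of just that basic rule, assuming the herald -: only ever ends a line (as in the example) and reading a document as a block of top-level chunks:

import Data.Char (isSpace)
import Data.List (isSuffixOf)

type Dent = Int                                   -- bottom and top elided in this sketch
data Chunk = Chunk [String] Block deriving Show   -- header and continuation lines, then sub-block
type Block = [Chunk]

indent :: String -> Dent
indent = length . takeWhile isSpace

-- an i-Block: a run of chunks whose headers are indented more deeply than i
block :: Dent -> [String] -> (Block, [String])
block i lls@(l : _) | indent l > i =
  let (c, rest)   = chunk lls
      (cs, rest') = block i rest
  in  (c : cs, rest')
block _ lls = ([], lls)

-- a chunk: a header at dent j, continuations indented more deeply than j,
-- switching to a subordinate j-Block after a line ending in -:
chunk :: [String] -> (Chunk, [String])
chunk (l : ls)
  | "-:" `isSuffixOf` l = let (b, rest) = block j ls in (Chunk [l] b, rest)
  | otherwise           = horiz [l] ls
  where
    j = indent l
    horiz acc (m : ms)
      | indent m > j && "-:" `isSuffixOf` m =
          let (b, rest) = block j ms in (Chunk (reverse (m : acc)) b, rest)
      | indent m > j = horiz (m : acc) ms
    horiz acc ms = (Chunk (reverse acc) [], ms)
chunk [] = (Chunk [] [], [])

-- read a whole document, skipping blank lines
document :: String -> Block
document = fst . block (-1) . filter (not . all isSpace) . lines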

(Actually, it might make sense to allow a matching :- to act as an ‘unlayout herald’. The idea is that a Block is a bunch of Chunks and a Chunk is a bunch of Components, and a Component is either a lexical token or a subordinated Block. If a -: has no matching :-, it’s a subordinated Block Component at the end of its enclosing Chunk; the matching :- indicates the end of the subordinated Block Component, after which the Chunk continues.)

By way of an afterthought, why not take Dent to be the integers extended by bottom and top? A line which looks like this (with at least 3 dashes and any amount of whitespace either side)

/--------/

shifts the indentation origin to the left by number-of-dashes-plus-2, thus increasing the indentation of the leftmost physical column by the corresponding amount. A line like

\--------\

shifts the origin the other way, and if you overdo it, the leftmost physical column will have negative indentation, but not as negative as bottom. That’s one way to keep your subordinates from drifting too far to the right.

model the world; view your data; control their chaos

April 9, 2015

We’re heading into the time of year where institutional data integration miseries make a mockery of academic productivity as we scrabble to assemble the outcome of a variety of assessment processes into something that might resemble the basis for a judgment.

I share a module with a colleague (at Strathclyde we use the word “class”, but that might become confusing, given what follows). I do a lot more online assessment than he currently does, so it suits me to key all my student data by username. My colleague keys all his assessment data by registration number. Our institution’s Virtual Learning Environment keys students differently again, for exercises involving anonymous marking. All of these keys are just strings. How do we achieve coherence? Laboriously.

My part of the module is chopped up into topics. Each topic has associated classroom-delivered paper tests and some online materials. The information about how students have performed in these various components is managed rather heterogeneously. There might be one file for each paper test. Meanwhile, students each have their own directory recording their interaction with online materials, with a subdirectory for each topic, containing files which relate to their performance in individual assessment items. Some of these files have formats for which I am to blame; other file formats I have thrust upon me. I need to be able to find out who did how well in what by when. I need logic.

And I’m asking myself what I usually ask myself when I need logic: ‘How much of the logic I need can I get from types?’. I’m fond of decidable typechecking, and of various kinds of type-directed program construction (which I much prefer to program-directed type construction). Can we have types for data which help us to audit, integrate and transform them in semantically sensible ways? That’s the kind of problem that we dependent type theorists ought to be able to get our teeth into. But these everyday spreadsheet-this, database-that, log-file-the-other data are really quite unlike the indexed inductive tree-like datatypes which we are used to lovingly crafting. “Beautiful Abstract Syntax Trees Are Readily Definable” was one of the names we thought about calling Epigram, until we checked the acronym. Dependent type theory is not just sitting on a canned solution to these real world data problems, ready to deploy. Quite a lot of headscratching will be necessary.

‘What’s a good type for a spreadsheet?’ is a reasonable question. ‘What’s a good dependent type for a spreadsheet?’ is a better question. ‘Upon what might a dependent type for a spreadsheet depend, and how much would that really have to do with spreadsheets per se?’ is a question which might lead to an idea. When diverse files and online sources all contribute information to some larger resource, we need to establish a broader conceptual framework if we are to work accurately with the individual components. The spreadsheets, database records, forms, etc., are all views or lenses into a larger system which may never exist as a single bucket of bits, but which we might seek to model.

So what I’m looking for is a dependently typed language of metadata. We should be able to write a model of the information which ought to exist, and we should be able to write views which describe a particular presentation of some of the data. A machine should then be able to check that a model makes sense, that a view conforms to the model, and that the data is consistent with the view. Given a bunch of views, we should be able to compute whether they cover the model: which data are missing and which are multiply represented. The computational machinery to check, propagate or demand the actual data can then be constructed.

I had a thought about this last summer. Picking some syntax out of thin air, I began to write things like

class Student

class Module

for Module -:
  class Test

for Student, Module -:
  prop Participant

What’s going on? I’ve made four declarations: three “classes”, and one relation. A “class” is a conceptual variety of individuals. Classes can be (relatively) global, such as students or modules. Classes can be localized to a context, so that each module has its own class of tests.

The “for” construct localizes the declarations which are subordinated by indentation after the layout herald “-:”. It’s tidier to say that each module has a bunch of tests than that tests exist globally but each test maps to a module. Moreover, it means that tests in different modules need not share a keyspace.
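
In Haskell terms (all the names here are mine), the localization point is roughly the difference between these two shapes.

  import qualified Data.Map as Map

  type ModuleId = String
  type TestId   = String
  data Test     = Test                      -- details to come

  -- tests localized to their module: keyspaces are per module
  type Tests = Map.Map ModuleId (Map.Map TestId Test)

  -- rather than a single global keyspace, with each test mapped back to its module:
  -- type Tests = Map.Map TestId (ModuleId, Test)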

A class is a finite enumeration whose elements are not known at declaration time. A prop is a finite enumeration whose elements are not known at declaration time, but it is known that there’s at most one element. There’s at most one way in which a student can be a participant in a module.
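
One crude Haskell reading of that distinction (again, the names are mine):

  import qualified Data.Set as Set

  -- a class: finitely many individuals, but we only learn which ones as data arrives
  newtype Class a = Class (Set.Set a)

  -- a prop: a class known in advance to have at most one element,
  -- so all that can vary is whether it is inhabited
  newtype Prop = Prop Bool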

So far, I haven’t said anything about what these wretched individuals might look like. So,

for Student -:
  email     ! String
  username  ! String
  regNo     ! String
  surname   : String
  forenames : String

I’ve declared a bunch of things which ought to exist in the context of an individual student. The ones with “!” are intended to be keys for students. That’s to say any sensible view of student data should include at least one student key, but it doesn’t really matter which. Of course, with a little more dependently typed goodness, we could enforce formatting properties of email addresses, usernames and registration numbers…some other time. The point is that by introducing abstract notions of individual, outside of the way that those individuals can be keyed, we provide a hook for data integration.
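
For illustration only, here is one Haskell shape the student declaration might take, with the “!” fields treated as alternative keys on which data sources may be joined; the StudentKey type is my own invention.

  data Student = Student
    { email     :: String   -- !
    , username  :: String   -- !
    , regNo     :: String   -- !
    , surname   :: String
    , forenames :: String
    }

  data StudentKey = ByEmail String | ByUsername String | ByRegNo String

  -- a record matches a key however that key happens to be expressed
  matchesKey :: StudentKey -> Student -> Bool
  matchesKey (ByEmail e)    s = email s    == e
  matchesKey (ByUsername u) s = username s == u
  matchesKey (ByRegNo r)    s = regNo s    == r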

I’m kind of saying what stuff should be stored in a “master record” for each student, but I don’t expect to know all the fields when I introduce the concept of a student.

Another thing that’s fun about bothering to introduce abstract classes of individual is that contextualization can be much more implicit. We do not need to name individuals to talk about stuff that’s pertinent to a typical individual, which means we can write higher-order things in a first order way and handle more of the plumbing by type-based lookup.

class Department

for Module -:
  department : Department
  moduleId   ! String
  class Test -:
    item    ! String
    max     : [0..]
    weight  : [0..]

for Student, Module, Participant, Test -:
  prop Present -:
    score : [0..max]

Here, I show how to associate a department with a module, after the fact. I also introduce tests for each module, each with a maximum score: the use of “-:” in the “class Test” declaration just elides an immediately subsequent “for Test -:”.

Correspondingly, for each student participating in a module (and those students might not be from the same department as the module), and for each test, it makes sense to wonder if the student showed up to the test and where in the range of possible marks they scored.

I should be able to write something like

Module/department

to mean, given a contextualizing department, just the modules for that department.
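
In Haskell terms, that slash might amount to no more than a filter; the record shape here is my own guess.

  newtype Department = Department String deriving Eq

  data Module = Module
    { moduleId   :: String
    , department :: Department
    }

  -- Module/department: given a contextualizing department, just its modules
  modulesFor :: Department -> [Module] -> [Module]
  modulesFor d = filter ((d ==) . department)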

What’s a view of this information? It might be something like

one Module [moduleId]       | for Test [item
                            |           ----
                            |           max ]
----------------------------+--------------------------------
for Student, Participant    | val : Percentage
  [surname|forenames|regNo] | if Present -:
                            |   [score]
                            |   val = weight * score / max
                            | else -:
                            |   ["A"]
                            |   val = 0

I’m sure we can negotiate over the two-dimensionality of the syntax (as long as we prioritise reading over writing), but that’s the picture. Scoping goes downward and rightward. The brackets show you what you actually see, which must exist in the given scope. The keyword “one” indicates that we are working within just one module, keyed by the given code. Meanwhile “for” requires us to tabulate all the individuals for whom an environment can be constructed to match the given context.

Meanwhile, the cells in the middle of the table will enable the computation of new local information, “val”. Presence or absence is signified by a score or the constant “A” (which is checkably distinct from all allowable scores), and the definition of “val” is given accordingly.
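
A sketch of that computation, with Presence standing in for the Present/“A” distinction (the Haskell names are mine):

  data Presence = Present Rational | Absent   -- Absent corresponds to "A"
    deriving (Eq, Show)

  val :: Rational    -- weight of the test
      -> Rational    -- max of the test
      -> Presence
      -> Rational
  val weight maxScore (Present score) = weight * score / maxScore
  val _      _        Absent          = 0

So a test weighted 20 with max 50 and a score of 35 contributes val 20 50 (Present 35) = 14.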

Note that I have not indicated whether this view is a query or a form. In fact, I have made sure it is valid in both roles. I’d like to be able to instruct the computer to initialize a spreadsheet with as much of this information as is available from other sources. I usually have to do that with cut and paste! After the fashion of pivot tables, I should be able to specify aggregations over rows and columns which are appropriate (e.g., the average score for each test, the total score for each student).
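
Those pivot-style aggregations are straightforward once the rows exist; here is a sketch over hypothetical rows of (student key, test item, val).

  import qualified Data.Map as Map

  type Rows = [(String, String, Rational)]   -- (student key, test item, val)

  -- total score for each student
  totalPerStudent :: Rows -> Map.Map String Rational
  totalPerStudent rows =
    Map.fromListWith (+) [ (student, v) | (student, _, v) <- rows ]

  -- average score for each test
  averagePerTest :: Rows -> Map.Map String Rational
  averagePerTest rows =
    Map.map (\(s, n) -> s / fromIntegral n) $
      Map.fromListWith (\(s1, n1) (s2, n2) -> (s1 + s2, n1 + n2))
        [ (item, (v, 1 :: Int)) | (_, item, v) <- rows ]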

Lots of the ingredients for these tools are in existence already, and it’s clear that my knowledge of them is sadly lacking, having stumbled towards data from the direction of dependent type theory. I seek to educate myself, and help in that regard is always appreciated. Of course, informally, I’m taught about some of the problems by the poor technology with which I face the mundane realities of my existence, and I understand that I can change me more easily than I can change the world. I don’t expect institutional buy-in (I’ll have a go, right enough), but I don’t need it. The point of the modelling language is to build myself a bubble with a permeable membrane: the things from outside the bubble can have sense made of them (by giving a view which describes the role of external data); the things constructed inside the bubble make sense intrinsically (because they were constructed in a model-directed way). Fewer strings! More things!

Edit: I should have included a link to my slides on this topic, for a talk delivered at Microsoft Research and at York.

Warming up to Homotopy Type Theory

April 1, 2015

“Why do you hate homotopy type theory?” is a question I am sometimes asked, but I never answer it, because the question has an inaccurate presupposition. I am not happy when people forget that function extensionality, a key benefit of HoTT, was already available in Observational Type Theory. I am not happy when people disregard the convenience of having a clearly delimited fragment of one’s propositions where proofs can be identified definitionally. I am not happy when people act as if homotopy type theory already works when, without an internal notion of computation which gives canonical forms (like OTT has), it doesn’t…yet. But I’m pretty sure it will acquire such a notion. So, for the avoidance of doubt, I do not hate homotopy type theory: I hate homotopy type theorists almost as much as I hate myself, which is why I have decided to become a homotopy type theorist as a kind of therapy. I’ve been thinking quite a lot, of late, about how to make homotopy type theory go, being, as I am, an incorrigible tinkerer with computational mechanisms, and I think I’m onto something. By way of taking Ariadne’s advice, I thought I’d bruce out some ravings about where I think I’ve got to. I suspect I’m not going to be very helpful to people who aren’t already HoTT-headed, and probably not all that helpful to those who are, so let me manage expectations downwards: I’m not attempting pedagogy, I’m just thinking aloud.

As some of you may know, I’m from Northern Ireland, a place which naturally promotes homotopic (not that they would call it that, in case someone thought they were gay, given that malapropism is the fourth most popular national pastime after (in reverse order) homophobia, sectarianism and emigration) considerations as a consequence of the inconveniently large lake in the middle of it. “Who lives in the big blue bit?”(*), asked former Secretary of State, Sir Humphrey Atkins, when presented with a map of the place, shaded in accordance with sectarian affiliation. But I digress. Lough Neagh (which is pronounced something roughly like “Loch Nay”, giving us Northern Irish one more “ough” than the rest of yous) is the hole where the Isle of Man used to be until Fionn mac Cumhaill threw it at someone and missed.

[image: Northern Ireland map]

But the point is that if you’re going from Antrim to Enniskillen, you’ve got to go round Lough Neagh one way or the other, and no matter how much you stretch or divert your route, if you stay dry, you won’t deform one way into the other. And indeed, if you happen to be in Antrim and you ask for directions to Enniskillen, they’ll most likely tell you “If I was going to Enniskillen, I wouldn’t start from here.” Much in the same way (up to deformation, I hope), if I was going to Homotopy Type Theory, I wouldn’t start from the Calculus of Inductive Constructions.

Why not? Because we start from the strange idea that equality is some sort of inductive definition

  Id (X : *)(x : X)(y : X) : *
  refl (X : *)(x : X) : Id X x x

which already places too much faith in the disappointing accident that is the definitional equality of an intensional type theory, and then we add an eliminator with a computation rule which nails our moving parts to said accident…

  J (X : *)(x : X)(P (y : X)(q : Id X x y) : *)(m : P x (refl X x))
    (y : X)(q : Id X x y) : P y q
  J _ _ _ m _ (refl _ _) = m

…right? What does that do? It says that whenever the proof of the equation is by reflexivity, the value m to be transported is already of the right type, so we don’t even need to think about what we have to do to it. If we are willing to consider only do-no-work transportation, we will never be able to escape from the definitional equality. (Note that the purpose of pattern matching q against refl is just to have a sound but not complete check that x is definitionally equal to y. If you like proof irrelevance (much more fun than K, for example), then you can just ignore q and decide definitional equality of x and y. I mean, if you’ve gone to the trouble of engineering a decidable definitional equality, you might as well get paid for it.)

But we don’t stick with definitional equality, and thank goodness for that. Observational Type Theory gives you structural equality on types and thus do-no-work-after-program-extraction transportation, but for open terms (and to be conservative over intensional type theory), we needed to refocus our efforts around the machinery of transportation, so that nontrivial explanations of equality result in nontrivial computations between types. That’s enough to get extensionality working. But univalence (the type of Antrim stuff is equal to the type of Enniskillen stuff if you have a way to transport it either way, such that a “there and back trip” makes no change to the stuff and orbits Lough Neagh a net total of zero times) is a whole other business, because now we can’t get away with just looking at the types to figure out what’s going on: we have to look at the particular route by which the types are connected.

(Local simile switch. Thorsten Altenkirch, Wouter Swierstra and I built an extensionality boat. We might imagine that one day there will be a fabulous univalence ship: the extensionality boat is just one of its lifeboats. But nobody’s built the ship yet, so don’t be too dismissive of our wee boat. You might learn something about building ships by thinking about that boat. I come from Belfast: we built the Titanic and then some prick sailed it into an iceberg because they made a valid deduction from a false hypothesis.)

So, what’s the plan? Firstly, decompose equality into two separate aspects: type equivalence and its refinement, value equality. The former is canonical.

  (X : *) {=} (Y : *)  :  *

I’m using braces rather than angle brackets only because I have to fight HTML.

The latter is computed by recursion over the former.

  (x : X) =[ (Q : X {=} Y) ]= (y : Y)  :  *

That is, the somewhat annotated mixfix operator =[…]= interprets a type isomorphism between types as the value equality relation thus induced on those types. I shall HoTT in Rel for this.

Value equality is thus heterogeneous in a way which necessarily depends on the type isomorphism which documents how to go about considering the values comparable. Let’s be quite concrete about that dependency. We get to look at Q to figure out how to relate x and y.

Reflexivity is not a constructor of {=}. Rather, every canonical type former induces a canonical constructor of {=}. In particular

  *^            :  * {=} *
  X =[ *^ ]= Y  =  X {=} Y

We may add

  sym (Q : X {=} Y)  :  Y {=} X
  y =[ sym Q ]= x    =  x =[ Q ]= y

  trans (Y : *)(XY : X {=} Y)(YZ : Y {=} Z) : X {=} Z
  x =[ trans Y XY YZ ]= z  =  Sigma Y \ y -> x =[ XY ]= y * y =[ YZ ]= z
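
To see the shape of the plan in miniature, here is a deliberately crude Haskell toy; everything in it, from the closed universe of Val down to the eqAt function, is my own drastic simplification, not the real construction. Isomorphisms are data, transport is computed by recursion over them, and the value equality an isomorphism induces is read off from that transport.

  data Val = VBool Bool | VPair Val Val
    deriving (Eq, Show)

  data Equiv
    = Refl                       -- the structural isomorphism of a type with itself
    | Sym Equiv                  -- flips the relation, as above
    | Trans Equiv Equiv          -- composes via a midpoint, as above
    | Swap                       -- a nontrivially packaged isomorphism: pairs, swapped

  -- transport across an isomorphism: morally, the first component of "path"
  transport :: Equiv -> Val -> Val
  transport Refl         v           = v
  transport (Sym q)      v           = untransport q v
  transport (Trans q q') v           = transport q' (transport q v)
  transport Swap         (VPair s t) = VPair t s
  transport Swap         v           = v   -- non-pairs slip through only because this toy is untyped

  untransport :: Equiv -> Val -> Val
  untransport Refl         v           = v
  untransport (Sym q)      v           = transport q v
  untransport (Trans q q') v           = untransport q (untransport q' v)
  untransport Swap         (VPair t s) = VPair s t
  untransport Swap         v           = v

  -- the value "equality" a given isomorphism induces; in this toy it is just
  -- "transport the left value and compare", where the real thing computes a
  -- relation (a type) by recursion on the isomorphism
  eqAt :: Equiv -> Val -> Val -> Bool
  eqAt q x y = transport q x == y

For instance, eqAt Swap (VPair (VBool True) (VBool False)) (VPair (VBool False) (VBool True)) comes out True, and eqAt (Sym Swap) relates the values the other way round, just as the sym rule above says it should.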

Function extensionality becomes the value equality induced by the structural isomorphism for Pi-types. Types on which we depend turn into triples of two-things-and-a-path-between-them.

  Pi^ (S^ : S' {=} S`)
      (T^ : (s : Sigma (S' * S`) \ ss -> (s^ : ss car =[ S^ ]= ss cdr))
            -> T' (s car car) {=} T` (s car cdr))
    : Pi S' T' {=} Pi S` T`
  f' =[ Pi^ S^ T^ ]= f`  =  (s : Sigma (S' * S`) \ ss -> (s^ : ss car =[ S^ ]= ss cdr)) ->
    f' (s car car) =[ T^ s ]= f` (s car cdr)

Every elimination form must give rise to an elimination form for the corresponding equality proofs: if you eliminate equal things in equal ways, you get equal results, and these things have to compute when you get canonical proofs of equations between canonical things being eliminated. Consequently, reflexivity shows up as the translation from types to type isomorphisms, then from values to the equality induced by those type isomorphisms. In Observational Type Theory as we implemented it, reflexivity was an axiom, because by proof irrelevance (by which I mean by making sure never to look at the proof) it didn’t matter what it was: the half-built Death Star was fully operational. Here, we can’t get away with that dodge. Fortunately, I have at least some clue how to proceed. My less famous LICS rejectum, joint work with Thorsten, gives a vague sketch of the construction. The upshot is that every X : * has some X^ : X {=} X, and by way of a refinement, every x : X has some x^ : x =[ X^ ]= x.

Now, a type isomorphism is no use unless you can actually get from one side of it to the other. We shall need that type isomorphisms induce paths between values. That is, we shall need an eliminator

  path (S : *)(T : *)(Q : S {=} T)(s : S) : Sigma T \ t -> s =[ Q ]= t

and moreover, we shall need that paths are unique, in the sense that, for given inputs, every pair in the return type of path is equal to the thing that path returns. That is, we have a kind of propositional η-rule for paths. I’m not yet sure of the most ergonomic way to formulate that uniqueness. But consider, in particular, q : x =[ X^ ]= y. We will have that (x , x^) =[…]= (y , q) in the type of paths from x via X^. We thus recover more or less the J rule, seen as transportation between two path-dependent types.

  J (X : *)(x : X)
    (P : (Sigma X \ y -> x =[ X^ ]= y) -> *)
    (m : P (x , x^))
    (y : X)(q : x =[ X^ ]= y)
    : P (y , q)
  J X x P m y q =
    path (P (x , x^)) (P (y , q)) (P^ (((x , x^) , (y , q)) , ... path uniqueness ...))
      m car

To achieve the definitional equational theory we’re used to from the J rule, we will need to make sure that the reflexivity construction, x^, generates proofs which are recognizably of that provenance, and we shall have to ensure that being recognizably reflexive is preserved by elimination forms, e.g., that we can take

  f^ ((s , s) , s^) = (f s)^

so that we can make

  path X X X^ x = (x , x^)

If we can obtain that path uniqueness from x along X^ when applied to (x , x^) gives (x , x^)^, then we shall have

  J X x P m x x^
    = path (P (x , x^)) (P (x , x^)) (P^ (((x , x^) , (x , x^)) , (x , x^)^)) m car
    = path (P (x , x^)) (P (x , x^)) (P (x , x^))^ m car
    = (m , m^) car
    = m

That is, the computationally obscure J rule has been decomposed into in-your-face transportation and path uniqueness. Somehow, I’m not surprised. It would not be the first time that a dependent eliminator has been recast as a non-dependent eliminator fixed up by an η-law. That’s exactly how I obtained a dependent case analysis principle for coinductive data without losing subject reduction.

Of course, we shall need to define path by recursion over type isomorphisms. We shall thus need to say how to compute path Y X (sym XY) y, which amounts to delivering the path in the other direction (the htap?), and its uniqueness. Transitivity goes (I hope) by composition.

So what of univalence? It’s not an axiom. It’s a constructor for

X {=} Y

where you just give the implementations of both path directions and show their uniqueness, thus explaining how to implement the elimination behaviour. We then need something like

  x =[ Univalence X Y xy yx ... ]= y  =  xy x =[ Y^ ]= y

but that’s annoyingly lopsided. We also need to know when isomorphisms are equal. Something like

  Q =[ X {=} Y ]= Q'  =  (\ x -> path X Y Q x car) =[ (X -> Y)^ ]= (\ x -> path X Y Q' x car)

might be enough, but again annoyingly lopsided.

It’s late and I’m tired, so I suppose I should try to sum up what I’m getting at. I’m hoping we can get to a computational treatment of univalence by isolating the notion of type isomorphism in quite an intensional way. On the one hand, the structure of a type isomorphism tells us how to formulate the equality for values in the related types. On the other hand, the structure of a particular type isomorphism tells us how to compute the transportations of values across it, giving rise to unique paths. Univalence allows us to propose arbitrary isomorphisms, and somehow, univalence gives an η-long normal form for type isomorphism: every type isomorphism is provably equal to the packaging-by-univalence of its elimination behaviour.

However, hilariously, we have to make sure that the relations =[…]= induces between equivalent type isomorphisms are equivalent (i.e. pointwise isomorphic), in order to show that =[…]=, like all the other elimination forms, respects equality. As County Antrim folk say, “There’s nothing for nothing in Islandmagee.”. Islandmagee, by the way, is the peninsula on the east coast, across the narrow sea from Westeros (which is a rehabilitated landfill site between Whitehead and Larne), apparently containing nothing.

(*) Eels, mostly.