designing the rod
For reasons largely of masochism, I’m trying to gift my future selves a piece of technology: a proofchecker, in the style of Rod Burstall’s ProveEasy, that I’m calling “rod” largely because I intend to inflict it on young people, lest they be spoiled.
The key basic choice is that the proof style is declarative. That is, the proof document shows you the decomposition of goals into subgoals which is but transient in systems like Coq and Agda. It’s intended to be used interactively, but my current plan is to facilitate that interaction by implementing a transducer for proof documents. So
rod doc1 .. docn
will read the given files and write new versions of them, updated by the machine’s take on what they say. In particular, a goal for which a rule has been suggested but no subderivations provided will be elaborated to a bunch of open subgoals if that rule applies.
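The shape of that transducer, sketched in Python purely for illustration (rod is not written in Python, and elaborate here is a hypothetical stand-in for the real parsing and elaboration):

```python
import sys

def elaborate(doc: str) -> str:
    # Hypothetical stand-in for rod's real work: parse the proof document,
    # expand each goal whose rule is marked '!' into open '?' subgoals,
    # and pretty-print the machine's take. Here it returns the text unchanged.
    return doc

def main(paths):
    # rod doc1 .. docn: read each file, then overwrite it with the
    # machine's updated version of what it says.
    for path in paths:
        with open(path) as f:
            doc = f.read()
        with open(path, "w") as f:
            f.write(elaborate(doc))

if __name__ == "__main__":
    main(sys.argv[1:])
```

The point of the design is that the file itself is the whole interaction state: there is no session to lose.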
E.g.
Show(imp(and(A, B), and(B, A))) by impI !
(where the ! is the user telling rod) becomes
Show(imp(and(A, B), and(B, A))) by impI {
Given(and(A, B)), Show(and(B, A)) ? }
(where the ? is rod asking the user)
I want the logic to be highly configurable, and I want to be able to express construction, where data unknown when a goal is stated get discovered as the proof progresses. Specifically, I want to be able to attach proof terms to a logic and watch them appear!
How on earth am I planning to do that? I’m writing this blogpost to try and answer that question for myself, apart from anyone else. They do say you should write the manual before the software…
sorts and modes and kinds
To get anywhere, one must have stuff. Syntax is up for grabs, but I have limited energy for niceties. Stuff lives in sorts. There are four predefined sorts: Type, Goal, Hypo and Rule. Things of sort Type are also sorts. There is a strong whiff of the Logical Framework.
declare Prop Type
That is, we can populate our sorts by declaring constructors.
declare id kind
where
kind ::= scope sort
scope ::= (binding,..binding)
binding ::= kind | name kind
e.g., I might declare that I can form goals and hypotheses from propositions
declare Show(Prop) Goal
declare Given(Prop) Hypo
or propositions from connectives
declare and(Prop, Prop) Prop
declare imp(Prop, Prop) Prop
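One could imagine storing such declarations as a table from constructor to scope and sort, and checking terms against it. A minimal first-order sketch in Python (the atomic propositions a and b are invented for the example; none of this is rod's actual representation):

```python
# A term is (constructor, [subterms]).  Each declaration records the
# argument sorts (the scope, first-order only here) and the result sort.
DECLS = {
    "Prop":  ([], "Type"),
    "and":   (["Prop", "Prop"], "Prop"),
    "imp":   (["Prop", "Prop"], "Prop"),
    "Show":  (["Prop"], "Goal"),
    "Given": (["Prop"], "Hypo"),
    "a":     ([], "Prop"),   # hypothetical atoms, just for the example
    "b":     ([], "Prop"),
}

def check(term, sort):
    # Does `term` inhabit `sort`?  Look the constructor up, check the
    # result sort, then check each argument against its declared sort.
    con, args = term
    argsorts, result = DECLS[con]
    return (result == sort
            and len(args) == len(argsorts)
            and all(check(arg, s) for arg, s in zip(args, argsorts)))

a, b = ("a", []), ("b", [])
goal = ("Show", [("imp", [("and", [a, b]), ("and", [b, a])])])
```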
But I might also choose to introduce proof objects:
declare Proof Type
and declare a construction goal
declare Make(Proof, Prop) Goal
declare Have(Proof, Prop) Hypo
where the proof object will be synthesized by rod as we go.
rules
To get anywhere with proving things, we need inference rules. To build them, first declare a constructor of sort Rule.
declare impI Rule
Then say how to deploy the rule.
rule Goal by Rule subgoals
subgoals ::= { subgoal ;..subgoal }
subgoal ::= scope Hypo,..Hypo, Goal
You’ll notice I’m now writing kinds in grammars. They’re just terms that we check.
rule Show(imp(A, B)) by impI {
Given(A), Show(B) }
What are A and B? They’re metavariables. Metavariables are bound in patterns and used in expressions. The technical distinction between patterns and expressions must wait, but the idea is that we can solve for a metavariable in a pattern usage, and not in an expression usage. Likewise, we can infer the kind of a metavariable only from pattern usages. First-order metavariables have only pattern usages: here, we could figure out A and B either from the goal or from the subgoal.
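Solving from pattern usages is just first-order matching. A Python sketch, under invented conventions (terms are (constructor, args) pairs, metavariables are bare strings, and the atoms p and q are made up for the example):

```python
def match(pat, term, sol):
    # Solve the metavariables in `pat` (bare strings) against a ground
    # `term`, recording solutions in `sol`; a metavariable met twice must
    # get the same solution both times.
    if isinstance(pat, str):
        if pat in sol:
            return sol[pat] == term
        sol[pat] = term
        return True
    (c, args), (d, brgs) = pat, term
    return (c == d and len(args) == len(brgs)
            and all(match(p, t, sol) for p, t in zip(args, brgs)))

# Matching impI's conclusion Show(imp(A, B)) against a concrete goal
# figures out A and B, just as the subgoal Given(A), Show(B) could.
p, q = ("p", []), ("q", [])
concl = ("Show", [("imp", ["A", "B"])])
goal  = ("Show", [("imp", [("and", [p, q]), p])])
sol = {}
ok = match(concl, goal, sol)
```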
We may express modus ponens as follows:
declare impE(Prop) Rule
rule Show(B) by impE(A) {
Show(imp(A, B)) ;
Show(A) }
Actually, rules have an extra option.
rule Goal by Rule conditions subgoals
conditions ::= when Hypo,..Hypo
which allows us to demand the presence of particular hypotheses to allow a rule to fire. As in,
declare hypo Rule
rule Show(P) by hypo when Given(P)
So we expect the goal to tell rod which hypothesis to check for.
Here is the introduction rule for conjunction.
declare andI Rule
rule Show(and(A, B)) by andI {
Show(A) ;
Show(B) }
I could define elimination as projection, but I’m not going to. Let me tackle that a different way.
rulesfrom
Here’s another variation on rules, specifically geared towards elimination. These ones are anonymous! Let’s have some examples.
rule Show(P) from Given(imp(A, B)) {
Show(A) ;
Given(B), Show(P) }
rule Show(P) from Given(and(A, B)) {
Given(A), Given(B), Show(P) }
The idea is to establish the canonical means to exploit a hypothesis, preferably with as little prejudice as possible about the goal. In general, that’s
rule Goal from Hypo subgoals
On deployment, there had better be exactly one rulefrom which fits the situation.
proofs
A proof is a tree of goals and rule invocations. We’ve seen enough to have an example.
Show(imp(and(A, B), and(B, A))) by impI {
Given(and(A, B)), Show(and(B, A)) from and(A, B) {
Given(A), Given(B), Show(and(B, A)) by andI {
Show(B) by hypo ;
Show(A) by hypo } } }
Checking the proof is a matter of checking that rule deployments match, then checking subproofs. Which raises the question of what matching might be. Let me park that question while I make it a bit more difficult.
Crucially, wherever the rule declaration has patterns, the rule invocation has expressions, and vice versa: we change direction as demanded by the kinds. But timing is everything…
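Parking the matching question as promised, here is a deliberately naive sketch of the checking recursion in Python (first-order matching only, a toy rule table, hypo's when-condition checked by membership; none of it is rod's actual code):

```python
def match(pat, term, sol):
    # First-order matching: metavariables are bare strings.
    if isinstance(pat, str):
        if pat in sol:
            return sol[pat] == term
        sol[pat] = term
        return True
    (c, args), (d, brgs) = pat, term
    return (c == d and len(args) == len(brgs)
            and all(match(p, t, sol) for p, t in zip(args, brgs)))

def inst(pat, sol):
    # Instantiate a pattern with the solutions found by matching.
    if isinstance(pat, str):
        return sol[pat]
    c, args = pat
    return (c, [inst(a, sol) for a in args])

# Each rule: its conclusion, and per subgoal the new hypotheses it grants.
RULES = {
    "impI": (("Show", [("imp", ["A", "B"])]),
             [([("Given", ["A"])], ("Show", ["B"]))]),
    "andI": (("Show", [("and", ["A", "B"])]),
             [([], ("Show", ["A"])), ([], ("Show", ["B"]))]),
    "hypo": (("Show", ["P"]), []),
}

def check_proof(hyps, node):
    # A node is (goal, rule, subproofs).  Check the rule applies to the
    # goal, then check each subproof proves its instantiated subgoal
    # under the instantiated extra hypotheses.
    goal, rule, subs = node
    concl, spec = RULES[rule]
    sol = {}
    if not match(concl, goal, sol):
        return False
    if rule == "hypo":   # simplified 'when': is Given(P) among the hypotheses?
        return ("Given", [sol["P"]]) in hyps
    if len(subs) != len(spec):
        return False
    return all(sub[0] == inst(subgoal, sol)
               and check_proof(hyps + [inst(h, sol) for h in newhyps], sub)
               for (newhyps, subgoal), sub in zip(spec, subs))
```

This checks, e.g., a two-node proof of imp(p, p): impI above hypo.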
You can build this proof interactively. You may write ? instead of a rule invocation, or give a rule and a !
Sez teacher: solve (A Prop, B Prop) Show(imp(and(A, B), and(B, A)))
Sez rod: Show(imp(and(A, B), and(B, A))) ?
Sez you: Show(imp(and(A, B), and(B, A))) by impI !
Sez rod: Show(imp(and(A, B), and(B, A))) by impI {
Given(and(A, B)), Show(and(B, A)) ? }
Let’s be kind about rulesfrom. You can mark a hypothesis with a !, then invoke it using from !.
Sez you: Show(imp(and(A, B), and(B, A))) by impI {
Given(and(A, B)) !, Show(and(B, A)) from !}
Sez rod: Show(imp(and(A, B), and(B, A))) by impI {
Given(and(A, B)), Show(and(B, A)) from and(A, B) {
Given(A), Given(B), Show(and(B, A)) ? } }
And so it goes.
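The ! step is, in effect, match-then-instantiate. A toy Python sketch of that elaboration move, under the same invented conventions as before (atoms a and b are made up; this is not rod's code):

```python
def match(pat, term, sol):
    # First-order matching; metavariables are bare strings.
    if isinstance(pat, str):
        if pat in sol:
            return sol[pat] == term
        sol[pat] = term
        return True
    (c, args), (d, brgs) = pat, term
    return (c == d and len(args) == len(brgs)
            and all(match(p, t, sol) for p, t in zip(args, brgs)))

def inst(pat, sol):
    if isinstance(pat, str):
        return sol[pat]
    c, args = pat
    return (c, [inst(a, sol) for a in args])

RULES = {
    "impI": (("Show", [("imp", ["A", "B"])]),
             [([("Given", ["A"])], ("Show", ["B"]))]),
}

def expand(goal, rule):
    # The user's '!': if the rule's conclusion matches the goal, hand back
    # the instantiated subgoals (each with its new hypotheses) as open '?'s.
    concl, spec = RULES[rule]
    sol = {}
    if not match(concl, goal, sol):
        return None
    return [([inst(h, sol) for h in hyps], inst(g, sol))
            for hyps, g in spec]

a_, b_ = ("a", []), ("b", [])
goal = ("Show", [("imp", [("and", [a_, b_]), ("and", [b_, a_])])])
```

Here expand(goal, "impI") yields the one subgoal Given(and(a, b)), Show(and(b, a)), exactly the transcript's next move.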
constructions
Let’s go wild.
declare lam((Proof)Proof) Proof
declare app(Proof, Proof) Proof
The lam constructor can bind a variable. That means subgoals will need to bind variables, too.
declare abstract Rule
rule Make(lam(x. b[x]), imp(A, B)) by abstract {
(x Proof) Have(x, A[]), Make(b[x], B[]) }
A subgoal can be prefixed with a parenthesis containing comma-separated declarations. Note that the rule has a proof expression as its first argument, with lam binding x and using it as the parameter to metavariable b. The square bracket syntax is used to indicate permitted dependency. Here, we are saying that b may depend on x (in the rule and in the subgoal), but that A and B may not. If you omit the square brackets, then by default, you may depend on anything in scope, so the A[] and B[] are necessary. We deduce the kind of a metavariable by abstracting the kind being checked at its use site over its permitted dependencies. Here, we obtain b(Proof) Proof.
A pattern usage of a metavariable is some m[x0,..xk] which does not occur inside a metavariable expression usage. A metavariable expression usage is a regular application of a metavariable to terms m(s0,..sk).
Solving m[x0,..xk] = t for m amounts to checking whether t can be persuaded to depend only on x0,..xk: the only tolerable occurrence of some other variable, y, is in the dependencies of another metavariable pattern in t, i.e., if t contains some n[..y], then we can ensure m's independence of y by forcing n to become independent of y, also. That's the move Dale Miller calls “pruning”.
Solving m(s0,..sk) = t for m is just too hard. Our only chance is to wait for m to get solved elsewhere and substituted. That is, expressions force us to postpone constraints. It is forbidden for a metavariable to have exclusively expression usages, as there is no way ever to solve it!
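A small Python sketch of that pruning check (grossly simplified: terms are ('var', x), ('meta', n, [deps]) for pattern usages, or ('con', c, [args]); binders and expression usages are ignored, and this is not rod's unifier):

```python
def prunable(t, allowed, demands):
    # Can `t` be persuaded to depend only on the variables in `allowed`?
    # A stray variable is fatal, unless it appears as a dependency of some
    # other metavariable pattern n[..y..]: then we record the demand that
    # n be pruned of y (Miller's 'pruning') and carry on.
    tag = t[0]
    if tag == "var":
        return t[1] in allowed
    if tag == "meta":
        _, n, deps = t
        for y in deps:
            if y not in allowed:
                demands.setdefault(n, set()).add(y)
        return True
    _, _, args = t
    return all(prunable(a, allowed, demands) for a in args)

# Solving m[x] = pair(x, n[x, y]) succeeds if we prune y out of n;
# solving m[x] = y is hopeless.
demands = {}
ok1 = prunable(("con", "pair", [("var", "x"), ("meta", "n", ["x", "y"])]),
               {"x"}, demands)
ok2 = prunable(("var", "y"), {"x"}, {})
```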
We can now have bidirectional information flow in rules! In the next rule, A and B flow down, but we should expect f and a to flow back up.
declare apply(Prop) Rule
rule Make(app(f, a), B) by apply(A) {
Make(f, imp(A, B)) ;
Make(a, A) }
declare use(Proof) Rule
rule Make(x, P) by use(x) when Have(x, P)
Or, if you like normal forms, drop apply and use:
rule Make(p, P) from Have(f, imp(A, B)) {
Make(a, A) ;
Have(app(f, a), B), Make(p, P) }
Sez teacher: solve (A Prop, B Prop, C Prop) Make(p, imp(imp(A, imp(B, C)), imp(imp(A, B), imp(A, C))))
Sez rod: Make(p, imp(imp(A, imp(B, C)), imp(imp(A, B), imp(A, C)))) ?
Sez you: Make(p, imp(imp(A, imp(B, C)), imp(imp(A, B), imp(A, C))))
by abstract !
Sez rod: Make(lam(x. q[x]),
imp(imp(A, imp(B, C)), imp(imp(A, B), imp(A, C))))
by abstract {
(x Proof)
Have(x, imp(A, imp(B, C))),
Make(q[x], imp(imp(A, B), imp(A, C))) ? }
syntax
I’ve got a rather illiberal C-style syntax, which I’d like to sweeten. I’ve cooked up a formalism for grammars-with-scoping which might fit here. More on that another time.
implementation
I’ve built the illiberal parser, checkers for declarations and rules, and the unification algorithm that the proofchecker will use. Let’s see how I get on with the rest. I’m still unsure whether hooliganistic unification is enough, or whether I will need to be more prescriptive about expected information flow in rules, goals and hypotheses. Perhaps a more careful approach to mode would make pattern matching enough.