Paul digs deeper into dependent types, which can often replace tests as a way of building confidence in code.
This month, we dig deeper into dependent types. Dependent types were introduced in an earlier article in this series as a richer language of types, allowing us to say more about what a program is doing by encoding structural information and proofs about properties. I closed the previous article by promising that we could replace some (maybe all) of the measures required to gain confidence in the word wrap code by using such a richer language: specifically, replacing tests and hand-waving assertions with firm statements of the properties and details we expect to hold, backed up with proofs.
Now, the formal methods people have been able to do something like this for a while, but it’s not exactly been convenient to work with—and certainly not as immediately useful as testing.
What dependent types bring to the table is more convenience and elegance: we’re able to say more about our intentions inside the program itself, not as some added extra like paper proofs or tests, and the mechanisms work naturally with the language.
This change is quite radical: it offers new ways to think about programming, and the extra richness of language enables new tools to help with programming. For these reasons, I think it could represent a new paradigm in programming.
Of course, it’s not a silver bullet. There will be new things to learn, and it will require more effort in places. But what you stand to gain in return is more confidence in your code.
There are two things I want to cover this month, with more to follow in later months.

- introduce some of the new tools for growing programs with dependent types, and

- cover some of the basic ideas in dependent types which we will need for the word wrap code.
You may have seen syntax-directed editors, ranging from ones that insert templates into code up to some kinds of refactoring tool. The tools we’ll cover go a bit further, building syntactically valid code, using types to propagate some of the semantic information, and filling in some details automatically. As we switch to a more powerful type system, such tools become even more useful.
Finally, here’s a reminder of some of the aspects of the word wrap problem where dependent types can help:

- Does long-word splitting always produce the right size and order of fragments?

- Will line packing never overflow, assuming that the input is valid?

- Can we guarantee that the output doesn’t add, delete, or permute non-space characters?
Language and implementation
The “implementation” I’m going to use is Idris. Idris is being developed by Edwin Brady (of Whitespace fame) as a tool for practically-oriented, dependently-typed programming. Edwin is very actively doing research on applying such advanced ideas to resource-bound embedded systems and protocols, so it’s far from being a toy. It even has an FFI! The code is on GitHub and has a growing list of contributors.
Idris is a mix of editor, interpreter, and compiler. The Idris language is based on Haskell, but with full dependent types taking center stage rather than being an add-on. Simple Idris programs can be read as vanilla Haskell (so it has a reasonable intersection with Haskell, albeit with some syntax changes), and the more powerful language is there when it’s needed. Part of the research is to investigate the interplay between the two languages.
Idris can generate standalone executables compiled via C (very compact, compared to Haskell’s GHC compiler), and more recently can compile into Java and JavaScript. The interpreter provides a REPL, allowing arbitrary code to be run from the prompt. The editor part is actually a kind of proof assistant to help with the finer detail of growing dependently typed programs. This is the component we will look at first, using basic Haskell programs.
Similar systems based on type theory also exist, such as Agda, Coq, Epigram, Lego, or Twelf, with many overlaps of features though differing in several technical details too. These ideas are starting to filter into more popular languages, for example, GHC has recently added some weak support for “holes.”
Installing Idris
Idris is currently distributed only as source, so you’ll need to compile it yourself.
First, install the Haskell Platform. Packages are available for standard *nix (including Macs). Typically these pull in recommended stable packages of standard tools like GHC (the compiler), Cabal (the package manager, like 'gem'/'bundler' or 'npm'), and various useful libraries. There’s a Windows installer for the corresponding bundle of binaries.
The most recent stable (source) release of Idris is on the standard Haskell package server, known to the world as Hackage; the entry for Idris is here. To install this version, run the following:
cabal update  
cabal install idris 
This will download the source, configure and build it with the tools in the Haskell Platform, and install it as ~/.cabal/bin/idris (with libraries and resources being stored elsewhere under ~/.cabal).
You can also grab the repository from GitHub; there’s a README, but just typing make will compile and install the current branch into ~/.cabal/bin/idris.
Running the executable should show some ASCII art and a prompt.
A very simple program
From the prompt, enter :l one.idr. It should complain about a missing file, since we’ve not created it yet. Enter the editor with :e. Which editor gets run depends on the current settings of environment variables EDITOR or VISUAL, and chances are it will run a relative of vi. (If you’ve not used vi or vim before, now’s your chance. And check out Drew Neil’s excellent guide.) Now enter this text and save the file:
module Main  
 
main : IO ()  
main = putStrLn "usual string" 
Idris will type check the file and load the definitions into the interpreter. You can evaluate main, but this just shows the corresponding call to the underlying library function via the FFI. Compare to the result of print (1 + 2)—notice that it computes the string "3" to be shown. (The :x command should properly execute the (monadic) IO action; it doesn’t work at present, but it is coming soon.) Since this module is called Main and contains a definition of main, which is the standard entry point for Haskell and Idris executables, we can compile it and execute the resulting binary. Use the REPL command :exec to trigger this.
If you want to save the resulting binary, use command :c foo, where foo is the desired output file. You can then run ./foo outside of Idris. Alternatively, use :js foo to generate JavaScript code, which can be run via NodeJS with node ./foo.
(Generation to Java code is being worked on, and should be ready soon.)
You can see other useful REPL commands by typing :?, or view the latest Idris tutorial for more information. (There’s also a copy in the repository.) Idris also accepts some command-line options at startup; use the help option to see these.
Leaving bits out
Next, let’s see how we can grow our program using Idris’ editing tools. Start a new file two.idr and enter this text.
module Main  
 
foo : List Int -> List Int  
foo xs = if List.length xs < 4  
            then ?case_one  
            else ?case_two  
 
main : IO ()  
main = putStrLn $ show $ foo [1,2,3] 
The key point is that we haven’t finished writing foo yet: the ?name expressions represent “holes” which need to be filled in, and the names serve to distinguish the various holes. We can also refer to holes as “metavariables,” in the sense of them being placeholders for something else, and as existing outside of the actual programming language.
Idris will check this code and report success. But can we compile it and run the whole program? Of course not, because bits are missing. However, we can check parts of it via Idris’ interpreter. Try foo [1,2,3]. It doesn’t crash; instead, it does as much as it can before being blocked by the missing piece, hence shows you Main.case_one [1,2,3] : List Int. (Actually, it desugars [1,2,3] into a combination of cons and the empty list, but I’ll spare you those details.) In other words, it was able to run the if-test and decide which branch to use. It would show Main.case_two [1,2,3,4,5] if you supplied a longer list. This is highly useful for checking bits of code without writing the whole lot.
Behind the scenes, Idris has created two new top-level values to represent the holes, and used type information from the context to determine what type each hole should have. You can check such types with (e.g.) :t case_one, which will show List Int -> List Int, hence it needs to be a function which takes and returns a list of Ints. There weren’t any arguments when ?case_one first appeared, so why does it need to be a function? Well, ?case_one appeared in a context where the variable xs was bound, so plausibly the code for case_one might need to refer to xs. A simple way to handle this potential dependency is to treat holes as functions and pass in the values of the bound variables. This is why you see Main.case_one [1,2,3] etc. when you evaluate foo [1,2,3].
Before we fill in something for case_one, we can start to lay down some assertions or tests to try later. Suppose we want foo to reverse short lists. Then we can define a simple Boolean assertion, which we expect to return True when the code is complete. It won’t do so now, and instead shows how far it got (basically, it’s blocked by a missing case).
assert_1 : Bool  
assert_1 = foo [1,2,3] == [3,2,1] 
Such assertions rely on us running them explicitly in order to check whether they are True or not, just like other tests. We can do better: use logic. This will be covered in more detail later, but for now we will just state the required condition as a theorem (or as a proposition, if you prefer) and leave its corresponding proof until later (via the same mechanism as is used for holes). Just note that the binary operator is =, signifying propositional equality at the type level, rather than the usual == for equality tests computed by the code.
proof_1 : foo [1,2,3] = [3,2,1] 
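To make the contrast concrete, here is a hypothetical sketch in Lean 4 (one of the related systems mentioned earlier), with a foo that reverses short lists written inline. The == version is a Bool that must be computed by running code, while the = version is a proposition that the type checker verifies once and for all.

```lean
-- Hypothetical Lean 4 sketch, not Idris: a `foo` that reverses short lists.
def foo (xs : List Int) : List Int :=
  if xs.length < 4 then xs.reverse else xs

-- A Boolean assertion: `==` must be *run* to produce `true`.
#eval foo [1, 2, 3] == [3, 2, 1]

-- A proposition: `=` is checked by the type checker, not run by us.
example : foo [1, 2, 3] = [3, 2, 1] := rfl
```

The same distinction applies in Idris: proof_1 above is a statement in the logic, not a test to execute.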
Filling in the first case
Type :p case_one to start the “proof assistant” mode on the hole case_one. Idris will show the following, which indicates that you need to supply or 'prove' a value of type List Int -> List Int.
 Goal:   
{ hole 0 }:  
List Int -> List Int 
We can fill in a full value right away with the command exact reverse, i.e., indicating that the current goal is exactly solved by the (function) value reverse, but let’s explore other ways to grow functions.
The main tactic to grow a function is to assume its argument and then try to grow the function’s body. Run the command intro xs and observe the result.
 Other goals:   
{ hole 0 }  
 Assumptions:   
xs : List Int  
 Goal:   
{ hole 1 }:  
List Int 
It’s now asking us to prove a new goal (denoted by hole 1), which needs to be a value of type List Int. Notice an assumption xs : List Int has appeared, representing the input coming into the function we’re working on. The original goal hole 0 is still there too, for reference.
We know we want this first case to reverse its input, so we need to use reverse somewhere. One option is to use exact reverse xs (or exact (reverse xs) if you prefer). If you try this, you can use undo to go back a step. A more powerful tactic is refine, which fills in the head (or outer) function and then infers any arguments it can. So, being specific that we want the version of reverse for lists rather than the slightly magical version for strings, try refine List.reverse and observe the effects:
 Goal:   
{ __pi_arg 1000 }:  
List Int 
The other goals and assumptions are unchanged, but hole 1 has disappeared and we’re now being asked for another list. Now, it’s not obvious at present where this list fits in, but you might realise that it corresponds to the main argument for reverse. You can get confirmation of sorts from the term command, which shows the current state of the function you are growing in the proof assistant mode. You should see something like reverse {__pi_arg 1000} in the output, thus confirming it is the argument for reverse.
In fact, what the refine FOO tactic does is apply expression FOO to new metavariables until its type matches the general pattern of the goal’s type—hence it is filling in the hole with List.reverse ?arg and then creating a new subgoal to find out what ?arg is. The matching is done via a form of unification algorithm, using the same kind of inference behind logic programming and behind Haskell’s type inference. (If you don’t know much about unification, maybe start with this list of examples; it’s also a good coding exercise to help you gain a good understanding of tree manipulation and of unification itself.)
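To give a flavour of that exercise, here is a minimal first-order unification sketch in Python (hypothetical helper code, nothing to do with Idris’ actual implementation), where variables are strings and compound terms are (functor, arguments) pairs:

```python
# A minimal first-order unification sketch (hypothetical, for illustration).
# Variables are strings; compound terms are (functor, argument-list) pairs;
# constants are functors with no arguments, e.g. ("a", []).

def walk(t, subst):
    # Chase variable bindings until we reach an unbound variable or a term.
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    # Return a substitution making a and b equal, or None on failure.
    # (A full implementation also needs an "occurs check".)
    subst = dict(subst or {})
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str):            # a is an unbound variable: bind it
        subst[a] = b
        return subst
    if isinstance(b, str):            # likewise for b
        subst[b] = a
        return subst
    (fa, xs), (fb, ys) = a, b
    if fa != fb or len(xs) != len(ys):
        return None                   # clash: different functors or arities
    for x, y in zip(xs, ys):          # unify arguments pairwise
        subst = unify(x, y, subst)
        if subst is None:
            return None
    return subst

# Unifying f(X, a) with f(b, Y) binds X to b and Y to a:
s = unify(("f", ["X", ("a", [])]), ("f", [("b", []), "Y"]))
```

Refining a goal with reverse works the same way: the application with a fresh metavariable is matched against the goal’s type, and unification binds what it can, leaving the rest as subgoals.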
Anyway, this argument should be xs, so fill that in with exact xs. Idris will announce the end of the proof process with "No more goals". You can check the result with term and should see in full the function we constructed with the tactics: \ xs : Prelude.List.List Int => Prelude.List.reverse Int xs. If you don’t like the result, you can issue undo a few times and try a different approach.
Exit the proof mode with qed. Idris will display the sequence of tactics used to construct the result, and double-check the resulting term against the original, expected type. Assuming there’s no problem, you can append the resulting proof to your file with :a. The proof is shown below, and it is just a list of the steps we took.
Main.case_one = proof {  
intro xs;  
refine List.reverse;  
exact xs;  
} 
You can also look at the corresponding code created for case_one by evaluating case_one, though beware that Idris will try to execute as much as it can and so shows a version which has expanded reverse into its lower-level definition—but it can’t compute any further until bound variable xs is replaced by some data on which it can pattern match. If you want to get it to compute more, give it something to work on! For example, you can now evaluate case_one [1,2,3] and examine the result.
GOTCHA: You will need to move the five lines of proof for case_one from the end of the file to the point after the definition of foo. It’s quite ok to mix code and proof blocks like this. The move is required because Idris’ type checker will need the proof term to be defined before it checks other code or proofs which rely on it. (It’s not an oversight in design—I’m doing things in a slightly unconventional order here.)
A quick detour into Curry-Howard
Notice that we went through a sequence of “proof steps” and ended up with some code. This is not accidental, and it’s one aspect of the Curry-Howard correspondence.
Recall that this correspondence identifies the similarity between (logical) propositions and types, and between proofs and programs. We’ll be using this similarity soon, as we start to prove properties of the code inside the language.
It’s a big topic, of course, and too much for this short article. The Wikipedia page seems a fair place to start if you want to know more.
Also notice that Idris is saving the proof steps as part of our program, and will use these to reconstruct the code when it loads the file from text. (Idris also saves the results in compiled form, but that’s another story.)
Checking the results
Remember the assertion we added earlier? We can run it now—just type assert_1 and observe it returns True. But, with the extra machinery of logic at our disposal, we can do better. Add this code to the file.
proof_of_assert_1 : foo [1,2,3] = List.reverse [1,2,3] 
This declares name proof_of_assert_1 to have a type of foo [1,2,3] = List.reverse [1,2,3], which represents the logical proposition that the expression foo [1,2,3] produces the same result as expression List.reverse [1,2,3].
You could also call it a conjecture—with the current definitions in force—that the two expressions amount to the same thing.
Now, it isn’t the same as the code we wrote for assert_1. That code will return the constant True if the two operands compute to the same list (and it has to run the code in full to determine whether the result is True or not).
It also depends on how == has been defined: == might do something strange like treat all even numbers as equal, e.g., 2 == 4 could be True. So, it’s something we have to compute inside the language.
In contrast, propositions like X = Y are something we can reason about and don’t always have to run the code. For example, we know foo [1,2,3] will expand to case_one [1,2,3] and this in turn expands to List.reverse [1,2,3]. At this point, the inference mechanism will accept the two sides as being the same. Also, there’s no sense of True or False here: the system either has a proof or it does not, and if it does have one, it can use it to prove something else, otherwise it is stuck.
This distinction may seem strange at first, but please persevere with a few more examples, and it might get a bit clearer!
Anyway, what do we do with conjectures? Prove them, that’s what.
Start the proof mode with :p proof_of_assert_1, and you will be asked for a proof. There’s a useful tactic for situations like this: trivial. This tries a few standard proof techniques on the current goal, and in this case it is able to confirm the proposition by computing both sides to the same thing.
After qed, you’ll see a very short proof.
It’s useful to know what’s happening under the hood, so evaluate proof_of_assert_1 and you will see the term refl. For now, you can understand refl x as the proof that x = x, or the proof that equality is reflexive.
Tests into proofs
Now we’ve proved our assertion, and because of type checking, the proof will be re-checked whenever the file is reloaded, so it guarantees the assertion still holds after any code changes. This is what conventional test frameworks give us.
So here’s something that tests can’t do. Start with this type signature:
case_one_proof : (l : List Int)  
              -> (length l < 4) = True  
              -> foo l = reverse l 
The type represents the proposition: “for all lists l, where the length of l is less than 4, then foo on l is the same as reversing l.” This says precisely how we want foo to behave on short lists, for all possible inputs. Compare this to a set of tests which are aimed at capturing the same requirement. The tests are a collection of examples, and additional reasoning will be required if we want confidence that they are representative of the wider picture—whereas the statement above covers all possibilities. All of them. It’s the difference between “there exists” and “for all.”
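To underline the “for all” nature of such statements, here is a hypothetical Lean 4 analogue, with foo sketched inline as reversal of short lists. The theorem quantifies over every list l whose length is below 4, not just example inputs.

```lean
-- Hypothetical Lean 4 sketch: `foo` reverses lists shorter than 4 elements.
def foo (xs : List Int) : List Int :=
  if xs.length < 4 then xs.reverse else xs

-- For *all* lists `l` with length below 4, `foo l` equals `l.reverse`.
theorem foo_short (l : List Int) (h : l.length < 4) :
    foo l = l.reverse := by
  unfold foo
  rw [if_pos h]   -- the hypothesis selects the `then` branch
```

No finite set of test cases, however well chosen, can make this claim; the type checker accepts the theorem only if it holds for every such l.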
Let’s do the proof. Basically, we’re going to use the hypothesis or assumption about the length of input to push some computation through the definitions, until we get to a point where the sameness is obvious. The first two steps are intro l and intro H. This moves the function’s two inputs into the proof context as assumptions, leaving a proof state like this:
 Assumptions:   
l : List Int  
H : length l < 4 = True  
 Goal:   
{ hole 2 }:  
foo l = reverse l 
What H represents deserves some explanation. It’s an assumption of a proof of length l < 4 = True, not an actual proof of it. We often speak of terms like this being hypotheses—which we will use to prove the required conclusion. Likewise, l represents an arbitrary list. Notice that H gives us some extra info about what l is. It’s a bit like how mocks are used in several test frameworks, where you have some minimal object equipped with predictable behaviour, and the associated tests examine the consequences. For example, l could be some anonymous thing such that l.length returns a value less than 4.
Next, we need to use H to simplify the goal foo l = reverse l, aiming for a point where the goal is trivially satisfied with refl. So we’re going to use H to rewrite part of the goal and see what comes out. In this case, we want the term if length l < 4 then ... to change to if True then ....
One useful property of equality is substitution, which says that if A = B, then whenever we have P(A) we also have P(B), for all properties P. Put another way, if we know A = B and that some property P holds for A, then we can infer that P holds for B too, so P can’t be used to distinguish them. (A related idea is that if A and B are really equal, then all properties of A hold for B too, if that helps.)
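As a sketch of the substitution principle, here it is stated and proved in Lean 4 (hypothetical code, but the idea carries over directly to Idris):

```lean
-- Substitution (hypothetical Lean 4 sketch): given A = B, any property
-- P proved of A transfers to B. The `▸` operator performs the rewrite.
example {α : Type} (P : α → Prop) (A B : α) (h : A = B) (pA : P A) : P B :=
  h ▸ pA
```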
It’s a common inference step, so Idris has a special tactic for it: rewrite. One slight niggle though—we need the operands of H in the other order, but this is ok since equality is symmetric (another requirement of equality).
(I could have cheated and used True = length l < 4 as the hypothesis, but it’s useful to know about symmetry.) This means the next proof step is rewrite (sym H). The result might not be too comprehensible but you can use the tactic compute to force a bit more of the computation, and so get this—which even if we don’t know what it means, we can see it’s the same for both sides.
{ hole 3 }:  
Prelude.List.reverse.0#reverse' Int (Prelude.List.Nil) l =  
Prelude.List.reverse.0#reverse' Int (Prelude.List.Nil) l 
So, we can finish the proof with trivial. Notice that we don’t have to compute any further, either. It doesn’t matter what l turns out to be, because the logic ensures the result has the required properties. The completed proof sequence is shown below. I’ve omitted the compute steps because they are really just for cosmetic purposes, so we can see what’s going on. They make no difference to the proof.
Main.case_one_proof = proof {  
intro l;  
intro H;  
rewrite (sym H);  
trivial;  
} 
Filling in the rest
We’ve covered just one case, but for this we have a rock-solid statement of what it should do and a proof that it does what it should. If subsequent changes to the code alter its behavior, then the proofs will not type check. We won’t be able to ignore it.
I’ll let you think up some functionality and properties for the second case. Have a go at proving things yourself! If you get stuck, post your code to the magazine’s forum and I’ll try to help you out.
As a final point, what does case_one_proof [1,2,3] refl produce? Yes, our proof is a function (with a dependent type) and we can apply it to things. What does it mean to use the proof like this? And what about case_one_proof [1,2,3,4,5]?
The wider picture
You’ve seen now how to do a very simple development in a type theory setting. The code and proof were quite basic, but I deliberately chose a straightforward example so we could cover the key ideas without too much distraction.
We’ve barely scratched the surface of what this paradigm offers, so tune in next month. I’ll cover topics like dependently typed data structures, including vectors (sized lists) and dependent tuples. We’ll use vectors to code up size constraints on words that are being packed into lines, and to provide guarantees about the splitting of long words. Dependent tuples allow data and proofs to be packaged up and manipulated in a convenient way. And as the types become more interesting, then the proof tools will be able to do more interesting and useful things for us.
There’s much more to explore on the tool support side too. Several proof assistants have more powerful tactics which can automate straightforward proofs like the ones we saw here.
It’s not magic though: all they do is a bit of search through various combinations of the basic tactics we’ve just seen, maybe guided by some heuristics on the form of available assumptions and hypotheses, plus any relevant theorems that we’ve already proved. Idris will probably support this kind of automation in the near future. So, don’t conclude that working in this way means a lot of drudge work!
Nor are these tools limited to simple examples either. In the past two decades, such tools have been used to provide mechanical verification of complex protocols and many important results from maths. The four color theorem for map coloring is a good example.
But notice where the human fits into this process. The proof assistant manages the fine detail and the checking, and provides basic tools for standard reasoning steps, possibly with some limited automation for the relatively obvious things. The human does the creative part of guiding the high-level operation of the proof assistant, the bit that is currently too hard for mechanization. (That’s why we call them proof assistants.)
A similar idea applies to programming! The human sketches out the high-level details and lets the machine fill in the obvious things. We should try to program like this. Why not?
Dr Paul Callaghan first learnt about type theory in 1996, and was immediately fascinated. It filled in many of the unsatisfactory gaps in conventional functional programming, and helped him understand why some programs in Haskell felt “wrong” and “incomplete.” Paul was fortunate to do research work with some very talented people from 1997 to 2007, during which time he implemented a niche proof assistant called Plastic and experimented with various ideas like DSLs, Mathematical Vernacular, and Coercive Subtyping. Other bits of his bio can be seen on earlier articles. Paul also flies big traction kites and can often be seen being dragged around inelegantly on the beaches of Northeast England, much to the amusement of his kids. He blogs at freevariable.org and tweets as @paulcc_two.
Send the author your feedback or discuss the article in the magazine forum.