In this final installment in his series on functional programming, Paul shows how much power can be packed into types.

Depending upon your perspective, I have some good news or some bad news. This is the last article in my series.

I’ll use it to cover some important ideas about programming with dependent types and develop a stronger version of the line-packing code. Inevitably, due to article constraints, some details will be left out or left dangling, but perhaps you will find solace when you discover I'm writing a book. Additional resources are mentioned at the end of the article.

In the May issue of PragPub, we had begun to explore the kinds of properties we wanted to establish, for example, that the function for splitting a long word into chunks of size n did not produce chunks longer than n. We could write this as the proposition that for all sizes n and lists l, the maximum length of the split chunks is less than or equal to n, hence:

```idris
ok_split : (n:Nat) -> (l:List a) -> LTE (maxLength $ splitIntoSize n l) n
```

But actually proving this wasn’t terribly attractive. Remember that in this world, proving such a thing corresponds to writing some code that, through a mix of pattern-matching and use of other code, yields a type equal to the above. The great thing is that working in this “classical” style is not the only option, and we can use the richer programming language to do it in a more convenient way.

A key part of this extra power comes from “Inductive Families”: a generalization of data types that allow us to embed data, logic, and dependencies in the data we work with. This extra information helps to make the reasoning more direct, and hence easier to do the proofs. You’ll also see some interesting effects on the programs we write, such as not having to write code for cases that logically can’t ever occur.

A simple but significant example is taking the head of an empty list. Some languages return nil for this (which has its own problems), others like Haskell cause a run-time error because there’s no sensible way to continue.

It’s arguably the Haskell version of most other languages’ null pointer exception (NPE), and like the NPE, there’s no good reason why we should have to run the risk of it happening. One easy way to eliminate it is to have a form of lists that carry size information in their types, and only allow use of head on a non-empty list.

Note that the compiler will complain if we ever try to take the head of an empty list in this type, before the code ever runs, and once it is past the compiler, it does not need to be checked again. Ever. It is not the same as dynamic or run-time checks, which may have to repeatedly check and if the check fails, do something annoying like throw an exception. I know which one I prefer. Yes, you have to do a bit more work, but I claim the tradeoff is starting to tilt in our direction as we explore and develop the technology.

# Working with Vectors

Quite a bit of the word-wrap code deals with limits on sizes, so let’s have a type that shows some size information on the outside of the box. Lists with sizes are usually called “Vectors,” and they are defined in the Idris Prelude like this:

```idris
data Vect : Type -> Nat -> Type where
  Nil  : Vect a O
  (::) : a -> Vect a n -> Vect a (S n)
```

This definition and some supporting functions are automatically available in your Idris session or program without needing any additional imports. (Other more specialized functionality is available in the Idris library too, which you can just import when required.)

The first line says there’s a group or family of related types Vect a n, where a is some type (anything, pretty much) that is the type of the elements in the vector, and where n is a size expressed as a natural number (i.e., 0 or 1 + x where x is a natural number: please see previous articles). Notice that the size can’t be negative, since this can’t be expressed in the Nat type.

Then, there are two “canonical” ways to build such vectors. First, the Nil constructor that creates a vector of size zero (and with any element type, because there are no actual elements to constrain us to a particular element type!). Second, the constructor (::) (also called 'cons' or 'vcons' in discussions) that joins an element onto a vector of size n to give a vector whose size is one bigger. These are the only ways to build proper elements. Notice that the pattern is similar to how we define vanilla lists, but we have added meaningful size information.

So, building elements is slightly different, since the constructors don’t have to build into the exact same type each time. What does this mean? Well, for lists [] is a list and so is x :: xs: lists all around. For vectors, Nil produces a vector of a particular size (zero) and (::) builds vectors of non-empty size (one plus the size of its tail). Not that these are the only possible ways to define vectors; it’s just how we chose to define them here.
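To make this concrete, here is a tiny example of building a vector (the name v2 is mine, not from the Prelude):

```idris
-- A two-element vector: each (::) bumps the size in the type by one
v2 : Vect Char 2
v2 = 'h' :: 'i' :: Nil    -- Nil : Vect Char O, so v2 : Vect Char (S (S O))
```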

We can use this information in reverse, too, to take information in the type to influence or restrict or direct the de-construction (i.e., pattern matching) of values as we manipulate them. This is an important point: the extra information makes pattern matching different in various interesting ways, which we’ll see through various examples.

Last time, I mentioned the safe vector head operation. Since it doesn’t make sense to take the head of an empty vector, we write the type signature to make it clear that we avoid the size zero case. How’s it avoided? Because S n for any choice of n will never unify with O, hence any attempt to call head on an empty vector will fail type checking.

```idris
head : Vect a (S n) -> a
head (x::xs) = x
```

The definition above is also 'total' because no Nil case is required, and the checking mechanism can infer this from the type signature. We don’t have to work this bit out, and if we tried to add it, the checker would complain. (See the May article for further information on totality.) But if we wanted to remind ourselves of this fact, and protect against certain code changes, then we could write an assertion head Nil impossible as one of the clauses.

Notice we don’t need to mention a or n in the pattern above. This information is left implicit, and filled in as needed—yep, you guessed it, using mostly unification again. However, have you noticed that a and n don’t behave the same way? The payload type a is a parameter of the type, since it is the same everywhere it appears, whereas the size n is called an index because it can change depending on the various constructors. Just keep this in the back of your mind for now.
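A quick sketch of this in action, using the head defined above:

```idris
-- OK: Vect Nat (S O) unifies with the required Vect a (S n)
one : Nat
one = head (S O :: Nil)

-- Rejected at compile time: Vect a O never unifies with Vect a (S n)
-- bad : Nat
-- bad = head Nil
```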

# Some More Vector Functions

There are some basic vector operations we’ll need later on, so let’s take a quick look. First, vector append, or joining two vectors together. The definition is quite boring, pretty much what you’d expect for vanilla lists, although remember that the implicit arguments (a and n) are being manipulated behind the scenes. The key thing is the type it has: it basically says that appending a vector of size m with one of size n gives one of size m + n.

```idris
(++) : Vect a m -> Vect a n -> Vect a (m + n)
(++) []      ys = ys
(++) (x::xs) ys = x :: xs ++ ys
```

Notice what this type is NOT saying though: it gives no guarantee that we don’t mix up the vector contents in any way, like repeating elements or reversing them or putting elements from the first vector at the end. We could, if we wanted, but it might not be worth the effort. Remember that, like with tests, we probably want a tradeoff between thoroughness and reasonable confidence.
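To see what the type does not rule out, here is a hypothetical definition (badApp is my name, assuming a size-preserving reverse on vectors) that has exactly the same type and still type checks, yet returns the wrong contents:

```idris
-- Well-typed, but wrong: the second vector comes back reversed.
-- reverse keeps the size n, so the type checker is satisfied.
badApp : Vect a m -> Vect a n -> Vect a (m + n)
badApp []      ys = reverse ys
badApp (x::xs) ys = x :: badApp xs ys
```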

For the line-packing code, I’m assuming that respecting the line width is the key thing, and so the code here is slanted towards that. But if it gives me nicely formatted junk, then it’s still junk, but simple testing will quickly highlight any problems. (Yes, I’m suggesting that some things are easier to check with testing, and some with the techniques here, i.e., that it isn’t one-size-fits-all and that we should explore why. More simply, mistakes in ordering could be apparent with basic testing (e.g., always building a result list in the wrong order), whereas mistakes in slicing and packing might be harder to spot (e.g., an off-by-one error in indexing in some edge case). The point is, the cases and their effects differ, and it’d be great to exploit this. But I haven’t got time to investigate that here! Rats.)

Another useful function here is take on a Vector. Conventional take n xs on lists usually allows a short list to be returned if the slice size n is bigger than the length of the input list xs. To simplify things later and avoid an explosion of cases, I’m going to use a precise version that only works when the input vector contains at least enough elements, and can’t be used if the input is smaller than required. (Note: this version of take differs from the version currently used in the Idris Prelude.)

```idris
take : (n:Nat) -> Vect a (n + o) -> Vect a n
take O     _         = []
take (S n) (x :: xs) = x :: take n xs
```

The two clauses should make sense, but again the interesting part is the type. Like with head, the type will ensure that it can only be used when there are enough elements to take. Similarly, we’ll also use drop : (n:Nat) -> Vect a (n + o) -> Vect a o, with the requirement that there are enough elements to drop.

You may be wondering about how these constraints work. Basically, there are two main aspects. Firstly, the arguments could be concrete enough for computation to reach a unifiable state with the type, in which case the code is allowed to run. For example, take 3 [1,2,3,4] is ok and gives back the expected result. Secondly, if direct computation doesn’t yield a match, then you can try using what you know about the various arguments to prove that the call is ok. For example, you want take 3 v though you have only v : Vect a m, but if you also have a proof hyp : m = 3 + o then you can use hyp to get a version of v with the matching type. Yes, it’s more work, though typically this kind of extra reasoning is providing explicit justification for some detail in code that would normally be handled by hand-waving and/or finger-crossing.
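As a sketch of the second situation (take3 is a hypothetical helper of mine, and I’m assuming the Prelude’s replace : (x = y) -> P x -> P y), an equality proof can be used to coerce the vector into the shape that take expects:

```idris
-- We only know v : Vect a m, but a proof hyp : m = 3 + o lets us
-- rewrite its type so the call to take becomes well-typed
take3 : (hyp : m = 3 + o) -> Vect a m -> Vect a 3
take3 hyp v = take 3 (replace {P = Vect a} hyp v)
```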

# Customizing Code to People?

Before we move on, I just want to flag up some of the interesting discussion on Reddit arising from the April article.

It’s worth a read. It covers some more background and shows that some aspects of this field are still open to healthy and active debate. One thread mentioned other possibilities such as Intersection Types. The main discussion covered the “fragility” of code under certain kinds of changes.

For example, the take code above passes the type check because of the way that x + y is defined and how it computes. Fortunately for us, the pattern of recursion in the current definition mirrors the pattern we need in take so we don’t have to justify it. To put it another way, we get the “theorems” that O + y = y and (S x) + y = S (x + y) for free from the current definition because the LHS reduces to the RHS through one step of computation, and so we don’t need to mention these explicitly in the definition of take. Contrast that with x + (S y) = S (x + y) which would have to be proven and then explicitly applied.
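The free theorems come straight from the clauses of the standard definition of addition on Nat:

```idris
-- The usual definition: recursion on the first argument
plus : Nat -> Nat -> Nat
plus O     y = y              -- so  O + y = y        holds by computation
plus (S x) y = S (plus x y)   -- so (S x) + y = S (x + y)  likewise

-- By contrast, x + (S y) = S (x + y) matches neither clause directly,
-- so it needs an explicit inductive proof before it can be applied.
```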

(Side point: are we really happy with the above definition of take? Sure, it passes the type checker, but is “computer says yes” enough? Do we need anything else?)

And if we changed the definition of + without changing its type? Then we might get a different set of free theorems, meaning some existing code could now be broken and need additional justification. So, the point is that computation behavior is kind of part of the interface, not just the type, and it can affect code. The discussion on Reddit contains more detail, but I want to highlight one thought here.

Have you read any math proofs recently? Ever seen the phrase “it is obvious that ...” and not entirely agreed with the statement? (I hope it’s not just me.) Typically, the author is moving from one proposition to another by use of various standard or free theorems, like the simple properties of + above. But typically for beginners, some of these steps need to be pointed out explicitly, and are steps that they need as part and parcel of their understanding of the proof.

Note, too, that mathematicians sometimes highlight a few of the potentially mechanical steps because they are central to the (human) understanding of the proof. So, they don’t show the smallest proof possible, but instead, according to their audience, will flag the key conceptual steps that make the proof “work.” That is, they can vary the presentation and sometimes want to draw out the steps that are important for the overall understanding. Let’s call these the “a-ha steps.”

To take this idea one step further, the explanation of the proof can depend on who is reading it. Different people require different levels of detail and explanation. (There have been a few research papers that look at these ideas, under titles like “informal mathematics” or “mathematical vernacular.” Contact the author for more details if you are interested.)

Does this apply to code as well? I think so, and it’s maybe something we should be more conscious of. For example, I’m happy with fairly complex combinations of higher order functions and overloading, but it might be too daunting and non-obvious for a relative beginner. I want to keep my code, but also want the beginner to be able to follow it without having to rewrite the code in simpler terms.

Can we accommodate both positions? And the various positions in-between? Do we have to have a single version of the code for everyone? Must all programmers have to reach the same level of understanding? I suggest not; it should be possible to map from my code (or even sparser code!) to various levels of explanation suitable for people of different levels of experience and ability. Already we are moving away from the idea of a program as a piece of text in a file towards something richer, like a tree in an IDE that supports meaningful operations like syntax-directed editing or refactoring. Can we go even further and have semi-formal kinds of “programming vernacular”? I think so.

Briefly returning to the Reddit discussion, my point is that sometimes we do need to flag up the “a-ha steps” in code, such as how the basic properties of + are used in take, and so make it an explicit part of the module that provides +. This would reduce some of the potential fragility and maybe help comprehension of the code.

# Sigma & Dependent Tuples

Here’s another technical point I want to introduce. Well, you’ve seen it implicitly via the definition of the Vect type, but I want to look at it in isolation because it is both important and interesting. We’ll see the concepts used in some of the record-like structures soon.

Tuples are an ordered sequence of values, like (1, "foo"). Idris provides a baked-in form of 2-tuple (pairs) and provides bigger tuples by a bit of syntactic sugar. But, we can define our own version like this:

```idris
data Pair : Type -> Type -> Type where
  pair : A -> B -> Pair A B
```

The example (1, "foo") can be rewritten as pair 1 "foo". Now, the key detail is that the values in the fields, and the types of their values, are independent of each other. We can combine any two things! We can even combine pairs of types, e.g., pair Nat Bool is valid and means just what it says: a pair of types. (Remember another key detail of some type theories, including Idris’, is that types are first-class citizens too and so can be manipulated like conventional data. Contrast this to most mainstream languages, where the world of values and the world of types are entirely separate.)

What happens if we add some dependency? The Sigma A B type below is like the Pair type, but now, the type of the second element depends on the value of the first element. This is signified by the second value having type B x where x is the first value.

The curly braces indicate implicit arguments, ones that are normally inferred from the context. (This exact type is not in the standard library though its functionality is present through the Prelude’s Exists type for reasons I won’t go into here.)

```idris
data Sigma : (A:Type) -> (B:A -> Type) -> Type where
  sigma : {A : Type} -> {B : A -> Type} -> (x:A) -> (y:B x) -> Sigma A B
```

So, can we write sigma O True as a (dependent) pair of a Nat and a Bool? Er, no. You’ll be told that Idris “Can’t unify Bool with B O.” The implicit parameters mechanism is trying to find a value for the implicit B value, and got stuck because there’s no obvious B such that B O is equal to Bool.

What about the simplest one, that of a function that returns the Bool type whatever its input, i.e., \_ => Bool? That would work, but it’s only one of the infinite number of functions that return Bool from input O, and it might not be the one appropriate for where the sigma value might be used, hence the mechanism holds back from this.

Typically, complex values like sigma pairs are used in a particular context, and the context provides sufficient information to solve the unknowns. For example, the following type checks without any other intervention (because the declared type of foo is used to inform the checking of its definition).

```idris
foo : Sigma Nat (\_ => Bool)
foo = sigma O True
```

Alternatively, you can manually supply one of the unknowns, like this: sigma {B = \x => Bool} O True. We do sometimes need to add such hints. How do we get values out of a Sigma value? By pattern matching, of course. There are two standard accessor functions, traditionally called pi1 and pi2 for projection of the first (resp. second) field.

```idris
pi1 : {A : Type} -> {B : A -> Type} -> Sigma A B -> A
pi1 (sigma a _) = a

pi2 : {A : Type} -> {B : A -> Type} -> (p : Sigma A B) -> B (pi1 p)
pi2 (sigma _ b) = b
```

The interesting case is pi2, where the result type B (pi1 p) must use pi1 to get the first value so it can compute the actual type from B.

Using it with our foo example value above, it means the result type is (\_ => Bool) (pi1 foo) which reduces to Bool.
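In other words, using foo from above:

```idris
-- The result type (\_ => Bool) (pi1 foo) computes to Bool
second : Bool
second = pi2 foo    -- True
```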

Let’s try a more complex result type now. Since we can put arbitrary values in the types, we can run arbitrary code there. Our next example selects which type to return by testing its input. Values t2 and t3 are declared to have the same type. However, their type uses the function b_or_c, which yields the type Bool when given O, else Char. In this way, we can wrap up essentially different kinds of data under the same type, and unpack them in a consistent way too.

```idris
b_or_c : Nat -> Type
b_or_c = \n => if n == O then Bool else Char

t2 : Sigma Nat b_or_c
t2 = sigma O True

t3 : Sigma Nat b_or_c
t3 = sigma (S O) 'a'
```

To demonstrate that these values are in the same type, let’s put them together in a list.

```idris
t4 : List (Sigma Nat b_or_c)
t4 = [t2,t3]
```

We can even do map pi1 t4 and get back [O, S O]. What about map pi2 t4? Can we get the list of second values? No, because that would mean trying to build the list [True, 'a'] which isn’t allowed in this exact form. The type system requires the elements to have a predictable type so it can treat them all consistently and not have to do additional run-time checks, and so on. This means we can’t use the OOP style of “heterogeneous list” (i.e., posh for “different things”). But all is not lost! Let’s do something interesting: define a slightly more powerful version of pi2 that wraps up the second element with its type in a new Sigma value.

```idris
pi2_t : {A : Type} -> {B : A -> Type} -> Sigma A B -> Sigma Type (\t => t)
pi2_t {A} {B} p = sigma (B $ pi1 p) (pi2 p)
```

And now we can run map pi2_t t4 to get [sigma Bool True, sigma Char 'a'], which is a “homogeneous list” (same things) of type List (Sigma Type (\ t => t)). Notice that we’re storing the type as a piece of data: types are first-class citizens too, remember! So the above result is entirely safe and we can use it for other things, e.g., select out the items that have a Bool type and do something Bool-y to them. To underline the point: we couldn’t do direct extraction of the second elements because it means a list of mixed types, which isn’t allowed. Instead, we can re-wrap the second elements with their type, and now manipulate this safely with similar values.
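Since the types really are ordinary data now, we can project them back out with the existing pi1 (a small sketch using the definitions above; tys is my name):

```idris
-- The first components of the re-wrapped pairs are the payload types
tys : List Type
tys = map pi1 (map pi2_t t4)   -- i.e., [Bool, Char]
```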

Now, you may balk at having to wrap up a value with its type and pass the two around, but isn’t this pretty much what most OO implementations are doing? They aren’t passing raw data, but data plus some structure or information like a table of functions or some kind of prototype or dictionary? It’s a big topic, but see if you can ponder the following without blowing your mind too much: with a very powerful type system, are we starting to come full circle and get back to the flexibility we thought we lost when we moved to static types? Paradise recaptured? Maybe in a better form?

# Considering the Split

Back to the code. I’m using a more recent version of Idris now, specifically, 0.9.8. You may want to update.

First up is the operation that splits long words into fragments. What are the details we perceive as risky? What do we want to be sure about? As I suggested above, it’s not always feasible to cover all the bases and so it’s worth thinking about priorities.

Since this kata is about line packing to a certain width, sizes seem to be a key aspect. For the sake of argument, I’d like to be sure that a word of size M is split into as many chunks of size N as possible, followed by a piece of size less than N, which could even be zero if M is a multiple of N.

In the previous code, we were basically crossing our fingers on a few details that the pieces ended up the right size with the left-over bits last. We could have written a few tests to explore the code’s behavior and gained some confidence, or written something in the comments about what callers of the code should be able to expect, but that’s not the same as certainty.

To start, let’s think in data terms about what output result we want from splitting. We want a list of equally-sized chunks, and the left-overs. We also want to be sure that the left-over chunk is too small to be a proper chunk. So, we can try the following:

The SplitBy a n type represents the splitting of some thing into chunks of size n. The other parameter a is a placeholder for the element type of the input collection (list, vector, ...) that is to be split.

```idris
data SplitBy : Type -> Nat -> Type where
  Split : (vs : List (Vect a n))
          -> (m : Nat) -> (LT_ m n)
          -> (v : Vect a m)
          -> SplitBy a n
```

The single constructor Split requires (i) a list vs of vectors of size n to hold the proper chunks; (ii) then a number m and some evidence that m < n; then (iii) a vector v that is of size m, which holds the left-overs.

These various details mean we can’t do things like make chunks of random size, or have the leftovers too big. It also forbids splitting into chunks of size zero (because no Nat value is less than zero!), which I flagged a while back as being a potential pitfall: is the result infinite?
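As a concrete example (my sketch, assuming the LT_ constructors ltZero : LT_ O (S n) and ltSucc : LT_ m n -> LT_ (S m) (S n) that appear later in the article), splitting a five-character word by 2 would be represented as:

```idris
-- Two full chunks of size 2, then a left-over of size 1 (and 1 < 2)
eg : SplitBy Char 2
eg = Split [['a','b'], ['c','d']] 1 (ltSucc ltZero) ['e']
```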

With this, we can say what we expect the type for split_up to be: taking a size and a list of some size, and producing a splitting by that size. There’s one restriction though: it doesn’t make sense to split into size zero chunks, so let’s use the type to avoid that case. Hence, we propose the following: (BTW, it’s not ideal having the (S n) in the result type, and there’s probably a neater way to do it. Suggestions welcome!)

```idris
split_up : (n:Nat) -> (Vect a m) -> SplitBy a (S n)
```

If we can produce a definition of split_up with this type, then it will automatically obey some of the properties we require of the output. Notice that instead of working with externally stated properties (as propositions or, in some sense, tests as well), we’re starting to move that information into the data our code is operating on. This is an important point about the benefits of richer types: instead of separating code and tests or code and properties, we’re working with something a lot more uniform and meaningful. Also, we now have more opportunity to exploit that information to help us program. I think this is real progress!

The SplitBy type could be extended to capture other invariants and properties, such as the left-over vector really being the last m elements of the original input, or that the number of size n chunks is correct or that the chunks contain appropriately correct text. However, some of these properties are straying into the responsibility of the various functions that will operate on this type. For example, we can define splat to unpack the splitting, and state a property about how split_up and splat are related. Other properties might be useful too, particularly as lemmas in a bigger proof, e.g., splat (Split [] _ _ y) = toList y to check what happens for a short list. Also useful is add_elm which puts another chunk in the list.

```idris
splat : SplitBy a n -> List a
splat (Split xs _ _ y) = concat (map toList xs) ++ toList y

split_splat : (n:Nat) -> (l:List a) -> splat (split_up n (fromList l)) = l
split_splat = ?hard_exercise_left_for_the_reader

add_elm : Vect a n -> SplitBy a n -> SplitBy a n
add_elm v (Split vs a b c) = Split (v::vs) a b c
```

# Doing the Splits

The general algorithm should not surprise you. The key decision point is whether the size of elements remaining is less than the chunk size or not. Intuitively, if it is smaller then we return a SplitBy value with no big chunks and with the input as the left-overs. Otherwise, we can use take to slice off a chunk, then we add this chunk (using add_elm) to the result of splitting up the rest of the input after the first chunk has been removed with drop.

Now, how do we encode this understanding in the programming language? Here’s a naive version of the split_up code. Most of it seems straightforward so I just write it in directly, but I can’t be bothered to work out the proof term so I leave a “hole” called ?case1_pf to be filled in later.

The comparison is done with compare m (S n), essentially the spaceship operator <=> found in Ruby, and it returns LT if the first argument is less than the second, else EQ or GT as appropriate.

The syntax with (thing) is important, but for now just regard it as allowing us to do more pattern matching on the result of (thing), in this case to pattern match and branch on the comparison result, similar to case in Ruby. (Also don’t forget about that annoying detail of needing (S n) instead of n to avoid the split-by-zero difficulties. It is clearly a smell!)

```idris
simple_split_up : (n:Nat) -> (Vect a m) -> SplitBy a (S n)
simple_split_up n {m} xs with (compare m (S n))
  | LT = Split [] m ?case1_pf xs
  | _  = add_elm (take (S n) xs) $ split_up n (drop (S n) xs)
```

This won’t get past the type checker, basically because there are a few implicit details in the second case that can’t be inferred immediately from the code. The main problem in the second case is that it “Can’t unify Vect a m with Vect a (S n + o)”, or in other words, that there’s no evidence around that m and S n + o are equal sizes. Let’s see what is available to the checker in the second clause by stubbing out pieces of it with holes.

```idris
simple_split_up : (n:Nat) -> (Vect a m) -> SplitBy a (S n)
simple_split_up n {m} xs with (compare m (S n))
  | LT = Split [] m ?case1_pf xs
  | _  = add_elm ?case2_arg1 ?case2_arg2
```

When we enter proof mode with the command :p case2_arg1 and do intros to see what is available to help, we see this:

```
---------- Assumptions: ----------
 warg : Ordering
 a : Type
 m : Nat
 n : Nat
 xs : Vect a m
---------- Goal: ----------
{ hole 5 }:
 Vect a (S n)
```

From this, we need to build a value of type Vect a (S n) from the value xs : Vect a m, but there’s nothing here at all that helps us relate m and S n. The compare test has sent us to the case where m is greater than or equal to S n, but there’s no evidence here that confirms this or allows us to exploit that information easily, and we need this to help build the relevant results, e.g., we need to know it in order to use take. In effect, even though we’re in the appropriate branch, we are not really much the wiser about why we are there!

# These are Not the Patterns You are Looking For

So, we need more information. I mentioned before the idea of first writing the program you want to write, then writing another program that runs the good program. We can take this approach with patterns too, so let’s think about the patterns we’d really like to work with. Remember that pattern-matching is about classifying and decomposing data, and what we’re going to do is extend the collection of patterns we can work with.

This idea has been around in various forms for quite a few years, under the general name of views. You can create views in databases to control or to simplify access to data. Professor Philip Wadler (he of monads fame) has developed similar ideas in the context of functional programming. Dr. Conor McBride has developed these ideas into a powerful framework for programming with dependent types. What you see next is a simple (and brief) example of this.

In our context, a view has two components: a data type that encodes the various patterns, and a “covering function” that converts our target data into a value in the view type.

For the split_up function, we’re interested in decomposing the input vector into two cases. One case is for the “too small” case, and we’d like to get a proof of it being too small. The other case is for “big enough,” where we want a different proof that proves that fact. But, to simplify the coding, we want the “proof” in a slightly different form. Saying a >= b is equivalent to saying there is a number c such that a = b + c. In constructive style, we “prove” a >= b by supplying such a number c and a proof that it’s the value that solves the equation. Hence, using the machinery of dependent types and inductive families, we define this:

```idris
data CompareLT_ : Nat -> Nat -> Type where
  LessCase : (LT_ a b) -> CompareLT_ a b
  GteCase  : (c:Nat) -> (b + c = a) -> CompareLT_ a b
```

Remember: it encodes the end result of a comparison. Next, we define the covering function, which converts our input into the comparison result. I’m not going to explain everything here, but roughly: the first case says any value x is greater or equal to O and the difference is x; the second says O is less than 1 + anything, and the proof is obvious; and the third says to decide the result for (1 + x) compared to (1 + y), then get the result for comparing x and y and modify the results accordingly. (That is, the result for compareLT_ 6 4 will use the result for compareLT_ 2 0.)

```idris
compareLT_ : (a,b : Nat) -> CompareLT_ a b
compareLT_ x O = GteCase x refl
compareLT_ O (S y) = LessCase ltZero
compareLT_ (S x) (S y) with (compareLT_ x y)
  | LessCase pf = LessCase (ltSucc pf)
  | GteCase c h = GteCase c (replace {P = \z => S (y + c) = S z} h refl)
```

With this in place, we can now write a fairly straightforward definition of split_up. I’m using proof mode to construct the arguments for the add_elm function, and the proof steps are shown below. (Remember that the proof process builds a term that then gets slotted in, so we could replace the proofs by hard code if we want. If you type the name of a completed proof in Idris, you’ll see the underlying term.) The proofs build pretty much the code you’d expect, apart from a few fiddly details with implicit values and the need to tease out details from the constraints, for which the proof mode gives us a lot of support.

```idris
split_up : (n:Nat) -> (Vect a m) -> SplitBy a (S n)
split_up n {m} xs with (compareLT_ m (S n))
  | LessCase pf = Split [] m pf xs
  | GteCase d hyp = add_elm ?gte_arg1 ?gte_arg2

Main.gte_arg1 = proof {
  intros;
  let take_call = Main.take {o=d} (S n);
  refine take_call;
  compute;
  rewrite sym hyp;
  exact xs;
}

Main.gte_arg2 = proof {
  intros;
  let rec_call = \v => split_up {a=a} n (drop {o=d} (S n) v) ;
  refine rec_call;
  compute;
  rewrite sym hyp;
  exact xs;
}
```

The above example could be done more directly: instead of the view giving us proofs that we then use in take etc., it could hand us the decomposed value itself. That is, a different view could allow us to analyze a vector (with respect to a target length n) as either a short vector or as a chunk of size n paired with the remainder. This might be a good exercise to try.
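As a rough, untyped sketch of that alternative view: the analysis could return the decomposed pieces themselves rather than proofs about lengths. The names here are hypothetical, not from the article’s code:

```python
def analyze(n, xs):
    """Decompose xs relative to a target length n.

    Returns ("short", xs) when len(xs) < n, otherwise
    ("chunk", head, rest) where head has exactly n elements.
    The dependent-typed version would record these size facts
    in the result type itself, so callers need no extra checks.
    """
    if len(xs) < n:
        return ("short", xs)
    return ("chunk", xs[:n], xs[n:])
```

A split function can then just loop: keep peeling off chunks until the analysis reports a short remainder.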

To sum up, we now have a word-splitting function that we can be sure produces correctly sized output. It’s a total function, so we can be sure all possible cases are covered and that the code doesn’t crash or loop. That is, assuming the checker in Idris is correct! (Correctness of the checker is a very important issue, but I’ve not got time to cover it now.)

# The Rest of the Code

Very brief details now. The input to the line-packing needs to be a list of fragments that do not exceed the line size. We can wrap up each fragment in a new type, storing each vector with a proof that its size is less than or equal to the limit. I’m cheating a bit here, and reusing the compareLT_ code from earlier, so the code requires the length to be strictly less than the limit + 1.

```idris
data MaxVect : (a:Type) -> (n:Nat) -> Type where
  MVect : (i:Nat) -> Vect a i -> LT_ i (S n) -> MaxVect a n
```

Next, we unpack the split-up long words into a sequence of such units, each of them obviously within the width limit. (Annoyingly, the (S n) here comes from the attempt to avoid the split-by-zero case...)

```idris
unpack_split : SplitBy a (S n) -> List (MaxVect a (S n))
```

And then, we can start packing these units together, taking a list of fragments and merging adjacent ones (with the separator element) whenever the result doesn’t break the line limit. Notice how the input and output types are basically the same, though the contents may vary.

```idris
pack : (n:Nat)
    -> (separator : a)
    -> List (MaxVect a n)
    -> List (MaxVect a n)
```
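A minimal, untyped sketch of what such a pack step might do, assuming each input fragment already fits within the limit (a hypothetical greedy merge, not the article’s actual definition — MaxVect’s length proof becomes a run-time assertion here):

```python
def pack(n, separator, fragments):
    """Greedy sketch of the packing step: merge adjacent fragments,
    joined by the separator, while the result stays within n.
    Fragments are plain lists; each input is assumed to fit in n.
    """
    out = []
    for frag in fragments:
        # merging costs len(previous) + 1 (separator) + len(frag)
        if out and len(out[-1]) + 1 + len(frag) <= n:
            out[-1] = out[-1] + [separator] + frag
        else:
            out.append(list(frag))
    assert all(len(f) <= n for f in out)  # the MaxVect invariant
    return out
```

As in the Idris signature, the input and output have the same shape: a list of within-limit fragments in, a (usually shorter) list of within-limit fragments out.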

And then we glue the various stages together to get the complete program! Phew.

The full version of the code will appear on GitHub as a gist in a few days, after a bit more tidying up.

# Finishing Off

What else can be done? We might want a few concrete tests, though more as a sanity check rather than anything like TDD because we’ve already done a lot of the hard work via the type system.

We could start writing down and investigating useful properties, like the packing process not permuting or changing the content being packed. We could even start trying to prove them. Another possibility is to first use a QuickCheck-style tool to try properties on a big set of examples to see whether obvious counter-examples exist. It’s up to us, and how much confidence we require in any part of the code.
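As a flavor of the QuickCheck-style approach, here is a self-contained Python sketch that hammers the ok_split property from earlier in the series (no split chunk exceeds the requested size) with random inputs. The split function here is my own minimal stand-in for splitIntoSize:

```python
import random

def split_into_size(n, xs):
    """Minimal stand-in for splitIntoSize: cut xs into chunks of
    at most n elements (n > 0)."""
    return [xs[i:i + n] for i in range(0, len(xs), n)]

# QuickCheck-style check of the ok_split property: no chunk
# produced by splitting exceeds the requested size n.
for _ in range(1000):
    n = random.randint(1, 10)
    xs = [random.randint(0, 9) for _ in range(random.randint(0, 50))]
    assert all(len(chunk) <= n for chunk in split_into_size(n, xs))
```

Of course, in the dependently typed version this property is established once and for all by the type checker; random testing only ever samples it.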

Also, I expect there are quite a few ways this code could be improved and simplified, so don’t take the above code as the last word!

What do you think of this kind of program?

Some parts will look strange, even to reasonably experienced functional programmers, and programming this way does require developing a few new skills. In return, we can be more exact about how certain parts of the program work, be more thorough about details that are often left implicit, and obtain high levels of confidence as well. Plus, all of this comes inside a uniform language that has plenty of scope for powerful abstractions, and in which serious proof becomes easier to deal with. If such details are important for a certain project, then this tradeoff might be one that works for you.

# So Much Left to Cover!

This series of articles has been a tour of some advanced ideas in functional programming and beyond. I hope you’ve found it interesting.

We started from the view of “functional programming” as a style of programming that really takes data structures seriously: a style where we seek to identify the relevant forms of data and solve problems by transforming data from one kind to another.

I’ve shown some examples where taking this idea as a key design principle has led to pretty straightforward and flexible code.

We also touched briefly on the idea that many aspects of your programs, such as navigation schemes for web apps or complex interactions with users, can be brought to the fore as data structures rather than left as implicit details buried under other code; making them explicit as data might help to simplify those parts of your code.

Throughout this, a powerful type system was used both as a tool for structuring and organizing code, and as a safety net. We saw ways in which the type information can even be exploited by advanced editors to guide the programming process. I suggest that using types well is a key part of the data-first style of functional programming. The relationship with testing was also considered.

Finally, we had a look at one possible future direction for this style of programming, toward languages that combine very powerful ideas about types with strong support for reasoning. It was quite a different world, but one with many advantages.

There are quite a few more related topics to explore, such as effects (refining some of the ideas about monads), and functional reactive programming. If you’re reading this and know something about these topics, do consider writing an article or three for this magazine (that’s my suggestion, not the editor’s!).

And if you want to know more about advanced functional programming or type theory? I’ve mentioned various books in earlier articles; also look at the web pages for Idris or Agda and the various resources they mention. Joining the mailing lists is useful too, and many of us on the lists are more than happy to help with queries.

Also worth a mention is a very new book on Homotopy Type Theory. This was released only a few days ago (June 20th), but chances are you might have seen some mention of it already on Twitter.

It’s notable for quite a few reasons, despite the dry title. First, it explores the promising links between the foundations of type theory and some key areas of math. Secondly, it includes contributions from people of very high caliber: a core group of about 20, with further contributions from many other experts. Thirdly, this group managed to put a substantial book together in under a year, and over GitHub no less. Fourth, despite the subject matter, from what I’ve seen it seems extremely well written, engaging, and accessible to a wide range of people (just try it!).

And you can download it for free! There’s even a version formatted for e-book readers! Plus the whole book’s repository is on GitHub so you can fork it and submit pull requests. It isn’t your normal academic book! (As soon as I finish writing this article, I’m going to dive back into the book. You’re lucky I managed to finish this article...)

Anyway, that’s it from me. Thanks for reading, you’ve been a great audience, and I hope you found something interesting in this series!

I’m off now, to read some more about Homotopy Type Theory.

And then, I have a book to write.

Dr Paul Callaghan first learnt about type theory in 1996, and was immediately fascinated. It filled in many of the unsatisfactory gaps in conventional functional programming, and helped him understand why some programs in Haskell felt wrong and incomplete. Paul was fortunate to do research work with some very talented people from 1997 to 2007, during which time he implemented a niche proof assistant called Plastic and experimented with various ideas like DSLs, Mathematical Vernacular, and Coercive Subtyping. Other bits of his bio can be seen on earlier articles. Paul currently works on a variety of bespoke rails apps at TranscendIt, and thanks TranscendIt for its continued flexibility with his writing schedule! Paul also flies big traction kites and can often be seen being dragged around inelegantly on the beaches of North-east England, much to the amusement of his kids. He blogs at free-variable.org and tweets as @paulcc_two.

Send the author your feedback or discuss the article in the magazine forum.