Venkat explores the immense time-space tradeoffs of memoization, and explains how Groovy makes memoization easy.

“Dynamic programming” is intriguing for a few reasons. The name does not directly refer to anything dynamic, or to programming in the sense attached to that word today. It is an algorithmic technique that takes a counterintuitive approach to realizing blazingly fast computation. We will explore this technique here and quickly implement an example using a facility available in Groovy.

In a recursive solution, we solve a problem using solutions to subproblems. Dynamic programming takes this to the extreme by using overlapping recursive solutions; in other words, the solution to a subproblem is used multiple times in the solution to the problem. Take the Fibonacci series, 1, 1, 2, 3, 5, 8, 13, 21, ..., for example. The number at any position in this series can be expressed as a recursion:

```
Fib(position) is
    1                                       if position < 2
    Fib(position - 1) + Fib(position - 2)   if position >= 2

for position >= 0
```

This recursion is simple to express, but terrible in performance if naively implemented, because it recomputes overlapping subproblems. For example, to find the value at position 5, we’d perform Fib(4) + Fib(3). But Fib(3) would also be computed in the solution to Fib(4). Likewise, Fib(2) would appear in the solution to Fib(4) and also in the solution to Fib(3). These overlapping reevaluations make the computational complexity of this recursion exponential, that is, O(2^position).

That’s bad, but only if we actually perform the reevaluations. If instead we store each result the first time it is computed and reuse it whenever it’s needed again, we can reduce this to linear complexity, that is, to a time complexity of O(position). It’s a space vs. time tradeoff, in favor of gaining speed and implementation simplicity. This is memoization: a way to store away the solutions to subproblems and reuse them, which is what makes the dynamic programming technique fast.

Though we look at a simple problem here, the Fibonacci series, dynamic programming and memoization are used in a number of day-to-day problems, such as finding the shortest driving route between cities, maximizing profit in sales, and minimizing cost and waste. They’re used in a broad range of optimization problems where the goal is to choose the best among many possible solutions.

Let’s implement the algorithm for the Fibonacci series as a simple recursion first, and measure its time. Then let’s use the memoization technique to speed it up.

Here’s the function to compute the Fibonacci value at a given position.

```groovy
def fib(position) {
  if (position < 2)
    BigInteger.ONE
  else
    fib(position - 1) + fib(position - 2)
}
```

It’s a direct implementation of the recursive algorithm in Groovy. Since the values in the series grow bigger than an int or a long can hold, we use BigInteger for the series values. Thanks to Groovy’s dynamic typing, the only place that decision shows up in the code is the base-case return of BigInteger.ONE: pretty low ceremony.

Let’s create a function to help measure the execution time for various position values.

```groovy
def timeit(position, function) {
  def start = System.nanoTime()
  def result = function(position)
  def end = System.nanoTime()
  def time = String.format("%.2f", (end - start) / 1.0e9)
  println "Fib($position): $result, time: $time sec"
}
```

The timeit function receives a position and a closure as parameters. It forwards the position to the closure and reports the result along with the time the call took to execute.

We have everything in place to try out the simple recursion. We’ll start with a small position value first.

```groovy
timeit(5) { position -> fib(position) }
```

We called the timeit function and passed it two arguments: a position value of 5 and a closure that receives the position and invokes the fib function.

This call reports the value at the given position in the Fibonacci series and the time taken.

```
Fib(5): 8, time: 0.02 sec
```

The result is correct and the code took about two hundredths of a second to run. Let’s increase the input to a position of 40 and see how this code performs.

```groovy
timeit(40) { position -> fib(position) }
```

The value and the time reported are:

```
Fib(40): 165580141, time: 20.01 sec
```

That took over twenty seconds: that’s long. And this algorithm gets worse rapidly due to its exponential time complexity. For example, let’s increase the position just slightly, by 2.

```groovy
timeit(42) { position -> fib(position) }
```

Here are the value and time reported:

```
Fib(42): 433494437, time: 50.42 sec
```

A small variation in the input size but a large degradation in performance.

Imagine this were a problem like the rod cutting problem (a Groovy solution to which is presented in Programming Groovy 2nd Edition), where we want to find the maximum profit from the retail sale of different sizes of rods. For a large rod, say 500 inches, the computation could take hours or even days. During that time the market may well fluctuate before we find the best way to sell the inventory on hand, rendering the solution useless.

The degradation in performance came from calling the functions over and over. For example, just for the small position of 5, the fib function was invoked 15 times, and for the position of 40, it was called a whopping 331,160,281 times. Let’s turn to memoization for help.
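Before we do, those invocation counts are easy to verify with a counter. Here is a small sketch of the idea in Java rather than Groovy (the class and counter names here are mine, not from the article), tallying calls to the naive recursion:

```java
import java.math.BigInteger;

public class FibCount {
    static long calls = 0;

    // Naive recursive Fibonacci, the same shape as the Groovy fib
    // function, with a counter tallying how many times it is invoked.
    static BigInteger fib(int position) {
        calls++;
        if (position < 2) {
            return BigInteger.ONE;
        }
        return fib(position - 1).add(fib(position - 2));
    }

    public static void main(String[] args) {
        fib(5);
        System.out.println(calls);  // prints 15
    }
}
```

The count grows in step with the result itself: for any position, the naive version makes 2 * Fib(position) - 1 calls, which is how position 40 balloons to hundreds of millions of invocations.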

When the fib function is invoked, we could look up the value in a hashmap and evaluate the logic only if the value is not present. First we need a hashmap to store the values; let’s create it and seed it with the two initial values.

```groovy
fibSeries = [0: BigInteger.ONE, 1: BigInteger.ONE]
```

Now, we can modify the fib function to use this hashmap.

```groovy
def fib(position) {
  if (!fibSeries.containsKey(position))
    fibSeries[position] = fib(position - 1) + fib(position - 2)
  fibSeries[position]
}
```

If the value for the given position is not in the hashmap, we compute it and store it in the collection. Then we simply return the value stored. If the value is already present, then we skip the computation. Let’s invoke the modified version of the function and measure the time to execute.

```groovy
timeit(5) { position -> fib(position) }
timeit(5) { position -> fib(position) }
timeit(40) { position -> fib(position) }
timeit(42) { position -> fib(position) }
```

The code to call the function is the same as before, but the times it takes are quite different.

```
Fib(5): 8, time: 0.02 sec
Fib(5): 8, time: 0.00 sec
Fib(40): 165580141, time: 0.00 sec
Fib(42): 433494437, time: 0.00 sec
```

The first call for position 5 took about the same time as the previous version. The subsequent calls for positions 5, 40, and 42 took no noticeable time. They run faster than the first invocation partly because the fibSeries map retains its stored values between calls, and partly because the JVM has warmed up and performed JIT optimizations during the first call.

If we compare the results of the two versions side-by-side, the values at respective positions are the same, but the computation time is a world apart.

Compared to the 15 times the fib function was called for the position of 5, in this version it is called only 9 times. For the position of 40, on the other hand, the number of calls went down from 331,160,281 times to a mere 79 calls, thanks to memoization.
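These counts can also be confirmed with a counter. Here is a Java sketch of the same map-based memoization (the names are mine; the structure mirrors the Groovy version above), seeded with the two base values just like the fibSeries map:

```java
import java.math.BigInteger;
import java.util.HashMap;
import java.util.Map;

public class MemoFibCount {
    static long calls = 0;

    // Cache seeded with the two base values, mirroring the
    // Groovy fibSeries map.
    static Map<Integer, BigInteger> fibSeries = new HashMap<>();
    static {
        fibSeries.put(0, BigInteger.ONE);
        fibSeries.put(1, BigInteger.ONE);
    }

    // Compute and store only on a cache miss; otherwise just look up.
    static BigInteger fib(int position) {
        calls++;
        if (!fibSeries.containsKey(position)) {
            fibSeries.put(position, fib(position - 1).add(fib(position - 2)));
        }
        return fibSeries.get(position);
    }

    public static void main(String[] args) {
        fib(5);
        System.out.println(calls);  // prints 9, down from 15
    }
}
```

With memoization, a fresh computation for any position makes only 2 * position - 1 calls: linear, as promised.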

We can use this technique in any language, so what’s special about Groovy? The answer is that in Groovy, memoization is already baked into the library as a function on closures. We don’t have to create the hashmap and look values up; that’s taken care of under the covers.

To make use of this built-in facility in Groovy, we have to transform the function fib into a closure. Let’s transform the original non-memoized version.

```groovy
def fib
fib = { position ->
  if (position < 2)
    BigInteger.ONE
  else
    fib(position - 1) + fib(position - 2)
}.memoize()
```

We first defined a variable named fib. This initial definition is needed to bring the variable into the lexical scope of the closure, to enable the recursive call. Then we created a closure whose body is pretty much the body of the original function. Finally, we called memoize on the closure, a method Groovy adds to closures.

The memoize function, under the covers, creates a wrapper that looks up values in a hashmap and does the memoization for us.
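Conceptually, such a wrapper takes only a few lines. The following Java sketch illustrates the idea only; it is not Groovy's actual implementation, and the names are mine. It also uses the same declare-then-assign trick as the Groovy `def fib` above so the lambda can call its memoized self:

```java
import java.math.BigInteger;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class Memoizer {
    // Wrap a function so each result is computed once and then
    // served from a cache on later calls.
    static <A, R> Function<A, R> memoize(Function<A, R> fn) {
        Map<A, R> cache = new HashMap<>();
        return arg -> {
            if (!cache.containsKey(arg)) {
                cache.put(arg, fn.apply(arg));
            }
            return cache.get(arg);
        };
    }

    // Declare first, then assign, so the lambda can recurse through
    // the memoized reference and reuse subproblem results.
    static Function<Integer, BigInteger> fib;
    static {
        fib = memoize(n ->
                n < 2 ? BigInteger.ONE
                      : fib.apply(n - 1).add(fib.apply(n - 2)));
    }

    public static void main(String[] args) {
        System.out.println(fib.apply(40));  // prints 165580141
    }
}
```

The key design point is that the recursion goes through the wrapped reference, not the raw function; that is what lets each subproblem hit the cache.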

Let’s invoke this modified version with some position values, and let’s be bold and increase the position size much further this time.

```groovy
timeit(5) { position -> fib(position) }
timeit(5) { position -> fib(position) }
timeit(40) { position -> fib(position) }
timeit(42) { position -> fib(position) }
timeit(500) { position -> fib(position) }
```

The larger position value is something we couldn’t even attempt with the poor performance of the overlapping recursion, but memoization has turned this into linear time complexity, so we can be confident it will be handled with reasonable speed. Let’s run the code and look at the speed and the results.

```
Fib(5): 8, time: 0.02 sec
Fib(5): 8, time: 0.00 sec
Fib(40): 165580141, time: 0.00 sec
Fib(42): 433494437, time: 0.00 sec
Fib(500): 225591516161936330872512695036072072046011324913758190588638866418474627738686883405015987052796968498626, time: 0.01 sec
```

The output confirms that the results of the implementation are consistent with the slow version, but the speed is simply remarkable. We hardly made a dent in time even for a large position.

We made use of the memoize function in Groovy. That means less effort on our part, as we did not have to create and maintain the hashmap. But that’s just the beginning of the benefits. The underlying implementation of memoize is quite efficient. Recall that memoization is a time-space tradeoff: we use more memory, to store the values, in exchange for faster computation. If the input size is very large, we risk running memory usage up beyond reasonable levels. Groovy provides variations of the memoize method, such as memoizeAtMost, memoizeAtLeast, and memoizeBetween, to better handle situations like this; with these we can cap the cache size, protect a minimum number of entries, or set both lower and upper bounds.
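The idea behind a size-capped cache like Groovy's memoizeAtMost can be sketched with a least-recently-used map. In this illustrative Java version (the method name here only mimics Groovy's; the implementation is my own sketch, not Groovy's), a LinkedHashMap in access order evicts the oldest entry once the cap is exceeded:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

public class BoundedMemoizer {
    // Memoize with an upper bound on cache size: once the cache is
    // full, the least-recently-used entry is evicted. A LinkedHashMap
    // in access order with a removeEldestEntry override acts as a
    // small LRU cache.
    static <A, R> Function<A, R> memoizeAtMost(Function<A, R> fn, int maxSize) {
        Map<A, R> cache = new LinkedHashMap<A, R>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<A, R> eldest) {
                return size() > maxSize;
            }
        };
        return arg -> {
            if (!cache.containsKey(arg)) {
                cache.put(arg, fn.apply(arg));
            }
            return cache.get(arg);
        };
    }

    public static void main(String[] args) {
        // Keep at most 2 results cached; older ones are recomputed if
        // asked for again.
        Function<Integer, Integer> square = memoizeAtMost(n -> n * n, 2);
        System.out.println(square.apply(3));  // prints 9
    }
}
```

Bounding the cache trades some recomputation for predictable memory use, which is exactly the knob these memoize variants expose.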

Groovy is quite versatile in a number of ways; we just looked at one feature in this article. The dynamic nature of Groovy, its metaprogramming capabilities, convenience functions like the one we saw, and the ability to fine-tune between dynamic and static typing all make it an interesting and powerful language for programming on the JVM. Groovy 2.0 was released recently, and the next release, 2.1, is due this month. You can find out more about the language at groovy.codehaus.org. If you’d like to catch up on the latest in the language, you might check out the recently released Programming Groovy 2nd Edition, updated to the latest version of Groovy.

Dr. Venkat Subramaniam is an award-winning author, founder of Agile Developer, Inc., and an adjunct faculty at the University of Houston.

He has trained and mentored thousands of software developers in the US, Canada, Europe, and Asia, and is a regularly-invited speaker at several international conferences. Venkat helps his clients effectively apply and succeed with agile practices on their software projects.

Venkat is the author of .NET Gotchas, the coauthor of 2007 Jolt Productivity Award winning Practices of an Agile Developer, the author of Programming Groovy 2nd Edition, Programming Scala: Tackle Multi-Core Complexity on the Java Virtual Machine, and Programming Concurrency on the JVM: Mastering synchronization, STM, and Actors.

Send the author your feedback or discuss the article in the magazine forum.