From Functional Programming in C++ by Ivan Čukić
This article discusses laziness implementation in C++, using a common example – calculating Fibonacci numbers.

Save 37% on *Functional Programming in C++.* Just enter code **fcccukic** into the discount code box at checkout at manning.com.

**Recursion tree pruning by caching the function results**

The good thing about C++ not directly supporting laziness is that we can implement it as we wish — and we’re in control of how lazy we want to be on a case-per-case basis. Let’s use the common example — calculating Fibonacci numbers. Following the definition, we could implement it like this:

unsigned int fib(unsigned int n)
{
    return n == 0 ? 0 :
           n == 1 ? 1 :
           fib(n - 1) + fib(n - 2);
}

This implementation is inefficient — we have two recursive calls in the common case, and they perform duplicate work: both fib(n) and fib(n - 1) need to calculate fib(n - 2), both fib(n - 1) and fib(n - 2) need to calculate fib(n - 3), and so on. The number of recursive calls grows exponentially with n.

**Figure 1**. With fib implemented recursively, we’ll calculate the same value more than once. The recursion tree that calculates fib(5) contains three identical sub-trees calculating fib(2), and two identical sub-trees calculating fib(3). The duplicated work grows exponentially with n.

This function is pure: it always returns the same result for a given input. This means that once we calculate fib(n), we can store the result somewhere and reuse it any time someone needs fib(n) again. If we cached the previously calculated results, we’d be able to remove all the duplicate sub-trees, and our evaluation tree would look like figure 2. (The exact order of evaluation might not match figure 2, because C++ doesn’t guarantee that fib(n - 1) is called before fib(n - 2), but all possible trees share the same property — there are no repeated sub-trees.)

**Figure 2**. With fib implemented lazily, we’ll be able to avoid evaluating the same trees over and over. If one branch calculates fib(2), other ones use the previously calculated value. Now, the number of nodes this tree has grows linearly with n.

**Listing 1**. Fibonacci implementation with caching

std::vector<unsigned int> cache { 0, 1 }; ❶

unsigned int fib(unsigned int n)
{
    if (cache.size() > n) { ❷
        return cache[n]; ❷
    } else {
        const auto result = fib(n - 1) + fib(n - 2); ❸
        cache.push_back(result); ❸
        return result;
    }
}

❶ We can use a vector as the cache because calculating fib(n) requires knowing fib(k) for all k < n

❷ If the value is already in the cache, return it

❸ We calculate the result and add it to the cache. We can use push_back to add the n-th element because we know all the previous ones are already filled in

The good thing about this approach is that we don’t need to invent new algorithms for calculating the Fibonacci numbers — we don’t even need to understand how the algorithm works. We only need to recognize that it calculates the same thing multiple times, and that the result is always the same because the fib function is pure.

The only downside is that we’re now using much more memory to store the cached values. If we wanted to optimize this, we’d need to look into how our algorithm works and when the cached values are actually used.

If we analyzed the code, we’d see that we never use any values in the cache except the last two inserted ones. This means we can replace the whole vector with a cache that stores only two values. To match the std::vector API we used, we’re going to create a class that looks like a vector but remembers only the last two inserted values.

**Listing 2**. Efficient cache for calculating the Fibonacci numbers (example:fibonacci/main.cpp)

class fib_cache {
public:
    fib_cache()
        : m_previous{0} ❶
        , m_last{1} ❶
        , m_size{2} ❷
    {
    }

    size_t size() const
    {
        return m_size;
    }

    unsigned int operator[] (unsigned int n) const
    {
        return n == m_size - 1 ? m_last : ❸
               n == m_size - 2 ? m_previous : ❸
               0; ❸
    }

    void push_back(unsigned int value)
    {
        m_size++; ❹
        m_previous = m_last; ❹
        m_last = value; ❹
    }

private:
    unsigned int m_previous;
    unsigned int m_last;
    size_t m_size;
};

❶ The start of Fibonacci series — { 0, 1 }

❷ The size of the cache (including the forgotten values)

❸ If the requested number is still in the cache, we return it; otherwise, we return zero (we could also choose to throw an exception here)

❹ Adding a new value to the cache and increasing the size

This was an excerpt from chapter 6 of *Functional Programming in C++*. In the full chapter, you can find out how lazy evaluation relates to dynamic programming, and how to handle lazy evaluation in C++ in a generic way, even for recursive functions like this one.

For more, check out the whole book on liveBook here and see this Slideshare presentation.