So is this code fast enough? Using the GHC interpreter, it takes around a second or two to do the lead-gold example (both directions), and a few seconds more for the ruby-code example (also both directions). With the GHC compiler, this reduces to a fraction of a second—even without turning on the optimization options of the compiler. It’s probably fast enough.
GHC provides some good profiling tools to help diagnose and track performance issues. The following snippet is part of the standard time and space profile produced by running the ruby-to-code search 1,000 times (taking around 10 seconds, so 0.01 seconds per search). This is without any compiler optimizations.
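To generate a profile like this, an invocation along the following lines should work (the wordladder file and executable names here are hypothetical; -prof and -fprof-auto are GHC’s standard profiling flags, and +RTS -p asks the runtime to write the report to wordladder.prof):

    $ ghc -prof -fprof-auto -o wordladder wordladder.hs
    $ ./wordladder +RTS -p -RTS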
| COST CENTRE            CALLS   time  space   TIME  SPACE |
| state_tree            655000    0.8    4.3   95.9   93.7 |
| child_paths            58000    0.5    3.1   95.2   89.4 |
| one_step                   0    2.3    1.7   94.6   86.3 |
| is_valid_word        5703000   77.9    0.0   77.9    0.0 |
| generate_candidates   286000   14.4   84.6   14.4   84.6 |

(The lower-case time and space columns give the percentages attributable to the function itself; the upper-case TIME and SPACE columns give the inherited totals, including everything the function calls.)

The profiler result indicates that seventy-eight percent of the time was spent looking up words, and fourteen percent calculating the candidates. For memory, eighty-four percent of the total allocation went on the generated words. A similar pattern appears when searching in the reverse direction, albeit some twenty times slower(!), though this could be a quirk of these particular words: we’d have to see the same pattern on a bigger test set before concluding it was really a code problem.
Now that we know this, if time or memory really were an issue, we could start analyzing the offending operations, perhaps fusing them together in some way to reduce the overheads, or adding cheaper word-validity tests (like rejecting words with no vowels). Until then, we can stick with the clear code.
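As an illustration of that last idea, a cheap pre-filter could run before the expensive dictionary lookup. This is a minimal sketch: has_vowel is a hypothetical name, and is_valid_word is assumed to have the type String -> Bool.

    -- Reject candidates containing no vowels at all, before paying for
    -- the dictionary lookup. ('y' is counted as a vowel here so that
    -- words like "rhythm" survive the filter.)
    has_vowel :: String -> Bool
    has_vowel = any (`elem` "aeiouy")

    -- Hypothetical use: (&&) short-circuits, so is_valid_word only
    -- runs on candidates that pass the cheap test first.
    -- keep w = has_vowel w && is_valid_word w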
We’ve not used many complex types, but types have still been useful along the way, both to describe the data we’re manipulating and as a guide to how certain operations should behave. There’s room for a bit more type use in this Haskell-98 code, maybe wrapping up search problems as bundles, or perhaps wrapping valid words inside a new type to distinguish them from unchecked words.
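The second of those ideas might look like the following sketch, where a smart constructor is the only way to build a ValidWord (the names and the dictionary-as-Set representation are assumptions, not code from this chapter):

    import qualified Data.Set as Set

    -- A word that has passed the dictionary check. Keep the constructor
    -- unexported, so unchecked strings can't sneak in.
    newtype ValidWord = ValidWord String
      deriving (Eq, Show)

    -- The check happens exactly once, at the boundary.
    mk_valid_word :: Set.Set String -> String -> Maybe ValidWord
    mk_valid_word dict w
      | w `Set.member` dict = Just (ValidWord w)
      | otherwise           = Nothing

Downstream code that takes a ValidWord then can’t accidentally be handed an unchecked string.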
But can we do better? What about dependent types? Which details are we worried about, and where would we like the extra confidence? I’ll leave you to ponder this, but you might consider confirming that all candidate words are the same length, or placing conditions on the state generation to ensure that the states don’t inadvertently cause infinite loops or repeat work, or establishing that the state space is finite (which would confirm that a tree-based approach is adequate, rather than needing the full graph-based approach).
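As a small taste of the first of those, the sketch below tracks a word’s length in its type, using GHC’s DataKinds and GADTs extensions with type-level Peano naturals rather than full dependent types (Word' and replace_head are illustrative names):

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    -- Peano naturals, lifted to the type level by DataKinds
    data Nat = Z | S Nat

    -- A word whose length appears in its type
    data Word' (n :: Nat) where
      Nil  :: Word' 'Z
      Cons :: Char -> Word' n -> Word' ('S n)

    -- Changing the first letter visibly preserves the length: the
    -- compiler rejects any version that drops or adds a character.
    replace_head :: Char -> Word' ('S n) -> Word' ('S n)
    replace_head c (Cons _ rest) = Cons c rest

With types like these, “all candidate words are the same length” stops being a test and becomes a compile-time guarantee.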
I’ve not said much about testing either. We can certainly add tests to double-check that the code is doing what we expect, but we’ve also managed to split the various concerns apart so it’s clear how each part works, reflecting Hoare’s distinction between “obviously no deficiencies” and “no obvious deficiencies.” Testing these smaller, more self-contained pieces is generally easier, and with simpler code it’s also easier to exercise the relevant pathways.
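For instance, a property-based test over the candidate generator might look like this sketch, which uses QuickCheck and assumes generate_candidates :: String -> [String] is in scope:

    import Test.QuickCheck

    -- Every candidate should be the same length as its source word
    -- and differ from it in exactly one position.
    prop_candidates_one_letter_apart :: Property
    prop_candidates_one_letter_apart =
      forAll (vectorOf 4 (elements ['a'..'z'])) $ \w ->
        all (\c -> length c == length w
                && length (filter id (zipWith (/=) w c)) == 1)
            (generate_candidates w)

Running quickCheck prop_candidates_one_letter_apart then exercises the generator on a hundred random four-letter strings.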
At this point, I hope you have some appreciation of how we can use the language to keep the details simple and modular, and what a difference that makes to the final program. This isn’t just a happy coincidence: many functional programs do develop like this, and it really does help when working with much more complex code. In Chapter 23, Haskell’s Type System, and Chapter 24, A Haskell Project: Testing Native Code, you can dig deeper into Haskell. But in the next chapter, you’ll see a different approach to functional programming with Apple’s Swift language.