Chapter 13. Eliminate Waste

It’s difficult to change the course of a heavy cruise ship, whereas a river kayak dances through rapids with the slightest touch of the paddle. Although a cruise ship has its place, the kayak is much more agile.

Agility requires flexibility and a lean process, stripped to its essentials. Anything more is wasteful. Eliminate it! The less you have to do, the less time your work will take, the less it will cost, and the more quickly you will deliver.

You can’t just cut out practices, though. What’s really necessary? How can you tell if something helps or hinders you? What actually gets good software to the people who need it? Answering these questions helps you eliminate waste from your process and increase your agility.

The easiest way to reduce waste is to reduce the amount of work you may have to throw away. This means breaking your work down into its smallest possible units and verifying them separately.

Sometimes while debugging, I see multiple problems and their solutions at once. Shotgun debugging is tempting, but if I try several different solutions simultaneously and fix the bug, I may not know which solution actually worked. This also usually leaves a mess behind. Incremental change is a better approach. I make one well-reasoned change, observe and verify its effects, and decide whether to commit to the change or revert it. I learn more and come up with better—and cleaner—solutions.

This may sound like taking baby steps, and it is. Though I can work for 10 or 15 minutes on a feature and get it mostly right, the quality of my code improves immensely when I focus on a very small part and spend time perfecting that one tiny piece before continuing. These short, quick steps build on each other; I rarely have to revert any changes.

If the step doesn’t work, I’ve spent only a minute or two learning something, and I can backtrack a few moments’ worth of work to position myself to make further progress. These frequent course corrections help me get where I really want to go. Baby steps reduce the scope of possible errors to only the most recent changes, which are small and fresh in my mind.

Last summer, I introduced a friend to pair programming. She wanted to automate a family history project, and we agreed to write a parser for some sample genealogical data. The file format was complex, with some interesting rules and fields neither of us understood, but she knew which data we needed to process and which data we could safely ignore.

We started by writing a simple skeleton driven by tests. Could we load a file by name? Would we get reasonable errors for exceptional conditions?
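A sketch of what that skeleton might have looked like in Python, assuming pytest and a hypothetical `GenealogyParser` class with a `load` class method; the module, names, and file format here are illustrative, not the code we actually wrote:

```python
# test_parser.py -- a sketch of those first skeleton tests, using pytest.
# GenealogyParser, the genealogy module, and the sample file are hypothetical.
import pytest

from genealogy import GenealogyParser


def test_loads_file_by_name():
    # The happy path: loading an existing file returns a parser we can use.
    parser = GenealogyParser.load("sample.ged")
    assert parser is not None


def test_missing_file_gives_reasonable_error():
    # Exceptional conditions should fail loudly with a specific error.
    with pytest.raises(FileNotFoundError):
        GenealogyParser.load("no_such_file.ged")
```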

Then the fun began. I copied the first few records out of the sample file for test data and wrote a single test: could our parser identify the first record type? Then I pushed the keyboard to her and said, “Make it pass.”

“What good is being able to read one record, and just the type?” she wondered, but she added two lines of code and the test passed. I asked her to write the next test. She wrote one line to check if we could identify the person’s name from that record and pushed the keyboard back my way.
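Rendered in Python against a hypothetical line-oriented format (one record per line, with the record’s type tag as the first field), her two lines and the tests on either side of them might have looked roughly like this:

```python
def test_identifies_first_record_type():
    # My first test: can the parser tell us the type of the first record?
    parser = GenealogyParser.load("sample.ged")
    assert parser.records[0].type == "INDI"     # hypothetical type tag


class Record:
    # Her two-line answer: the simplest thing that makes the test pass.
    def __init__(self, line):
        self.type = "INDI"                      # hardcoded -- one record type so far


def test_reads_name_from_first_record():
    # Her next test, one line of checking: can we get the person's name?
    parser = GenealogyParser.load("sample.ged")
    assert parser.records[0].name == "Mary Example"   # made-up sample data
```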

I wrote three lines of code. The test passed. Then I wrote a test to identify the next record type. Of course it failed. As I passed back the keyboard, we discussed ways to make it pass. I suggested hardcoding the second type in the parsing method. She looked doubtful but did it anyway, and the tests all passed.
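Hardcoding the second type sounds like cheating, and that was part of the point: it was the smallest change that could make the new, failing test pass. In the same hypothetical format, something like:

```python
def test_identifies_second_record_type():
    # The new test fails first: the parser only knows about one type so far.
    parser = GenealogyParser.load("sample.ged")
    assert parser.records[1].type == "FAM"      # hypothetical second type tag


def record_type(line):
    # Deliberately naive: special-case the second type rather than generalize yet.
    if line.startswith("FAM"):
        return "FAM"
    return "INDI"
```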

“It’s time to refactor,” I said, and we generalized the method by reducing its code. With her next test, I had to parse another piece of data from both record types. This took one line.
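The refactoring is the satisfying part: once both tests were green, the general rule turned out to be less code than the special cases. Continuing the sketch:

```python
def record_type(line):
    # Generalized: the first field on the line *is* the record type,
    # so the special cases disappear and both tests still pass.
    return line.split()[0]
```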

We continued that way for two hours, adding more and more of the sample file to our test data as we passed more tests. Each time we encountered a new feature of the file format, we nibbled away at it with tiny tests. By the end of that time, we hadn’t finished, but we had a small amount of code and a comprehensive test suite that would serve her well for further development.