Estimating Unknowns

The reason we rarely know how long a particular task will take is that we’ve never done it before.

It’s possible, and in fact probable, that we’ll miss a step. We might assume there’s already code written and available to handle a particular subtask and find out that there isn’t any, or we might discover that the task we thought was unique isn’t so unique after all—there’s a library with exactly the functionality we need. More than anything, we need to think things through, and we can never do that as effectively as when we’re actually performing the task.

The tasks we perform when writing software are vastly different moment to moment, day to day, month to month, and project to project. Of course, there are similar things that we do all the time—designing, testing, coding, and so forth—and there are similar techniques we use and similar ways of approaching problems. But the problems themselves, and their solutions, are often markedly dissimilar to ones we’ve encountered before.

I know companies, particularly larger ones, that devote only a small percentage of a team’s time to actually writing code. Often this shows up under the guise of trying to improve quality.

For example, a great deal of effort goes into writing internal design documents and keeping them up to date throughout the development process. In a process like this, all design documents are valued equally, but many, though once useful, end up abandoned and clearly no longer provide the same value.

If the customer isn’t paying us to write these extensive analysis and design documents, then why are we doing it? The answer is that we believe they help us understand and express the problem so that we can better approach a good solution. But we often wind up spending most of our time on issues that don’t ultimately provide value to our customers. When we focus on measuring meaningless things, such as actual time versus planned time using estimates based on false assumptions, we start out on the wrong foot.

Developing software is risky. It’s rarely done well, and software is practically obsolete moments after it’s written. Faced with this complexity, the traditional approach to fixing problems in software development has been to create a better process. We rely on process to tell us what to do, to keep us on track, keep us honest, keep us on schedule, and so on.

This is the basic philosophy behind Waterfall development. Because changing code after the initial design phase is difficult, we prevent changes once the design is done. Since testing is time-consuming and expensive, we wait until the end of the project so we have to test only once. This approach makes sense in theory but is clearly inefficient in practice. In many ways, we’re purposely avoiding what is painful or difficult, not because avoiding it helps us build better software, but because it’s hard, takes more time (at least, more time than we initially estimated), and costs more money.