Unfortunately, no programmer is perfect. Good programmers will introduce roughly one defect into a program for every 100 lines of code they write. The best programmers, under the best possible circumstances, will introduce one defect per 1,000 lines of code they write.
In other words, no matter how good or bad you are as a programmer, it’s certain that the more you code, the more defects you will introduce. This allows us to state a law called the Law of Defect Probability:
The chance of introducing a defect into your program is proportional to the size of the changes you make to it.
This is important because defects violate our purpose of helping people, and therefore should be avoided. Also, fixing defects is a form of maintenance. Thus, increasing the number of defects increases our effort of maintenance.
With this law, without having to predict the future, we can immediately see that making small changes is likely to lead to lower maintenance effort than making large changes would. Small changes = fewer defects = less maintenance.
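The proportionality in this law can be illustrated with a back-of-envelope calculation, using the defect rate from the text (one defect per 100 lines for a good programmer); the function name here is purely illustrative:

```python
# Illustrative sketch: expected defects scale with the size of the change.
# The rate (1 defect per 100 lines) comes from the text above.
def expected_defects(lines_changed, defects_per_line=1 / 100):
    """Rough expected number of new defects introduced by a change."""
    return lines_changed * defects_per_line

print(expected_defects(10))    # a small change: about 0.1 expected defects
print(expected_defects(1000))  # a large change: about 10 expected defects
```

The same rate applied to changes of different sizes makes the point concrete: a hundredfold larger change carries roughly a hundredfold more expected defects.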
This law is also sometimes stated more informally as “You can’t introduce new bugs if you don’t add or modify code.”
The funny thing about this law is that it seems to be in conflict with the Law of Change—your software has to change, but changing it will introduce defects. That is a real conflict, and it’s balancing these laws that requires your intelligence as a software designer. It is actually that conflict that explains why we need design, and in fact tells us what the ideal design is:
The best design is the one that allows for the most change in the environment with the least change in the software.
And that, pretty simply, sums up much of what is known about good software design today.
Okay, so you can’t introduce bugs into your program if you don’t add or modify code, and that’s a major law of software design. However, there’s also a very important related rule that many software engineers have heard in one form or another, but sometimes forget:
Never “fix” anything unless it’s a problem, and you have evidence showing that the problem really exists.
It’s important to have evidence of problems before you address them. Otherwise, you might be developing features that don’t solve anybody’s problem, or you might be “fixing” things that aren’t broken.
If you fix problems without evidence, you’re probably going to break things. You’re introducing change into your system, which is going to bring new defects along with it. And not just that, but you’re wasting your time and adding complexity to your program for no reason.
So what counts as “evidence”? Suppose five users report that when they push the red button, your program crashes. Okay, that’s evidence enough! Alternatively, you may push the red button yourself and notice that the program crashes.
However, just because a user reports something doesn’t mean it’s a problem. Sometimes the user will simply not have realized that your program had some feature already, and so asked you to implement something else unnecessarily. For example, say you write a program that sorts a list of words alphabetically, and a user asks you to add a feature that sorts a list of letters alphabetically. Your program already does that. Actually, it already does more than that, which is often the case with this sort of confused request. Here the user may think there is a problem when there isn’t. He may even present “evidence” that he can’t sort a list of letters, when in fact the problem is just that he didn’t realize he should use the word-sorting feature.
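The word-sorting example can be made concrete with a small sketch (the function name is hypothetical): a list of letters is just a list of one-character words, so the existing feature already satisfies the request.

```python
def sort_words(words):
    """Sort a list of words alphabetically (the program's existing feature)."""
    return sorted(words)

# The user's "missing" feature: sorting letters. A letter is just a
# one-character word, so the existing feature already handles it.
print(sort_words(["banana", "apple", "cherry"]))  # → ['apple', 'banana', 'cherry']
print(sort_words(["b", "a", "c"]))                # → ['a', 'b', 'c']
```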
If you get a lot of requests like the above, it means that users can’t easily find the features they need in your program. That’s something you should fix.
Sometimes a user will report that there’s a bug, when actually it’s the program behaving exactly as you intended it to. In this case, it’s a matter of majority rules. If a significant number of users think that the behavior is a bug, it’s a bug. If only a tiny minority (like one or two) think it’s a bug, it’s not a bug.
The most famous error in this area is what we call “premature optimization.” That is, some developers seem to like to make things go fast, but they spend time optimizing their code before they know that it’s slow! This is like a charity sending food to rich people and saying, “We just wanted to help people!” Illogical, isn’t it? They’re solving a problem that doesn’t exist.
The only parts of your program where you should be concerned about speed are the exact parts that you can show are causing a real performance problem for your users. For the rest of the code, the primary concerns are flexibility and simplicity, not making it go fast.
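One way to gather that evidence, rather than guessing which code is slow, is to profile. A minimal sketch using Python's standard-library profiler (the workload function is hypothetical):

```python
import cProfile
import io
import pstats

def build_report(n):
    """A hypothetical workload we suspect, but have not shown, is slow."""
    return ",".join(str(i * i) for i in range(n))

# Measure first: run the code under the profiler to see where time
# actually goes, instead of optimizing on a hunch.
profiler = cProfile.Profile()
profiler.enable()
build_report(10_000)
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())  # the top calls by time: real evidence
```

Only the functions that this kind of measurement shows to be genuine bottlenecks, for real users, are candidates for optimization.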
There are infinite ways of violating this rule, but the way to follow it is simple: just get real evidence that a problem is valid before you address it.
This is probably the best-known rule in software design. You probably already know it. But it is valid, and so it is included here:
In any particular system, any piece of information should, ideally, exist only once.
Let’s say you have a field called “Password” that appears on 100 screens of your program’s user interface. What if you want to change the name of the field to “Passcode”? Well, if you have stored the name of the field in one central location in your code, fixing it will require a one-line code change. But if you wrote the word “Password” manually into all 100 screens of the user interface, you’ll need to make 100 changes to fix it.
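One common way to achieve this is to store the label in a single named constant that every screen references; a minimal sketch, with hypothetical screen functions standing in for the 100 screens:

```python
# The field label exists in exactly one place. Renaming "Password" to
# "Passcode" is a one-line change here, and every screen picks it up.
PASSWORD_LABEL = "Password"

def render_login_screen():
    return f"{PASSWORD_LABEL}: ________"

def render_settings_screen():
    return f"Change {PASSWORD_LABEL}: ________"

print(render_login_screen())
print(render_settings_screen())
```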
This also applies to blocks of code. You should not be copying and pasting blocks of code. Instead, you should be using the various pieces of programming technology that allow one piece of code to “use,” “call,” or “include” another piece of existing code.
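As a sketch of what “using” existing code looks like in practice (the function names are illustrative): instead of pasting the same logic into two places, define it once and call it from both.

```python
# Defined once...
def normalize_username(raw):
    """Shared logic: trim whitespace and lowercase a username."""
    return raw.strip().lower()

# ...and called from everywhere that needs it, instead of copy-pasting
# the strip/lower logic into each handler.
def handle_signup(raw_username):
    return normalize_username(raw_username)

def handle_login(raw_username):
    return normalize_username(raw_username)

print(handle_signup("  Alice "))  # → alice
print(handle_login("BOB"))        # → bob
```

If the normalization rule ever changes, the fix happens in one function instead of in every handler that copied it.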
One of the good reasons to follow this rule is the Law of Defect Probability. If we can reuse old code, we don’t have to write or change as much code when we add new features, so we introduce fewer defects.
It also helps us with flexibility in our designs. If we need to change how our program works, we can change some code in just one place, instead of having to go through the whole program and make multiple changes.
A lot of good design is based on this rule. That is, the more clever you can get with making code “use” other code and centralizing information, the better your design is. This is another area where your intelligence really comes to play in programming.