Normalization

The process of separating your data into tables and creating primary keys is called normalization. Its main goal is to make sure each piece of information appears in the database only once. Duplicating data is very inefficient, because it makes databases larger than they need to be and therefore slows down access. More importantly, the presence of duplicates creates a strong risk that you’ll update only one row of the duplicated data, creating inconsistencies and potentially causing serious errors.

If you list the titles of books in the authors table as well as the books table, for example, and you have to correct a typographic error in a title, you’ll have to search through both tables and make sure you make the same change every place the title is listed. It’s better to keep the title in one place and use the ISBN in other places.

In the process of splitting a database into multiple tables, it is also important not to go too far and create more tables than is necessary, which can also lead to inefficient design and slower access.

Luckily, E. F. Codd, the inventor of the relational model, analyzed the concept of normalization and split it into three successive stages known as First, Second, and Third Normal Form. Bringing a database into each of these forms in order helps ensure that it is optimally balanced for fast access and for minimal memory and disk space usage.

To see how the normalization process works, let’s start with the rather monstrous database in Table 9-1, which shows a single table containing all of the author names, book titles, and (fictional) customer details. You could consider it a first attempt at a table intended to keep track of which customers have ordered which books. Obviously this is an inefficient design, because data is duplicated all over the place (duplications are highlighted), but it represents a starting point.

In the following three sections, we will examine this database design and you’ll see how we can improve it by removing the various duplicate entries and splitting the single table into multiple tables, each containing one type of data.

Table 9-1. A highly inefficient design for a database table

| Author 1 | Author 2 | Title | ISBN | Price (USD) | Customer name | Customer address | Purch. date |
|---|---|---|---|---|---|---|---|
| David Sklar | Adam Trachtenberg | PHP Cookbook | 0596101015 | 44.99 | Emma Brown | 1565 Rainbow Road, Los Angeles, CA 90014 | Mar 03 2009 |
| Danny Goodman | | Dynamic HTML | 0596527403 | 59.99 | Darren Ryder | 4758 Emily Drive, Richmond, VA 23219 | Dec 19 2008 |
| Hugh E Williams | David Lane | PHP and MySQL | 0596005436 | 44.95 | Earl B. Thurston | 862 Gregory Lane, Frankfort, KY 40601 | Jun 22 2009 |
| David Sklar | Adam Trachtenberg | PHP Cookbook | 0596101015 | 44.99 | Darren Ryder | 4758 Emily Drive, Richmond, VA 23219 | Dec 19 2008 |
| Rasmus Lerdorf | Kevin Tatroe & Peter MacIntyre | Programming PHP | 0596006815 | 39.99 | David Miller | 3647 Cedar Lane, Waltham, MA 02154 | Jan 16 2009 |

First Normal Form

For a database to satisfy the First Normal Form, it must fulfill three requirements:

  1. There should be no repeating columns containing the same kind of data.

  2. All columns should contain a single value.

  3. There should be a primary key to uniquely identify each row.

Looking at these requirements in order, you should notice straight away that the Author 1 and Author 2 columns constitute repeating columns of the same type of data. So we already have a target to pull into a separate table, because the repeated Author columns violate Rule 1.

Second, the final book, Programming PHP, has three authors. In this table, that has been handled by making Kevin Tatroe and Peter MacIntyre share the Author 2 column, which violates Rule 2 (all columns should contain a single value) and is yet another reason to move the author details into a separate table.

However, Rule 3 is satisfied, because the primary key of ISBN has already been created.

Table 9-2 shows the result of removing the Author columns from Table 9-1. Already it looks a lot less cluttered, although there remain duplications that are highlighted.

Table 9-2. The result of removing the Author columns from Table 9-1

| Title | ISBN | Price (USD) | Customer name | Customer address | Purchase date |
|---|---|---|---|---|---|
| PHP Cookbook | 0596101015 | 44.99 | Emma Brown | 1565 Rainbow Road, Los Angeles, CA 90014 | Mar 03 2009 |
| Dynamic HTML | 0596527403 | 59.99 | Darren Ryder | 4758 Emily Drive, Richmond, VA 23219 | Dec 19 2008 |
| PHP and MySQL | 0596005436 | 44.95 | Earl B. Thurston | 862 Gregory Lane, Frankfort, KY 40601 | Jun 22 2009 |
| PHP Cookbook | 0596101015 | 44.99 | Darren Ryder | 4758 Emily Drive, Richmond, VA 23219 | Dec 19 2008 |
| Programming PHP | 0596006815 | 39.99 | David Miller | 3647 Cedar Lane, Waltham, MA 02154 | Jan 16 2009 |

The new Authors table, shown in Table 9-3, is small and simple. It just lists the ISBN of a title along with an author. If a title has more than one author, additional authors get their own rows. At first you may feel ill at ease with this table, because you can’t tell at a glance which author wrote which book. But don’t worry: MySQL can quickly tell you. All you have to do is tell it which book you want information for, and MySQL will use its ISBN to search the Authors table in a matter of milliseconds.

As I mentioned earlier, the ISBN will be the primary key for the Books table, when we get around to creating that table. I mention that here in order to emphasize that the ISBN is not, however, the primary key for the Authors table. In the real world, the Authors table would deserve a primary key, too, so that each author would have a key to uniquely identify him or her.

In the Authors table, the ISBN numbers will appear in a column that (for the purposes of speeding up searches) we’ll probably make a key, but not the primary key. In fact, it cannot be the primary key in this table, because it’s not unique: the same ISBN appears multiple times whenever two or more authors have collaborated on a book.

Because we’ll use it to link authors to books in another table, this column is called a foreign key.
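The Authors/Books split and its foreign key can be sketched as follows. This is a minimal illustration using SQLite (via Python's standard sqlite3 module) rather than MySQL so it runs self-contained; the table and column names follow the text, but the index name is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Books (ISBN TEXT PRIMARY KEY, Title TEXT)")
conn.execute("""CREATE TABLE Authors (
                    ISBN   TEXT,   -- foreign key into Books; NOT unique here
                    Author TEXT)""")
# A plain (non-primary) key on ISBN to speed up author lookups.
conn.execute("CREATE INDEX AuthorsISBN ON Authors (ISBN)")

conn.execute("INSERT INTO Books VALUES ('0596101015', 'PHP Cookbook')")
# Two authors for one book: the same ISBN appears in two rows, which is
# exactly why ISBN cannot be the Authors table's primary key.
conn.executemany("INSERT INTO Authors VALUES (?, ?)",
                 [("0596101015", "David Sklar"),
                  ("0596101015", "Adam Trachtenberg")])

authors = {row[0] for row in conn.execute(
    "SELECT Author FROM Authors WHERE ISBN = '0596101015'")}
```

Handing MySQL an ISBN and letting it search the indexed Authors column is the millisecond lookup described above.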

Note

Keys (also called indexes) have several purposes in MySQL. The fundamental reason for defining a key is to make searches faster. You’ve seen examples in Chapter 8 in which keys are used in WHERE clauses for searching. But a key can also be useful to uniquely identify an item. Thus, a unique key is often used as a primary key in one table, and as a foreign key to link rows in that table to rows in another table.

Second Normal Form

The First Normal Form deals with duplicate data (or redundancy) across multiple columns. The Second Normal Form is all about redundancy across multiple rows. To achieve Second Normal Form, your tables must already be in First Normal Form. Once that is the case, you achieve Second Normal Form by identifying columns whose data repeats across rows and moving them into their own tables.

Let’s look again at Table 9-2. Notice that Darren Ryder bought two books, and therefore his details are duplicated. This tells us that the customer columns (Customer name and Customer address) need to be pulled into their own table. Table 9-4 shows the result of removing the two Customer columns from Table 9-2.

As you can see, all that’s left in Table 9-4 are the ISBN, Title, and Price columns for four unique books—this now constitutes an efficient and self-contained table that satisfies the requirements of both the First and Second Normal Forms. Along the way, we’ve managed to reduce the information in this table to data closely related to book titles. The table could also include years of publication, page counts, numbers of reprints, and so on, as these details are also closely related. The only rule is that we can’t put in any column that could have multiple values for a single book, because then we’d have to list the same book in multiple rows, thus violating Second Normal Form. Restoring an Author column, for instance, would violate this normalization.
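To see Second Normal Form enforced in practice, here is a sketch of the Titles table just described, again using SQLite via Python's sqlite3 as a stand-in for MySQL. The ISBN primary key guarantees one row per book:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One row per book, keyed by ISBN, as in Table 9-4.
conn.execute(
    "CREATE TABLE Titles (ISBN TEXT PRIMARY KEY, Title TEXT, Price REAL)")
conn.executemany("INSERT INTO Titles VALUES (?, ?, ?)", [
    ("0596101015", "PHP Cookbook", 44.99),
    ("0596527403", "Dynamic HTML", 59.99),
    ("0596005436", "PHP and MySQL", 44.95),
    ("0596006815", "Programming PHP", 39.99)])

# The primary key rejects any attempt to list the same book twice.
try:
    conn.execute(
        "INSERT INTO Titles VALUES ('0596101015', 'PHP Cookbook', 44.99)")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
```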

However, looking at the extracted Customer columns, now in Table 9-5, we can see that there’s still more normalization work to do, because Darren Ryder’s details are still duplicated. It could also be argued that First Normal Form Rule 2 (all columns should contain a single value) has not been properly complied with, because the addresses really need to be broken into separate columns for Address, City, State, and Zip code.

Table 9-5. The customer details extracted from Table 9-2

| ISBN | Customer name | Customer address | Purchase date |
|---|---|---|---|
| 0596101015 | Emma Brown | 1565 Rainbow Road, Los Angeles, CA 90014 | Mar 03 2009 |
| 0596527403 | Darren Ryder | 4758 Emily Drive, Richmond, VA 23219 | Dec 19 2008 |
| 0596005436 | Earl B. Thurston | 862 Gregory Lane, Frankfort, KY 40601 | Jun 22 2009 |
| 0596101015 | Darren Ryder | 4758 Emily Drive, Richmond, VA 23219 | Dec 19 2008 |
| 0596006815 | David Miller | 3647 Cedar Lane, Waltham, MA 02154 | Jan 16 2009 |

What we have to do is split this table further to ensure that each customer’s details are entered only once. Because the ISBN is not and cannot be used as a primary key to identify customers (or authors), a new key must be created.

Table 9-6 shows the result of normalizing the Customers table into both First and Second Normal Forms. Each customer now has a unique customer number called CustNo that is the table’s primary key, and that will most likely have been created using AUTO_INCREMENT. All the parts of the customers’ addresses have also been separated into distinct columns to make them easily searchable and updateable.

At the same time, in order to normalize Table 9-6, it was necessary to remove the information on customer purchases, because otherwise there would have been multiple instances of customer details for each book purchased. Instead, the purchase data is now placed in a new table called Purchases (see Table 9-7).

Here, the CustNo column from Table 9-6 is reused as a key to tie the Customers and Purchases tables together. Because the ISBN column is also repeated here, this table can be linked to either the Authors or the Titles table, too.

The CustNo column can be a useful key in the Purchases table, but it’s not a primary key: a single customer can buy multiple books (and even multiple copies of one book). In fact, the Purchases table has no primary key. That’s all right, because we don’t expect to need to keep track of unique purchases. If one customer buys two copies of the same book on the same day, we’ll just allow two rows with the same information. For easy searching, we can define both CustNo and ISBN as keys—just not as primary keys.
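The Purchases table just described can be sketched like this (SQLite via Python's sqlite3 for a self-contained example; Darren Ryder's CustNo of 2 comes from the text, while the other customer numbers are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# No primary key at all: a customer can buy several books,
# or even two copies of the same book on the same day.
conn.execute("CREATE TABLE Purchases (CustNo INTEGER, ISBN TEXT, Date TEXT)")
# Plain keys on both columns for easy searching.
conn.execute("CREATE INDEX PurchCust ON Purchases (CustNo)")
conn.execute("CREATE INDEX PurchISBN ON Purchases (ISBN)")

conn.executemany("INSERT INTO Purchases VALUES (?, ?, ?)", [
    (1, "0596101015", "Mar 03 2009"),
    (2, "0596527403", "Dec 19 2008"),
    (2, "0596101015", "Dec 19 2008"),
    (3, "0596005436", "Jun 22 2009"),
    (4, "0596006815", "Jan 16 2009")])

# Customer 2 (Darren Ryder in the text) appears in two rows.
ryder_rows = conn.execute(
    "SELECT COUNT(*) FROM Purchases WHERE CustNo = 2").fetchone()[0]
```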

Note

There are now four tables, one more than the three we had initially assumed would be needed. We arrived at this design through the normalization process: methodically following the First and Second Normal Form rules made it plain that a fourth table, called Purchases, would also be required.

The tables we now have are: Authors (Table 9-3), Titles (Table 9-4), Customers (Table 9-6), and Purchases (Table 9-7). Each table can be linked to any other using either the CustNo or the ISBN keys.

For example, to see which books Darren Ryder has purchased, you can look him up in Table 9-6, the Customers table, where you will see that his CustNo is 2. Armed with this number, you can now go to Table 9-7, the Purchases table; looking at the ISBN column here, you will see that he purchased titles 0596527403 and 0596101015 on December 19, 2008. This looks like a lot of trouble for a human, but it’s not so hard for MySQL.

To determine what these titles were, you can then refer to Table 9-4, the Titles table, and see that the books he bought were Dynamic HTML and PHP Cookbook. Should you wish to know the authors of these books, you could also use the ISBN numbers you just looked up in Table 9-3, the Authors table, and you would see that ISBN 0596527403, Dynamic HTML, was written by Danny Goodman, and that ISBN 0596101015, PHP Cookbook, was written by David Sklar and Adam Trachtenberg.
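The manual lookup just walked through collapses into a single query once the tables are linked by their keys. Here is a sketch using SQLite via Python's sqlite3 (MySQL accepts essentially the same JOIN syntax); only the rows needed for this lookup are loaded:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Customers (CustNo INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE Purchases (CustNo INTEGER, ISBN TEXT, Date TEXT);
    CREATE TABLE Titles    (ISBN TEXT PRIMARY KEY, Title TEXT);
    INSERT INTO Customers VALUES (2, 'Darren Ryder');
    INSERT INTO Purchases VALUES (2, '0596527403', 'Dec 19 2008');
    INSERT INTO Purchases VALUES (2, '0596101015', 'Dec 19 2008');
    INSERT INTO Titles    VALUES ('0596527403', 'Dynamic HTML');
    INSERT INTO Titles    VALUES ('0596101015', 'PHP Cookbook');
""")

# Follow the keys from customer to purchases to titles in one query.
titles = [row[0] for row in conn.execute("""
    SELECT t.Title
    FROM Customers c
    JOIN Purchases p ON p.CustNo = c.CustNo
    JOIN Titles    t ON t.ISBN   = p.ISBN
    WHERE c.Name = 'Darren Ryder'
    ORDER BY t.Title""")]
```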

Third Normal Form

Once you have a database that complies with both the First and Second Normal Forms, it is in pretty good shape and you might not have to modify it any further. However, if you wish to be very strict with your database, you can ensure that it adheres to the Third Normal Form. This requires that any data not directly dependent on the table’s primary key, but dependent on another value in the table, be moved into separate tables according to that dependence.

For example, in Table 9-6, the Customers table, it could be argued that the State, City, and Zip code columns are not directly related to each customer, because many other people will have the same details in their addresses, too. However, they are directly related to one another, in that the street Address relies on the City, and the City relies on the State.

Therefore, to satisfy Third Normal Form for Table 9-6, you would need to split it into Table 9-8, Table 9-9, Table 9-10, and Table 9-11.

So, how would you use this set of four tables instead of the single Table 9-6? Well, you would look up the Zip code in Table 9-8, then find the matching CityID in Table 9-9. Given this information, you could then look up the city Name in Table 9-10 and then also find the StateID, which you could use in Table 9-11 to look up the state’s Name.
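In practice, that chain of lookups is again a single JOIN. Here is a sketch modeled on Tables 9-8 through 9-11, using SQLite via Python's sqlite3; the table names, ID values, and the single sample row are invented for the illustration (the zip code comes from Darren Ryder's address in the text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Zips   (Zip TEXT PRIMARY KEY, CityID INTEGER);
    CREATE TABLE Cities (CityID INTEGER PRIMARY KEY, Name TEXT,
                         StateID INTEGER);
    CREATE TABLE States (StateID INTEGER PRIMARY KEY, Name TEXT, Abbrev TEXT);
    INSERT INTO Zips   VALUES ('23219', 1);
    INSERT INTO Cities VALUES (1, 'Richmond', 1);
    INSERT INTO States VALUES (1, 'Virginia', 'VA');
""")

# Zip -> City -> State resolved in a single query.
city, state = conn.execute("""
    SELECT c.Name, s.Name
    FROM Zips z
    JOIN Cities c ON c.CityID  = z.CityID
    JOIN States s ON s.StateID = c.StateID
    WHERE z.Zip = '23219'""").fetchone()
```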

Although using the Third Normal Form in this way may seem like overkill, it can have advantages. For example, take a look at Table 9-11, where it has been possible to include both a state’s name and its two-letter abbreviation. Such a table could also contain population details and other demographics, if you desired.

Note

Table 9-10 could also contain even more localized demographics that could be useful to you and/or your customers. By splitting up these pieces of data, you can make it easier to maintain your database in the future, should it be necessary to add additional columns.

Deciding whether to use the Third Normal Form can be tricky. Your evaluation should rest on what additional data you may need to add at a later date. If you are absolutely certain that a customer’s name and address are all you will ever require, you will probably want to leave out this final normalization stage.

On the other hand, suppose you are writing a database for a large organization such as the U.S. Postal Service. What would you do if a city were to be renamed? With a table such as Table 9-6, you would need to perform a global search and replace on every instance of that city’s name. But if you had your database set up according to the Third Normal Form, you would have to change only a single entry in Table 9-10 for the change to be reflected throughout the entire database.
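The renaming scenario can be sketched as follows, again with SQLite via Python's sqlite3; the city names, IDs, and the customers' city assignments are invented for the illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Cities    (CityID INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE Customers (CustNo INTEGER PRIMARY KEY, Name TEXT,
                            CityID INTEGER);
    INSERT INTO Cities    VALUES (1, 'Oldtown');
    INSERT INTO Customers VALUES (1, 'Emma Brown', 1);
    INSERT INTO Customers VALUES (2, 'Darren Ryder', 1);
""")

# Renaming the city touches exactly one row in one table...
conn.execute("UPDATE Cities SET Name = 'Newtown' WHERE CityID = 1")

# ...yet every customer immediately resolves to the new name.
names = [row[0] for row in conn.execute("""
    SELECT ci.Name
    FROM Customers cu
    JOIN Cities ci ON ci.CityID = cu.CityID""")]
```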

Therefore, I suggest that you ask yourself two questions to help you decide whether to perform a Third Normal Form normalization on any table:

  1. Is it likely that many new columns will need to be added to this table?

  2. Could any of this table’s fields require a global update at any point?

If either of the answers is yes, you should probably consider performing this final stage of normalization.

When Not to Use Normalization

Now that you know all about normalization, I’m going to tell you why you should throw these rules out of the window on high-traffic sites. Now, I’m not saying you’ve wasted your time reading the last several pages (you most definitely haven’t), but you should never fully normalize your tables on sites that will cause MySQL to thrash.

You see, normalization requires spreading data across multiple tables, and this means making multiple calls to MySQL for each query. On a very popular site, if you have normalized tables, your database access will slow down considerably once you get above a few dozen concurrent users, because they will be creating hundreds of database accesses between them. In fact, I would go so far as to say that you should denormalize any commonly looked-up data as much as you can.

The reason is that if you have data duplicated across your tables, you can substantially reduce the number of additional requests that need to be made, because most of the data you want is available in each table. This means that you can simply add an extra column to a query and that field will be available for all matching results, although (of course) you will have to deal with the previously mentioned downsides, such as using up large amounts of disk space and needing to ensure that you update every single duplicate copy of your data when it needs modifying.

Multiple updates can be automated, though. MySQL provides a feature called triggers, which make automatic changes to the database in response to changes you make. (Triggers are, however, beyond the scope of this book.) Another way to propagate redundant data is to set up a PHP program to run regularly and keep all copies in sync. The program reads changes from a “master” table and updates all the others. (You’ll see how to access MySQL from PHP in the next chapter.)
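The “master table” sync idea can be reduced to a single statement per denormalized table. Here is a sketch using SQLite via Python's sqlite3 instead of the PHP program the text mentions; the stale title is contrived to show the repair:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Titles is the master; Purchases carries a denormalized copy of the title.
conn.executescript("""
    CREATE TABLE Titles    (ISBN TEXT PRIMARY KEY, Title TEXT);
    CREATE TABLE Purchases (CustNo INTEGER, ISBN TEXT, Title TEXT);
    INSERT INTO Titles    VALUES ('0596527403', 'Dynamic HTML');
    INSERT INTO Purchases VALUES (2, '0596527403', 'Dynamic HTM');
""")

# One sync pass rewrites every duplicate copy from the master table.
conn.execute("""
    UPDATE Purchases
    SET Title = (SELECT Title FROM Titles
                 WHERE Titles.ISBN = Purchases.ISBN)""")

synced_title = conn.execute("SELECT Title FROM Purchases").fetchone()[0]
```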

However, until you are very experienced with MySQL, I recommend you fully normalize all your tables, as this will instill the habit and stand you in good stead. Only when you actually start to see MySQL logjams should you consider looking at denormalization.