Concurrent index creation

When you build an index normally, it locks the table against writes. Reads using SELECT against the table will still work fine, but any attempt to insert, update, or delete a row in it will block until the index creation is finished. The statement will pause and wait for the index lock to be released, rather than throwing an error immediately. Since building an index on a large table can easily take hours, that's a problem. There's also some potential for a deadlock error here, if the client backend already held locks on that table before the index build began.
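A minimal sketch of that blocking behavior, assuming a hypothetical orders table (the table, column, and index names here are examples, not from the source):

```sql
-- Session 1: a standard index build locks the table against writes
-- for the entire duration of the build.
CREATE INDEX orders_customer_idx ON orders (customer_id);

-- Session 2, run while the build above is still in progress:
-- reads proceed normally...
SELECT count(*) FROM orders;

-- ...but any write pauses and waits for the index lock to be
-- released, rather than failing immediately.
UPDATE orders SET status = 'shipped' WHERE order_id = 42;
```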

Accordingly, on any production database with significant table sizes, where a user being blocked for a long time is an issue, you usually want to build indexes using CREATE INDEX CONCURRENTLY. This is not the default because the concurrent build is much less efficient than the standard one that takes a lock. It scans the table once to build the initial index, then makes a second pass to pick up rows added after the first pass started.
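The same hypothetical index built the non-blocking way might look like this (names are illustrative):

```sql
-- Build the index without taking a long-lived write lock on the table.
-- Note that CREATE INDEX CONCURRENTLY cannot run inside a transaction
-- block, since it must manage its own multiple table scans.
CREATE INDEX CONCURRENTLY orders_customer_idx ON orders (customer_id);
```

Writes to the table can proceed while this runs, at the cost of the extra table scan described above.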

There is also a risk in any concurrent build of an index that enforces a constraint, such as a UNIQUE index. If an unacceptable row is found during that second pass, such as a uniqueness violation, the concurrent index build fails, having wasted resources without accomplishing anything. The resulting index will be marked INVALID when viewed with the psql utility and will not be usable for queries. Changes to the table will still update values in it, however, adding overhead with no benefit. You can recover from this situation by dropping the index and trying to build it again, presuming you know the same issue won't pop up again (for example, all duplicate values have been erased). You can also use REINDEX to fix it, but that can't be done concurrently; it will take the usual write lock the whole time.
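One way to find and recover from such a leftover invalid index, sketched with hypothetical index names (the pg_index catalog columns are real; the rest are examples):

```sql
-- List indexes left in the INVALID state, e.g. by a failed
-- concurrent build.
SELECT indexrelid::regclass AS index_name
FROM pg_index
WHERE NOT indisvalid;

-- Option 1: drop the failed index and retry the concurrent build,
-- once you believe the offending rows are gone.
DROP INDEX orders_customer_key;
CREATE UNIQUE INDEX CONCURRENTLY orders_customer_key
    ON orders (customer_id);

-- Option 2: rebuild in place; this takes the usual write lock
-- on the table for the whole duration.
REINDEX INDEX orders_customer_key;
```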