Chapter 45. Introduction to System V IPC

System V IPC is the label used to refer to three different mechanisms for interprocess communication:

  • message queues, which can be used to pass messages between processes;

  • semaphores, which permit multiple processes to synchronize their actions; and

  • shared memory, which enables multiple processes to share a common region of memory.

Although these three IPC mechanisms are quite diverse in function, there are good reasons for discussing them together. One reason is that they were developed together, first appearing in the late 1970s in Columbus UNIX. This was a Bell-internal UNIX implementation used for database and transaction-processing systems for telephone company record keeping and administration. Around 1983, these IPC mechanisms made their way into mainstream UNIX by appearing in System V—hence the appellation System V IPC.

A more significant reason for discussing the System V IPC mechanisms together is that their programming interfaces share a number of common characteristics, so that many of the same concepts apply to all of these mechanisms.

Note

Because System V IPC is required by SUSv3 for XSI conformance, it is sometimes alternatively labeled XSI IPC.

This chapter provides an overview of the System V IPC mechanisms and details those features that are common to all three mechanisms. The three mechanisms are then discussed individually in the following chapters.

Note

System V IPC is a kernel option that is configured via the CONFIG_SYSVIPC option.

Table 45-1 summarizes the header files and system calls used for working with System V IPC objects.

Some implementations require the inclusion of <sys/types.h> before including the header files shown in Table 45-1. Some older UNIX implementations may also require the inclusion of <sys/ipc.h>. (No version of the Single UNIX Specification has required the inclusion of these header files.)

Note

On most hardware architectures on which Linux is implemented, a single system call (ipc(2)) acts as the entry point to the kernel for all System V IPC operations, and all of the calls listed in Table 45-1 are actually implemented as library functions layered on top of this system call. (Two exceptions to this arrangement are Alpha and IA-64, where the functions listed in the table really are implemented as individual system calls.) This somewhat unusual approach is an artifact of the initial implementation of System V IPC as a loadable kernel module. Although they are actually library functions on most Linux architectures, throughout this chapter, we’ll refer to the functions in Table 45-1 as system calls. Only implementers of C libraries need to use ipc(2); any other use in applications is not portable.

Each System V IPC mechanism has an associated get system call (msgget(), semget(), or shmget()), which is analogous to the open() system call used for files. Given an integer key (analogous to a filename), the get call either:

  • creates a new IPC object with the given key and returns a unique identifier that is used to refer to the object in later system calls; or

  • returns the identifier of an existing IPC object with the given key.

We’ll (loosely) term the second use opening an existing IPC object. In this case, all that the get call is doing is converting one number (the key) into another number (the identifier).

An IPC identifier is analogous to a file descriptor in that it is used in all subsequent system calls to refer to the IPC object. There is, however, an important semantic difference. Whereas a file descriptor is a process attribute, an IPC identifier is a property of the object itself and is visible system-wide. All processes accessing the same object use the same identifier. This means that if we know an IPC object already exists, we can skip the get call, provided we have some other means of knowing the identifier of the object. For example, the process that created the object might write the identifier to a file that can then be read by other processes.
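
The following sketch illustrates that idea. The pathname /tmp/myapp_msqid is purely hypothetical, and errExit() is the error-handling helper used in the other code snippets in this chapter; <stdio.h> and <stdlib.h> are assumed to have been included.

/* In the creating process: save the identifier returned by msgget() */

FILE *fp = fopen("/tmp/myapp_msqid", "w");      /* Hypothetical pathname */
if (fp == NULL)
    errExit("fopen");
fprintf(fp, "%d\n", id);
fclose(fp);

/* In some other process: read the identifier back and use it directly,
   without making a get call */

int qid;
fp = fopen("/tmp/myapp_msqid", "r");
if (fp == NULL)
    errExit("fopen");
if (fscanf(fp, "%d", &qid) != 1) {
    fprintf(stderr, "Badly formed identifier file\n");
    exit(EXIT_FAILURE);
}
fclose(fp);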

The following example shows how to create a System V message queue:

id = msgget(key, IPC_CREAT | S_IRUSR | S_IWUSR);
if (id == -1)
    errExit("msgget");

As with all of the get calls, the key is the first argument, and the identifier is returned as the function result. We specify the permissions to be placed on the new object as part of the final (flags) argument to the get call, using the same bit-mask constants as are used for files (Table 15-4, in Permissions on Regular Files). In the above example, permission is granted to just the owner of the object to read and write messages on the queue.

The process umask (The Process File Mode Creation Mask: umask()) is not applied to the permissions placed on a newly created IPC object.

Each process that wants to access the same IPC object performs a get call specifying the same key in order to obtain the same identifier for that object. We consider how to choose a key for an application in Section 45.2.
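
For example, a process that knows the key used in the earlier msgget() call (but not the resulting identifier) could obtain the identifier of the existing queue as follows; as before, errExit() is the error-handling helper used in the earlier snippets:

id = msgget(key, S_IRUSR | S_IWUSR);    /* No IPC_CREAT: the object must already exist */
if (id == -1)
    errExit("msgget");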

If no IPC object corresponding to the given key currently exists, and IPC_CREAT (analogous to the open() O_CREAT flag) was specified as part of the flags argument, then the get call creates a new IPC object. If no corresponding IPC object currently exists, and IPC_CREAT was not specified (and the key was not specified as IPC_PRIVATE, described in IPC Keys), then the get call fails with the error ENOENT.

A process can guarantee that it is the one creating an IPC object by specifying the IPC_EXCL flag (analogous to the open() O_EXCL flag). If IPC_EXCL is specified and the IPC object corresponding to the given key already exists, then the get call fails with the error EEXIST.
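
For example, the following sketch (in the style of the earlier snippets) ensures that the calling process is the one that creates the message queue, handling the EEXIST case separately; it assumes that <errno.h>, <stdio.h>, and <stdlib.h> have been included:

id = msgget(key, IPC_CREAT | IPC_EXCL | S_IRUSR | S_IWUSR);
if (id == -1) {
    if (errno == EEXIST) {              /* Some other process created the queue first */
        fprintf(stderr, "Message queue already exists for this key\n");
        exit(EXIT_FAILURE);
    }
    errExit("msgget");                  /* Some other error */
}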

The ctl system call (msgctl(), semctl(), shmctl()) for each System V IPC mechanism performs a range of control operations for the object. Many of these operations are specific to the IPC mechanism, but a few are generic to all IPC mechanisms. An example of a generic control operation is IPC_RMID, which is used to delete an object. For example, we can use the following call to delete a shared memory object:

if (shmctl(id, IPC_RMID, NULL) == -1)
    errExit("shmctl");

For message queues and semaphores, deletion of the IPC object is immediate, and any information contained within the object is destroyed, regardless of whether any other process is still using the object. (This is one of a number of points where the operation of System V IPC objects is not analogous to files. In Creating and Removing (Hard) Links: link() and unlink(), we saw that if we remove the last link to a file, then the file is actually removed only after all open file descriptors referring to it have been closed.)
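
For instance, a message queue can be deleted with the corresponding msgctl() call; the deletion, and the destruction of any unread messages, takes effect immediately (a sketch following the style of the shmctl() example above):

if (msgctl(id, IPC_RMID, NULL) == -1)   /* Queue and any unread messages destroyed at once */
    errExit("msgctl");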

Deletion of shared memory objects occurs differently. Following the shmctl(id, IPC_RMID, NULL) call, the shared memory segment is removed only after all processes using the segment detach it (using shmdt()). (This is much closer to the situation with file deletion.)
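
The following sketch illustrates this ordering for a shared memory segment; it assumes the usual attach call, shmat(), and uses placeholder values for the key and segment size:

id = shmget(key, 4096, IPC_CREAT | S_IRUSR | S_IWUSR);      /* Placeholder size */
if (id == -1)
    errExit("shmget");

void *addr = shmat(id, NULL, 0);        /* Attach the segment to our address space */
if (addr == (void *) -1)
    errExit("shmat");

if (shmctl(id, IPC_RMID, NULL) == -1)   /* Mark for deletion; the segment persists */
    errExit("shmctl");                  /* as long as at least one process is attached */

/* ... work with the memory at 'addr' ... */

if (shmdt(addr) == -1)                  /* After the last detach, the segment is destroyed */
    errExit("shmdt");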

System V IPC objects have kernel persistence. Once created, an object continues to exist until it is explicitly deleted or the system is shut down. This property of System V IPC objects can be advantageous. It is possible for a process to create an object, modify its state, and then exit, leaving the object to be accessed by some process that is started at a later time. It can also be disadvantageous for the following reasons:

  • There are system-imposed limits on the number of IPC objects of each type. If we fail to remove unused objects, we may eventually encounter application errors as a result of reaching these limits. (A cleanup sketch addressing this point follows this list.)

  • When deleting a message queue or semaphore object, a multiprocess application may not be able to easily determine which will be the last process requiring access to the object, and thus when the object can be safely deleted. The problem is that these objects are connectionless—the kernel doesn’t keep a record of which processes have the object open. (This disadvantage doesn’t apply for shared memory segments, because of their different deletion semantics, described above.)
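
For the first of these reasons in particular, it is often worth arranging for an application to remove its own IPC objects when it terminates normally. The following self-contained sketch does this for a message queue using an exit handler; the key value is a placeholder, and standard error reporting is used instead of the errExit() helper so that the program stands alone:

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <sys/stat.h>
#include <stdio.h>
#include <stdlib.h>

static int msqid = -1;              /* Identifier of the queue created by this process */

static void                         /* Exit handler: remove the queue on normal termination */
removeQueue(void)
{
    if (msqid != -1)
        msgctl(msqid, IPC_RMID, NULL);
}

int
main(int argc, char *argv[])
{
    key_t key = 0x1234;             /* Placeholder key, for illustration only */

    msqid = msgget(key, IPC_CREAT | S_IRUSR | S_IWUSR);
    if (msqid == -1) {
        perror("msgget");
        exit(EXIT_FAILURE);
    }

    if (atexit(removeQueue) != 0) {
        fprintf(stderr, "atexit() failed\n");
        exit(EXIT_FAILURE);
    }

    /* ... normal application work using the queue ... */

    exit(EXIT_SUCCESS);
}

Note that an exit handler runs only on normal process termination; if the process is killed by a signal, the queue is still left in existence, so a long-running application may also need to clean up stale objects left behind by earlier runs.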