When you turn on the power to your Linux system, it triggers a series of events that eventually leads to a login prompt. Normally, you don't worry about what happens behind the scenes of those events; you just log in and start using your applications and services.
However, there may be times when your Linux system doesn't start quite correctly, or perhaps an application that you expected to be running isn't. In those cases, it helps to have a basic understanding of just how Linux loads the operating system and starts programs so you can troubleshoot the problem.
Starting a server and loading its operating system is called booting. The term has a history in the old saying, “pull yourself up by your bootstraps,” which means to physically pull yourself up from lying on the floor to an upright position using small straps on your boots. Pulling yourself up off the floor using nothing but bootstraps is physically impossible, and it seems like that's what a system is doing when it boots up—an impossible task. However, once the process is demystified, the order and logic behind it make sense.
As a Linux administrator, it is important to understand the details of booting a Linux server. This section walks through the steps of the boot process and how you can watch the boot process to see what steps failed.
The Linux boot process can be split into three main steps:
1. The server firmware starts, performs a quick check of the hardware, and then looks for a bootloader program to run.
2. The bootloader runs and determines which Linux kernel program to load.
3. The kernel program loads into memory and starts the background programs required for the system to run.
While on the surface these three steps may seem simple, a ballet of operations happens to keep the boot process working. Each step performs several actions as it prepares your system to run Linux.
You can monitor the Linux boot process by watching the system console screen as the system boots. You'll see lots of informative messages scroll by as the system detects hardware and loads software.
Usually the boot messages scroll by somewhat quickly, and it's hard to see what's happening. If you need to troubleshoot boot problems, you can review the boot-time messages using the dmesg command. Most Linux distributions copy the boot kernel messages into a special ring buffer in memory, called the kernel ring buffer. The buffer is circular and set to a predetermined size. As new messages are logged in the buffer, older messages are rotated out.

The dmesg command displays the most recent boot messages that are currently stored in the kernel ring buffer, as shown snipped here:
$ dmesg
[ 0.000000] Linux version 4.18.0-193.28.1.el8_2.x86_64
(mockbuild@kbuilder.bsys.centos.org) (gcc version 8.3.1 20191121
(Red Hat 8.3.1-5) (GCC)) #1 SMP Thu Oct 22 00:20:22 UTC 2020
[ 0.000000] Command line: BOOT_IMAGE=(hd0,msdos1)/vmlinuz-4.18.0-
193.28.1.el8_2.x86_64 root=/dev/mapper/cl-root ro crashkernel=auto
resume=/dev/mapper/cl-swap rd.lvm.lv=cl/root rd.lvm.lv=cl/
swap rhgb quiet
[…]
[ 47.263987] IPv6: ADDRCONF(NETDEV_UP): enp0s8: link is not ready
[ 47.454715] IPv6: ADDRCONF(NETDEV_UP): enp0s8: link is not ready
[ 48.161674] IPv6: ADDRCONF(NETDEV_CHANGE): enp0s8: link
becomes ready
$
Most Linux distributions also store the boot messages in a log file, usually in the /var/log folder. For Debian-based systems, such as Ubuntu, the file is usually /var/log/boot or /var/log/bootstrap.log, and for Red Hat-based systems, such as CentOS, the file is /var/log/boot.log.
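If the kernel ring buffer has already rotated out the early messages, these log files are the place to look. For example, on a CentOS system you might page through the file like this (a sketch; the exact filename depends on your distribution):
$ sudo less /var/log/boot.log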
While it helps to be able to see the different messages generated during boot time, it is also helpful to know just what generates those messages. This chapter discusses each of these three boot steps and goes through some examples showing just how they work.
All IBM‐compatible servers utilize some type of built‐in firmware to control how the installed operating system starts. On older servers, this firmware was called the Basic Input/Output System (BIOS). On newer servers, the Unified Extensible Firmware Interface (UEFI) maintains the system hardware status and launches an installed operating system.
The BIOS firmware had a simplistic menu interface. It allowed you to change some settings to control how the system found hardware and define what device the BIOS should use to start the operating system.
One limitation of the original BIOS firmware was that it could read only one sector's worth of data from a hard drive into memory to run. That's not enough space to load an entire operating system. To get around that limitation, most operating systems split the boot process into two parts.
First, the BIOS ran a bootloader program, a small program that initialized the necessary hardware to find and run the full operating system program. The full operating system program was often found at another location on the same hard drive, but sometimes on a separate internal or external storage device.
The bootloader program usually had a configuration file, so you could tell it just where to look to find the actual operating system file to run. Also, you could use the configuration to produce a small menu, allowing the user to boot between multiple operating systems.
To get things started, the BIOS had to know where to find the bootloader program on an installed storage device. Most BIOS setups allowed you to load the bootloader program from several locations.
When booting from a hard drive, you had to designate the hard drive, and partition on the hard drive, from which the BIOS should load the bootloader program. This was done by defining a master boot record (MBR).
The MBR was the first sector on the first hard drive in the system. There was only one MBR for the computer system. The BIOS looked for the MBR and read the program stored there into memory. Since the bootloader program had to fit in one sector, it had to be very small, so it couldn't do too much. The bootloader program mainly pointed to the location of the actual operating system kernel file, stored in a boot sector of a separate partition installed on the system. There were no size limitations on the kernel boot files.
The bootloader program wasn't required to point directly to an operating system kernel file. It could point to any type of program, including another bootloader program. You could create a primary bootloader program that pointed to a secondary bootloader program, which provided options to load multiple operating systems. This process is called chainloading.
As operating systems became more complicated, it eventually became clear that a new boot method needed to be developed. However, be aware that there are still some systems that use the old BIOS startup process.
Intel created the Extensible Firmware Interface (EFI) in 1998 to address some of the limitations of BIOS. By 2005, the idea caught on with other vendors, and the Unified EFI (UEFI) specification was adopted as a standard. These days, just about all IBM-compatible server systems utilize the UEFI firmware standard.
Not all Linux distributions support the UEFI firmware. If you're going to use a UEFI system, ensure that the Linux distribution you select supports it.
Instead of relying on a single boot sector on a hard drive to hold the bootloader program, UEFI specifies a special disk partition, called the EFI System Partition (ESP), to store bootloader programs. This allows for a bootloader program of any size, plus the ability to store multiple bootloader programs for multiple operating systems.
The ESP setup utilizes the old Microsoft File Allocation Table (FAT) filesystem to store the bootloader programs. On Linux systems, the ESP is typically mounted in the /boot/efi directory (mounting concepts are covered in Chapter 11, “Working with Storage Devices”), and the bootloader files are typically stored using the .efi filename extension, such as linux.efi.
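As a quick sanity check, you can list the mounted ESP to see the bootloader files. A sketch, assuming the ESP is mounted at /boot/efi on an Ubuntu system (the vendor directory names vary by distribution):
$ sudo ls /boot/efi/EFI
BOOT  ubuntu
$ sudo ls /boot/efi/EFI/ubuntu
grubx64.efi  shimx64.efi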
The UEFI firmware utilizes a built‐in mini bootloader (sometimes referred to as a boot manager) that allows you to configure just which bootloader program file to launch.
With UEFI, you need to register each individual bootloader file you want to appear at boot time in the boot manager interface menu. You can then select the bootloader to run each time you boot the system.
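From a running Linux system, the efibootmgr utility is one common way to view and register these boot manager entries. A minimal sketch; the disk, partition, label, and .efi path here are illustrative assumptions:
$ sudo efibootmgr
BootCurrent: 0000
BootOrder: 0000,0001
Boot0000* ubuntu
Boot0001* UEFI OS
$ sudo efibootmgr -c -d /dev/sda -p 1 -L "MyLinux" -l '\EFI\mylinux\linux.efi'
The -c option creates a new entry, -d and -p identify the disk and ESP partition, -L sets the menu label, and -l names the bootloader file within the ESP.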
Once the firmware finds and runs the bootloader, its job is done. The bootloader step in the boot process is somewhat complicated; the next section dives into it.
The bootloader program helps bridge the gap between the system firmware and the full Linux operating system kernel. In Linux, there are several choices of bootloaders to use. However, the most popular one is Grand Unified Bootloader 2, which is more commonly referred to as GRUB2.
There's a little bit of evolution behind GRUB2, which is helpful in understanding this bootloader program. The early Linux Loader (LILO) bootloader used a single configuration file, /etc/lilo.conf, which defined the systems to boot. Unfortunately, LILO doesn't work with UEFI systems, so it had limited use on modern systems and quickly faded into history.

The GRUB2 bootloader program has maintained its popularity through the years, and it still holds the position of the most used bootloader among Linux distributions. Since Linux kernel v3.3.0, UEFI can load any size of program, including the kernel itself, so a bootloader is no longer strictly necessary. However, using this method isn't common, because bootloader programs such as GRUB2 provide more versatility in booting a Linux system.
The GRUB2 bootloader was designed to simplify the boot process and its management. It provides both an interactive boot menu and a shell. This section walks through GRUB2 bootloader configuration basics, how to interact with its menu and shell at boot time, and some troubleshooting techniques.
You'll often find that there is no need to make changes in your GRUB2 configuration. For example, when your system's Linux kernel is upgraded, GRUB2 looks for any kernels (new and old) on the system and attempts to create boot menu entries for each one. That way, if a new kernel fails, you can pick an older kernel from the boot menu to get the system up and running. Figure 10.1 shows a GRUB2 boot menu on a CentOS distribution with an older Linux kernel available for selection.
FIGURE 10.1 A CentOS GRUB2 boot menu
GRUB2 uses the grub.cfg configuration file to design its boot menu and/or directly load the kernel. Depending on your distribution and configuration, this file resides in either the /boot/grub/ or the /boot/grub2/ directory.
You should never directly modify the grub.cfg file. This configuration file is generated by the grub-mkconfig command, the grub2-mkconfig command, and/or other GRUB2 utilities, so any changes you make directly in the file are lost the next time one of these programs is run.
The GRUB2 configuration file is built from a set of individual files in the /etc/grub.d/ directory. Also, a control file, /etc/default/grub, is used in building grub.cfg; it manages such items as the boot menu's appearance and what command-line arguments are passed to a Linux kernel at boot time.

The files in the /etc/grub.d directory are a series of high-level shell script files (basic shell scripting is covered in Chapter 19, “Writing Scripts”). These scripts are called helper scripts, because they help the GRUB2 utilities in generating the grub.cfg configuration file. The following example is a list of the helper scripts on an Ubuntu distribution. At the end of the example, the file command is used on one file to show that it is indeed a shell script.
$ ls -l /etc/grub.d
total 128
-rwxr-xr-x 1 root root 10627 Jul 31 00:34 00_header
-rwxr-xr-x 1 root root 6258 Jul 20 18:19 05_debian_theme
-rwxr-xr-x 1 root root 17622 Sep 8 10:24 10_linux
-rwxr-xr-x 1 root root 42359 Sep 8 10:24 10_linux_zfs
-rwxr-xr-x 1 root root 12894 Jul 31 00:34 20_linux_xen
-rwxr-xr-x 1 root root 12059 Jul 31 00:34 30_os-prober
-rwxr-xr-x 1 root root 1424 Jul 31 00:34 30_uefi-firmware
-rwxr-xr-x 1 root root 214 Jul 31 00:34 40_custom
-rwxr-xr-x 1 root root 216 Jul 31 00:34 41_custom
-rw-r--r-- 1 root root 483 Jul 31 00:34 README
$
$ file /etc/grub.d/10_linux
/etc/grub.d/10_linux: POSIX shell script, ASCII text executable
$
Notice that a number precedes each script file's name. These numbers guide the utilities that use the helper scripts so that the files are processed in a particular order. Be aware that on some Linux distributions, such as CentOS, you have to use super user privileges to see this information concerning the helper scripts.
Some administrators add customized boot menu entries by modifying the 40_custom script. You'll rarely, if ever, need to make changes to this helper script.
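If you do add an entry, 40_custom expects standard GRUB2 menuentry syntax after its header lines. Here is a minimal sketch; the partition, kernel version, and root filesystem are borrowed from the earlier dmesg example and are illustrative only:
menuentry 'My custom CentOS entry' {
    # Partition holding the kernel (first disk, first MS-DOS partition)
    set root=(hd0,msdos1)
    # Load the kernel, naming the root filesystem on its command line
    linux /vmlinuz-4.18.0-193.28.1.el8_2.x86_64 root=/dev/mapper/cl-root ro
    # Load the matching initial RAM disk
    initrd /initramfs-4.18.0-193.28.1.el8_2.x86_64.img
}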
You'll often want to modify items in the GRUB2 control file, /etc/default/grub, which contains settings called keys. These keys manage the appearance and behavior of the boot menu. Table 10.1 describes the more commonly set keys.
TABLE 10.1: Commonly Defined /etc/default/grub Keys

KEY | DESCRIPTION
---|---
GRUB_CMDLINE_LINUX | Sets the argument(s) to pass to the Linux kernel for all boot menu entries
GRUB_DEFAULT | Defines the default boot menu entry
GRUB_DISABLE_RECOVERY | When set to true, disables the generation of recovery mode boot menu entries
GRUB_DISABLE_SUBMENU | When set to true, disables the use of submenus in the boot menu, and lists all entries in the top menu
GRUB_DISTRIBUTOR | Sets the distribution's name and/or information in the boot menu
GRUB_GFXMODE | Defines the resolution on a graphical terminal, if used
GRUB_INIT_TUNE | Sets a sound to play when the menu starts
GRUB_TERMINAL | Defines the terminal input and output devices
GRUB_TIMEOUT | When no selection is made, sets the number of seconds until the default boot menu entry is selected (if set to 0, the boot menu is not displayed)
GRUB_TIMEOUT_STYLE | Defines the method of handling the boot menu and the GRUB_TIMEOUT through one of three possible settings: menu, countdown, or hidden
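To make these keys concrete, here is a sketch of what a typical /etc/default/grub control file might contain on a CentOS system (your distribution's keys and values will differ):
GRUB_TIMEOUT=5
GRUB_TIMEOUT_STYLE=menu
GRUB_DEFAULT=saved
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet"
GRUB_DISABLE_RECOVERY="true"
GRUB_TERMINAL_OUTPUT="console"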
Once you've completed making changes to either the control file or the 40_custom helper script file, you need to update the grub.cfg configuration file. This is done by running the grub-mkconfig, grub2-mkconfig, update-grub, or update-grub2 command with super user privileges, depending on your distribution.
# grub-mkconfig > /boot/grub/grub.cfg
Notice that you must redirect the output of the program to the grub.cfg configuration file. By default, these update programs just output the new configuration file commands to standard output.
When the GRUB2 boot menu displays during the boot process, you can wait for the timeout to expire (if set), and the default boot menu selection will process. Alternatively, you can use the arrow keys to select one of the other menu options and press Enter to boot it.
You can also edit boot options from the GRUB menu on the fly. First, use the arrow keys to move to the boot option you want to modify, and press the E key. Then use the arrow keys to move the cursor to the line you need to modify, and edit it. Once you have completed your edits, press Ctrl+X (or the F10 key) to boot the system using the new values.
For on‐the‐fly changes during the boot process, besides modifying the boot menu, you can also access the GRUB2 interactive shell to submit commands. This shell is accessed by pressing the C key from the boot menu.
There are several commands available in the interactive shell; you can see them all after you are in the shell by typing help and pressing Enter. You'll have to press the spacebar multiple times to return to the GRUB2 interactive shell prompt. To view the syntax needed for these commands, as well as a brief description, type help followed by the command name, such as help cat.
Once you have completed using the interactive shell, press Ctrl+X to use the settings made in the interactive shell to finish booting the system. If you'd like to discard any modifications made in the shell, press the Esc key to return to the boot menu.
Once the kernel program is loaded into memory, the bootloader's job is done. Now the Linux kernel takes over and starts the initialization process. We'll cover that topic next.
After your Linux system has traversed the boot process, it enters the final system initialization process, where it starts various services. A service, or daemon, is a program that performs a particular duty. The systemd initialization method is the most popular system service initialization and management mechanism.
Beyond initialization, systemd (also considered a daemon) is responsible for managing these services. We'll take a look at the various systemd components used for starting services at boot time, as well as the ones for managing them.
The easiest way to start exploring systemd is through the systemd units. A unit defines a service, a group of services, or an action. Each unit consists of a name, a type, and a configuration file. There are currently 12 different systemd unit types: automount, device, mount, path, scope, service, slice, snapshot, socket, swap, target, and timer.
The systemctl command is the main utility for managing systemd and system services. Its basic syntax is as follows:
systemctl [OPTIONS…] COMMAND [NAME…]
You can use the systemctl utility to display a list of the various units currently loaded in your Linux system. A snipped example is shown here:
$ systemctl list-units
UNIT LOAD ACTIVE SUB DESCRIPTION
[…]
atd.service loaded active running Job spoo[…]
auditd.service loaded active running Security[…]
[…]
multi-user.target loaded active active Multi-Us[…]
121 loaded units listed. Pass --all to see loaded but inactive[…]
To show all installed unit files use 'systemctl list-unit-file[…]
$
In the example, you can see various units as well as additional information. Units are identified by their name and type using the format name.type. System services (daemons) have unit files with the .service extension. Thus, the job spooling daemon, atd, has a unit filename of atd.service.
Be aware that many displays from the systemctl utility use the less pager by default (the less pager was first covered in Chapter 6, “Working with the Shell”). Thus, to exit the displayed output, you must press the Q key. If you want to turn off the systemctl utility's use of the less pager, tack on the --no-pager option to the command.
Groups of services are started via target unit files. At system startup, the default.target unit is responsible for ensuring that all required and desired services are launched at system initialization. The default.target unit is actually a pointer to another target unit file, as shown here using the systemctl get-default command:
$ systemctl get-default
multi-user.target
$
Though multi-user.target is typically the default target unit file on Linux server systems, others are often employed as well. Table 10.2 shows the more commonly used system boot target unit files.
TABLE 10.2: Commonly Used System Boot Target Unit Files

NAME | DESCRIPTION
---|---
graphical.target | Provides multiple users access to the system via local terminals and/or through the network. Graphical user interface (GUI) access is offered, if available.
multi-user.target | Provides multiple users access to the system via local terminals and/or through the network. No GUI access is offered.
runleveln.target | Provides backward compatibility to SysV init (initialization) systems, where n is set to 1-5 for the desired SysV runlevel equivalence.
If you need to change the group of services started at boot time, the command to use is systemctl set-default target-unit. You'll need to use super user privileges or be logged into the root account for this to work properly.
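For example, to make the GUI-capable target the default for subsequent boots (a sketch, assuming your system provides graphical.target):
$ sudo systemctl set-default graphical.target
Created symlink /etc/systemd/system/default.target → /usr/lib/systemd/system/graphical.target.
$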
Service unit files contain service information, such as when a service must be started, what targets want this service started, documentation resources, and so on. These configuration files are located in different directories.
A unit configuration file's directory location is critical, because if a file is found in two different directory locations, one will have precedence over the other. The following list shows the directory locations from highest to lowest priority:
/etc/systemd/system/
/run/systemd/system/
/usr/lib/systemd/system/
To see the various service unit files available, you can again employ the systemctl utility. However, a slightly different argument is needed than when viewing units, as shown here:
$ systemctl list-unit-files
UNIT FILE STATE
proc-sys-fs-binfmt_misc.automount static
[…]
atd.service enabled
auditd.service enabled
[…]
$
Besides the unit file's base name in this command's output, you also see a unit file's state. This is called an enablement state and refers to when the service is started. There are at least 12 different enablement states, but you'll commonly see these:
enabled: Service starts at system boot.
disabled: Service does not start at system boot.
static: Service starts if another unit depends on it. Can also be manually started.

To determine what directory or directories store a particular systemd unit file (or files), use the systemctl utility's cat command. An example on a CentOS distribution is shown here:
$ systemctl cat atd.service
# /usr/lib/systemd/system/atd.service
[Unit]
Description=Job spooling tools
After=syslog.target systemd-user-sessions.service
[Service]
EnvironmentFile=/etc/sysconfig/atd
ExecStart=/usr/sbin/atd -f $OPTS
IgnoreSIGPIPE=no
[Install]
WantedBy=multi-user.target
$
Notice that the first displayed line shows the atd.service unit file's directory location (/usr/lib/systemd/system/) and base name (atd.service). The next several lines are the unit configuration file's contents.
For service unit files, there are three primary configuration sections. They are as follows:
[Unit]
[Service]
[Install]
Within the service unit configuration file's [Unit] section, there are basic directives. A directive is a setting that modifies a configuration, such as the After setting shown in the earlier example. Table 10.3 shows the more commonly used [Unit] section directives.
TABLE 10.3: Commonly Used Service Unit File [Unit] Section Directives

DIRECTIVE | DESCRIPTION
---|---
After | Sets this unit to start after the designated units
Before | Sets this unit to start before the designated units
Description | Describes the unit
Documentation | Sets a list of uniform resource identifiers (URIs) that point to documentation sources. The URIs can be web locations, particular system files, info pages, and man pages.
Conflicts | Sets this unit to not start with the designated units. If any of the designated units start, this unit is not started. (Opposite of Requires)
Requires | Sets this unit to start together with the designated units. If any of the designated units do not start, this unit is not started. (Opposite of Conflicts)
Wants | Sets this unit to start together with the designated units. If any of the designated units do not start, this unit is still started.
The [Service] directives within a unit file set configuration items that are specific to that service. You will only find a [Service] section in a service unit file. This middle section is different for each unit type; for example, in automount unit files, you would find an [Automount] section as the middle unit file section.
Table 10.4 describes the more commonly used [Service] section directives.

TABLE 10.4: Commonly Used Service Unit File [Service] Section Directives

DIRECTIVE | DESCRIPTION
---|---
ExecReload | Indicates scripts or commands (and options) to run when the unit is reloaded
ExecStart | Indicates scripts or commands (and options) to run when the unit is started
ExecStop | Indicates scripts or commands (and options) to run when the unit is stopped
Environment | Sets environment variable substitutes, separated by a space
EnvironmentFile | Indicates a file that contains environment variable substitutes
RemainAfterExit | Set to either no (default) or yes. If set to yes, the service is left active even when the process started by ExecStart terminates. If set to no, then ExecStop is called when the process started by ExecStart terminates.
Restart | Service is restarted when the process started by ExecStart terminates. It is ignored if a systemctl restart or systemctl stop command is issued. Set to no (default), on-success, on-failure, on-abnormal, on-watchdog, on-abort, or always.
Type | Sets the startup type
The [Service] Type directive needs a little more explanation than what is given in Table 10.4. This directive can be set to at least six different specifications, of which the most typical are listed here:
forking: ExecStart starts a parent process. The parent process creates the service's main process as a child process and exits.
simple: (Default) ExecStart starts the service's main process.
oneshot: ExecStart starts the service's main process, which is typically a configuration setting or a quick command, and the process exits.
idle: ExecStart starts the service's main process, but it waits until all other start jobs are finished.
starts the service's main process, but it waits until all other start jobs are finished.The [Install]
directives within a unit file determine what happens to a particular service if it is enabled or disabled. An enabled service is one that starts at system boot. A disabled service is one that does not start at system boot. Table 10.5 describes the more commonly used [Install]
section directives.
TABLE 10.5: Commonly Used Service Unit File [Install] Section Directives

DIRECTIVE | DESCRIPTION
---|---
Alias | Sets additional names that can be used to denote the service in systemctl commands
Also | Sets additional units that must be enabled or disabled for this service. Often the additional units are socket type units.
RequiredBy | Designates other units that require this service
WantedBy | Designates which target unit manages this service
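Putting the three sections together, here is a sketch of a complete, hypothetical service unit file. The daemon name (mydaemon), its paths, and its options are illustrative assumptions, not a real package:
[Unit]
Description=My example daemon
Documentation=man:mydaemon(8)
After=network.target

[Service]
# mydaemon is assumed to daemonize itself, hence Type=forking
Type=forking
# Pull $OPTIONS from a hypothetical environment file
EnvironmentFile=/etc/sysconfig/mydaemon
ExecStart=/usr/sbin/mydaemon $OPTIONS
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure

[Install]
WantedBy=multi-user.target
Saved as /etc/systemd/system/mydaemon.service, this unit would be restarted only on failure and, once enabled, would be wanted by the multi-user.target group at boot.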
There is a great deal of useful information in the man pages for systemd and unit configuration files. Just type in man -k systemd to find several items you can explore. For example, explore the service type unit file directives and more via the man systemd.service command. You can find information on all the various directives by typing in man systemd.directives at the command line.
Understanding service unit files is helpful, especially if you have to troubleshoot a problem with a service not starting when the system boots. Another unit file type that needs special attention is the target unit file. We'll explore that topic next.
As mentioned previously, the primary purpose of target unit files is to group together various services to start at system boot time. The default target unit file, default.target, is linked to the target unit file used at system boot. In this example, the default target unit file is located and displayed using the systemctl command on a CentOS distribution.
$ systemctl get-default
multi-user.target
$
$ systemctl cat multi-user.target
# /usr/lib/systemd/system/multi-user.target
[…]
[Unit]
Description=Multi-User System
Documentation=man:systemd.special(7)
Requires=basic.target
Conflicts=rescue.service rescue.target
After=basic.target rescue.service rescue.target
AllowIsolate=yes
$
Notice that the multi-user.target unit file has many of the same [Unit] directives as a service unit file has in its [Unit] section. These directives were described earlier in Table 10.3. Of course, these directives apply to a target type unit file instead of a service type unit file. For example, the After directive in the multi-user.target unit file sets this target unit to start after the designated units, such as basic.target. Target units, similar to service units, have various target dependency chains as well as conflicts.
In the previous example, there is one directive we have not covered yet: the AllowIsolate directive. If set to yes, it permits the systemctl isolate command to use this target file. The isolate command is covered later in this chapter.
Occasionally, you may need to change a particular unit configuration file for your Linux system's requirements or add components. However, be careful when doing this task. You should not modify any unit files in the /lib/systemd/system/ or /usr/lib/systemd/system/ directory.
To modify a unit configuration file, copy the file to the /etc/systemd/system/ directory and modify it there. This modified file will take precedence over the original unit file left in the original directory. Also, it will protect the modified unit file from software updates.
If you just have a few additional components, you can extend the configuration instead. Using super user privileges, create a new subdirectory in the /etc/systemd/system/ directory named service-name.service.d, where service-name is the service's name. For example, for the OpenSSH daemon, you would create the /etc/systemd/system/sshd.service.d directory. This newly created directory is called a drop-in file directory, because you can drop in additional configuration files. Create any configuration files with names like description.conf, where description describes the configuration file's purpose, such as local or script. Add your modified directives to this configuration file.
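Here is a concrete sketch of the drop-in approach, assuming the OpenSSH service and a hypothetical local.conf file that adds one [Service] directive:
# mkdir /etc/systemd/system/sshd.service.d
# cat << EOF > /etc/systemd/system/sshd.service.d/local.conf
[Service]
# Hypothetical environment setting passed to the daemon
Environment=SSHD_OPTS=-4
EOF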
After making these modifications, there are a few more needed steps. Find and compare any unit file that overrides another unit file by issuing the systemd-delta command. It will display any unit files that are duplicated, extended, redirected, and so on. Review this list; it will help you avoid any unintended consequences from modifying or extending a service unit file.

To have your changes take effect, issue the systemctl daemon-reload command for the service whose unit file you modified or extended. After you accomplish that task, you may need to issue the systemctl restart command to start or restart the service. This command is explained in the next section.
The master systemd configuration file, system.conf, is located in the /etc/systemd/ directory. In this file, you will find all the default configuration settings commented out via a hash mark (#). Viewing this file is a quick way to see the current systemd configuration. Here is a snipped listing of this file:
$ cat /etc/systemd/system.conf
[…]
# See systemd-system.conf(5) for details.
[Manager]
#LogLevel=info
#LogTarget=journal-or-kmsg
#LogColor=yes
#LogLocation=no
#DumpCore=yes
#ShowStatus=yes
[…]
#IPAddressAllow=
#IPAddressDeny=
$
If you need to modify the configuration, just edit the file. However, it would be wise to peruse the file's man page first by typing man systemd-system.conf at the command line.
There are several basic systemctl commands available for you to manage system services. We've already looked at a few of them, but several more deserve our attention. One that is often used after a system is booted is the status command. It provides a wealth of information. A couple of snipped examples are shown here:
$ systemctl status console-getty
● console-getty.service - Console Getty
Loaded: loaded […]disabled[…]
Active: inactive (dead)
Docs: man:agetty(8)
man:systemd-getty-generator(8)
$
$ systemctl status atd
● atd.service - Job spooling tools
Loaded: loaded […]enabled[…]
Active: active (running) since Wed […]
Main PID: 1123 (atd)
Tasks: 1 (limit: 11479)
Memory: 748.0K
CGroup: /system.slice/atd.service
└─1123 /usr/sbin/atd -f
$
The first systemctl command shows the status of the console-getty service. Notice the third line in the utility's output. It states that the service is disabled. The fourth line states that the service is inactive. In essence, this means that the console-getty service is not running (inactive) and is not configured to start at system boot time (disabled). The status of the atd service is also displayed, showing that atd is running (active) and configured to start at system boot time (enabled).
There are several simple commands you can use with the systemctl utility to manage systemd services and view information regarding them. Table 10.6 describes the more common commands. These systemctl commands generally use the following syntax:
systemctl COMMAND UNIT-NAME…
TABLE 10.6: Commonly Used systemctl Service Management Commands

COMMAND | DESCRIPTION
---|---
daemon-reload | Load the unit configuration file of the running designated unit(s) to make unit file configuration changes without stopping the service. Note that this is different from the reload command.
disable | Mark the designated unit(s) to not be started automatically at system boot time.
enable | Mark the designated unit(s) to be started automatically at system boot time.
mask | Prevent the designated unit(s) from starting. The service cannot be started using the start command or at system boot. Use the --now option to immediately stop any running instances as well. Use the --running option to mask the service only until the next reboot or unmask is used.
restart | Stop and immediately restart the designated unit(s). If a designated unit is not already started, this will simply start it.
start | Start the designated unit(s).
status | Display the designated unit's current status.
stop | Stop the designated unit(s).
reload | Load the service configuration file of the running designated unit(s) to make service configuration changes without stopping the service. Note that this is different from the daemon-reload command.
unmask | Undo the effects of the mask command on the designated unit(s).
Notice the difference in Table 10.6 between the daemon-reload and reload commands. This is an important difference. Use the daemon-reload command if you need to load systemd unit file configuration changes for a running service. Use the reload command to load a service's modified configuration file. For example, if your system used the Network Time Protocol (NTP) and you modified its ntpd service configuration file, /etc/ntp.conf, for the new configuration to take immediate effect, you would issue the command systemctl reload ntpd at the command line.
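A quick sketch of the contrast, assuming the ntpd service is installed and you have super user privileges; the first command rereads /etc/ntp.conf, while the second rereads the systemd unit files themselves:
# systemctl reload ntpd
#
# systemctl daemon-reload
#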
Besides the commands in Table 10.6, there are some other handy systemctl commands you can use for managing system services. A few are shown in this example using super user privileges:
# systemctl stop atd
#
# systemctl is-active atd
inactive
#
# systemctl start atd
#
# systemctl is-active atd
active
#
The atd daemon is stopped using systemctl and its stop command. Instead of the status command, the is-active command is used to quickly display that the service is stopped (inactive). The atd service is then started back up, and again the is-active command is employed, showing that the service is now running (active). Table 10.7 shows these useful service status checking commands.
TABLE 10.7: Convenient systemctl Service Status Commands

COMMAND | DESCRIPTION
---|---
is-active | Displays active for running services and failed for any service that has reached a failed state
is-enabled | Displays enabled for any service that is configured to start at system boot and disabled for any service that is not configured to start at system boot
is-failed | Displays failed for any service that has reached a failed state and active for running services
Services can fail for many reasons: hardware issues, a missing dependency set in the unit configuration file, an incorrect permission setting, and so on. You can employ the systemctl utility's is-failed command to see if a particular service has failed.
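For example, checking the atd service from the earlier examples while it is running might look like this:
$ systemctl is-failed atd
active
$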
One special command to explore is the systemctl is-system-running command. Here is an example of this command in action:
$ systemctl is-system-running
running
$
You may think the status returned here is obvious, but it means all is well with your Linux system currently. Table 10.8 shows other useful statuses.
TABLE 10.8: Operational Statuses Provided by systemctl is-system-running

STATUS | DESCRIPTION
---|---
running | System is fully in working order.
degraded | System has one or more failed units.
maintenance | System is in emergency or recovery mode.
initializing | System is starting to boot.
starting | System is still booting.
stopping | System is starting to shut down.
If you receive degraded status, you should review your units to see which ones have failed and take appropriate action. Use the systemctl --failed command to find the failed unit(s), as shown snipped here:
$ systemctl is-system-running
degraded
$
$ systemctl --failed
UNIT LOAD ACTIVE SUB […]
● NetworkManager-wait-online.service loaded failed […]
[…]
$
In this case, it looks like there are some potential network problems that need exploration. Chapter 12, “Configuring Network Settings,” dives deeper into networking with Linux.
The systemctl utility has several commands that go beyond service management. You can jump between various system states and even analyze your system's boot-time performance. We'll look at these various commands next.
Occasionally, you may need to start or stop several services. If those services are grouped in a particular target unit, you can use systemctl to “jump” to that target, starting groups of services and stopping others on the fly.
The isolate command, used with super user privileges, is handy for jumping between system targets. When this command is used along with a target name for an argument, all services and processes not enabled in the listed target are stopped. Any services and processes enabled and not running in the listed target are started. An example of jumping targets is shown snipped here on an Ubuntu distribution:
$ systemctl get-default
graphical.target
$
$ sudo systemctl isolate multi-user.target
[sudo] password for sysadmin:
$
$ systemctl status graphical.target
● graphical.target - Graphical Interface
Loaded: loaded (/lib/systemd/system/[…]
Active: inactive (dead) since Wed 20[…]
Docs: man:systemd.special(7)
[…]
$
In the example, the systemctl isolate command caused the system to jump from the default system target to the multi-user target. Unfortunately, there is no simple command to show your system's current target in this case. However, the systemctl status command is useful. If you employ the command and give it the previous target's name (graphical.target in this case), you should see that it is no longer active (inactive) and thus not the current system target.
Be aware that the systemctl isolate command can be used only with certain targets. The target's unit file must have the AllowIsolate=yes directive set.
Two extra special targets to which you can jump are rescue and emergency. These targets, sometimes called modes, are described here:

Rescue mode: The system jumps into single-user mode, where the systemctl is-system-running command will return the maintenance status. Running disk utilities to fix corrupted disks is a useful task in this particular target.

Emergency mode: The system jumps into an even more minimal environment, where the systemctl is-system-running command will also return the maintenance status. If your system goes into emergency mode by itself, there are serious problems. This target is used for situations where even rescue mode cannot be reached.

Be aware that if you jump into either rescue or emergency mode, you'll only be able to log into the root account. Also, your screen may go blank for a minute, so don't panic.
Other targets you can jump to include reboot, poweroff, and halt. For example, type in systemctl isolate reboot to reboot your system.
With GRUB2, you can reach a different target than the default target before the system boots via the bootloader menu. Just move your cursor to the menu option that typically boots your system and press the E key to edit it. Scroll down and find the line that starts with the linux16 or linux command. Press the End key or arrow keys to reach the line's end. Press the spacebar and type in systemd.unit=target-name.target, where target-name is the name of the target you want your system to activate, such as emergency.target. This is useful for crisis situations.
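For instance, the edited kernel line might end up looking like this sketch, reusing the kernel command line from the earlier dmesg example (yours will differ):
linux (hd0,msdos1)/vmlinuz-4.18.0-193.28.1.el8_2.x86_64 root=/dev/mapper/cl-root ro rhgb quiet systemd.unit=emergency.target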
A handy systemd component is the systemd-analyze utility. With this utility, you can investigate your system's boot performance and check for potential system initialization problems. Table 10.9 contains the more common commands you can use with the systemd-analyze utility.
TABLE 10.9: Common systemd-analyze Commands

COMMAND | DESCRIPTION
---|---
blame | Displays the amount of time each running unit took to initialize. Units and their times are listed starting from the slowest to the fastest.
time | Displays the amount of time system initialization spent for the kernel and the initial RAM filesystem, as well as the time it took for normal system user space to initialize. (Default)
critical-chain | Displays time-critical units in a tree format. Can pass it a unit file argument to focus the information on that particular unit.
dump | Displays information concerning all the units. The display format is subject to change without notice, so it should be used only for human viewing.
verify | Scans unit files and displays warning messages if any errors are found. Will accept a unit filename as an argument, but follows directory location precedence.
Be aware that some of the longer systemd-analyze displays are piped into the less pager utility. You can turn that feature off by using the --no-pager option. In the snipped example here, a few of these systemd-analyze commands are shown in action:
$ systemd-analyze time
Startup finished in 1.465s (kernel) + 13.911s (initrd)
+ 35.896s (userspace) = 51.274s
multi-user.target reached after 30.317s in userspace
$
$ systemd-analyze --no-pager blame
7.811s NetworkManager-wait-online.service
[…]
5.022s firewalld.service
4.753s polkit.service
[…]
586ms auditd.service
[…]
338ms rsyslog.service
[…]
36ms sys-kernel-config.mount
$
The first command in the preceding example provides time information concerning your system's initialization. Note that you could leave off the time keyword, and the systemd-analyze utility would still display the system initialization time, because that is the utility's default action.
The last command in the example employs the blame argument. This display begins with those units that took the longest to initialize. At the bottom of the list are the units that initialized the fastest. It is a handy guide for troubleshooting unit initialization problems.
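You can dig deeper with the critical-chain and verify commands. A sketch of what that might look like, reusing unit names and times from the earlier output (your units and times will differ):
$ systemd-analyze critical-chain
multi-user.target @30.317s
└─atd.service @29.731s +586ms
[…]
$ systemd-analyze verify atd.service
$
Here verify produces no output, which indicates no errors were found in the unit file.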
The systemd initialization approach is flexible and reliable for managing Linux systems and their services. Having a basic understanding of the methods and commands for managing systemd initialized systems will serve you well in your Linux career.
Suppose you want to set the default target to multi-user.target for the next boot. How can you accomplish this?

The many commands of the systemctl utility help in managing system services. They allow you to control what services are started at boot time, start and stop services, and analyze service issues and troubleshoot problems.

Suppose you discover that the ntpd service was not enabled to start at boot time, and you need to immediately get this service up and running. Assuming you have super user privileges, how can you use systemd to start the service and check that it is indeed started?

For boot-time performance questions, turn to systemd-analyze, which has several commands you can use in troubleshooting situations.