CHAPTER 12

Secure Systems Design and Deployment

In this chapter, you will

•  Learn to implement secure systems design for a given scenario

•  Understand the importance of secure staging development concepts

System design has a great effect on the security of a system once it is in operation. Errors in system design are very difficult to correct later, and almost impossible once a system is in production. Ensuring that security concerns are considered and addressed during the design phase of a project will go a long way in establishing a system that can be secured using security controls.

Certification Objective   This chapter covers CompTIA Security+ exam objectives 3.3, Given a scenario, implement secure systems design, and 3.4, Explain the importance of secure staging deployment concepts.

These exam objectives are good candidates for performance-based questions, which means you should expect questions in which you must apply your knowledge of the topic to a scenario. The best answer to a question will depend upon specific details in the scenario preceding the question, not just the question. The question may also involve tasks other than just picking the best answer from a list. Instead, you may be instructed to order things on a diagram, put options in rank order, match two columns of items, or perform a similar task.

Hardware/Firmware Security

Hardware, in the form of servers, workstations, and even mobile devices, can represent a weakness or vulnerability in the security system associated with an enterprise. While you can easily replace hardware if it is lost or stolen, you can’t retrieve the information the lost or stolen hardware contains. You can safeguard against complete loss of data through backups, but this does little in the way of protecting it from disclosure to an unauthorized party who comes into possession of your lost or stolen hardware. You can implement software measures such as encryption to hinder the unauthorized party’s access to the data, but these measures also have drawbacks in the form of scalability and key distribution.

There are some hardware protection mechanisms that your organization should consider employing to safeguard servers, workstations, and mobile devices from theft, such as placing cable locks on mobile devices and using locking cabinets and safes to secure portable media, USB drives, and CDs/DVDs.

A lot of hardware has firmware that provides the necessary software instructions to facilitate the hardware functionality. Firmware is a source of program code for the system, and if an adversary changes the firmware, this can result in an open attack vector into the trusted core of the enterprise. This is because most systems will trust the firmware of a trusted system. Monitoring and managing firmware security is a time-intensive task because few tools exist for that purpose, and even fewer for automation of the task. This makes physical security of the system and its peripheral hardware important.


EXAM TIP    Physical security is an essential element of a security plan. Unauthorized access to hardware and networking components can make many security controls ineffective.

FDE/SED

Full disk encryption (FDE) and self-encrypting drives (SEDs) are methods of implementing cryptographic protection on hard disk drives and other similar storage media with the express purpose of protecting the data even if the disk drive is removed from the machine. Portable machines, such as laptops, have a physical security weakness in that they are relatively easy to steal, after which they can be attacked offline at an attacker’s leisure. The use of modern cryptography, coupled with hardware protection of the keys, makes this vector of attack much more difficult. In essence, both of these methods offer a transparent, seamless manner of encrypting the entire hard disk drive using keys that are only available to someone who can properly log into the machine.
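The link between a user’s login credentials and the disk key is typically made through a key-derivation function. As a minimal, hedged sketch (not any vendor’s actual implementation), the following Python snippet shows how a 256-bit encryption key can be derived from a passphrase with PBKDF2, so that only someone who knows the passphrase can reproduce the key:

```python
import hashlib
import secrets

def derive_disk_key(passphrase: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Derive a 256-bit disk-encryption key from a login passphrase."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)

# The salt is stored in the drive's metadata; it need not be secret, it only
# prevents precomputed (rainbow table) attacks against the passphrase.
salt = secrets.token_bytes(16)
key = derive_disk_key("correct horse battery staple", salt)
print(len(key))  # 32 bytes = 256 bits
```

Real FDE/SED products add hardware key wrapping (for example, via a TPM), but the principle is the same: the same passphrase and salt always reproduce the same key, and without the passphrase the key cannot be recovered.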

TPM

The Trusted Platform Module (TPM) is a hardware solution on the motherboard, one that assists with key generation and storage as well as random number generation. When the encryption keys are stored in the TPM, they are not accessible via normal software channels and are physically separated from the hard drive or other encrypted data locations. This makes the TPM a more secure solution than storing the keys on the machine’s normal storage.

HSM

A hardware security module (HSM) is a device used to manage or store encryption keys. It can also assist in cryptographic operations such as encryption, hashing, or the application of digital signatures. HSMs typically are peripheral devices, connected via USB or a network connection. HSMs have tamper protection mechanisms to prevent physical access to the secrets they protect. Because of their dedicated design, they can offer significant performance advantages over general-purpose computers when it comes to cryptographic operations. When an enterprise has significant levels of cryptographic operations, HSMs can provide throughput efficiencies.


EXAM TIP    Storing private keys anywhere on a networked system is a recipe for loss. HSMs are designed to allow the use of the key without exposing it to the wide range of host-based threats.

UEFI/BIOS

Basic Input/Output System (BIOS) is the firmware that a computer system uses as a connection between the actual hardware and the operating system. BIOS is typically stored on nonvolatile flash memory, which allows for updates yet persists when the machine is powered off. The purpose behind BIOS is to initialize and test the interfaces to any actual hardware in a system. Once the system is running, the BIOS functions to translate low-level access to the CPU, memory, and hardware devices, making a common interface for the OS to connect to. This allows a single OS installation to run on hardware from multiple manufacturers and in differing configurations.

Unified Extensible Firmware Interface (UEFI) is the current replacement for BIOS. UEFI offers significant modernization over the decades-old BIOS, including the capability to deal with modern peripherals such as high-capacity storage and high-bandwidth communications. UEFI also has more security designed into it, including provisions for secure booting. From a system design aspect, UEFI offers advantages in newer hardware support, and from a security point of view, secure boot has some specific advantages. For these reasons, virtually all new systems are UEFI based.

Secure Boot and Attestation

One of the challenges in securing an OS is that it has myriad drivers and other add-ons that hook into it and provide specific added functionality. If you do not properly vet these additional programs before installation, this pathway can provide a means by which malicious software can attack a machine. And since these attacks can occur at boot time, at a level below security applications such as antivirus software, they can be very difficult to detect and defeat. UEFI offers a solution to this problem, called Secure Boot. Secure Boot is a mode that, when enabled, only allows signed drivers and OS loaders to be invoked. Secure Boot requires specific setup steps, but once enabled, it blocks malware that attempts to alter the boot process. Secure Boot enables the attestation that the drivers and OS loaders being used have not changed since they were approved for use. Secure Boot is supported by Microsoft Windows and all major versions of Linux.
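The heart of this attestation is measurement: hashing each boot component and refusing to run anything whose hash no longer matches the approved value. The following Python sketch is purely illustrative (the component names and images are hypothetical, and real Secure Boot uses digital signatures rather than a simple hash table), but it captures the idea:

```python
import hashlib

def measure(component: bytes) -> str:
    """Return the SHA-256 'measurement' of a boot component."""
    return hashlib.sha256(component).hexdigest()

# Hypothetical known-good measurements, recorded when the loader was approved
approved = {"loader": measure(b"trusted loader image v1")}

def verify_boot(name: str, image: bytes) -> bool:
    """Refuse to 'boot' any component whose measurement has changed."""
    return approved.get(name) == measure(image)

print(verify_boot("loader", b"trusted loader image v1"))  # True: unchanged
print(verify_boot("loader", b"tampered loader image"))    # False: blocked
```

Because any change to the image, however small, produces a completely different hash, tampering with an approved loader is detected before the OS ever runs.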


EXAM TIP    Understand how TPM, UEFI, Secure Boot, hardware root of trust (covered shortly), and integrity measurement (covered near the end of the chapter) work together to solve a specific security issue.

Supply Chain

Hardware and firmware security is ultimately dependent upon the manufacturer for the root of trust (discussed in the next section). In today’s world of global manufacturing with global outsourcing, attempting to identify all the suppliers in a hardware manufacturer’s supply chain, which commonly changes from device to device, and even between lots, is practically futile in most cases. Who manufactured all the components of the device you are ordering? If you’re buying a new PC, where did the hard drive come from? Is it possible the new PC comes preloaded with malware? (Yes, it has happened.)

Unraveling the supply chain for assembled equipment can be very tricky, because even when purchasing equipment from a highly trusted vendor, you don’t know where they got the components, the software, the libraries, and so forth, and even if you could figure that out, you can’t be sure the suppliers don’t have their own supply chains. As major equipment manufacturers have become global in nature, and use parts from all over the world, specifying national content is both difficult and expensive. Global supply chains can be very difficult to negotiate if you have very strict rules concerning country of origin.

Hardware Root of Trust

A hardware root of trust is the concept that if one has a trusted source of specific security functions, this layer can be used to extend trust to higher layers of a system. Because roots of trust are inherently trusted, they must be secure by design. This is usually accomplished by keeping them small and limiting their functionality to a few specific tasks. Many roots of trust are implemented in hardware that is isolated from the OS and the rest of the system so that malware cannot tamper with the functions they provide. Examples of roots of trust include TPM chips in computers and Apple’s Secure Enclave coprocessor in its iPhones and iPads. Apple also uses a signed Boot ROM mechanism for all software loading.

EMI/EMP

Electromagnetic interference (EMI) is an electrical disturbance that affects an electrical circuit. EMI is due to either electromagnetic induction or radiation emitted from an external source, either of which can induce currents into the small circuits that make up computer systems and cause logic upsets. An electromagnetic pulse (EMP) is a short burst of electromagnetic radiation that induces a current surge in electronic devices, producing damaging current and voltage spikes in today’s sensitive electronics. The main sources of EMP are industrial equipment on the same circuit, solar flares, and nuclear bursts high in the atmosphere.

It is important to shield computer systems from circuits with large industrial loads, such as motors. These power sources can produce significant noise, including EMI and EMPs, that will potentially damage computer equipment. Another source of EMI is fluorescent lights. Be sure any cabling that goes near fluorescent light fixtures is well shielded and grounded. Shielding and grounding are the tools used to protect from stray electric fields, although the challenge is that the effort needs to be exacting, for even little gaps can be catastrophic.

Operating Systems

Operating systems are complex programs designed to provide a platform for a wide variety of services to run. Some of these services are extensions of the OS itself, while others are stand-alone applications that use the OS as a mechanism to connect to other programs and hardware resources. It is up to the OS to manage the security aspects of the hardware being utilized. Things such as access control mechanisms are great in theory, but it is the practical implementation of these security elements in the OS that provides the actual security profile of a machine.

Early versions of home operating systems did not have separate named accounts for separate users. This was seen as a convenience mechanism; after all, who wants the hassle of signing into the machine? This led to the simple problem that all users could then see, modify, and delete everyone else’s content. Content could be separated by using access control mechanisms, but that required configuration of the OS to manage every user’s identity. Early versions of many OSs came with literally every option turned on. Again, this was a convenience factor, but it led to systems running processes and services that were never used, unnecessarily increasing the attack surface of the host.

Determining the correct settings and implementing them correctly is an important step in securing a host system. This section explores the multitude of controls and options that need to be employed properly to achieve a reasonable level of security on a host system.

Types

Many different systems have the need for an operating system. Hardware in networks requires an operating system to perform the networking function. Servers and workstations require an OS to act as the interface between applications and the hardware. Specialized systems such as kiosks and appliances, both of which are forms of automated single-purpose systems, require an OS between the application software and hardware.

Network

Network components use a network operating system to provide the actual configuration and computation portion of networking. There are many vendors of networking equipment, and each has its own proprietary operating system. Cisco has the largest footprint with its Internetwork Operating System (IOS), which runs on most Cisco routers and switches. Other vendors, such as Juniper with its Junos OS, build on a stripped-down FreeBSD core. As networking moves to software-defined networking (SDN), introduced in Chapter 11, the concept of the network operating system will become more important and mainstream because it will become a major part of day-to-day operations in the IT enterprise.

Server

Server operating systems bridge the gap between the server hardware and the applications that are being run on the server. Currently, server OSs include Microsoft Windows Server, many flavors of Linux, and an ever-increasing number of virtual machine/hypervisor environments. For performance reasons, Linux has long held significant market share in the realm of server OSs, while Windows Server, with its Active Directory technology and built-in Hyper-V capability, holds a commanding position in many enterprise environments.

Workstation

The workstation OS exists to provide a functional working space, typically a graphical interface, for a user to interact with the system and its various applications. Because of the high level of user interaction on workstations, it is very common to see Windows in the role of workstation OS. In large enterprises, administrators tend to favor Windows client workstations because Active Directory enables them to manage users, configurations, and settings easily across the entire enterprise.

Appliance

Appliances are stand-alone devices, wired into the network and designed to run an application to perform a specific function on traffic. These systems operate as headless servers, preconfigured with applications that run and perform a wide range of security services on the network traffic that they see. For reasons of economics, portability, and functionality, the vast majority of appliance OSs are built using a Linux-based OS. Because these are often customized distributions, keeping them patched becomes a vendor problem, as most IT people aren’t trained to manage that task. Enterprise-class intrusion detection appliances, data loss prevention appliances, backup appliances, and more are all examples of systems that bring Linux OSs into the enterprise, but not under the enterprise patch process, because maintenance is a vendor issue.

Kiosk

Kiosks are stand-alone machines, typically operating a browser instance on top of a Windows OS. These machines are usually set up to auto-login to a browser instance that is locked to a website providing all of the desired functionality. They are commonly used for interactive customer service applications, such as interactive information sites, menus, and so on. The OS on a kiosk needs to be capable of being locked down to minimal functionality so that users can’t make any configuration changes. It also should provide elements such as auto-login and an easy way to construct the locked-down application environment.

Mobile OS

Mobile devices began as phones with limited other capabilities. Mobile OSs come in two main types: Apple’s iOS and Google’s Android OS. These OSs are optimized for both device capability and the desired set of functionality. These systems are not readily expandable at the OS level, but serve as stable platforms from which users can run apps to do tasks. As the Internet and functionality spread to mobile devices, the capability of these devices has expanded as well. From smartphones to tablets, today’s mobile system is a computer with virtually all the compute capability one could ask for, with a phone attached. Chapter 9 covers how to manage mobile device security in depth.


EXAM TIP    Pay attention to the nuances in a scenario-based question that asks you to identify the correct type of operating system, for the details in the scenario will dictate the correct or best response.

Patch Management

Every OS, from Linux to Windows, requires software updates, and each OS has different methods of assisting users in keeping their systems up to date. Microsoft, for example, typically makes updates available for download from its website. While most administrators or technically proficient users may prefer to identify and download updates individually, Microsoft recognizes that nontechnical users prefer a simpler approach, which Microsoft has built into its operating systems. From Windows 7 forward, Microsoft provides an automated update functionality that will, once configured, locate any required updates, download them to your system, and even install the updates if that is your preference.

How you patch a Linux system depends a great deal on the specific version in use and the patch being applied. In some cases, a patch will consist of a series of manual steps requiring the administrator to replace files, change permissions, and alter directories. In other cases, the patches are executable scripts or utilities that perform the patch actions automatically. Some Linux versions, such as Red Hat, have built-in utilities that handle the patching process. In those cases, the administrator downloads a specifically formatted file that the patching utility then processes to perform any modifications or updates that need to be made.

Regardless of the method you use to update the OS, it is critically important to keep systems up to date. New security advisories come out every day, and while a buffer overflow may be a “potential” problem today, it will almost certainly become a “definite” problem in the near future. Much like the steps taken to baseline and initially secure an OS, keeping every system patched and up to date is critical to protecting the system and the information it contains.

Vendors typically follow a hierarchy for software updates:

•  Hotfix This term refers to a (usually) small software update designed to address a specific problem, such as a buffer overflow in an application that exposes the system to attacks. Hotfixes are typically developed in reaction to a discovered problem and are produced and released rather quickly.

•  Patch This term refers to a more formal, larger software update that can address several or many software problems. Patches often contain enhancements or additional capabilities as well as fixes for known bugs. Patches are usually developed over a longer period of time.

•  Service pack This refers to a large collection of patches and hotfixes rolled into a single, rather large package. Service packs are designed to bring a system up to the latest known good level all at once, rather than requiring the user or system administrator to download dozens or hundreds of updates separately.
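Underlying all three update types is the same basic decision: is the installed version older than the available one? Naive string comparison gets this wrong ("10.2.0" sorts before "9.9.9" as text), so version strings must be compared numerically. The sketch below is illustrative only; the package names and version data are hypothetical, not drawn from any real update feed:

```python
def version_tuple(v: str) -> tuple:
    """Turn '10.0.3' into (10, 0, 3) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def needed_updates(installed: dict, available: dict) -> list:
    """Return packages whose available version is newer than the installed one."""
    return [pkg for pkg, v in available.items()
            if pkg in installed and version_tuple(v) > version_tuple(installed[pkg])]

# Hypothetical inventory: only bash has a newer release available.
installed = {"openssl": "1.1.1", "bash": "5.0.17"}
available = {"openssl": "1.1.1", "bash": "5.1.4"}
print(needed_updates(installed, available))  # ['bash']
```

Real package managers such as yum, apt, and Windows Update perform this comparison (with more elaborate version schemes) across thousands of packages automatically.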

Disabling Unnecessary Ports and Services

An important management practice for running a secure system is to identify the specific needs of the system for its proper operation and to enable only the items necessary for those functions. Disabling unnecessary ports and services prevents their use by unauthorized users, reduces the attack surface, and can even improve system throughput. Any port or connection that is not in use should be disabled.
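Part of this practice is periodically auditing which ports actually accept connections and comparing the result against the approved-services list. As a hedged sketch (a simple TCP connect check, not a replacement for tools such as netstat, ss, or nmap), the following Python snippet probes a set of well-known ports on a host:

```python
import socket

def audit_open_ports(host: str, ports) -> list:
    """Return the subset of ports accepting TCP connections on host."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

# Probe a few well-known service ports on the local machine; anything open
# that is not on the system's approved-services list should be investigated.
print(audit_open_ports("127.0.0.1", [22, 80, 443, 3389]))
```

Any unexpected entry in the output points at a service that should either be justified or disabled.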


EXAM TIP    Disabling unnecessary ports and services is a simple way to improve system security. This minimalist setup is similar to the implicit deny philosophy and can significantly reduce an attack surface.

Least Functionality

Just as we have a principle of least privilege, we should follow a similar track with least functionality on systems. A system should do what it is supposed to do, and only what it is supposed to do. Any additional functionality is an added attack surface for an adversary and offers no additional benefit to the enterprise.

Secure Configurations

Operating systems can be configured in a variety of manners, from completely open with lots of functionality, whether it is needed or not, to relatively closed and stripped down to only the services needed to perform its intended function. Operating system developers and manufacturers all share a common problem: they cannot possibly anticipate the many different configurations and variations that the user community will require from their products. So, rather than spending countless hours and funds attempting to meet every need, manufacturers provide a “default” installation for their products that usually contains the base OS and some more commonly desirable options, such as drivers, utilities, and enhancements. Because the OS could be used for any of a variety of purposes, and could be placed in any number of logical locations (LAN, DMZ, WAN, and so on), the manufacturer typically does little to nothing with regard to security. The manufacturer may provide some recommendations or simplified tools and settings to facilitate securing the system, but in general, end users are responsible for securing their own systems. Generally, this involves removing unnecessary applications and utilities, disabling unneeded services, setting appropriate permissions on files, and updating the OS and application code to the latest version.

This process of securing an OS is called hardening, and it is intended to make the system more resistant to attack, much like armor or steel is hardened to make it less susceptible to breakage or damage. Each OS has its own approach to security, and while the process of hardening is generally the same, different steps must be taken to secure each OS. The process of securing and preparing an OS for the production environment is not trivial; it requires preparation and planning. Unfortunately, many users don’t understand the steps necessary to secure their systems effectively, resulting in hundreds of compromised systems every day. Having systems properly configured prior to use can limit the number of user-caused incidents.


EXAM TIP    System hardening is the process of preparing and securing a system and involves the removal of all unnecessary software and services.

You must meet several key requirements to ensure that the system hardening processes described in this section achieve their security goals. These are OS independent and should be a normal part of all system maintenance operations:

•  The base installation of all OS and application software comes from a trusted source, and is verified as correct by using hash values.

•  Machines are connected only to a completely trusted network during the installation, hardening, and update processes.

•  The base installation includes all current patches and updates for both the OS and applications.

•  Current backup images are taken after hardening and updates to facilitate system restoration to a known state.

These steps ensure that you know what is on the machine, can verify its authenticity, and have an established backup version.

Trusted Operating System

A trusted operating system is one that is designed to allow multilevel security in its operation. This is further defined by its ability to meet a series of criteria required by the U.S. government. Trusted OSs are expensive to create and maintain because any change must typically undergo a recertification process. The most common criteria used to define a trusted OS is the Common Criteria for Information Technology Security Evaluation (abbreviated as Common Criteria, or CC), a harmonized security criteria recognized by many nations, including the United States, Canada, Great Britain, and most of the EU countries, as well as others. Versions of Windows, Linux, mainframe OSs, and specialty OSs have been qualified to various Common Criteria levels. Trusted OSs are most commonly used by government agencies and contractors for sensitive systems that require this level of protection.

Application Whitelisting/Blacklisting

Applications can be controlled by the OS at launch time via blacklisting or whitelisting. Application blacklisting is essentially noting which applications should not be allowed to run on the machine. This is basically a permanent “ignore” or “call block” type capability. Application whitelisting is the exact opposite: it consists of a list of allowed applications. Each of these approaches has advantages and disadvantages. Blacklisting is difficult to use against dynamic threats, as the identification of a specific application can easily be avoided through minor changes. Whitelisting is easier to employ from the aspect of the identification of applications that are allowed to run—hash values can be used to ensure the executables are not corrupted. The challenge in whitelisting is the number of potential applications that are run on a typical machine. For a single-purpose machine, such as a database server, whitelisting can be relatively easy to employ. For multipurpose machines, it can be more complicated.
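The hash-based whitelisting idea can be sketched in a few lines of Python. This is an illustration of the concept only, not how any particular product (such as AppLocker) is implemented: the file is hashed with SHA-256 and allowed to run only if its hash appears on the approved list, so even a one-byte modification to an approved executable causes it to be blocked:

```python
import hashlib

def sha256_file(path: str) -> str:
    """Hash a file in chunks so large executables don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def may_execute(path: str, whitelist: set) -> bool:
    """Allow execution only if the file's hash is on the approved list."""
    return sha256_file(path) in whitelist
```

In practice the whitelist would be built when each application is approved and protected from modification, since an attacker who can edit the list can approve anything.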

Microsoft has two mechanisms that are part of the OS to control which users can use which applications:

•  Software restriction policies Employed via group policies, these allow significant control over applications, scripts, and executable files. The primary mode is by machine and not by user account.

•  User account level control Enforced via AppLocker, a service that allows granular control over which users can execute which programs. Through the use of rules, an enterprise can exert significant control over who can access and use installed software.

On a Linux platform, similar capabilities are offered from third-party vendor applications.

Disable Default Accounts/Passwords

Because accounts are necessary for many systems to be established, default accounts with default passwords are a way of life in computing. Whether in the OS or an application, these defaults represent a significant security vulnerability if not immediately addressed as part of setting up the system or installing the application. “Disable default accounts and passwords” should be such a common mantra that no systems would exist with this vulnerability. This is a simple task, and one that you must do for any new system. If you cannot disable the default account—and there will be times when this is not a viable option—the alternative is to change the password to a very long password that offers strong resistance to brute-force attacks.
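When a default account must remain enabled, its replacement password should be long and generated from a cryptographically secure source rather than invented by hand. A minimal sketch in Python, using the standard library’s `secrets` module (which is designed for security-sensitive randomness, unlike `random`):

```python
import secrets
import string

def strong_password(length: int = 32) -> str:
    """Generate a long random password resistant to brute-force guessing."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A 32-character password drawn from ~94 symbols has far more entropy
# than any brute-force attack can realistically search.
print(strong_password())
```

The generated value would then be set on the account and stored in a password vault, never reused across systems.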

Peripherals

Peripherals used to be basically dumb devices with little to no interaction, but with the low cost of compute power and the vendors’ desire to offer greater functionality, many of these devices now have embedded computers in them. This has led to the hacking of peripherals and the need for security professionals to understand their security aspects. From wireless keyboards and mice, to printers, to displays and storage devices, these items have all become sources of risk.

Wireless Keyboards

Wireless keyboards operate via a short-range wireless signal between the keyboard and the computer. The main method of connection is via either a USB Bluetooth connector, in essence creating a small personal area network (PAN), or a 2.4-GHz dongle. Wireless keyboards are frequently paired with wireless mice, removing troublesome and annoying cables from the desktop. Because of the wireless connection, the signals to and from the peripherals are subject to interception, and attacks have been made on these devices. Having the keystrokes recorded between the keyboard and the computer is equivalent to keylogging, and since it is external to the system it can be very difficult to detect.

Because of the usefulness of wireless keyboards, banning them because of the risk of signal interception is not a solid security case, but ensuring that you get them from reputable firms that patch their products is important.

Wireless Mice

Wireless mice are similar in nature to wireless keyboards. They tend to connect as a human interface device (HID) class of USB. This is part of the USB specification and is used for mice and keyboards, simplifying connections, drivers, and interfaces through a common specification.

One of the interesting security problems with wireless mice and keyboards has been the development of the mousejacking attack. This is when an attacker performs a man-in-the-middle attack on the wireless interface and can control the mouse and/or intercept the traffic. When this attack first appeared, manufacturers had to provide updates to their software interfaces to block it. Some major manufacturers, such as Logitech, made this effort for their mainstream product lines, but many older mice were never patched, and some smaller vendors have never addressed the vulnerability, so it still exists.

Displays

Computer displays are primarily connected to machines via a cable to one of several types of display connectors on a machine. But for conferences and other group settings, there is a wide array of devices today that can enable a machine to connect to a display via a wireless network. These devices are available from Apple, Google, and a wide range of AV companies. The risk of using these is simple: who else within range of the wireless signal can watch what you are beaming to the display in the conference room? And if the signal were intercepted, would you even know? In short, you wouldn’t. This doesn’t mean these devices should not be used in the enterprise, just that they should not be used when transmitting sensitive data to the screen.

Wi-Fi-Enabled MicroSD Cards

A class of Wi-Fi-enabled MicroSD cards was developed to eliminate the need to move the card from device to device to move the data. Primarily designed for digital cameras, these cards are very useful for creating Wi-Fi devices out of devices that had an SD slot. These devices work by having a tiny computer embedded in the card running a stripped-down version of Linux. One of the major vendors in this space uses a stripped-down version of BusyBox and has no security invoked at all, making the device completely open to hackers. Putting devices such as these into an enterprise network can introduce a wide variety of unpatched vulnerabilities.

Printers/MFDs

Printers have CPUs and a lot of memory. The primary purpose for this is to offload the printing from the device sending the print job to the print queue. Modern printers now come standard with a bidirectional channel, so that you can send a print job to the printer and it can send back information as to job status, printer status, and other items. Multifunction devices (MFDs) are like printers on steroids. They typically combine printing, scanning, and faxing all into a single device. This has become a popular market segment as it reduces costs and device proliferation in the office.

Connecting printers to the network allows multiple people to connect and independently print jobs, sharing a fairly expensive high-speed duplexing printer. But with the CPU, firmware, and memory comes the risk of an attack vector, and hackers have demonstrated malware passed by a printer to another computer that shares the printer. This is not a mainstream issue yet, but it has passed the proof-of-concept phase and in the future we will need to have software protect us from our printers.

External Storage Devices

The rise of network-attached storage (NAS) devices moved quickly from the enterprise into form factors that are found in homes. As users have developed large collections of digital videos and music, these external storage devices, running on the home network, solve the storage problem. These devices are typically fairly simple Linux-based appliances with multiple hard drives in a RAID arrangement. With the rise of ransomware, these devices can spread infections to any and all devices that connect to them over the network. For this reason, precautions should be taken with respect to always-on connections to storage arrays: if an always-on connection is not necessary, it should be avoided.

Digital Cameras

Digital cameras are sophisticated computing platforms that can capture images, perform image analysis, connect over networks, and even send files across the globe directly from a camera into a production system in a newsroom, for instance. The capabilities are vast, and the ability to move significant quantities of data, up to live 4K video streams, is built in. Most cameras with all of this capability are designed for high-end professional use, and the data streams are encrypted, as the typical use would require an encrypted channel.

EXAM TIP    All sorts of peripherals are used in today’s systems, and each of them needs to be properly configured to reduce the threat environment. When given a scenario-based question, the key is to use the context of the question to determine which peripheral is the correct answer. The necessary details will be in the context, not the answer choices.

Sandboxing

Sandboxing refers to the quarantine or isolation of a system from its surroundings. It has become standard practice for programs with an increased risk surface to operate within a sandbox, which limits their interaction with the CPU, memory, and other processes. This works as a means of quarantine, preventing problems from escaping the sandbox and affecting the OS and other applications on the system.

Virtualization can be used as a form of sandboxing with respect to an entire system. You can build a VM, test something inside the VM, and, based on the results, make a decision with regard to stability or whatever concern was present.
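
The basic idea, isolating risky work in a separate context so failures stay contained, can be sketched with ordinary process isolation. The following is a minimal illustration, not a production sandbox (which would add OS-level controls such as containers, seccomp, or a full VM); the function name and limits are this example’s own assumptions.

```python
import subprocess
import sys

def run_sandboxed(code: str, timeout: int = 5):
    """Run untrusted Python code in a separate, isolated process.

    Process isolation gives a basic sandbox: a crash or infinite loop
    in the child cannot take down the parent, and the timeout bounds
    runaway code.
    """
    result = subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode (ignores
        capture_output=True,                 # user site dirs and env vars)
        text=True,
        timeout=timeout,
    )
    return result.returncode, result.stdout

rc, out = run_sandboxed("print(6 * 7)")
print(rc, out.strip())  # → 0 42
```

Even in this toy form, the key property holds: whatever the child process does, the parent observes only an exit code and captured output.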

Environment

Most organizations have multiple, separate computing environments designed to provide isolation between the functions of development, test, staging, and production. The primary purpose of having these separate environments is to prevent security incidents arising from untested code ending up in the production environment. The hardware of these environments is segregated and access control lists are used to prevent users from accessing more than one environment at a time. Moving code between environments requires a special account that can access both, minimizing issues of cross-contamination.

Development

The development environment is sized, configured, and set up for developers to develop applications and systems. Unlike production hardware, the development hardware does not have to be scalable, and it probably does not need to be as responsive for given transactions. The development platform does need to use the same OS type and version as used in the production environment, because developing on Windows and deploying to Linux is fraught with difficulties that can be avoided by matching the environments in terms of OS type and version. After code is successfully developed, it is moved to a test system.

Test

The test environment fairly closely mimics the production environment—same versions of software, down to patch levels, same sets of permissions, same file structures, and so forth. The purpose of the test environment is to test a system fully prior to deploying it into production to ensure that it is bug-free and will not disrupt the production environment. The test environment may not scale like production, but from a software/hardware footprint, it will look exactly like production. This is important to ensure that system-specific settings are tested in an environment identical to that in which they will be run.

Staging

The staging environment is optional, but it is commonly used when an organization has multiple production environments. After passing testing, the system moves into staging, from where it can be deployed to the different production systems. The primary purpose of staging is to serve as a sandbox after testing, so the test system can begin testing the next release while the current release is deployed across the enterprise. One method of deployment is a staged deployment, in which software is deployed to part of the enterprise and then a pause occurs to watch for unseen problems. If none occur, the deployment continues, stage by stage, until all of the production systems are changed. By moving software in this manner, you never lose the old production system until the end of the move, giving you time to monitor and catch any unforeseen problems, and preventing a failed update from taking down all of production.
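
The stage-by-stage rollout logic described above can be sketched in a few lines. The group names and the deploy() and healthy() callbacks are hypothetical placeholders for whatever push mechanism and health check an organization actually uses.

```python
# Sketch of a staged deployment: push the new version to one group of
# production hosts at a time, pausing after each stage to check health.
# A failed check halts the rollout, leaving later stages on the old
# version.

def staged_deploy(groups, deploy, healthy):
    """Deploy group by group; return (completed groups, failed group)."""
    completed = []
    for group in groups:
        deploy(group)            # push the new version to this stage
        if not healthy(group):   # pause and watch for unseen problems
            return completed, group
        completed.append(group)
    return completed, None       # rollout finished cleanly

# Example: the third stage fails its health check, so the fourth
# stage never receives the update.
log = []
done, failed = staged_deploy(
    ["pilot", "region-a", "region-b", "region-c"],
    deploy=log.append,
    healthy=lambda g: g != "region-b",
)
print(done, failed)  # → ['pilot', 'region-a'] region-b
print(log)           # → ['pilot', 'region-a', 'region-b']
```

The payoff is visible in the example: because the rollout stops at the failing stage, "region-c" keeps running the old, known-good version.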

Production

The production environment is where the systems work with real data, doing the business that the system is intended to perform. This is an environment where, by design, very few changes occur, and those that do must first be approved and tested via the system’s change management process.

EXAM TIP    Understand the structure and purpose of the different environments so that when given a scenario and asked to identify which environment is appropriate, you can pick the best answer: development, test, staging, or production.

Secure Baseline

To secure the software on a system effectively and consistently, you must take a structured and logical approach. Start by examining the system’s intended functions and capabilities to determine what processes and applications will be housed on the system. As a best practice, you should remove or disable anything that is not required for operations; then, apply all the appropriate patches, hotfixes, and settings to protect and secure the system.

This process of establishing software’s base security state is called baselining, and the resulting product is a secure baseline that allows the software to run safely and securely. Software and hardware can be tied intimately when it comes to security, so you must consider them together. Once you have completed the baselining process for a particular hardware and software combination, you can configure any similar systems with the same baseline to achieve the same level and depth of security and protection. Uniform software baselines are critical in large-scale operations, because maintaining separate configurations and security levels for hundreds or thousands of systems is far too costly.

After administrators have finished patching, securing, and preparing a system, they often create an initial baseline configuration. This represents a secure state for the system or network device and a reference point of the software and its configuration. This information establishes a reference that can be used to help keep the system secure by establishing a known safe configuration. If this initial baseline can be replicated, it can also be used as a template when deploying similar systems and network devices.
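
One small piece of checking a host against such a baseline can be sketched as a set comparison. The baseline contents and service names below are hypothetical examples, not a recommended configuration.

```python
# Sketch: flag anything running on a host that is not part of the
# approved secure baseline. Baseline and service names are invented
# for illustration.

APPROVED_BASELINE = {"sshd", "ntpd", "rsyslogd"}

def baseline_violations(running_services):
    """Return services present on the host but absent from the baseline."""
    return sorted(set(running_services) - APPROVED_BASELINE)

print(baseline_violations(["sshd", "telnetd", "ntpd", "ftpd"]))
# → ['ftpd', 'telnetd']
```

Run across hundreds of hosts, a check like this is what makes a uniform baseline enforceable rather than merely documented.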

Integrity Measurement

Integrity measurement is the measuring and identification of changes to a specific system away from an expected value. From the simple changing of data as measured by a hash value to the TPM-based integrity measurement of the system boot process and attestation of trust, the concept is the same: take a known value, store a hash or other keyed value, and then, at the time of concern, recalculate and compare.
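
The known-value/recalculate/compare cycle can be shown in a few lines with an ordinary cryptographic hash. The data being measured here is an invented example; a real system would measure files, firmware, or boot components.

```python
import hashlib

# Sketch of hash-based integrity measurement: record a known-good
# SHA-256 value, then recalculate later and compare.

def measure(data: bytes) -> str:
    """Return the SHA-256 hex digest of the data."""
    return hashlib.sha256(data).hexdigest()

# At baseline time: store the known-good measurement.
known_good = measure(b"config_version=1\n")

# At the time of concern: recalculate and compare.
current = measure(b"config_version=1\n")
tampered = measure(b"config_version=1\nbackdoor=true\n")

print(current == known_good)   # → True
print(tampered == known_good)  # → False
```

A TPM applies the same idea in hardware, extending measurements into registers that software cannot silently rewrite.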

In the case of a TPM-mediated system, where the TPM chip provides a hardware-based root of trust, the TPM is specifically designed to calculate hashes of a system and store them in a Platform Configuration Register (PCR). This register can be read later and compared to a known, or expected, value; if they differ, there is a trust violation. Certain BIOS, UEFI, and boot loader implementations can work with the TPM chip in this manner, providing a means of establishing a chain of trust during system boot.

Chapter Review

In this chapter, you became acquainted with the elements of secure system design and deployment. The chapter opened with hardware/firmware security, exploring FDE/SED, TPM, and HSM devices, UEFI/BIOS, Secure Boot and attestation, supply chain, hardware root of trust, and EMI/EMP issues. Next, we surveyed the various types of operating systems, including network, server, workstation, appliance, kiosk, and mobile OSs, and then looked at hardening those systems via patch management, disabling unnecessary functions, ports, and services, and adhering to the concept of least functionality. We continued with the secure OS design concepts of secure configurations, trusted operating systems, application whitelisting/blacklisting, and disabling default accounts and passwords.

The chapter then explored security aspects of peripheral devices, including wireless keyboards and mice, displays, Wi-Fi-enabled MicroSD cards, printers and MFDs, external storage devices, and digital cameras.

The chapter closed with an examination of secure staging deployment concepts, including sandboxing, maintaining independent environments of development, test, staging, and production, creating a secure baseline, and monitoring change through integrity measurement.

Questions

To help you prepare further for the exam, and to test your level of preparedness, answer the following questions and then check your answers against the correct answers at the end of the chapter.

1. Why is physical security an essential element of a security plan?

A. Because employees telecommute, physical security is of lesser concern.

B. Physical security is not necessary with capabilities like encrypted hard drives and UEFI.

C. Unauthorized access to hardware and networking components can make many security controls ineffective.

D. Physical security has no impact to software security.

2. Which of the following is true concerning the purpose of full disk encryption and self-encrypting drives?

A. They significantly affect user response times during the encryption process.

B. They make offline attacks easier.

C. They eliminate the need for physical security measures.

D. They protect the data even if the disk is removed from the machine.

3. What is the primary purpose of the TPM?

A. To store encryption keys and make them inaccessible via normal software channels

B. To ensure platforms can run in a trusted environment

C. To facilitate storage of keys in the machine’s normal storage

D. To safely use system-provided key generation and storage and random number generation capabilities

4. Which of the following is not true about HSMs?

A. They are devices used to manage or store encryption keys.

B. Their limiting factor is performance.

C. They allow the use of keys without exposing them to host-based threats.

D. They typically have tamper-protection mechanisms to prevent physical access.

5. Why is UEFI preferable to BIOS?

A. UEFI resides on the hardware, making it faster than BIOS.

B. UEFI is stored in volatile hardware storage.

C. UEFI has limited ability to deal with high-capacity storage and high-bandwidth communications and thus is more optimized.

D. UEFI has more security designed into it, including provisions for secure booting.

6. Secure Boot performs all of the following except:

A. It provides all approved drivers needed.

B. It enables attestation that drivers haven’t changed since they were approved.

C. It only allows signed drivers and OS loaders to be invoked.

D. It blocks malware that attempts to alter the boot process.

7. When researching the security of a device manufacturer’s supply chain, which of the following is most difficult to determine?

A. Once a device is ordered, the purchaser can be sure its source won’t change.

B. Specifications are consistent between lots.

C. Country of origin.

D. The purchaser can rely on the root of trust to be consistent.

8. Which of the following is not true regarding hardware roots of trust?

A. They are secure by design.

B. They have very specific functionality.

C. They are typically implemented in hardware that is isolated from the operating system.

D. They provide security only at their level, not to higher layers of a system.

9. Which of the following is true about electromagnetic interference (EMI)?

A. It is a well-known issue and computer systems are protected from it.

B. Fluorescent lights can produce EMI that can affect computer systems.

C. Industrial equipment doesn’t produce EMI.

D. Shielding protects most devices from EMI.

10. What is an important step in securing a host system?

A. Determining the correct settings and implementing them correctly

B. Using the operating system’s embedded options for ease of configuration

C. Increasing the attack surface by enabling all available settings

D. Use manufacturer settings to provide a secure baseline to work from

11. Which of the following is a stand-alone machine, typically operating a browser on top of a Windows OS and set up to autologin to a browser instance locked to a specific website?

A. Workstation

B. Kiosk

C. Appliance

D. Server

12. Which of the following is a more formal, larger software update that addresses many software problems, often containing enhancements or additional capabilities as well as fixes for known bugs?

A. Hotfix

B. Service pack

C. Patch

D. Rollup

13. What is a simple way to improve system security?

A. Enabling all ports and services

B. Maintaining comprehensive access control rules

C. Disabling unnecessary ports and services

D. Optimizing system throughput

14. Why is the principle of least functionality important?

A. A system needs to be flexible in the functions it performs.

B. Manufacturer settings control known vulnerabilities.

C. Dynamically assigning functions reduces the attack surface.

D. Unnecessary functionality adds to the attack surface.

15. All of the following are steps in the OS hardening process except for:

A. Removing unnecessary applications and utilities

B. Disabling unneeded services

C. Updating the OS and application code to the latest version

D. Accepting default permissions

Answers

1. C. Physical security is an essential element of a security plan because unauthorized access to hardware and networking components can make many security controls ineffective.

2. D. The purpose of full disk encryption (FDE) and self-encrypting drives (SEDs) is to protect the data even if the disk is removed from the machine.

3. A. The primary purpose of Trusted Platform Module (TPM) is to store encryption keys and make them inaccessible via normal software channels.

4. B. Performance is not a limiting factor for HSMs.

5. D. UEFI is preferable to BIOS because it has more security designed into it, including provisions for secure booting.

6. A. Secure Boot does not provide all drivers; rather, it ensures they are signed and unchanged.

7. C. The country of origin of all of a device’s components is the most difficult element to determine when researching a manufacturer’s supply chain.

8. D. Hardware roots of trust are built on the principle that if one “trusts” one layer, that layer can be used to promote security to higher layers of a system.

9. B. Fluorescent lights can produce EMI that can affect computer systems.

10. A. An important step in securing a host system is determining the correct settings and implementing them correctly.

11. B. A kiosk is a stand-alone machine, typically operating a browser on top of a Windows OS and set up to autologin to a browser instance locked to a specific website.

12. C. A patch is a more formal, larger software update that addresses many software problems, often containing enhancements or additional capabilities as well as fixes for known bugs.

13. C. Disabling unnecessary ports and services is a simple way to improve system security.

14. D. The principle of least functionality is important because unnecessary or unused functions add to the attack surface.

15. D. Accepting default permissions is not part of the OS hardening process.