Chapter 5

Security Architecture and Engineering

IN THIS CHAPTER

  • Adopting secure design principles
  • Understanding security models
  • Choosing the right controls and countermeasures
  • Using security capabilities in information systems
  • Assessing and mitigating vulnerabilities
  • Deciphering cryptographic concepts and fundamentals
  • Getting physical with physical security design concepts

Security must be part of the design of information systems, as well as of the facilities that house information systems and workers. These topics are covered in the Security Architecture and Engineering domain, which represents 13 percent of the CISSP certification exam.

Research, Implement, and Manage Engineering Processes Using Secure Design Principles

It is a natural human tendency to build things without first considering their design or security implications. A network engineer who is building a new network may just start plugging cables into routers and switches without thinking about the overall design — much less any security or privacy considerations. Similarly, a software engineer assigned to write a new program is apt to begin coding without planning the program’s architecture or design.

Crossreference This section covers Objective 3.1 of the Security Architecture and Engineering domain in the CISSP Exam Outline (May 1, 2021).

If we observe the outside world and the consumer products available in it, we sometimes see egregious usability and security flaws that make us wonder how the people or organizations responsible were ever allowed to participate in design and development.

Tip Security professionals need to help organizations understand that security-by-design principles are vital components of the development of any system.

The engineering processes that require the inclusion of secure design principles include the following:

  • Concept development: From the idea stage, security considerations are vital to the success of any new IT engineering endeavor. Every project and product starts with something: a whiteboard session, sketches on cocktail napkins or pizza boxes, or a conference call. However the project starts, someone should ask how vital data, functions, and components will be protected in this new thing. We’re not looking for detailed answers; we’re looking for just enough confidence to know that we aren’t the latest lemmings rushing toward the nearest cliff.
  • Requirements: Before actual design begins, one or more people will define the requirements for the new system or feature. Often, there are several categories of requirements. Security, privacy, and regulatory requirements need to be included.
  • Design: After all requirements have been established and agreed on, formal design of the system or component can begin. Design must incorporate all requirements established in the preceding step.
  • Development: Depending on what is being built, development may take many forms, including creating
    • System and device configurations
    • Data center equipment racking diagrams
    • Data flows for management and monitoring systems
  • Testing: Individual components and the entire system are tested to confirm that each requirement developed earlier has been achieved. Generally, someone other than the builder/developer should perform testing.
  • Implementation: When the system or component is placed into service, security considerations help ensure that the new system/component and related things are not at risk. Implementation activities include
    • Configuring and cabling network devices
    • Installing and configuring operating systems (OSes) and subsystems, such as database management systems, web servers, and applications
    • Constructing physical facilities, work areas, and data centers
  • Maintenance and support: After the system or facility is placed into service, all subsequent changes need to undergo similar engineering steps to ensure that new or changing security risks are quickly mitigated.
  • Decommissioning: When a system or facility reaches the end of its service life, it must be decommissioned without placing data, other systems, or personnel at risk.

Tip The Building Security in Maturity Model (BSIMM) is a software security benchmarking tool that provides a framework for software security. The model is composed of 256 measurements and 113 activities. BSIMM activities are organized into 12 practices across four domains: governance, intelligence, SSDL touchpoints, and deployment. Go to https://www.bsimm.com to learn more.

The application development life cycle also includes security considerations that are nearly identical to the security engineering principles discussed here. Application development is covered in Chapter 10.

Design principles and concepts associated with security architecture and engineering include the following:

  • Threat modeling
  • Least privilege (and need to know)
  • Defense in depth
  • Secure defaults
  • Fail securely
  • Separation of duties
  • Keep it simple
  • Zero trust
  • Privacy by design
  • Trust but verify
  • Shared responsibility

These principles and concepts are discussed in detail in the remainder of this section.

Threat modeling

Threat modeling is a type of risk analysis used to identify security defects in the design phase of an information system or business process. Threat modeling is most often applied to software applications, but it can be used for OSes, devices, and business processes with equal effectiveness.

Threat modeling is typically attack-centric; it is most often used to identify vulnerabilities in information systems that an attacker could exploit.

Threat modeling is most effective when performed during the design phase of an information system, application, or process. When threats and their mitigation are identified during the design phase, much effort is saved by the avoidance of fixes in a completed system.

Although there are different approaches to threat modeling, the typical steps are

  • Identifying threats
  • Determining and diagramming potential attacks
  • Performing reduction analysis
  • Remediating threats

Identifying threats

Threat identification is the first step in threat modeling. Threats are those actions that an attacker may be able to perform successfully if corresponding vulnerabilities are present in the application, system, or process.

For software applications, two mnemonics are used as a memory aid during threat modeling:

  • STRIDE, a list of basic threats (developed by Microsoft):
    • Spoofing of user identity
    • Tampering
    • Repudiation
    • Information disclosure
    • Denial of service
    • Elevation of privilege
  • DREAD, an older technique used for assessing threats:
    • Damage
    • Reproducibility
    • Exploitability
    • Affected users
    • Discoverability

Although these mnemonics themselves don’t contain threats, they do assist the person performing threat modeling by reminding them of basic threat categories (STRIDE) and their analysis (DREAD).
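To make DREAD concrete, here is a minimal scoring sketch; the 0-to-10 rating scale and the use of a simple average are illustrative assumptions, not part of any formal standard:

```python
# Sketch of DREAD threat scoring. The 0-10 scale and the averaging
# scheme are illustrative assumptions, not a formal standard.

def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """Average the five DREAD ratings (each 0-10) into one risk score."""
    ratings = [damage, reproducibility, exploitability,
               affected_users, discoverability]
    if not all(0 <= r <= 10 for r in ratings):
        raise ValueError("each DREAD rating must be between 0 and 10")
    return sum(ratings) / len(ratings)

# Example: a damaging, easily reproduced threat that is hard to discover
print(dread_score(8, 9, 7, 6, 2))  # 6.4
```

Scores computed this way let an analyst rank identified threats so that remediation effort goes to the highest-rated ones first.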

Tip Appendices D and E in NIST SP 800-30, Guide for Conducting Risk Assessments, are excellent general-purpose sources for threats.

Determining and diagramming potential attacks

After threats have been identified, threat modeling continues through the creation of diagrams that illustrate attacks on an application or system. An attack tree can be developed, outlining the steps required to attack a system. Figure 5-1 illustrates an attack tree for a mobile banking application.


© John Wiley & Sons, Inc.

FIGURE 5-1: Attack tree for a mobile banking application.

Remember An attack tree illustrates the steps used to attack a target system.
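An attack tree can also be represented as a nested data structure. This sketch loosely follows the mobile-banking example; the goals and attack steps shown are hypothetical:

```python
# Attack tree as nested dicts: each key is a goal, and each list holds
# that goal's leaf attack steps. The steps are hypothetical examples.
ATTACK_TREE = {
    "Steal funds via mobile banking app": {
        "Compromise user credentials": [
            "Phish the user",
            "Install a keylogger",
        ],
        "Exploit the app itself": [
            "Tamper with API calls",
            "Abuse a session-handling flaw",
        ],
    },
}

def leaf_steps(tree):
    """Flatten the tree into its concrete attack steps (the leaves)."""
    steps = []
    for value in tree.values():
        if isinstance(value, dict):
            steps.extend(leaf_steps(value))
        else:
            steps.extend(value)
    return steps

print(leaf_steps(ATTACK_TREE))  # the four leaf attack steps
```

Enumerating the leaves this way gives the analyst a checklist of attacks that each need a corresponding mitigation.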

Performing reduction analysis

When you’re performing a threat analysis on a complex application or a system, it is likely that many similar elements will represent duplications of technology. Reduction analysis is an optional step in threat modeling that prevents duplication of effort. It doesn’t make sense to spend a lot of time analyzing different components in an environment if all of them have the same technology and configuration.

Here are typical examples:

  • An application contains several form fields (derived from the same source code) that request bank account numbers. Because all the field input modules use the same code, detailed analysis needs to be done only once.
  • An application sends several different types of messages over the same Transport Layer Security connection. Because the same certificate and connection are being used, a detailed analysis of the TLS connection needs to be done only once.

Remediating threats

As in routine risk analysis, the next step in threat analysis is enumerating potential measures to mitigate the identified threat. Because the nature of threats varies widely, remediation may consist of carrying out one or more of the following tasks for each risk:

  • Change source code (such as adding functions to closely examine input fields and filter out injection attacks).
  • Change configuration (such as switching to a more secure encryption algorithm or expiring passwords more frequently).
  • Change business process (such as adding or changing steps in a process or procedure to record or examine key data).
  • Change personnel (such as providing training or moving responsibility for a task to another person).

Remember Recall that the four options for risk treatment are mitigation, transfer, avoidance, and acceptance. In the case of threat modeling, some threats may be accepted as they are.

Least privilege (and need to know)

The principle of least privilege states that people should have the capability to perform only the tasks (or access only the data) required to perform their primary jobs, and no more.

Giving a person more privileges and access than required increases risk and invites trouble. Offering the capability to perform more than the job requires may become a temptation that results, sooner or later, in an abuse of privilege.

Giving a user full permissions on a network share rather than just read and modify rights to a specific directory, for example, opens the door not only to abuse of those privileges (such as reading or copying other sensitive information on the network share), but also to costly mistakes (such as accidentally deleting a file — or the entire directory!). As a starting point, organizations should approach permissions with a “deny all” mentality and add needed permissions as required.
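The "deny all" starting point can be sketched as a default-deny permission check; the roles and permission names below are hypothetical examples:

```python
# Default-deny access check: any permission not explicitly granted is
# refused. The roles and permission strings are hypothetical examples.

GRANTS = {
    "analyst":  {"reports:read"},
    "hr_admin": {"personnel:read", "personnel:modify"},
}

def is_allowed(role, permission):
    """Return True only if the permission was explicitly granted."""
    return permission in GRANTS.get(role, set())

print(is_allowed("analyst", "reports:read"))      # True: explicitly granted
print(is_allowed("analyst", "personnel:modify"))  # False: never granted
print(is_allowed("intern", "reports:read"))       # False: unknown role
```

Because the lookup defaults to an empty set, an unknown role or an ungranted permission is denied automatically, which is exactly the "deny all, then add" posture described above.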

Warning Giving users excessive privileges in network shares makes those shares vulnerable to ransomware attacks: malware running with a user’s credentials can encrypt everything that user is permitted to modify.

The concept of need to know states that only people with a valid business justification should have access to specific information or functions. In addition to having a need to know, a person must have an appropriate security clearance level to be granted access. Conversely, a person with the appropriate security clearance level but without a need to know should not be granted access.

One of the most difficult challenges in managing need to know is the use of controls that enforce the concept. Information owners need to be able to distinguish genuine need from curiosity and proceed accordingly.

Tip Least privilege is closely related to separation of duties and responsibilities, described later in this section. Distributing the duties and responsibilities of a given job function among several people means that those people require fewer privileges on a system or resource.

Remember The principle of least privilege states that people should have the fewest privileges necessary to perform their tasks.

Several important concepts associated with need to know and least privilege include

  • Entitlement: When a new user account is provisioned in an organization, the permissions granted to that account must be appropriate for the level of access required by the user. In too many organizations, human resources simply instructs the IT department to give a new user “whatever so-and-so (another user in the same department) has access to.” Instead, entitlement needs to be based on the principle of least privilege.
  • Aggregation: When people transfer between jobs and/or departments within an organization (see the section on job rotations later in this chapter), they often need different access and privileges to do their new jobs. Far too often, organizational security processes do not adequately ensure that access rights that a person no longer requires are revoked. Instead, people accumulate privileges, and over a period of many years, they can have far more access and privileges than they need. This process is known as aggregation, and it’s the antithesis of least privilege.

    Privilege creep and accumulation of privileges are other terms commonly used in this context.

  • Transitive trust: Trust relationships (in the context of security domains) are often established within and between organizations to facilitate ease of access and collaboration. A trust relationship enables subjects (such as users or processes) in one security domain to access objects (such as servers or applications) in another security domain. (See Chapters 5 and 7 for more about objects and subjects.) A transitive trust extends access privileges to the subdomains of a security domain (analogous to inheriting permissions to subdirectories within a parent directory structure). Instead, a nontransitive trust should be implemented by requiring access to each subdomain to be explicitly granted based on the principle of least privilege, rather than inherited.

Defense in depth

Defense in depth is a strategy for resisting attacks. A system that employs defense in depth has two or more layers of protective controls designed to protect the system or data stored there.

An example defense-in-depth architecture would consist of a database protected by several components, such as

  • Screening router
  • Firewall
  • Intrusion prevention system
  • Hardened OS
  • OS-based network access filtering

All the layers listed here help protect the database. In fact, each by itself offers nearly complete protection. But when considered together, all these controls offer a varied (in effect, deeper) defense — hence, the term defense in depth.

True defense in depth employs heterogeneous, versus homogeneous, protection. Employing two back-to-back firewalls of the same make and model, for example, constitutes a poor implementation of defense in depth: a security flaw in one of the firewalls is likely to be present in the other one. A better example of defense in depth would be back-to-back firewalls of different makes (such as one made by Cisco and the other made by Palo Alto Networks); a security flaw in one is unlikely to be present in the other.

Remember Defense in depth refers to the use of multiple layers of protection.

Secure defaults

The concept of secure defaults encompasses several techniques, including

  • Secure by design: The architecture and relationship of components in a system contribute to its resilience to attack.
  • Secure by default: Configuration settings and other options are adjusted to secure settings.
  • Secure by deployment: The procedures used to implement a system don’t compromise its security.

These techniques ensure that the design of new information systems includes inherent security in all phases of development and implementation. When the techniques are performed correctly, little or no retrofit to a system will be required after it is tested by security specialists who use techniques such as threat modeling and penetration testing.

Fail securely

Fail securely is a concept that describes the result of the failure of a control or safeguard. A control or safeguard is said to fail securely if its failure does not result in a reduction in protection. Consider a door that is used to control personnel access to a secure location. If the mechanism used to admit authorized personnel to the secure location fails, the door should remain locked, meaning that it is secure and continues to block unauthorized access.

Fail securely replaces the terms fail open and fail closed. These two older terms were sometimes confusing, depending on the context of a control. In some examples, failing open was secure, but in other examples, failing closed was secure. The confusion was not unlike the use of a double negative, such as a security door that is not secure in certain circumstances. Conversations that included fail open and fail closed often digressed into discussions of the meaning of the terms and whether failing open or failing closed was good or bad. Fortunately, fail securely came to the rescue, helping us better understand the context of a conversation.
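The door example above can be sketched in code; the badge directory and lookup function are hypothetical stand-ins, and the point is that any failure in the control's own logic results in denial, not admission:

```python
# Fail-securely sketch: if the admission check itself fails, the door
# stays locked. The badge directory is a hypothetical stand-in.

def badge_is_authorized(badge_id, directory):
    return directory[badge_id]  # raises KeyError for unknown badges

def unlock_door(badge_id, directory):
    """Unlock only on a positive answer; any failure denies access."""
    try:
        return badge_is_authorized(badge_id, directory) is True
    except Exception:
        return False  # the control failed, so the door remains locked

staff = {"B-100": True, "B-200": False}
print(unlock_door("B-100", staff))  # True: authorized badge
print(unlock_door("B-200", staff))  # False: known but not authorized
print(unlock_door("B-999", staff))  # False: lookup failed, stays locked
```

The design choice here is that the error path and the "not authorized" path converge on the same secure outcome, so a malfunction can never widen access.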

Separation of duties

The concept of separation of duties (SoD, or segregation of duties and responsibilities) ensures that no single person has complete authority and control of a critical system or process. SoD is discussed further in Chapter 9.

Keep it simple

It is often said that complexity is the enemy of security and, conversely, that simplicity is the friend of security. These adages reflect the realization that more complex environments are inherently more difficult to secure, and the security posture of such an environment is harder to understand because of the higher number of components.

In information security, simplicity often calls for consistency of approach to system and data protection. Elegance of design is another way to think about simplicity. In security, less is more: Given two identical environments, the one with a simple yet effective design will be easier for engineers to understand than a complex architecture.

Security engineers and specialists often call on the KISS (Keep It Simple, Stupid) principle. No, we’re not calling you or anyone stupid. We didn’t make up this principle, but we do see it cited often.

Zero trust

The concept of zero trust has been around for a long time but is now gaining a lot of favor. Zero trust (ZT) is a popular buzzword these days, although it is not always well understood. We want you to be buzzword-compliant, so read on to find out more.

Zero trust is an about-face from the earlier notion that all devices within an organization’s network were considered to be trustworthy. Organizations have been compromised countless times because of this fateful assumption, often because attackers found it far too easy to attack trusted systems and endpoints; they usually gained carte blanche access to other systems because the compromised system was considered to be trustworthy.

Zero trust is not a product, tool, or technique; it’s a design principle that is implemented in different ways to ensure that systems retain their security and integrity. Here are some examples of zero trust in action:

  • An endpoint is not permitted to connect to a network (whether onsite, wired, wireless, or remote) until it can prove that its antivirus and other mechanisms are functioning properly and are up to date.
  • A user is not permitted to perform a high-risk or high-value transaction until they reauthenticate, proving that they are still in control.
  • A system is not permitted to communicate with another system until each is able to authenticate to the other.
  • A user is not permitted to access files or directories in a file share unless they demonstrate a need to know.
  • A newly acquired piece of open-source software is not considered to be secure until it can be analyzed by a source code analyzer to ensure that it is free of exploitable vulnerabilities.
  • New executable programs are not permitted to run on a system until they have been vetted by security personnel and added to a whitelist.
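The last example in the list above, permitting only vetted executables, can be sketched as a hash-based allowlist check; the file contents and the allowlist itself are hypothetical:

```python
# Zero-trust allowlist sketch: a program may run only if its SHA-256
# digest was explicitly vetted beforehand. The program contents here
# are hypothetical and hashed on the fly for demonstration.
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

vetted_binary = b"#!/bin/sh\necho approved tool\n"
ALLOWLIST = {sha256_of(vetted_binary)}  # populated by security personnel

def may_execute(program_bytes: bytes) -> bool:
    """Deny by default: execute only if the digest is on the allowlist."""
    return sha256_of(program_bytes) in ALLOWLIST

print(may_execute(vetted_binary))             # True: digest was vetted
print(may_execute(b"#!/bin/sh\nrm -rf /\n"))  # False: never vetted
```

Because even a one-byte change to a program produces a different digest, a tampered or unknown executable is refused without any per-file judgment call at run time.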

Privacy by design

Privacy (as we discuss more fully in Chapter 3) includes measures not only to protect information about people, but also to ensure the proper uses of personal information. Focusing on proper use here, the principle of privacy by design ensures that information systems have several capabilities, including

  • Providing mechanisms to control who has access to personal information
  • Providing visibility into the uses of personal information
  • Providing visibility into the movement of personal information as it enters, moves about, and leaves the organization
  • Providing means for performing anonymization and pseudonymization of individual records and entire databases
  • Providing means for removing business records containing personal information when they have reached the end of their retention period
  • Providing means for easily determining the uses of personal information for specific persons upon request
  • Providing means for excluding specific people from various types of processing upon request of those people (a process commonly known as opt-out)
  • Alerting management when new or unauthorized uses of personal information occur

Since the passage of recent privacy laws (generally starting with the European General Data Protection Regulation [GDPR]), it’s not enough for organizations simply to protect personal information. Now organizations must build structures that provide visibility into and control of the uses of personal information so that organizations do not run afoul of these new laws.

We’ll further explain some of the preceding terms. Organizations are realizing that the consequences of failing to use and protect personal information properly are climbing rapidly, with potential fines that can wipe out an organization’s profitability. New privacy laws incentivize organizations to remove personal information from their databases as soon as that information is no longer needed. The rights of data subjects to opt out and to be forgotten can compel organizations to build mechanisms to remove them from their records. Techniques that organizations can use include

  • Anonymization: An organization can remove from its databases specific fields that identify a data subject. These fields might include a subject’s name, address, government identifiers such as social insurance or driver’s license number, email address, and phone number.
  • Pseudonymization: An organization can substitute pseudonyms for personal data in identifiable fields so that the records no longer relate to specific people.

Although pseudonymization has many uses, it should be distinguished from anonymization: it may provide only limited protection for the identity of data subjects because it may allow identification through indirect means. Where a pseudonym is used, it may be possible to identify the data subject by analyzing the underlying or related data. When done properly, however, these two techniques constitute the effective removal of a data subject from an organization’s records.
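One common way to implement pseudonymization is with a keyed hash, as in this sketch; the keyed-hash approach, the key, and the field names are illustrative assumptions, not requirements of any privacy law:

```python
# Pseudonymization sketch using a keyed hash (HMAC-SHA256). The secret
# key must be stored separately from the data: anyone holding it can
# link pseudonyms back to subjects, which is why pseudonymized data is
# not the same as anonymized data.
import hashlib
import hmac

SECRET_KEY = b"example-key-kept-separately"  # hypothetical key

def pseudonymize(identifier: str) -> str:
    digest = hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]  # shortened for readability

record = {"name": "A. Subject", "email": "subject@example.com", "balance": 1200}
safe = {"subject_id": pseudonymize(record["email"]), "balance": record["balance"]}
print(safe)  # identifying fields replaced by a stable pseudonym
```

Because the same input always yields the same pseudonym, records about one person remain linkable for analysis even though the direct identifiers are gone.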

Warning Before removing records from a database upon the request of a data subject, organizations must also consider minimum retention periods required by other laws. Generally, those other laws will prevail.

Tip Readers who want to understand more about data privacy can pick up a copy of Certified Information Privacy Manager All-In-One Exam Guide (www.mhprofessional.com/cipm-certified-information-privacy-manager-all-in-one-exam-guide-2315615) or Certified Data Privacy Solutions Engineer All-In-One Exam Guide (www.mhprofessional.com/cdpse-certified-data-privacy-solutions-engineer-all-in-one-exam-guide-2261479).

Trust but verify

The concept trust but verify was made popular in the 1980s, when President Ronald Reagan signed a treaty with the Soviet Union that included provisions for each country to not only enforce the limitation of nuclear armaments, but also inspect the other’s nuclear arsenal to confirm compliance with the treaty.

In information security, the principle means that certain controls or mechanisms should be examined or tested periodically to ensure that they comply with policies or requirements. Although examining or testing a system is an operational activity performed on a system after it has been designed and implemented, the design of a system should permit it to be examined. Here are a few examples:

  • A system’s source code should not be obfuscated (at least in testing phases), because doing so would inhibit the use of source-code inspection tools.
  • It should be possible to inspect the contents of encrypted channels to verify their proper use.
  • Password hashes should be cracked periodically to ensure that passwords comply with policy.
  • Access logs should be checked periodically to see whether personnel attempt to access workspaces they are not authorized to enter.

Shared responsibility

This is a fundamental truth that is not universally understood: Cloud providers do not take care of information security — not all of it, anyway. More breaches and information leaks than we can count have occurred because organizations and people did not understand this concept (and because of lack of training and plain old sloppiness).

Better cloud service providers — and by this, we mean Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) — have developed specific documents known as shared responsibility matrices, often in visual form, so that their customers have a clearer idea of what security controls are taken care of by the service provider and what controls are the responsibility of the customer. Sometimes, however, specific service providers don’t provide clear guidance, in which case a skilled information security specialist needs to examine the characteristics of the service and discern the responsibility boundaries. However you get to this clear determination, it’s critical that organizations understand precisely what they should be doing with regard to security and privacy and what the service provider is supposed to be doing.

Figures 5-2 and 5-3 show typical shared responsibility matrices from Amazon Web Services (AWS) and Microsoft Azure, respectively. Note that the matrices visually depict the areas in which AWS and Azure provide security and those in which customers are required to provide security.


Source: AWS

FIGURE 5-2: AWS shared responsibility matrix.


Source: Microsoft

FIGURE 5-3: Azure shared responsibility matrix.

Examples of what shared responsibility means at various levels for different cloud services include the following:

  • Access control: Generally, SaaS, IaaS, and PaaS providers take care of administrative access to the underlying systems they operate. It’s your responsibility, however, to create, allocate, and manage all user access to the systems you run in those environments.
  • Network security: Generally, a PaaS and SaaS provider will employ firewalls and other capabilities to protect their environments from many kinds of attacks. Most IaaS services provide no such controls, however. If you want firewalls to protect your IaaS OSes, you have to implement them yourself.
  • System security: SaaS and PaaS providers will ensure that underlying OSes are current, patched, and hardened. But in an IaaS environment, you are required to configure and manage OSes with whatever level of security you want.
  • Source-code security: Generally, a PaaS and SaaS provider will employ means to verify that the software it provides is reasonably free of exploitable defects. But you must do all the work on your own to ensure that any software you develop and use in a PaaS or IaaS environment is free of defects.
  • Physical security: Virtually all SaaS, PaaS, and IaaS providers are going to take care of physical security (and environmental) concerns for all system components they provide. If part of an overall system resides in your own premises, however, you have to protect those systems with whatever measures you deem appropriate.

Understand the Fundamental Concepts of Security Models

Security models help us understand complex security mechanisms in information systems by illustrating concepts that can be used to analyze an existing system or design a new one.

Crossreference This section covers Objective 3.2 of the Security Architecture and Engineering domain in the CISSP Exam Outline (May 1, 2021).

Models are used to express access control requirements in a theoretical or mathematical framework and precisely describe or quantify real access control systems. Several important access control models include

  • Biba
  • Bell-LaPadula
  • Access Matrix
  • Discretionary Access Control
  • Mandatory Access Control
  • Take-Grant
  • Clark-Wilson
  • Information Flow
  • Noninterference

These models are discussed in the following sections.

Remember The Bell-LaPadula, Access Matrix, Mandatory Access Control, Discretionary Access Control, and Take-Grant models address the confidentiality of stored information. The Biba and Clark-Wilson models address the integrity of stored information.

Biba

The Biba integrity model (sometimes referred to as Bell-LaPadula upside down) was the first formal integrity model. Biba is a lattice-based model that addresses the first goal of integrity: ensuring that modifications to data aren’t made by unauthorized users or processes. (See Chapter 3 for a complete discussion of the three goals of integrity.) Biba defines the following two properties:

  • Simple integrity property: A subject can’t read information from an object that has a lower integrity level than the subject (also called no read down).
  • *-integrity property (star integrity property): A subject can’t write information to an object that has a higher integrity level than the subject (also known as no write up).
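These two properties can be sketched as integrity-label comparisons, the mirror image of Bell-LaPadula's checks; the numeric ordering of integrity levels is an illustrative assumption:

```python
# Biba integrity checks: the mirror image of Bell-LaPadula. The numeric
# ordering (higher = more trustworthy) is an illustrative assumption.
INTEGRITY = {"Low": 1, "Medium": 2, "High": 3}

def may_read(subject_level, object_level):
    """Simple integrity property: no read down."""
    return INTEGRITY[subject_level] <= INTEGRITY[object_level]

def may_write(subject_level, object_level):
    """*-integrity property: no write up."""
    return INTEGRITY[subject_level] >= INTEGRITY[object_level]

print(may_read("High", "Low"))   # False: no read down
print(may_write("Low", "High"))  # False: no write up
print(may_read("Low", "High"))   # True: reading up is permitted
```

The effect is that low-integrity data can never contaminate high-integrity data, whether by a trusted subject reading it or an untrusted subject writing it.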

Bell-LaPadula

The Bell-LaPadula model was the first formal confidentiality model of a mandatory access control system. (We discuss mandatory and discretionary access controls in Chapter 7.) It was developed for the U.S. Department of Defense (DoD) to formalize a multilevel security policy. As we discuss in Chapter 3, the DoD classifies information based on sensitivity at three basic levels: Confidential, Secret, and Top Secret. To access classified information (and systems), a person must have access (a clearance level equal to or exceeding the classification of the information or system) and need to know (a legitimate need for access to perform a required job function). The Bell-LaPadula model implements the access component of this security policy.

Bell-LaPadula is a state machine model that addresses only the confidentiality of information. The basic premise of the model is that information can’t flow downward — that is, that information at a higher level is not permitted to be copied or moved to a lower level. Bell-LaPadula defines the following two properties:

  • Simple security property (ss property): A subject can’t read information from an object that has a higher sensitivity label than the subject (also known as no read up).
  • *-property (star property): A subject can’t write information to an object that has a lower sensitivity label than the subject (also known as no write down).

Bell-LaPadula also defines two additional properties that give it the flexibility of a discretionary access control model:

  • Discretionary security property: This property determines access based on an access matrix (see the following section).
  • Trusted subject: A trusted subject is an entity that can violate the *-property but not its intent.

Tip A state machine is an abstract model used to design computer programs. The state machine illustrates which state the program will be in at any time.
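The two mandatory Bell-LaPadula properties can be sketched as simple label comparisons; the numeric ordering of sensitivity levels is an assumption for illustration:

```python
# Bell-LaPadula mandatory access checks. The numeric ordering of levels
# (higher = more sensitive) is an illustrative assumption.
LEVELS = {"Confidential": 1, "Secret": 2, "Top Secret": 3}

def may_read(subject_level, object_level):
    """Simple security property: no read up."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def may_write(subject_level, object_level):
    """*-property: no write down."""
    return LEVELS[subject_level] <= LEVELS[object_level]

print(may_read("Secret", "Top Secret"))   # False: no read up
print(may_write("Top Secret", "Secret"))  # False: no write down
print(may_read("Top Secret", "Secret"))   # True: reading down is permitted
```

Together the two checks guarantee the model's basic premise that information can never flow from a higher sensitivity level to a lower one.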

Access Matrix

An Access Matrix model, in general, provides object access rights (read/write/execute) to subjects in a discretionary access control (DAC) system. An access matrix consists of access control lists (columns) and capability lists (rows). See Table 5-1 for an example.

TABLE 5-1 An Access Matrix Example

| Subject/Object | Directory: H/R | File: Personnel | Process: LPD |
|---|---|---|---|
| Thomas | Read | Read/Write | Execute |
| Lisa | Read | Read | Execute |
| Harold | None | None | None |
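The matrix in Table 5-1 maps directly to a simple data structure: each row is a subject's capability list, and each column is an object's access control list. The sketch below is illustrative only; the subjects, objects, and rights come straight from the table.

```python
# An access matrix modeled on Table 5-1: rows are capability lists
# (one per subject), columns are access control lists (one per object).

access_matrix = {
    "Thomas": {"Directory: H/R": {"read"},
               "File: Personnel": {"read", "write"},
               "Process: LPD": {"execute"}},
    "Lisa":   {"Directory: H/R": {"read"},
               "File: Personnel": {"read"},
               "Process: LPD": {"execute"}},
    "Harold": {"Directory: H/R": set(),
               "File: Personnel": set(),
               "Process: LPD": set()},
}

def is_allowed(subject: str, obj: str, right: str) -> bool:
    """Grant access only if the right appears in the matrix cell."""
    return right in access_matrix.get(subject, {}).get(obj, set())
```

Thomas can write the Personnel file, Lisa can only read it, and Harold is denied all access, exactly as the table specifies.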

Discretionary Access Control

A DAC system is one in which the owners of specific objects (typically, files and/or directories) can adjust access permissions at their discretion. No central administrator is needed to adjust permissions. The underlying OS enforces these access rights by permitting or denying access to specific objects.

Mandatory Access Control

A Mandatory Access Control (MAC) system is one that is controlled by a central administrator who determines access rights to objects. The OS enforces these access rights by permitting or denying access to specific objects.

Take-Grant

Take-Grant systems specify the rights that a subject can transfer to or from another subject or object. These rights are defined through four basic operations: create, revoke, take, and grant.

Clark-Wilson

The Clark-Wilson integrity model establishes a security framework for use in commercial activities, such as the banking industry. Clark-Wilson addresses all three goals of integrity and identifies special requirements for inputting data based on the following items and procedures:

  • Unconstrained data items: Data outside the control area, such as input data.
  • Constrained data items (CDIs): Data inside the control area. (Integrity must be preserved.)
  • Integrity verification procedures: Check validity of CDIs.
  • Transformation procedures: Maintain integrity of CDIs.

The Clark-Wilson integrity model is based on the concept of a well-formed transaction, in which a transaction is sufficiently ordered and controlled that it maintains internal and external consistency.
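The flow of a well-formed transaction can be sketched as follows. This is an illustration under assumed names: the deposit scenario and field names are hypothetical, but the pattern (an unconstrained data item is checked by an integrity verification procedure, then admitted into the control area only through a transformation procedure) is the one Clark-Wilson describes.

```python
# Illustrative Clark-Wilson flow: a UDI (input from outside the control
# area) is checked by an IVP, then applied to a CDI only through a TP,
# which is the model's well-formed transaction.

def ivp_valid_deposit(udi: dict) -> bool:
    """IVP: reject malformed or negative input before it touches a CDI."""
    return isinstance(udi.get("amount"), int) and udi["amount"] > 0

def tp_apply_deposit(cdi_balance: int, udi: dict) -> int:
    """TP: the only path by which input data may modify the CDI."""
    if not ivp_valid_deposit(udi):
        raise ValueError("rejected: UDI failed integrity verification")
    return cdi_balance + udi["amount"]
```

Because every change to the constrained data passes through the transformation procedure, the balance stays internally consistent no matter what input arrives.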

Information Flow

An Information Flow model is a type of access control model based on the flow of information rather than on imposing access controls. Objects are assigned a security class and value, and their direction of flow — from one application to another or from one system to another — is controlled by a security policy. This model type is useful for analyzing covert channels through detailed analysis of the flow of information in a system, including the sources of information and the paths of flow.

Noninterference

A Noninterference model ensures that the actions of different objects and subjects aren’t seen by (and don’t interfere with) other objects and subjects on the same system.

Remember The design of specific access control systems often borrows from one or more of the models described in this section. All access control systems, for example, are either MAC or DAC, and many can also be Noninterference.

Select Controls Based Upon Systems Security Requirements

Designing and building secure software is critical to information security, but the systems that software runs on must themselves be securely designed and built. Selecting appropriate controls is essential to designing a secure computing architecture. Numerous systems security evaluation models exist to help you select the right controls and countermeasures for your environment.

Crossreference This section covers Objective 3.3 of the Security Architecture and Engineering domain in the CISSP Exam Outline (May 1, 2021).

Various security controls and countermeasures that should be applied to security architecture, as appropriate, include defense in depth, system hardening, implementation of heterogeneous environments, and designing system resilience. Often, these controls are enacted based upon high-level requirements that are usually determined by the context or use of a system. When baseline controls are chosen and implemented, the risk management life cycle (discussed in Chapter 3) will, over time, determine the need for additional controls as well as changes to existing controls.

Examples of contexts and uses of information systems include

  • Services to U.S. government agencies: Often, systems that provide services to U.S. government agencies are required to employ controls from NIST SP800-53 (Security and Privacy Controls for Information Systems and Organizations).
  • Processing of credit card data: Systems that store, process, or transmit credit card data are required to comply with all the requirements in the Payment Card Industry Data Security Standard (PCI DSS).
  • Processing of health-care information: Systems that store, process, or transmit patient health information are required to comply with requirements enacted by the Health Insurance Portability and Accountability Act (HIPAA) and Health Information Technology for Economic and Clinical Health (HITECH).
  • Processing of personal financial information: Systems that store, process, or transmit personal financial information are subject to privacy requirements in the Gramm-Leach-Bliley Act.
  • Processing of personal information: Laws in many countries and U.S. states are strengthening requirements for the protection and proper use of personal information.

Evaluation criteria

Evaluation criteria provide a standard for quantifying the security of a computer system or network. These criteria include the Trusted Computer System Evaluation Criteria (TCSEC), Trusted Network Interpretation (TNI), European Information Technology Security Evaluation Criteria (ITSEC), and the Common Criteria.

Trusted Computer System Evaluation Criteria

The Trusted Computer System Evaluation Criteria (TCSEC), commonly known as the Orange Book, is part of the Rainbow Series developed for the U.S. DoD by the National Computer Security Center (NCSC). It’s the formal implementation of the Bell-LaPadula model. The evaluation criteria were developed to achieve the following objectives:

  • Measurement: Provide a metric for assessing comparative levels of trust between computer systems
  • Guidance: Identify standard security requirements that vendors must build into systems to achieve a given trust level
  • Acquisition: Provide customers a standard for specifying acquisition requirements and identifying systems that meet those requirements

The four basic control requirements identified in the Orange Book are

  • Security policy: The rules and procedures by which a trusted system operates. Specific TCSEC requirements include
    • DAC: Owners of objects are able to assign permissions to other subjects.
    • MAC: Permissions to objects are managed centrally by an administrator.
    • Object reuse: The confidentiality of objects that are reassigned after initial use is protected. A deleted file still exists on storage media, for example; only the file allocation table and the first character of the file have been modified. Thus, residual data may be restored, which describes the problem of data remanence. Object-reuse requirements define procedures for erasing the data.
    • Labels: Sensitivity labels are required in MAC-based systems. (Read more about information classification in Chapter 3.) Specific TCSEC labeling requirements include integrity, export, and subject/object labels.
  • Assurance: Guarantees that a security policy is implemented correctly. Specific TCSEC requirements (listed here) are classified as operational assurance requirements:
    • System architecture: TCSEC requires features and principles of system design that implement specific security features.
    • System integrity: Hardware and firmware operate properly and are tested to verify proper operation.
    • Covert channel analysis: TCSEC requires covert channel analysis that detects unintended communication paths not protected by a system’s normal security mechanisms. A covert storage channel conveys information by altering stored system data. A covert timing channel conveys information by altering a system resource’s performance or timing.

      Remember A systems or security architect must understand covert channels and how they work to prevent the use of covert channels in the system environment.

    • Trusted facility management: A specific person is assigned to administer the security-related functions of a system. This requirement is closely related to the concepts of least privilege, separation of duties, and need to know.
    • Trusted recovery: This requirement ensures that security isn’t compromised in the event of a system crash or failure. The process involves two primary activities: failure preparation and system recovery.
    • Security testing: This requirement specifies required testing by the developer and the NCSC.
    • Design specification and verification: This requirement calls for mathematical and automated proof that the design description is consistent with the security policy.
    • Configuration management: This requirement calls for identifying, controlling, accounting for, and auditing all changes made to the Trusted Computing Base during the design, development, and maintenance phases of a system’s life cycle.
    • Trusted distribution: This requirement protects a system during transport from a vendor to a customer.
  • Accountability: The ability to associate users and processes with their actions. Specific TCSEC requirements include
    • Identification and authentication: Systems need to track who performs what activities. We discuss this topic in Chapter 7.
    • Trusted path: A trusted path provides a direct communications path between the user and the Trusted Computing Base (TCB) that doesn’t require interaction with untrusted applications or OS layers.
    • Audit: Security-related activities in a trusted system are recorded, examined, analyzed, and reviewed.
  • Documentation: Specific TCSEC requirements include
    • Security Features User’s Guide: This document is a user manual for the system.
    • Trusted Facility Manual: This document is the system administrator’s and/or security administrator’s manual.
    • Test documentation: According to the TCSEC manual, this documentation must be in a position to “show how the security mechanisms were tested, and results of the security mechanisms’ functional testing.”
    • Design documentation: This documentation defines system boundaries and internal components, such as the Trusted Computing Base.

Remember The Orange Book defines four major hierarchical classes of security protection and numbered subclasses (higher numbers indicate higher security):

  • D: Minimal protection
  • C: Discretionary protection (C1 and C2)
  • B: Mandatory protection (B1, B2, and B3)
  • A: Verified protection (A1)

These classes are further defined in Table 5-2.

TABLE 5-2 TCSEC Classes

| Class | Name | Sample Requirements |
|---|---|---|
| D | Minimal protection | Reserved for systems that fail evaluation. |
| C1 | Discretionary protection (DAC) | The system doesn’t need to distinguish between individual users and types of access. |
| C2 | Controlled access protection (DAC) | The system must distinguish between individual users and types of access; object reuse security features are required. |
| B1 | Labeled security protection (MAC) | Sensitivity labels are required for all subjects and storage objects. |
| B2 | Structured protection (MAC) | Sensitivity labels are required for all subjects and objects; trusted path requirements apply. |
| B3 | Security domains (MAC) | Access control lists are specifically required; system must protect against covert channels. |
| A1 | Verified design (MAC) | Formal top-level specification is required; configuration management procedures must be enforced throughout the entire system life cycle. |
| Beyond A1 | | Self-protection and reference monitors are implemented in the Trusted Computing Base, which is verified to source-code level. |

Tip You don’t need to know the specific requirements of each TCSEC level for the CISSP exam, but you should know at what levels DAC and MAC are implemented and the relative trust levels of the classes, including numbered subclasses.

Major limitations of the Orange Book include the following:

  • It addresses only confidentiality issues. It doesn’t include integrity and availability.
  • It isn’t applicable to most commercial systems.
  • It emphasizes protection from unauthorized access despite statistical evidence that many security violations involve insiders.
  • It doesn’t address networking issues.

Trusted Network Interpretation

Part of the Rainbow Series, like TCSEC (discussed in the preceding section), Trusted Network Interpretation (TNI) addresses confidentiality and integrity in trusted computer/communications network systems. Within the Rainbow Series, it’s known as the Red Book.

Part I of the TNI is a guideline for extending the system protection standards defined in the TCSEC (the Orange Book) to networks. Part II of the TNI describes additional security features such as communications integrity, protection from denial of service, and transmission security.

European Information Technology Security Evaluation Criteria

Unlike TCSEC, the European Information Technology Security Evaluation Criteria (ITSEC) addresses confidentiality, integrity, and availability, as well as evaluating an entire system, defined as a target of evaluation (TOE) rather than a single computing platform.

ITSEC evaluates functionality (security objectives, or why; security-enforcing functions, or what; and security mechanisms, or how) and assurance (effectiveness and correctness) separately. The 10 functionality (F) classes and 7 evaluation (E) (assurance) levels are listed in Table 5-3.

Tip You don’t need to know the specific requirements of each ITSEC level for the CISSP exam, but you should know how the basic functionality levels (F-C1 through F-B3) and evaluation levels (E0 through E6) correlate to TCSEC levels.

TABLE 5-3 ITSEC Functionality (F) Classes and Evaluation (E) Levels Mapped to TCSEC Levels

| (F) Class | (E) Level | Description |
|---|---|---|
| NA | E0 | Equivalent to TCSEC level D |
| F-C1 | E1 | Equivalent to TCSEC level C1 |
| F-C2 | E2 | Equivalent to TCSEC level C2 |
| F-B1 | E3 | Equivalent to TCSEC level B1 |
| F-B2 | E4 | Equivalent to TCSEC level B2 |
| F-B3 | E5 | Equivalent to TCSEC level B3 |
| F-B3 | E6 | Equivalent to TCSEC level A1 |
| F-IN | NA | TOEs with high integrity requirements |
| F-AV | NA | TOEs with high availability requirements |
| F-DI | NA | TOEs with high integrity requirements during data communication |
| F-DC | NA | TOEs with high confidentiality requirements during data communication |
| F-DX | NA | Networks with high confidentiality and integrity requirements |

Common Criteria

The Common Criteria for Information Technology Security Evaluation (usually called Common Criteria) is an international effort to standardize and improve existing European and North American evaluation criteria. The Common Criteria has been adopted as an international standard in ISO/IEC 15408. The Common Criteria defines eight evaluation assurance levels, which are listed in Table 5-4.

Tip You don’t need to know the specific requirements of each Common Criteria level for the CISSP exam, but you should understand the basic evaluation hierarchy (EAL0 through EAL7, in order of increasing levels of trust).

System certification and accreditation

System certification is a formal methodology for comprehensive testing and documentation of information system security safeguards, both technical and nontechnical, in a given environment by using established evaluation criteria (such as the TCSEC).

TABLE 5-4 The Common Criteria

| Level | TCSEC Equivalent | ITSEC Equivalent | Description |
|---|---|---|---|
| EAL0 | N/A | N/A | Inadequate assurance |
| EAL1 | N/A | N/A | Functionally tested |
| EAL2 | C1 | E1 | Structurally tested |
| EAL3 | C2 | E2 | Methodically tested and checked |
| EAL4 | B1 | E3 | Methodically designed, tested, and reviewed |
| EAL5 | B2 | E4 | Semiformally designed and tested |
| EAL6 | B3 | E5 | Semiformally verified design and tested |
| EAL7 | A1 | E6 | Formally verified design and tested |

Accreditation is official, written approval of the operation of a specific system in a specific environment, as documented in the certification report. Accreditation is normally granted by a senior executive or designated approving authority (DAA), a term used in the U.S. military and government. This DAA is normally a senior official, such as a commanding officer.

System certification and accreditation must be updated when any changes are made in the system or environment, and they must be revalidated periodically, typically every three years.

The certification and accreditation process has been formally implemented in U.S. military and government organizations as the Defense Information Technology Security Certification and Accreditation Process (DITSCAP) and National Information Assurance Certification and Accreditation Process (NIACAP), respectively. U.S. government agencies that use cloud-based systems and services are required to undergo FedRAMP or Cybersecurity Maturity Model Certification (CMMC) certification and accreditation processes (described in this chapter). These important processes are used to make sure that a new or changed system has the proper design and operational characteristics and is suitable for a specific task.

DITSCAP

The Defense Information Technology Security Certification and Accreditation Process (DITSCAP) formalizes the certification and accreditation process for U.S. DoD information systems through four distinct phases:

  • Definition: Determines security requirements by defining the organization and system’s mission, environment, and architecture.
  • Verification: Ensures that a system undergoing development or modification remains compliant with the System Security Authorization Agreement, which is a baseline security configuration document.
  • Validation: Confirms compliance with the System Security Authorization Agreement.
  • Post accreditation: Represents ongoing activities required to maintain compliance and address new and evolving threats throughout a system’s life cycle.

NIACAP

The National Information Assurance Certification and Accreditation Process (NIACAP) formalizes the certification and accreditation process for U.S. government national security information systems. NIACAP consists of four phases — definition, verification, validation, and post accreditation — that generally correspond to the DITSCAP phases. Additionally, NIACAP defines three types of accreditation:

  • Site accreditation: All applications and systems at a specific location are evaluated.
  • Type accreditation: A specific application or system for multiple locations is evaluated.
  • System accreditation: A specific application or system at a specific location is evaluated.

FedRAMP

The Federal Risk and Authorization Management Program (FedRAMP) is a standardized approach to assessments, authorization, and continuous monitoring of cloud-based service providers. This program represents a change from controls-based security to risk-based security.

CMMC

The Cybersecurity Maturity Model Certification (CMMC) is an assessment program used to evaluate the security of service providers that provide information system-related services to U.S. government agencies. CMMC is aligned with the NIST SP 800-171 (“Protecting Controlled Unclassified Information in Nonfederal Systems and Organizations”) standard.

DCID 6/3

The Director of Central Intelligence Directive 6/3 is the process used to protect sensitive information that’s stored on computers used by the U.S. Central Intelligence Agency.

Understand Security Capabilities of Information Systems

Basic concepts related to security architecture include the Trusted Computing Base, Trusted Platform Module, secure modes of operation, open and closed systems, protection rings, security modes, and recovery procedures.

Crossreference This section covers Objective 3.4 of the Security Architecture and Engineering domain in the CISSP Exam Outline (May 1, 2021).

Trusted Computing Base

A Trusted Computing Base (TCB) is the entire complement of protection mechanisms within a computer system (including hardware, firmware, and software) that’s responsible for enforcing a security policy. A security perimeter is the boundary that separates the TCB from the rest of the system.

Remember A TCB is the total combination of protection mechanisms within a computer system (including hardware, firmware, and software) that’s responsible for enforcing a security policy.

Access control is the ability to permit or deny the use of an object (a passive entity, such as a system or file) by a subject (an active entity, such as a person or process).

A reference monitor is a system component that enforces access controls on an object. Stated another way, a reference monitor is an abstract machine that mediates all access to an object by a subject.

Remember A security kernel is the combination of hardware, firmware, and software elements in a TCB that implements the reference monitor concept. A security kernel must

  • Mediate all access
  • Be protected from modification
  • Be verified as correct

Trusted Platform Module

A Trusted Platform Module (TPM) performs sensitive cryptographic functions on a physically separate, dedicated microprocessor. The TPM specification was written by the Trusted Computing Group and is an international standard (ISO/IEC 11889 Series).

A TPM generates and stores cryptographic keys and performs the following functions:

  • Attestation: Enables third-party verification of the system state, using a cryptographic hash of the known-good hardware and software configuration.
  • Binding: Binds a unique cryptographic key to specific hardware.
  • Sealing: Encrypts data with a unique cryptographic key and ensures that ciphertext can be decrypted only if the hardware is in a known-good state.

Common TPM uses include ensuring platform integrity, full disk encryption, password and cryptographic key protection, and digital rights management.
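The idea behind sealing can be demonstrated in software. The sketch below is a simulation only: a real TPM performs these operations in dedicated hardware and uses platform configuration register (PCR) measurements; here the "measurements," the toy XOR cipher, and all values are illustrative stand-ins.

```python
# Software *simulation* of TPM sealing: a key is derived from a hash of
# simulated platform measurements, so the sealed secret can be recovered
# only when the platform state hashes to the same known-good value.

import hashlib

def derive_seal_key(measurements: list) -> bytes:
    """Derive a key from the (simulated) platform state."""
    return hashlib.sha256("|".join(measurements).encode()).digest()

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy cipher for illustration only -- not real TPM cryptography."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

good_state = ["bootloader-v1", "kernel-v5.15"]  # hypothetical measurements
secret = b"disk encryption key"
sealed = xor_bytes(secret, derive_seal_key(good_state))
```

Unsealing with the same measurements recovers the secret; if any measurement changes (say, a tampered bootloader), the derived key differs and the secret stays protected.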

Secure modes of operation

Security modes are used in MAC systems to enforce different levels of security. Techniques and concepts related to secure modes of operation include

  • Abstraction: The process of viewing an application from its highest-level functions, which makes all lower-level functions into abstractions. Lower-level functions are treated as black boxes — known to work, even if we don’t know how.
  • Data hiding: An object-oriented term that refers to the practice of encapsulating an object within another to hide the first object’s functioning details.
  • System high mode: A system that operates at the highest level of information classification. Any user who wants to access such a system must have clearance at or above the information classification level.
  • Security kernel: Composed of hardware, software, and firmware components that mediate access and functions between subjects and objects. The security kernel is part of the protection rings model, in which the OS kernel occupies the innermost ring, and rings farther from the innermost ring represent fewer access rights. The security kernel is the innermost ring and has full access to all system hardware and data. User programs occupy outer rings and have fewer access privileges.
  • Reference monitor: A component implemented by the security kernel that enforces access controls on data and devices on a system. In other words, when a user tries to access a file, the reference monitor ultimately performs the “Is this person allowed to access this file?” function.

Remember The system’s reference monitor enforces access controls on a system.

Open and closed systems

An open system is a vendor-independent system that complies with a published and accepted standard. This compliance with open standards promotes interoperability between systems and components made by different vendors. Additionally, open systems can be independently reviewed and evaluated, which facilitates the identification of bugs and vulnerabilities and the rapid development of solutions and updates. Examples of open systems include the Linux OS, the OpenOffice desktop productivity suite, and the Apache web server.

A closed system uses proprietary hardware and/or software that may not be compatible with other systems or components. Source code for software in a closed system normally isn’t available to customers or researchers. Examples of closed systems include the Microsoft Windows OS, the Oracle database management system, and Apple’s iTunes.

Technicalstuff The terms open systems and closed systems also refer to a system’s access model. A closed system does not allow access by default, whereas an open system does.

Memory protection

Virtually all of today’s OSes are multiprocessing — that is, several processes can occupy system memory and be processing simultaneously. OSes employ a means of process isolation so that each process is prevented from accessing memory allocated to all other processes. Although process isolation is automatic and usually considered to be effective, some species of malware have been able to exploit OS kernel weaknesses and access memory allocated to other processes. For this reason, it’s often wise to employ obfuscation techniques or encryption to hide sensitive data in memory, such as encryption keys, and to deallocate or overwrite those memory locations when such data is no longer needed.
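The overwrite technique can be sketched briefly. This is an illustrative fragment: in Python, immutable types such as str and bytes can't be wiped in place, so a sensitive value is held in a mutable bytearray and zeroized as soon as it's no longer needed.

```python
# Sketch of overwriting sensitive data in memory after use. Holding the
# secret in a mutable bytearray allows it to be zeroized in place.

def zeroize(buf: bytearray) -> None:
    """Overwrite every byte of the buffer in place."""
    for i in range(len(buf)):
        buf[i] = 0

key = bytearray(b"super-secret-key")  # hypothetical sensitive value
# ... use the key for cryptographic operations ...
zeroize(key)  # the plaintext key no longer resides in this buffer
```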

Encryption and decryption

Encryption and decryption can be thought of as being forms of access control, wherein data is converted to ciphertext with an encryption key; any person who is in possession of the correct decryption key may access the plaintext form of this information, but any person who lacks the decryption key may not access it. Encryption and decryption concepts are discussed later in this chapter.
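This access-control view of encryption can be demonstrated with a toy cipher. The sketch below uses SHA-256 in a counter mode to build a keystream; it's for illustration only and is not suitable for production use, where a vetted cipher such as AES should be used instead.

```python
# Encryption as access control, sketched with a toy stream cipher:
# only a holder of the correct key recovers the plaintext.

import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Expand the key into a keystream via SHA-256 in counter mode."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def crypt(data: bytes, key: bytes) -> bytes:
    """XOR with the keystream; the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

ciphertext = crypt(b"payroll records", b"right key")
```

Applying `crypt` again with the correct key restores the plaintext; any other key yields only gibberish, which is precisely the access-control property described above.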

Protection rings

The concept of protection rings implements multiple concentric domains with increasing levels of trust near the center. The most privileged ring is identified as Ring 0 and normally includes the OS security kernel. Additional system components are placed in the appropriate concentric ring according to the principle of least privilege and to provide isolation, so that a breach of a component in one protection ring does not automatically provide access to components in more privileged rings. The MIT Multics OS (whose ashes gave rise to Unix) implements the concept of protection rings in its architecture, as did Novell NetWare. Figure 5-4 depicts an operating system protection ring model.

Schematic illustration of protection rings provide layers of defense in a system.

Image courtesy of authors

FIGURE 5-4: Protection rings provide layers of defense in a system.

Security modes

A system’s security mode of operation describes how a system handles stored information at various classification levels. Several security modes of operation, based on the classification level of information being processed on a system and the clearance level of authorized users, have been defined. These designations, typically used for U.S. military and government systems, include

  • Dedicated: All authorized users must have a clearance level equal to or higher than the highest level of information processed on the system and a valid need to know.
  • System High: All authorized users must have a clearance level equal to or higher than the highest level of information processed on the system, but a valid need to know isn’t necessarily required.
  • Multilevel: Information at different classification levels is stored or processed on a trusted computer system (a system that employs all necessary hardware and software assurance measures and meets the specified requirements for reliability and security). Authorized users must have an appropriate clearance level, and access restrictions are enforced by the system accordingly.
  • Limited access: Authorized users aren’t required to have a security clearance, but the highest level of information on the system is Sensitive but Unclassified.

Remember A trusted computer system is a system with a TCB.

Security modes of operation generally come into play in environments that contain highly sensitive information, such as government and military environments. Most private and education systems run in multilevel mode, meaning they contain information at all sensitivity levels. See Chapter 3 for more on security clearance levels.
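The access decisions these modes imply can be sketched as simple checks. This is an illustration under assumed conventions: clearance and classification are numeric (higher means more sensitive), and `system_high_level` stands for the highest classification of information on the system.

```python
# Sketch of the access decisions implied by the security modes above.

def dedicated_access(clearance: int, system_high_level: int,
                     need_to_know: bool) -> bool:
    """Dedicated mode: clearance for the system high AND need to know."""
    return clearance >= system_high_level and need_to_know

def system_high_access(clearance: int, system_high_level: int) -> bool:
    """System high mode: clearance for the system high; need to know
    isn't necessarily required."""
    return clearance >= system_high_level

def multilevel_access(clearance: int, object_level: int,
                      need_to_know: bool) -> bool:
    """Multilevel mode: clearance is checked per object, plus need to know."""
    return clearance >= object_level and need_to_know
```

Note how the same user can be denied in dedicated mode (no need to know) yet allowed in system high mode, while multilevel mode evaluates each object individually.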

Recovery procedures

A hardware or software failure can potentially compromise a system’s security mechanisms. Security designs that protect a system during a hardware or software failure include

  • Fault-tolerant systems: These systems continue to operate after the failure of a computer or network component. The system must be capable of detecting and correcting — or circumventing — a fault.
  • Fail-safe systems: When a hardware or software failure is detected, program execution is terminated, and the system is protected from compromise.
  • Fail-soft (resilient) systems: When a hardware or software failure is detected, certain noncritical processing is terminated, and the computer or network continues to function in a degraded mode.
  • Failover systems: When a hardware or software failure is detected, the system automatically transfers processing to a component, such as a clustered server.

Assess and Mitigate the Vulnerabilities of Security Architectures, Designs, and Solution Elements

In this section, we discuss the techniques used to identify and fix vulnerabilities in systems. We will also briefly discuss techniques for security assessments and testing, which are fully explored in Chapter 8.

Crossreference This section covers Objective 3.5 of the Security Architecture and Engineering domain in the CISSP Exam Outline (May 1, 2021).

Unless detected (and corrected) by an experienced security analyst, many weaknesses may be present in a system and permit exploitation, attack, or malfunction. These vulnerabilities include

  • Covert channels: Covert channels are unknown, hidden communications that take place within the medium of a legitimate communications channel.
  • Rootkits: By their very nature, rootkits are designed to subvert system architecture by inserting themselves into an environment in a way that makes it difficult or impossible to detect. Some rootkits run as a hypervisor and change the computer’s OS into a guest, which changes the basic nature of the system in a powerful but subtle way. We wouldn’t normally discuss malware in a chapter on computer and security architecture, but rootkits are game-changers that warrant mention because they use various techniques to hide themselves from the target system.
  • Race conditions: Software code in multiprocessing and multiuser systems, unless it’s very carefully designed and tested, can result in critical errors that are difficult to find. A race condition is a flaw in a system where the output or result of an activity in the system is unexpectedly tied to the timing of other events. The term race condition comes from the idea of two events or signals racing to influence an activity.

    The most common race condition is the time-of-check-to-time-of-use bug, caused by changes in a system between the checking of a condition and the use of the results of that check. Two programs that both try to open a file for exclusive use may both succeed in opening it, even though only one should be able to do so.

  • State attacks: Web-based applications use session management to distinguish users from one another. The mechanisms used by the web application to establish sessions must be able to resist attack. Primarily, the algorithms used to create session identifiers must not permit an attacker to steal session identifiers or guess other users’ session identifiers. A successful attack would result in an attacker’s taking over another user’s session, which can lead to the compromise of confidential data, fraud, and monetary theft.
  • Emanations: The unintentional emissions of electromagnetic or acoustic energy from a system can be intercepted by others and possibly used to illicitly obtain information from the system. A common form of undesired emanations is radiated energy from CRT computer monitors. (Yes, cathode-ray tubes are still out there, and not just in old movies!) A third party can discover what data is being displayed on a CRT by intercepting radiation emanating from the display adapter or monitor from as far as several hundred meters. Also, a third party can eavesdrop on a network if it has one or more unterminated coaxial cables in its cable plant.
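The time-of-check-to-time-of-use bug described above has a classic defense: make the check and the use a single atomic operation. The sketch below is illustrative (the path is hypothetical); instead of testing whether a file exists and then creating it, which leaves a window an attacker can race, it requests atomic, exclusive creation from the OS.

```python
# TOCTOU defense sketch: O_CREAT | O_EXCL makes the existence check and
# the file creation a single atomic operation, closing the race window.

import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "exclusive.lock")  # illustrative path

def create_exclusively(p: str) -> bool:
    """Atomically create the file; fail if it already exists."""
    try:
        fd = os.open(p, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False

first = create_exclusively(path)   # wins the race
second = create_exclusively(path)  # loses: the file already exists
```

Only one caller can ever win; a separate `os.path.exists` check followed by `open` would leave exactly the gap the TOCTOU bug exploits.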

Client-based systems

The design vulnerabilities often found on endpoints involve defects in client-side code in browsers and applications. The defects most often found include the following:

  • Sensitive data left behind in the file system: Generally, this data consists of temporary files and cache files, which may be accessible by other users and processes on the system.
  • Sensitive data residing in memory: Although OS process isolation is supposed to protect data in a program’s memory space, some exploits are able to access protected memory.
  • Unprotected local data: Local data stores may have loose permissions and lack encryption.
  • Vulnerable applets: Many browsers and other client applications employ applets for viewing documents and video files. Often, the applets themselves may have exploitable weaknesses.
  • Unprotected or weakly protected communications: Data transmitted between the client and other systems may use weak encryption or no encryption at all.
  • Weak or nonexistent authentication: Authentication methods on the client, or between the client and server systems, may be unnecessarily weak. This weakness permits an adversary to access the application, local data, or server data without authenticating.

Other weaknesses may be present in client systems. For a complete understanding of application weaknesses, consult https://owasp.org.

Identifying weaknesses like the preceding examples requires using one or more of the following techniques:

  • OS examination
  • Network sniffing
  • Code review
  • Manual testing and observation

Server-based systems

Design vulnerabilities found on servers are the same as for client-based systems, discussed in the preceding section. The terms client and server have to do only with perspective. In both cases, software is running on a system.

Database systems

Database management systems are nearly as complex as the OSes on which they reside. Vulnerabilities in database management systems include

  • Loose access permissions: Like applications and OSes, database management systems have schemes of access controls that are often designed far too loosely, which permits more access to critical and sensitive information than is appropriate. Another aspect of loose access permissions is an excessive number of people who have privileged access. Finally, there can be failures to implement cryptography as an access control when appropriate.
  • Excessive retention of sensitive data: Keeping sensitive data longer than necessary increases the effect of a security breach.
  • Aggregation of personally identifiable information: The practice known as aggregation of data about citizens is a potentially risky undertaking that can result in an organization’s possessing sensitive personal information. Sometimes, this aggregation happens when an organization deposits historic data from various sources into a data warehouse, bringing disparate sensitive data together for the first time. The result is a gold mine or a time bomb, depending on how you look at it.

Database security defects can be identified through manual examination or automated tools. Mitigation may be as easy as changing access permissions or as complex as redesigning the database schema and related application software programs.

Cryptographic systems

Cryptographic systems are especially apt to contain vulnerabilities, for the simple reason that people focus on the cryptographic algorithm but fail to implement it properly. Like any powerful tool, a cryptographic system is useless at best and dangerous at worst if the operator doesn’t know how to use it.

Following are some ways in which a cryptographic system may be vulnerable:

  • Use of outdated algorithms: Developers and engineers must be careful to select robust encryption algorithms. Furthermore, algorithms in use should be reviewed at least once per year to ensure that they continue to be sufficient.
  • Use of untested algorithms: Engineers sometimes make the mistake of either home-brewing a cryptographic system or using one that is clearly insufficient. It’s best to use one of many publicly available cryptosystems that have stood the test of repeated scrutiny.
  • Failure to encrypt encryption keys: A proper cryptosystem sometimes requires encryption keys themselves to be encrypted.
  • Weak cryptographic keys: A great algorithm is all but undone if the initialization vector is too small or if the keys are too short or too simple.
  • Insufficient protection of cryptographic keys: A cryptographic system is only as strong as the protection of its encryption keys. If too many people have access to keys, or if the keys are not sufficiently protected, an intruder may be able to compromise the system simply by stealing and using the keys. Separate encryption keys should be used for the data encryption key (DEK) used to encrypt/decrypt data and the key encryption key (KEK) used to encrypt/decrypt the data encryption key.
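To see why short keys undermine an otherwise sound algorithm, consider the keyspace arithmetic: every added bit doubles the number of keys a brute-force attacker must try. A quick back-of-the-envelope sketch in Python (the guess rate is an illustrative assumption, not a benchmark of any real attacker):

```python
# Every additional key bit doubles the keyspace.
def keyspace(bits: int) -> int:
    return 2 ** bits

# A 128-bit key offers 2**72 times more keys than a DES-era 56-bit key.
ratio = keyspace(128) // keyspace(56)

# Assume an attacker who can test one trillion keys per second.
guesses_per_second = 10 ** 12
seconds_to_half = keyspace(128) // 2 // guesses_per_second
years_to_half = seconds_to_half // (365 * 24 * 3600)

print(ratio == 2 ** 72)          # True
print(years_to_half > 10 ** 18)  # True: billions of billions of years
```

This is why a 56-bit key can be exhausted in hours with modern hardware while a 128-bit key remains far beyond brute force, and why weak initialization vectors similarly shrink the effective search space.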

These and other vulnerabilities in cryptographic systems can be detected and mitigated through peer reviews of cryptosystems, assessments by qualified external parties, and application of corrective actions to defects.

Industrial control systems

Industrial control systems (ICSes) represent a wide variety of means for monitoring and controlling machinery of various kinds, including power generation, distribution, and consumption; natural gas and petroleum pipelines; municipal water, irrigation, and waste systems; traffic signals; manufacturing; and package distribution.

Other related terms in common use include

  • Supervisory Control and Data Acquisition (SCADA): This somewhat-antiquated term refers to networks and systems used to monitor and control industrial systems, often related to public utilities.
  • Operational technology (OT): Organizations generally represent operational technology as the network and systems infrastructure that is not part of the corporate IT infrastructure. In many organizations, separate teams manage operational and information technology.

Weaknesses in industrial control systems include the following:

  • Loose access permissions: Access to monitoring or controls of ICSes is often set too loosely, thereby enabling some users or systems access to more data and control than they need.
  • Failure to change default access credentials: All too often, organizations implement ICS components and fail to change the default administrative credentials on those components, making it far too easy for intruders to take over the ICS.
  • Access from personally owned devices: In the name of convenience, some organizations permit personnel to control machinery from personally owned smartphones and tablets. This weakness vastly increases the system’s attack surface and provides opportunities for intruders to access and control critical machinery.
  • Lack of malware control: Many ICSes lack security components that detect and block malware and other malicious activity, making it too easy for intruders to get in.
  • Failure to air-gap the ICS: Many organizations fail to air-gap (isolate) the ICS from the rest of its corporate network (or the Internet — gasp!), thereby enabling excessive opportunities for malware and intruders to access the ICS via a corporate network where users invite malware through phishing and other means.
  • Failure to update ICS components: Although the manufacturers of ICS components are notorious for failing to issue security patches, organizations are equally culpable in their failure to install these patches when they do arrive.

These vulnerabilities can be mitigated through a systematic process of establishing good controls, testing control effectiveness, and applying corrective action when controls are found to be ineffective.

Cloud-based systems

The U.S. National Institute of Standards and Technology (NIST) defines three cloud computing service models as follows:

  • Software as a service (SaaS): Customers are provided access to an application running on a cloud infrastructure. The application is accessible from various client devices and interfaces, but customers have no knowledge of, and do not manage or control, the underlying cloud infrastructure. Customers may have access to limited user-specific application settings.
  • Platform as a service (PaaS): Customers can deploy supported applications to the provider’s cloud infrastructure, but they have no knowledge of, and do not manage or control, the underlying cloud infrastructure. Customers have control of the deployed applications and limited configuration settings for the application-hosting environment.
  • Infrastructure as a service (IaaS): Customers can provision processing, storage, networks, and other computing resources, and they can deploy and run OSes and applications, but they have no knowledge of, and do not manage or control, the underlying cloud infrastructure. Customers have control of OSes, storage, and deployed applications, as well as some networking components (such as host firewalls).

NIST defines four cloud computing deployment models:

  • Public: A cloud infrastructure that is open to use by the general public. It’s owned, managed, and operated by a third party (or parties) and exists on the cloud provider’s premises.
  • Community: A cloud infrastructure that is used exclusively by a specific group of organizations.
  • Private: A cloud infrastructure that is used exclusively by a single organization. It may be owned, managed, and operated by the organization or a third party (or a combination of both), and may exist on- or off-premises.
  • Hybrid: A cloud infrastructure that is composed of two or more of the aforementioned deployment models, bound together by standardized or proprietary technology that enables data and application portability (such as failover to a secondary data center for disaster recovery or content delivery networks across multiple clouds).

Major public cloud service providers such as AWS, Azure, Google Cloud Platform, and Oracle Cloud Infrastructure provide customers not only virtually unlimited compute and storage at scale, but also security capabilities that often exceed the capabilities of the customers themselves. These services do not mean that cloud-based systems are inherently secure, however. The shared responsibility model is used by public cloud service providers to clearly define which aspects of security the provider is responsible for and which aspects the customer is responsible for. SaaS models place the most responsibility on the cloud service provider, typically including securing the following:

  • Applications and data
  • Runtime environments and middleware
  • Servers, virtualization, and OSes
  • Storage and networking
  • Physical data center

The customer is always ultimately responsible for the security and privacy of its data. Additionally, identity and access management (IAM) is typically the customer’s responsibility.

In a PaaS model, the customer is typically responsible for the security of its applications and data, as well as identity and access management.

In an IaaS model, the customer is typically responsible for the security of its applications and data, runtime environments and middleware, and OSes. The cloud service provider is typically responsible for the security of networking and the data center (although cloud service providers generally do not provide firewalls). Virtualization, server, and storage security may be managed by either the cloud service provider or the customer.

Tip The Cloud Security Alliance publishes the Cloud Controls Matrix, which provides a framework for information security that is specifically designed for the cloud industry. The CCM is available at https://cloudsecurityalliance.org/research/cloud-controls-matrix.

Distributed systems

Distributed systems are systems with components scattered throughout physical and logical space. Often, these components are owned and/or managed by different groups or organizations, sometimes in different countries. Some components may be privately used, and others represent services available to the public (such as Google Maps). Vulnerabilities in distributed systems include

  • Loose access permissions: Individual components in a distributed system may have individual, separate access control systems, or there may be one overarching access control system for all the distributed system’s components. Either way, there are too many opportunities for access permissions to be too loose, thereby giving some subjects access to more data and functions than they need.
  • Unprotected or weakly protected communications: Data transmitted between the server and other systems (including clients) may be using either weak encryption or no encryption at all.
  • Weak security inheritance: What we mean here is that in a distributed system, one component with weak security may compromise the security of the entire system. A publicly accessible component may have direct open access to other components, for example, bypassing local controls in those other components.
  • Lack of centralized security and control: Distributed systems that are controlled by more than one organization may lack overall oversight of security management and security operations, especially peer-to-peer systems, which are often run by end users on lightly managed or unmanaged endpoints.
  • Critical paths: A critical-path weakness occurs when a system’s continued operation depends on the availability of a single component.

All these weaknesses can be present in simpler environments. These weaknesses and other defects can be detected through the use of security scanning tools or manual techniques, and corrective actions can be taken to mitigate those defects.

Tip High-quality standards for cloud computing — for cloud service providers as well as organizations that use cloud services — are available on the websites of the Cloud Security Alliance (https://cloudsecurityalliance.org) and the European Network and Information Security Agency (https://www.enisa.europa.eu).

Internet of Things

The security of Internet of Things (IoT) devices and systems is a rapidly evolving area of information security. IoT sensors and devices collect large amounts of both potentially sensitive data and seemingly innocuous data, and are used to control physical systems and environments. Under certain circumstances, however, practically any data that is collected can be used for nefarious purposes, and devices can be subverted to affect physical environments. As a result, security must be a critical design consideration for IoT devices and systems, which includes not only securing the data stored on the systems, but also protecting how the data is collected, transmitted, processed, and used.

Many networking and communications protocols are commonly used in IoT devices, including the following:

  • IPv6 over low-power wireless personal area networks (6LoWPAN)
  • 5G
  • Wi-Fi
  • Bluetooth, Bluetooth Mesh, and Bluetooth Low-Energy
  • Thread
  • Zigbee

The security of these various protocols and their implementations must be carefully considered in the design of secure IoT devices and systems.

Microservices

Microservices represent a variety of software-based services running on systems in a distributed environment. Using older technology terms, you could consider microservices to be like software program subroutines that run on various systems and are written in various computer languages. Put another way, you could think of microservices as being a more mature form of mashups, which are web applications that use content from various sources displayed through a single user interface.

Microservices are generally developed and deployed with a DevOps or DevSecOps model; typically, they communicate by using standard message-based protocols such as HTTP.

Vulnerabilities in microservices appear in several ways, including the following:

  • Application software: All the techniques used to ensure that software is free of security defects also apply to application software. Chapter 10 fully explores this topic.
  • Server subsystem vulnerabilities: The underlying subsystems that host microservices, such as web servers, and deeper layers, such as database management systems, require the usual hardening, configuration management, patching, and life-cycle management operations to ensure that they’re reasonably free of exploitable defects. This classic vulnerability-management challenge faces every organization today.
  • OS vulnerabilities: The OSes supporting microservices must be actively managed through hardening, configuration management, and vulnerability management tools and processes to ensure that they don’t have exploitable vulnerabilities that attackers could use to gain a foothold into the environment.
  • Access management: All layers of a microservices environment must employ hardened authentication controls to prevent intruders from attacking the environment.

It’s imperative that microservices environments be fully included in all traditional IT service management processes so that they are actively managed, protected, and monitored, just like all other types of server and endpoint OSes, subsystems, software, and source code.

Containerization

Containers are relatively new innovations in virtualization environments. Instead of running multiple instantiations of software programs in their own virtual OS machines, programs are run in isolated containers within a single OS instance. The practice of building and managing containers is known as containerization.

For the purposes of information security, you can think of containerization as being like virtualization. Vulnerabilities can exist in several layers, including

  • Application software: The software running in a container must be managed like software in all other forms to ensure that it’s free of exploitable defects. See Chapter 10 for comprehensive coverage of this topic.
  • Container engines: The subsystem that runs the containerization environment must be managed like any software subsystem, requiring active techniques such as hardening, configuration management, access management, patching, and life-cycle management to ensure that no exploitable defects or configurations could permit a successful attack.
  • OS: Like OSes used for every other purpose, OSes in containerization environments must be actively managed by the usual IT and security operations processes to prevent attacks.
  • Access management: All layers of a container environment must employ hardened authentication controls to prevent intruders from successfully attacking the environment.
  • Operations: Like every kind of IT environment, a containerization environment must be actively operated, monitored, and managed to ensure that it is running properly, and that all signs of malfunction and intrusion are detected and acted on.

Serverless

Serverless computing is a cloud-native development model in which virtual infrastructure (such as virtual machines or containers) is abstracted from developers, allowing them to build and run applications without having to manage the underlying infrastructure. Serverless applications are deployed in containers, often managed by orchestration platforms such as Kubernetes, that launch on demand. When an event triggers code to run, the cloud service provider dynamically allocates resources, and when the code finishes executing, these resources are released. This system brings cost and resource efficiencies while also freeing developers from routine tasks such as application scaling and server provisioning. The term serverless computing is something of a misnomer: server OSes and infrastructure do exist, but they are abstracted away from the customer and provisioned, scaled, and managed by the service provider.

Using serverless applications requires a paradigm shift in how organizations approach security. Instead of building security around the application infrastructure, the developers need to build security around the functions within the applications hosted by the cloud service provider. There are two major security areas of serverless cloud infrastructure that require special attention: secure coding and identity and access management.

Vulnerabilities in a serverless environment are the same as those in software of every other type. Software developers must be trained in secure software development, and tooling must be used to identify source-code defects that must be fixed before the software is placed into production. The serverless environment must be actively monitored for security events that could be signs of intrusion.

A serverless environment must include hardened authentication controls to prevent successful intrusions by attackers. The security of the most hardened software is all for naught if the administrative interface is exposed to the Internet with simple authentication and credentials such as admin/admin.

Embedded systems

Embedded systems encompass the wide variety of systems and devices that are Internet-connected. Mainly, we’re talking about devices that are not human-connected in the computing sense. Examples of such devices include

  • Automobiles and other vehicles
  • Home appliances, such as washers and dryers, ranges and ovens, refrigerators, thermostats, televisions, videogame players, video surveillance systems, and home automation systems
  • Medical care devices, such as IV pumps and monitors
  • Heating, ventilation, and air conditioning systems
  • Commercial video surveillance and key card systems
  • Automated payment kiosks, fuel pumps, and automated teller machines
  • Network devices such as routers, switches, modems, and firewalls

These devices often run embedded systems, which are specialized OSes designed to run on devices that lack computerlike human interaction through a keyboard or display. The devices still have an OS that is very similar to that on endpoints such as laptops and mobile devices.

Design defects in this class of devices include

  • Lack of a security patching mechanism: Most of these devices lack any means of remediating security defects that are found after manufacture.
  • Lack of antimalware mechanisms: Most of these devices have no built-in defenses against attack.
  • Lack of robust authentication: Many of these devices have simple, easily guessed default login credentials that cannot be changed (or, at best, are rarely changed) by their owners.
  • Lack of monitoring capabilities: Many of these devices lack any means of sending security and event alerts.

Because the majority of these devices cannot be altered, mitigation of these defects typically involves isolating these devices on separate, heavily guarded networks that have tools in place to detect and block attacks.

Tip Many manufacturers of embedded, network-enabled devices do not permit customers to alter their configuration or apply security settings. As a result, organizations are compelled to place these devices on separate, guarded networks.

High-performance computing systems

High-performance computing (HPC) refers to the use of supercomputers or grid computing to solve problems that require computationally intensive processing. Topics addressed by HPC include weather forecasting and climatology, quantum mechanics, oil and gas exploration, seismology, and cryptanalysis.

HPC systems are generally characterized by having large numbers of CPUs and large amounts of memory, facilitating a high number of floating-point operations per second. Historically, HPC systems used specialized operating systems, but increasingly, Linux is used.

HPC environments use some form of parallel processing, in which computational tasks are distributed across large numbers of processors. Either a single program will execute across multiple threads, or several programs communicate by using some form of inter-process communication.

Edge computing systems

Edge computing refers to the architecture of a highly distributed environment in which computing resources are deployed near the edges of the environment, close to where data is acquired from outside. Edge computing is all about server placement in a network to reduce latency and improve performance.

The vulnerabilities in an edge computing environment are virtually the same as in any other, including

  • Application software: All application software running in an edge computing environment must be managed like application software in any other environment. These management activities include all those that are part of the systems development life cycle. Chapter 10 explores this topic more fully.
  • Subsystems: Subsystems such as web servers and database management systems must be actively managed like those in other environments. Typical activities include access management, configuration management, monitoring, patching, and life-cycle management.
  • OS: OSes in edge computing environments should be managed with techniques used on servers in other types of environments, including access management, hardening, configuration management, patching, and monitoring.
  • Network devices: All network devices in an edge computing environment must be hardened, patched, and actively managed in typical IT life-cycle management processes.
  • Architecture: The relationship between systems, networks, and any network or computing zones, segmentation, data flow, and other aspects of architecture must be closely examined by security specialists or engineers to ensure that the architecture is free of design flaws that could aid an intruder.
  • Operations: As in every other type of IT environment, active IT service management and security operations must be in place to monitor the performance, health, and security of an edge computing environment.

Virtualized systems

Virtualization is the practice of implementing multiple instances of OSes in a single hardware platform. Virtualization makes the use of computing hardware more efficient and flexible. But organizations must be mindful of certain risks associated with virtualization, including

  • Hypervisor management and protection: The hypervisor manages various OS instances and uses isolation and other protection, not unlike process isolation within an OS. Like OSes, hypervisors must be hardened, patched, and managed carefully to prevent various types of attacks.
  • Virtual-machine sprawl: In the old days, you could not implement a new OS without going through the corporate purchasing process to buy a server. In a virtual-machine environment, a new OS can be built with a few clicks. Thus, more discipline is required to control the creation and use of virtual machines.

Virtual desktop infrastructure is the practice of implementing centrally stored and managed desktop OSes that execute on individual endpoints. This practice can reduce the cost of endpoint management, as well as prevent information leakage by keeping sensitive data on central servers. Endpoints assume the role of terminals, and all processing and data manipulation is performed on servers.

Web-based systems

Web-based systems contain many components, including application code, database management systems, OSes, middleware, and the web-server software itself. These components may, individually and collectively, have security design or implementation defects. Some of those defects are

  • Failure to block injection attacks: Attacks such as JavaScript injection and SQL injection can permit an attacker to cause a web application to malfunction and expose sensitive internally stored data.
  • Defective authentication: A website has many ways to implement authentication — too many to list here. Authentication is essential to get right, but many sites fail to do so.
  • Defective session management: Web servers create logical sessions to keep track of individual users. Many websites’ session management mechanisms are vulnerable to abuse, notably weaknesses that permit an attacker to take over another user’s session.
  • Failure to block cross-site scripting attacks: Websites may fail to examine and sanitize input data. As a result, attackers can create attacks that send malicious content to users.
  • Failure to block cross-site request forgery attacks: Websites that fail to employ proper session and session context management can be vulnerable to attacks in which users are tricked into sending commands to websites that may cause them harm. Here’s an example we like to use: An attacker tricks a user into clicking a link that actually takes the user to a URL like http://bank.com/transfer?tohackeraccount:amount=99999.99.
  • Failure to protect direct object references: Websites can be tricked into accessing and sending data to a user who is not authorized to view or modify it.
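The session-management defect described above often comes down to predictable session identifiers. Python’s standard secrets module shows the safe pattern (a minimal sketch; mature web frameworks generate session identifiers for you):

```python
import secrets

# Session identifiers must come from a cryptographically secure random
# source. The standard random module is predictable and must never be
# used to generate security tokens.
def new_session_id() -> str:
    return secrets.token_urlsafe(32)   # 32 random bytes, ~256 bits of entropy

sid = new_session_id()
print(len(sid))   # 43 URL-safe characters
```

An identifier drawn this way cannot feasibly be guessed, which defeats the session-prediction attacks described earlier.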

These vulnerabilities can be mitigated in three main ways:

  • Training developers in the techniques of safe software development
  • Including security in the development life cycle
  • Using dynamic and static application scanning tools
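As an example of the first two mitigations in practice, the cross-site scripting defect described earlier is blocked by escaping untrusted data before rendering it. Python’s standard html module illustrates the idea (a sketch; template engines typically apply this escaping automatically):

```python
import html

# Untrusted input must be escaped so the browser renders it as text
# rather than executing it as markup or script.
user_input = '<script>alert("xss")</script>'
safe = html.escape(user_input)
print(safe)   # &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

The same principle, encoding or sanitizing data at every trust boundary, also underlies the defenses against SQL injection and other injection attacks.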

Tip For an in-depth review of vulnerabilities in web-based systems, read the “Top 10” list at https://www.owasp.org.

Mobile systems

Mobile systems include OSes and applications on smartphones, tablets, phablets, smart watches, and wearables. The most popular OS platforms for mobile systems are Apple iOS and Android.

The vulnerabilities of mobile systems include

  • Lack of robust resource access controls: History has shown us that some mobile OSes lack robust controls that govern which apps are permitted to access resources on the mobile device, including
    • Locally stored data
    • Contact lists
    • Camera and photo library
    • Email messages
    • Location services
    • Microphone
  • Insufficient security screening of applications: Some mobile platform environments are quite good at screening out applications that contain security flaws or outright break the rules, but other platforms apparently have more of an “anything goes” policy. Beware: Your mobile app may be doing more than advertised.
  • Lax default security settings: Many mobile platforms lack enforcement of basic security. Some platforms don’t require devices to lock automatically or have lock codes, for example.

In a managed corporate environment, the use of a mobile device management (MDM) system can mitigate many or all of these risks. But individual users must do the right thing by using strong security settings.

Select and Determine Cryptographic Solutions

Cryptography (from the Greek kryptos, meaning hidden, and graphia, meaning writing) is the science of encrypting and decrypting communications to make them incomprehensible to all but the intended recipient.

Crossreference This section covers Objective 3.6 of the Security Architecture and Engineering domain in the CISSP Exam Outline (May 1, 2021).

Cryptography can be used to achieve several goals of information security:

  • Confidentiality: Cryptography protects the confidentiality or secrecy of information. Even when the transmission or storage medium has been compromised, the encrypted information is practically useless to unauthorized people who don’t have the proper encryption keys.
  • Integrity: Cryptography can be used to ensure the integrity or accuracy of information through the use of hashing algorithms and message digests.
  • Authentication: Cryptography can be used for authentication and nonrepudiation services through digital signatures, digital certificates, or a public key infrastructure (PKI).
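The integrity goal above can be demonstrated with a standard hashing algorithm: any change to a message, however small, produces a completely different digest. A minimal Python sketch using SHA-256:

```python
import hashlib

def digest(message: bytes) -> str:
    return hashlib.sha256(message).hexdigest()

original = b"Transfer $100 to Alice"
tampered = b"Transfer $999 to Alice"

# Recomputing and comparing digests detects any modification.
assert digest(original) == digest(original)   # hashing is deterministic
assert digest(original) != digest(tampered)   # tampering changes the digest
```

Note that a bare hash detects tampering only if the digest itself is protected; an attacker who can replace the message can replace the digest too, which is why keyed hashes (HMACs) and digital signatures exist.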

Remember The CISSP exam tests your ability to apply general cryptographic concepts to real-world issues and problems. You don’t have to memorize cryptographic algorithms or the step-by-step operation of various cryptographic systems. But you should have a firm grasp of cryptographic concepts and technologies, as well as their specific strengths, weaknesses, uses, and applications.

Warning Don’t confuse these three points with the C-I-A triad, which we discuss in Chapter 3: The C-I-A triad deals with confidentiality, integrity, and availability; cryptography does nothing to ensure availability.

Cryptography has evolved into a complex science (some people say an art), presenting many great promises and challenges in the field of information security. The basics of cryptography include various terms and concepts, the individual components of the cryptosystem, and the classes and types of ciphers.

Plaintext and ciphertext

A plaintext message is a message in its original readable format, or a ciphertext message that has been properly decrypted (unscrambled) to produce the original readable message.

A ciphertext message is a plaintext message that has been transformed (encrypted) into a scrambled message that’s unintelligible. This term doesn’t apply to messages from your boss, which may also happen to be unintelligible.

Encryption and decryption

Encryption (or enciphering) is the process of converting plaintext communications to ciphertext. Decryption (or deciphering) reverses that process, converting ciphertext to plaintext. (See Figure 5-5.)

Schematic illustration of encryption and decryption.

© John Wiley & Sons, Inc.

FIGURE 5-5: Encryption and decryption.

Traffic on a network can be encrypted via end-to-end or link encryption.

End-to-end encryption

With end-to-end encryption, packets are encrypted once at the original encryption source and then decrypted only at the final decryption destination. The advantages of end-to-end encryption are speed and overall security. So that the packets can be routed properly, however, only the data is encrypted, not the routing information.

Link encryption

Link encryption requires each node (such as a router) to have separate key pairs for its upstream and downstream neighbors. Packets are encrypted and decrypted, and then re-encrypted at every node along the network path.

The following example, as shown in Figure 5-6, illustrates link encryption:

  1. Computer 1 encrypts a message by using Secret Key A and then transmits the message to Router 1.
  2. Router 1 decrypts the message by using Secret Key A, reencrypts the message by using Secret Key B, and then transmits the message to Router 2.
  3. Router 2 decrypts the message by using Secret Key B, reencrypts the message by using Secret Key C, and then transmits the message to Computer 2.
  4. Computer 2 decrypts the message by using Secret Key C.
Schematic illustration of link encryption.

© John Wiley & Sons, Inc.

FIGURE 5-6: Link encryption.
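The hop-by-hop rekeying shown in Figure 5-6 can be sketched in a few lines. This is a minimal illustration only: the `xor_crypt` helper is a toy stand-in for a real cipher, and the key and variable names are invented for the example.

```python
import secrets

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real cipher: XOR is its own inverse, so the
    # same call both encrypts and decrypts.
    return bytes(d ^ k for d, k in zip(data, key))

message = b"hop-by-hop secret"
key_a, key_b, key_c = (secrets.token_bytes(len(message)) for _ in range(3))

packet = xor_crypt(message, key_a)                   # Computer 1: encrypt with Key A
packet = xor_crypt(xor_crypt(packet, key_a), key_b)  # Router 1: decrypt A, re-encrypt B
packet = xor_crypt(xor_crypt(packet, key_b), key_c)  # Router 2: decrypt B, re-encrypt C
plaintext = xor_crypt(packet, key_c)                 # Computer 2: decrypt with Key C
assert plaintext == message
```

Note that at each router the message exists momentarily in decrypted form, which is exactly the inherent vulnerability of link encryption.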

The advantage of using link encryption is that the entire packet (including routing information) is encrypted. But link encryption has two disadvantages:

  • Latency: Packets must be encrypted/decrypted at every node, which creates latency (delay) in the transmission of those packets.
  • Inherent vulnerability: If a node is compromised or a packet’s decrypted contents are cached in a node, the message can be compromised.

Putting it all together: The cryptosystem

A cryptosystem is the hardware or software implementation that transforms plaintext to ciphertext (encrypting it) and back to plaintext (decrypting it).

An effective cryptosystem must have the following properties:

  • The encryption and decryption process is efficient for all possible keys within the cryptosystem’s keyspace.

    Tip A keyspace is the range of all possible values for a key in a cryptosystem.

  • The cryptosystem is easy to use. A cryptosystem that is difficult to use might be used improperly, leading to data loss or compromise.
  • The strength of the cryptosystem depends on the secrecy of the cryptovariables (keys) rather than the secrecy of the algorithm.

Tip A restricted algorithm is a cryptographic algorithm that must be kept secret to provide security. Restricted or proprietary algorithms are not very effective, because their security depends on keeping the algorithm itself secret rather than on the complexity of the algorithm and the number of possible keys. Restricted and proprietary algorithms are therefore not commonly used today; they are generally used only for applications that require minimal security.

Cryptosystems are typically composed of two basic elements:

  • Cryptographic algorithm: Also called a cipher, a cryptographic algorithm details the step-by-step mathematical function used to produce ciphertext (encipher) and plaintext (decipher).
  • Cryptovariable: Also called a key, the cryptovariable is a secret value applied to the algorithm. The strength and effectiveness of the cryptosystem largely depend on the secrecy and strength of the cryptovariable.

Key clustering occurs when identical ciphertext messages are generated from a plaintext message with the same encryption algorithm but different encryption keys. Key clustering indicates a weakness in a cryptographic algorithm because it statistically reduces the number of key combinations that must be attempted in a brute-force attack.

Remember A cryptosystem consists of the cryptographic algorithm (cipher) and the cryptovariable (key), as well as all the possible plaintexts and ciphertexts produced by the cipher and key.

Remember An analogy of a cryptosystem is a deadbolt lock. A deadbolt lock can be easily identified, and its inner working mechanisms aren’t closely guarded state secrets. What makes a deadbolt lock effective is the individual key that controls a specific lock on a specific door. But if the key is weak (imagine only one or two notches on a flat key) or not well protected (left under your doormat), the lock won’t protect your belongings as well. Similarly, if an attacker is able to determine what cryptographic algorithm (lock) was used to encrypt a message, it should still be protected, because you’re using a strong key that you’ve kept secret rather than a six-character password that you wrote on a scrap of paper and left under your mouse pad.

Classes of ciphers

Ciphers are cryptographic transformations. The two main classes of ciphers used in symmetric key algorithms are block and stream (see “Cryptographic Methods,” later in this chapter), which describe how the ciphers operate on input data.

Remember The two main classes of ciphers are block and stream.

Block ciphers

Block ciphers operate on a single fixed block (typically, 128 bits) of plaintext to produce the corresponding ciphertext. When a given key is used in a block cipher, the same plaintext block always produces the same ciphertext block. Advantages of block ciphers compared with stream ciphers are

  • Reusable keys: Key management is much easier.
  • Interoperability: Block ciphers are more widely supported.

Block ciphers are typically implemented in software. Examples of block ciphers include AES, DES, Blowfish, Twofish, and RC5.

Stream ciphers

Stream ciphers operate in real time on a continuous stream of data, typically bit by bit. Stream ciphers generally work faster than block ciphers and require less code to implement. But the keys in a stream cipher are generally used only once (see the nearby sidebar “A disposable cipher: The one-time pad”) and then discarded. Key management becomes a serious problem. When a stream cipher is used, the same plaintext bit or byte produces a different ciphertext bit or byte every time it is encrypted. Stream ciphers are typically implemented in hardware.

Examples of stream ciphers include Salsa20 and RC4.
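The keystream idea can be sketched as follows. This is a toy illustration, not RC4 or Salsa20: generating a truly random keystream as long as the message is effectively the one-time pad case, and the `xor_keystream` name is invented for the example.

```python
import secrets

def xor_keystream(data: bytes, keystream: bytes) -> bytes:
    # A stream cipher combines plaintext with a keystream; using XOR
    # makes encryption and decryption the same operation.
    return bytes(d ^ k for d, k in zip(data, keystream))

message = b"ATTACK AT DAWN"
keystream = secrets.token_bytes(len(message))   # used once, then discarded
ciphertext = xor_keystream(message, keystream)
assert xor_keystream(ciphertext, keystream) == message
```

Real stream ciphers differ in that they generate a long pseudorandom keystream from a short key, which is what makes key management (and key reuse) the critical concern.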

Remember People often assume that traffic such as a streaming video from a service such as YouTube is encrypted with a stream cipher. Such content consists of individual TCP/IP packets that are encrypted with a block cipher.

Types of ciphers

The two basic types of ciphers are substitution and transposition. Both are involved in the process of transforming plaintext into ciphertext.

Remember Most modern cryptosystems use both substitution and permutation to achieve encryption.

Substitution ciphers

Substitution ciphers replace bits, characters, or character blocks in plaintext with alternate bits, characters, or character blocks to produce ciphertext. A classic example of a substitution cipher is one that Julius Caesar used: He swapped letters of the message with other letters from the same alphabet. In a simple substitution cipher using the standard English alphabet, a cryptovariable (key) is added modulo 26 to the plaintext message. In modulo 26 addition, the remainder is the final result for any sum equal to or greater than 26. A basic substitution cipher in which the word “boy” is encrypted by adding three characters using modulo 26 math produces the following result:

 b    o    y    PLAINTEXT
 2   15   25    NUMERIC VALUE
+3   +3   +3    SUBSTITUTION VALUE
 5   18    2    MODULO 26 RESULT
 e    r    b    CIPHERTEXT
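The modulo 26 arithmetic in this example can be sketched in a few lines of code. The a=1 through z=26 numbering matches the worked example, and the `caesar_encrypt` name is invented for the illustration.

```python
def caesar_encrypt(plaintext: str, key: int) -> str:
    # Letters are numbered a=1 ... z=26, matching the worked example.
    out = []
    for ch in plaintext.lower():
        value = ord(ch) - ord('a') + 1        # b=2, o=15, y=25
        shifted = (value + key) % 26 or 26    # modulo 26 addition; 0 wraps to 26 (z)
        out.append(chr(shifted - 1 + ord('a')))
    return ''.join(out)

print(caesar_encrypt("boy", 3))  # erb
```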

A substitution cipher may be

  • Monoalphabetic: A single alphabet is used to encrypt the entire plaintext message.
  • Polyalphabetic: This more complex substitution uses a different alphabet to encrypt each bit, character, or character block of a plaintext message.

A more modern example of a substitution cipher is the S-boxes (substitution boxes) employed in the Data Encryption Standard (DES) algorithm. The S-boxes in that algorithm produce a nonlinear substitution (6 bits in, 4 bits out).

Transposition

Transposition ciphers rearrange bits, characters, or character blocks in plaintext to produce ciphertext. In a simple columnar transposition cipher, a message might be read horizontally but written vertically to produce the ciphertext, as in the following example,

THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG

written in nine columns as

THEQUICKB
ROWNFOXJU
MPSOVERTH
ELAZYDOG

and then transposed (encrypted) vertically as

TRMEHOPLEWSAQNOZUFVYIOEDCXROKJTGBUH

The original letters of the plaintext message are the same; only the order has been changed to achieve encryption.
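The columnar write-then-read procedure above can be sketched as follows; the `columnar_transpose` name is invented for the illustration.

```python
def columnar_transpose(message: str, columns: int) -> str:
    # Write the message horizontally in rows of `columns` letters,
    # then read it off vertically, column by column.
    letters = message.replace(" ", "").upper()
    rows = [letters[i:i + columns] for i in range(0, len(letters), columns)]
    # The final row may be short, so skip columns past its end.
    return "".join(row[c] for c in range(columns) for row in rows if c < len(row))

print(columnar_transpose("THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG", 9))
# TRMEHOPLEWSAQNOZUFVYIOEDCXROKJTGBUH
```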

DES performs permutations through the use of P-boxes (permutation boxes) to spread the influence of a plaintext character over many characters so that they’re not easily traced back to the S-boxes used in the substitution cipher.

Other types of ciphers include

  • Codes: Codes include words and phrases to communicate a secret message.
  • Running (or book) ciphers: The key is page 137 of The Catcher in the Rye, for example, and text on that page is added modulo 26 to perform encryption/decryption.
  • Vernam ciphers: Also known as one-time pads, these ciphers are keystreams that can be used only once. We discuss these ciphers in the earlier sidebar “A disposable cipher: The one-time pad.”
  • Concealment ciphers: These ciphers include steganography, which we discuss in the nearby sidebar “Cryptography alternatives.”

Cryptographic life cycle

The cryptographic life cycle is the sequence of events that occurs throughout the use of cryptographic controls in a system. These steps include

  • Development of requirements for a cryptosystem
  • Selection of cryptographic controls
  • Implementation of the cryptosystem
  • Examination of cryptosystem for proper implementation, effective key management, and efficacy of cryptographic algorithms
  • Rotation of cryptographic keys
  • Mitigation of any defects identified

These steps are not altogether different from the selection, implementation, examination, and correction of any other type of security control in a network and computing environment. Like virtually any other components in a network and computing environment, components in a cryptosystem must be examined periodically to ensure that they are still effective and being operated properly.

Cryptographic methods

Cryptographic methods include symmetric, asymmetric, elliptic curves, and quantum.

Symmetric

Symmetric key cryptography — also known as symmetric algorithm, secret key, single key, and private key cryptography — uses a single key to encrypt and decrypt information. Two parties (for our example, Thomas and Richard) can exchange an encrypted message by using the following procedure:

  1. The sender (Thomas) encrypts the plaintext message with a secret key known only to the intended recipient (Richard).
  2. The sender transmits the encrypted message to the intended recipient.
  3. The recipient decrypts the message with the same secret key to obtain the plaintext message.

For an attacker (Harold) to read the message, he must do one of the following things:

  • Guess the secret key (by using a brute-force attack, for example).
  • Obtain the secret key by using the rubber-hose technique. This technique is another form of brute-force attack. Humans are typically the weakest link, and neither Thomas nor Richard has much tolerance for pain.
  • Get the secret key through social engineering. Thomas and Richard both like money and may be all too willing to help Harold’s Nigerian uncle claim his vast fortune.
  • Intercept the secret key during the initial exchange.

The following list includes the main disadvantages of symmetric systems:

  • Distribution: Secure distribution of secret keys is required, either through out-of-band methods or asymmetric systems.
  • Scalability: A different key is required for each pair of communicating parties.
  • Limited functionality: Symmetric systems can’t provide authentication or nonrepudiation (discussed later in this chapter).

Symmetric systems also have many advantages:

  • Speed: Symmetric systems are much faster than asymmetric systems.
  • Strength: Strength is gained when the algorithm uses a large key (128 bit, 192 bit, 256 bit, or larger).
  • Availability: Many algorithms are available for organizations to select and use.

Symmetric key algorithms include DES, Triple DES (3DES), Advanced Encryption Standard (AES), International Data Encryption Algorithm (IDEA), and Rivest Cipher 5 (RC5).

Remember Symmetric key systems use a shared secret key.

DATA ENCRYPTION STANDARD

In the early 1970s, the National Bureau of Standards (NBS, now NIST) solicited vendors to submit encryption algorithm proposals to be evaluated by the National Security Agency in support of a national cryptographic standard. This new encryption standard was to be used for private-sector and sensitive but unclassified government data. In 1974, IBM submitted a 128-bit algorithm known as Lucifer. After some modifications (the algorithm was shortened to 56 bits, and the S-boxes were changed), the IBM proposal was endorsed by the National Security Agency and formally adopted as the DES. It was published in Federal Information Processing Standard (FIPS) PUB 46 in 1977 (updated and revised in 1988 as FIPS PUB 46-1) and American National Standards Institute (ANSI) X3.92 in 1981.

Remember DES is a block cipher that uses a 56-bit key.

The DES algorithm is a symmetric (or private) key cipher consisting of an algorithm and a key. The algorithm is a 64-bit block cipher based on a 56-bit symmetric key. (It consists of 56 key bits plus 8 parity bits; alternatively, you can think of it as being 8 bytes, with each byte containing 7 key bits and 1 parity bit.) During encryption, the original message (plaintext) is divided into 64-bit blocks. Operating on a single block at a time, the algorithm splits each 64-bit plaintext block into two 32-bit blocks. Under control of the 56-bit symmetric key, 16 rounds of transpositions and substitutions are performed on each block to produce the ciphertext output.

Technicalstuff A parity bit is used to detect errors in a bit pattern. With odd parity, the parity bit is set so that the total number of one bits in the pattern, including the parity bit, is odd; with even parity, the parity bit is set so that the total is even. In a DES key, each byte carries 7 key bits plus 1 odd-parity bit. If a received bit pattern doesn't match its expected parity, the transmission has been corrupted.
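A DES-style odd-parity byte can be computed as a short sketch. The `set_odd_parity` name is invented for the example, and it assumes the DES convention of placing the parity bit in each key byte's least-significant position.

```python
def set_odd_parity(byte: int) -> int:
    # Keep the 7 key bits, then set the low-order parity bit so the
    # byte contains an odd number of one bits (DES odd parity).
    seven = byte & 0xFE
    ones = bin(seven).count("1")
    return seven | (0 if ones % 2 else 1)

for b in (0b10101000, 0b11110000):
    assert bin(set_odd_parity(b)).count("1") % 2 == 1
```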

Technicalstuff A round is a transformation (permutations and substitutions) that an encryption algorithm performs on a block of plaintext to convert (encrypt) it to ciphertext.

The four distinct modes of operation (the mode of operation defines how the plaintext/ciphertext blocks are processed) in DES are Electronic Code Book, Cipher Block Chaining, Cipher Feedback, and Output Feedback.

TRIPLE DES

The Triple Data Encryption Standard (3DES) effectively extended the life of the DES algorithm. In 3DES implementations, a message is encrypted by using one key, encrypted by using a second key, and then encrypted again by using either the first key or a third key.

ADVANCED ENCRYPTION STANDARD

In 2001, NIST announced the Rijndael Block Cipher as the new standard to implement the Advanced Encryption Standard (AES), which became effective in May 2002 and replaced DES as the U.S. government standard for encrypting sensitive but unclassified data. AES was subsequently approved for encrypting classified U.S. government data up to the top secret level (using 192- or 256-bit key lengths).

The Rijndael Block Cipher, developed by Dr. Joan Daemen and Dr. Vincent Rijmen, uses variable block and key lengths (128, 192, or 256 bits) and 10 to 14 rounds. It was designed to be simple, resistant to known attacks, and fast. It can be implemented in either hardware or software and has relatively low memory requirements.

Remember AES is based on the Rijndael Block Cipher.

Until recently, the only known successful attacks against AES were side-channel attacks, which don’t attack the encryption algorithm directly; instead, they attack the system on which the encryption algorithm is implemented. Side-channel attacks using cache-timing techniques are most common against AES implementations. In 2009, a theoretical related-key attack against AES was published. The attack method is considered to be theoretical because although it reduces the mathematical complexity required to break an AES key, it is still well beyond the computational capability available today.

BLOWFISH AND TWOFISH

The Blowfish algorithm operates on 64-bit blocks, employs 16 rounds, and uses variable key lengths of up to 448 bits. The Twofish algorithm, a finalist in the AES selection process, is a symmetric block cipher that operates on 128-bit blocks, employing 16 rounds with variable key lengths up to 256 bits. Both Blowfish and Twofish were designed by Bruce Schneier (and others) and are freely available in the public domain. (Neither algorithm has been patented.) To date, no known successful cryptanalytic attacks have been made against either algorithm.

RIVEST CIPHERS

Dr. Ron Rivest, Dr. Adi Shamir, and Dr. Len Adleman invented the RSA (Rivest, Shamir, Adleman) algorithm and founded the company RSA Data Security. The Rivest ciphers are a series of symmetric algorithms that include the following:

  • RC2: A block-mode cipher that encrypts 64-bit blocks of data by using a variable-length key.
  • RC4: A stream cipher (data is encrypted in real time) that uses a variable-length key (128 bits is standard).
  • RC5: Similar to RC2 but including a variable-length key (0 to 2,048 bits), variable block size (32, 64, or 128 bits), and a variable number of processing rounds (0 to 255).
  • RC6: Derived from RC5 and a finalist in the AES selection process. It uses a 128-bit block size and variable-length keys of 128, 192, or 256 bits.

Note: RC1 was never published, and RC3 was broken during development.

IDEA CIPHER

The International Data Encryption Algorithm (IDEA) Cipher evolved from the Proposed Encryption Standard and the Improved Proposed Encryption Standard, developed in 1990. IDEA is a block cipher that operates on 64-bit plaintext blocks by using a 128-bit key. IDEA performs eight rounds on 16-bit sub-blocks and can operate in four distinct modes similar to DES. The IDEA Cipher provides stronger encryption than RC4 and 3DES, but because it was patented, it never saw wide use; the patents expired in various countries between 2010 and 2012. It is currently used in some software applications, including Pretty Good Privacy (PGP) email.

Asymmetric

Asymmetric key cryptography (also known as asymmetric algorithm cryptography or public-key cryptography) uses two separate keys: one key to encrypt and a different key to decrypt information. These keys are known as public and private key pairs. When two parties want to exchange an encrypted message by using asymmetric key cryptography, they follow these steps, shown in Figure 5-7:

  1. The sender (Thomas) encrypts the plaintext message with the intended recipient’s (Richard) public key.
  2. This produces a ciphertext message that can be transmitted to the intended recipient (Richard).
  3. The recipient (Richard) decrypts the message with his private key, known only to him.
Schematic illustration of sending a message using asymmetric key cryptography.

Image courtesy of authors

FIGURE 5-7: Sending a message using asymmetric key cryptography.

Only the private key can decrypt the message; thus, an attacker (Harold) who possesses only the public key can’t decrypt the message. Not even the original sender can decrypt the message. This use of an asymmetric key system is known as a secure message. A secure message guarantees the confidentiality of the message.

Remember Asymmetric key systems use a public key and a private key.

Remember Secure message format uses the recipient's public key to protect confidentiality.

If the sender wants to guarantee the authenticity of a message (or, more correctly, the authenticity of the sender), they can digitally sign the message with this procedure, shown in Figure 5-8:

  1. The sender (Thomas) digitally signs the plaintext message with his own private key.
  2. The sender transmits the signed message to the intended recipient (Richard).
  3. To verify that the message is from the purported sender, the recipient (Richard) applies the sender’s (Thomas’s) public key (which is known to every Tom, Dick, and Harry).
Schematic illustration of verifying message authenticity using asymmetric key cryptography.

Image courtesy of authors

FIGURE 5-8: Verifying message authenticity using asymmetric key cryptography.

An attacker can also verify the authenticity of the message, of course. This use of an asymmetric key system is known as an open message format because it guarantees only authenticity, not confidentiality.

Remember Open message format uses the sender’s private key to ensure authenticity.

If the sender wants to guarantee both the confidentiality and authenticity of a message, they can do so by using this procedure, shown in Figure 5-9:

  1. The sender (Thomas) encrypts the message first with the intended recipient’s (Richard’s) public key and then signs with his own private key.
  2. The sender transmits the ciphertext message to the intended recipient (Richard).
  3. The recipient (Richard) uses the sender’s (Thomas’s) public key to verify the authenticity of the message and then uses his own private key to decrypt the message’s contents.
Schematic illustration of encrypting and signing a message using asymmetric key cryptography.

Image courtesy of authors

FIGURE 5-9: Encrypting and signing a message using asymmetric key cryptography.

If an attacker intercepts the message, they can apply the sender’s public key, but then they have an encrypted message that they can’t decrypt without the intended recipient’s private key. Thus, both confidentiality and authenticity are assured. This use of an asymmetric key system is known as a secure and signed message format.

Remember A secure and signed message format uses the sender’s private key and the recipient’s public key to protect confidentiality and ensure authenticity.

A public key and a private key are mathematically related, but it is computationally infeasible to compute or derive the private key from the public key. This property of asymmetric systems is based on the concept of a one-way function. A one-way function is a problem that you can easily compute in one direction but not in the reverse direction. In asymmetric key systems, a trapdoor (private key) resolves the reverse operation of the one-way function.

A trapdoor one-way function is like a lock box that is supplied to the user in an opened configuration. Any user may place an item inside the box and then close the lid, which latches the lock closed as it does so. Only the person who has the key can then open the box to obtain the item inside. In this analogy, the lock box itself is the public key, and the key that opens the box is the private key.

Because of their complexity, asymmetric key systems are more commonly used for key management or digital signatures than for encryption of bulk information. Often, a hybrid system is employed, using an asymmetric system to securely distribute the secret keys of a symmetric key system that’s used to encrypt the data.

The main disadvantage of asymmetric systems is their lower speed. Because of the types of algorithms used to achieve one-way functions, very large keys are required. (A 128-bit symmetric key has strength equivalent to that of a 2,304-bit asymmetric key.) Those large keys in turn require more computational power, causing a significant loss of speed (up to 10,000 times slower than a comparable symmetric key system).

Asymmetric systems also have many significant advantages, including

  • Extended functionality: Asymmetric key systems can provide both confidentiality and authentication; symmetric systems can provide only confidentiality.
  • Scalability: Because symmetric key systems require secret key exchanges among all the communicating parties, their scalability is limited. Asymmetric key systems, which do not require secret key exchanges, resolve key management issues associated with symmetric key systems and therefore are more scalable.

Asymmetric key algorithms include RSA, Diffie-Hellman Key Exchange, El Gamal, Merkle-Hellman (Trapdoor) Knapsack, and Elliptic Curve, which we talk about in the following sections.

RSA

The RSA algorithm is a key transport algorithm based on the difficulty of factoring a number that’s the product of two large prime numbers (typically, 512 bits). Two users (Thomas and Richard) can securely transport symmetric keys by using RSA as follows:

  1. Thomas creates a symmetric key, encrypts it with Richard’s public key, and transmits it to Richard.
  2. Richard decrypts the symmetric key by using his own private key.
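This key-transport flow can be illustrated with textbook RSA and deliberately tiny primes. The values here (p=61, q=53) are far too small for real use and are chosen only so the arithmetic is visible; real RSA moduli are 2,048 bits or more.

```python
p, q = 61, 53
n = p * q                    # public modulus: 3233
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent (2753); requires Python 3.8+

symmetric_key = 65                       # the "message": a small symmetric key
ciphertext = pow(symmetric_key, e, n)    # Thomas encrypts with Richard's public key
recovered = pow(ciphertext, d, n)        # Richard decrypts with his private key
assert recovered == symmetric_key
```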

Remember RSA is an asymmetric key algorithm based on factoring prime numbers.

DIFFIE-HELLMAN KEY EXCHANGE

In 1976, Dr. Whitfield Diffie and Dr. Martin Hellman published a paper titled “New Directions in Cryptography” that detailed a new paradigm for secure key exchange based on discrete logarithms. Diffie-Hellman is described as a key agreement algorithm. Two users (Thomas and Richard, who have never met) can exchange symmetric keys by using Diffie-Hellman as follows and as depicted in Figure 5-10:

  1. Thomas and Richard obtain each other’s public keys.
  2. Thomas and Richard then combine their own private keys with the public key of the other person, producing a symmetric key that only the two users involved in the exchange know.
Schematic illustration of Diffie-Hellman key exchange is used to generate a symmetric key for two users.

Image courtesy of authors

FIGURE 5-10: Diffie-Hellman key exchange is used to generate a symmetric key for two users.
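The exchange in Figure 5-10 can be sketched with toy parameters. The values p=23 and g=5 are illustrative only; real implementations use moduli of 2,048 bits or more.

```python
p, g = 23, 5                  # public parameters: prime modulus and generator

a, b = 6, 15                  # Thomas's and Richard's private keys
A = pow(g, a, p)              # Thomas's public key: 8
B = pow(g, b, p)              # Richard's public key: 19

# Each party combines its own private key with the other's public key;
# both arrive at the same shared symmetric key.
assert pow(B, a, p) == pow(A, b, p) == 2
```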

Diffie-Hellman is vulnerable to man-in-the-middle attacks, in which an attacker intercepts the public keys during the initial exchange and substitutes their own, establishing separate session keys with each party that allow the attacker to decrypt the session. (You can read more about these attacks in the section “Man-in-the-middle” later in this chapter.) A separate authentication mechanism is necessary to protect against this type of attack, ensuring that the two parties communicating in the session are in fact the legitimate parties.

Remember Diffie-Hellman is an asymmetric key algorithm based on discrete logarithms.

EL GAMAL

El Gamal is an unpatented, asymmetric key algorithm based on the discrete logarithm problem used in Diffie-Hellman (discussed in the preceding section). El Gamal extends the functionality of Diffie-Hellman to include encryption and digital signatures.

MERKLE-HELLMAN (TRAPDOOR) KNAPSACK

The Merkle-Hellman (Trapdoor) Knapsack, published in 1978, employs a unique approach to asymmetric cryptography. It’s based on the problem of determining what items, in a set of items that have fixed weights, can be combined to obtain a given total weight. Knapsack was broken in 1982.

Remember Knapsack is an asymmetric key algorithm based on fixed weights.

ELLIPTIC CURVE

The elliptic curve discrete logarithm problem is far more difficult to solve than conventional discrete logarithm problems or factoring prime numbers. (A 160-bit EC key is equivalent to a 1,024-bit RSA key.) The use of smaller keys means that Elliptic Curve is significantly faster than other asymmetric algorithms (and many symmetric algorithms) and can be widely implemented in various hardware applications, including wireless devices and smart cards.

Remember Elliptic Curve is more efficient than other asymmetric key systems, and many symmetric key systems, because it can use a smaller key.

QUANTUM COMPUTING

Quantum computing is an emerging computing processor design that uses the properties of quantum states to perform computation. Although quantum computing is still in its infancy, it may someday pose a significant threat to encryption. A quantum computer may eventually be able to break the most advanced encryption in very short periods of time.

Realizing that quantum computing may eventually be used to break cryptosystems, cryptographers are revisiting the designs of their cryptosystems and developing new ways to ensure that they can resist quantum-computing cryptanalysis.

Remember Quantum computing is currently a theoretical threat to cryptosystems.

Public key infrastructure

A public key infrastructure (PKI) is an arrangement whereby a designated authority stores encryption keys or certificates (electronic documents that use the public key of an organization or person to establish identity, and a digital signature to establish authenticity) associated with users and systems. A PKI thereby enables secure communications through the integration of digital signatures, digital certificates, and other services necessary to ensure confidentiality, integrity, authentication, nonrepudiation, and access control.

Remember The four basic components of a PKI are the Certificate Authority, Registration Authority, repository, and archive:

  • Certificate Authority: The Certificate Authority (CA) comprises hardware, software, and the personnel administering the PKI. It issues certificates, maintains and publishes status information and Certificate Revocation Lists, and maintains archives.
  • Registration Authority: The Registration Authority (RA) also comprises hardware, software, and the personnel administering the PKI. It’s responsible for verifying certificate contents for the CA.
  • Repository: A repository is a system that accepts certificates and Certificate Revocation Lists from a CA and distributes them to authorized parties.
  • Archive: An archive offers long-term storage of archived information from the CA.

Key management practices

Like physical keys, encryption keys must be safeguarded. Most successful attacks against encryption exploit some vulnerability in key management functions rather than some inherent weakness in the encryption algorithm. Following are the major functions associated with managing encryption keys:

  • Generation: Keys must be generated randomly on a secure system, and the generation sequence itself shouldn’t provide potential clues regarding the contents of the keyspace. Generated keys shouldn’t be displayed in the clear.
  • Distribution: Keys must be securely distributed. Distribution is a major vulnerability in symmetric key systems. Using an asymmetric system to distribute secret keys securely is one solution.
  • Installation: Key installation is often a manual process. This process should ensure that the key isn’t compromised during installation, incorrectly entered, or too difficult to be used readily.
  • Storage: Keys must be stored on protected or encrypted storage media, or the application using the keys should include safeguards that prevent extraction of the keys.
  • Change: Keys, like passwords, should be changed regularly, relative to the value of the information being protected and the frequency of use. Keys used frequently are more likely to be compromised through interception and statistical analysis.
  • Control: Key control addresses the proper use of keys. Different keys have different functions and may be approved for only certain levels of classification.
  • Disposal: Keys (and any distribution media) must be properly disposed of, erased, or destroyed so that the key’s contents are not disclosed, possibly providing an attacker insight into the key management system.
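
The generation and change practices above can be sketched with Python's standard-library secrets module (a minimal illustration; production systems typically rely on hardware security modules or dedicated key-management services):

```python
import secrets

# Generation: keys must come from a cryptographically secure source,
# never from a predictable generator such as random.random().
key = secrets.token_bytes(32)   # 256-bit symmetric key
assert len(key) == 32           # generated at full length, never shown in the clear

# Change: rotating a key means generating a fresh one and retiring
# the old key (re-encrypting stored ciphertext as needed).
new_key = secrets.token_bytes(32)
assert key != new_key           # a repeat is astronomically unlikely
```

The secrets module draws from the operating system's cryptographically secure random number generator; the same API is commonly used to generate session and password-reset tokens.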

A cryptoperiod is the length of time that an encryption key can be considered valid. Various factors influence the length of the cryptoperiod, including the length of the key and the strength of the encryption algorithm. When an encryption key has reached the end of its cryptoperiod, it should be discarded, and a new key should be generated. This process may require decrypting existing ciphertext and re-encrypting it with the new key.

Remember The seven key management issues are generation, distribution, installation, storage, change, control, and disposal.

Digital signatures and digital certificates

Message authentication guarantees the authenticity and integrity of a message by ensuring that

  • A message hasn’t been altered (maliciously or accidentally) during transmission.
  • A message isn’t a replay of a previous message.
  • The message was sent from the origin stated (and is not a forgery).
  • The message is sent to the intended recipient.

Checksums, CRC values, and parity checks are examples of basic message authentication and integrity controls. More-advanced message authentication is performed by using digital signatures and message digests.

Remember Digital signatures and message digests can be used to provide message authentication.

The Digital Signature Standard (DSS), published by NIST in Federal Information Processing Standard (FIPS) 186-4, specifies three acceptable algorithms: the RSA Digital Signature Algorithm; the Digital Signature Algorithm (DSA), which is based on a modified El Gamal algorithm; and the Elliptic Curve Digital Signature Algorithm (ECDSA).

A digital signature is a simple way to verify the authenticity (and integrity) of a message. Instead of encrypting a message with the intended receiver’s public key, the sender encrypts it with their own private key. The sender’s public key properly decrypts the message, authenticating the originator of the message. This process is known as an open message format in asymmetric key systems, as we discuss in the section “Asymmetric” earlier in this chapter.
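
The open message format can be illustrated with textbook RSA using deliberately tiny primes. This is a toy sketch only: real digital signatures sign a message digest, not the message itself, and use vetted libraries with padding schemes such as RSA-PSS.

```python
# Textbook RSA with tiny primes -- insecure, for illustration only.
p, q = 61, 53
n = p * q        # 3233, the public modulus
e = 17           # public exponent
d = 2753         # private exponent: (e * d) % ((p - 1) * (q - 1)) == 1

message = 65     # a small integer standing in for a message digest

# The sender "encrypts" with their own private key to sign.
signature = pow(message, d, n)

# Anyone holding the sender's public key (e, n) can verify,
# authenticating the originator of the message.
recovered = pow(signature, e, n)
assert recovered == message            # authenticity and integrity hold

# An altered or forged signature fails verification.
assert pow(signature + 1, e, n) != message
```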

Nonrepudiation

To repudiate is to deny. Nonrepudiation means that an action (such as an online transaction, email communication, and so on) or occurrence can't be easily denied. Nonrepudiation is closely related to identification, authentication, and accountability. It's difficult for a user to deny sending an email message that was digitally signed with that user's private key, for example. Likewise, it's difficult to deny responsibility for an enterprise-wide outage if the accounting logs positively identify you (from username and strong authentication) as the poor soul who inadvertently issued the write-erase command on the core routers two seconds before everything dropped!

Integrity (hashing)

It’s often impractical to encrypt a message with the receiver’s public key to protect confidentiality and then encrypt the entire message again by using the sender’s private key to protect authenticity and integrity. Instead, a representation of the encrypted message is encrypted with the sender’s private key to produce a digital signature. The intended recipient decrypts this representation by using the sender’s public key and then independently calculates the expected results of the decrypted representation by using the same known one-way hashing algorithm. If the results are the same, the integrity of the original message is assured. This representation of the entire message is known as a message digest.

To digest means to reduce or condense something, and a message digest does precisely that. (Conversely, indigestion means to expand, like gases. How do you spell relief?) A message digest is a condensed representation of a message; think Reader’s Digest. Ideally, a message digest has the following properties:

  • The original message can’t be re-created from the message digest.
  • Finding a message that produces a particular digest shouldn’t be computationally feasible.
  • No two messages should produce the same message digest (a situation known as a collision).
  • The message digest should be calculated by using the entire contents of the original message; it shouldn’t be a representation of a representation.

Message digests are produced by using a one-way hash function. There are several types of one-way hashing algorithms (digest algorithms), including MD5, SHA-2 variants, and HMAC.
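
These digest properties are easy to observe with Python's standard hashlib module:

```python
import hashlib

message = b"Attack at dawn"

# A digest is a fixed-size condensation of the entire message.
digest = hashlib.sha256(message).hexdigest()
assert len(digest) == 64   # 256 bits, rendered as 64 hex digits

# Any change to the message, however small, produces a completely
# different digest (the avalanche effect), exposing tampering.
altered = hashlib.sha256(b"Attack at dusk").hexdigest()
assert digest != altered

# The same message always produces the same digest, so the receiver
# can independently recompute it to check integrity.
assert hashlib.sha256(message).hexdigest() == digest
```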

Warning A collision results when two different messages produce the same digest.

Remember A one-way function ensures that the same key can’t encrypt and decrypt a message in an asymmetric key system. One key encrypts the message (produces ciphertext), and a second key (the trapdoor) decrypts the message (produces plaintext), effectively reversing the one-way function.

A one-way hashing algorithm produces a hashing value (or message digest) that can’t be reversed; that is, it can’t be decrypted. In other words, no trapdoor exists for a one-way hashing algorithm. The purpose of a one-way hashing algorithm is to ensure integrity and authentication.

Remember MD5, SHA-2, SHA-3, and HMAC are examples of commonly used message authentication algorithms.

MD

MD (Message Digest) is a family of one-way hashing algorithms developed by Dr. Ron Rivest that includes MD (obsolete), MD2, MD3 (not widely used), MD4, MD5, and MD6:

  • MD2: Developed in 1989, MD2 takes a variable-size input (message) and produces a fixed-size output (128-bit message digest). MD2 is very slow (it was originally developed for 8-bit computers), highly susceptible to collisions, and now considered obsolete.
  • MD4: Developed in 1990, MD4 produces a 128-bit digest and is used to compute NT-password hashes for various Microsoft Windows OSes, including NT, XP, and Vista. An MD4 hash is typically represented as a 32-digit hexadecimal number. Several known weaknesses are associated with MD4, and it’s also susceptible to collision attacks.
  • MD5: Developed in 1991, MD5 is one of the most popular hashing algorithms in use today, commonly used to store passwords and to check the integrity of files. Like MD2 and MD4, MD5 produces a 128-bit digest. Messages are processed in 512-bit blocks, using four rounds of transformation. The resulting hash is typically represented as a 32-digit hexadecimal number. MD5 is also susceptible to collisions and is now considered to be cryptographically broken by the U.S. Department of Homeland Security.
  • MD6: Developed in 2008, MD6 uses very large input message blocks (up to 512 bytes) and produces variable-length digests (up to 512 bits). MD6 was originally submitted for consideration as the new SHA-3 standard but was eliminated from further consideration after the first round in July 2009. Unfortunately, the first widespread use of MD6 (albeit unauthorized and illicit) was in the Conficker.B worm in late 2008, shortly after the algorithm was published.

SHA

Like MD, SHA (Secure Hash Algorithm) is another family of one-way hash functions. The SHA family of algorithms is designed by the U.S. National Security Agency and published by NIST. The SHA family of algorithms includes SHA-1, SHA-2, and SHA-3:

  • SHA-1: Published in 1995, SHA-1 takes a variable-size input (message) and produces a fixed-size output (160-bit message digest versus MD5’s 128-bit message digest). SHA-1 processes messages in 512-bit blocks and adds padding to a message length, if necessary, to produce a total message length that’s a multiple of 512. Note that SHA-1 is no longer considered to be a viable hash algorithm.
  • SHA-2: Published in 2001, SHA-2 consists of four hash functions — SHA-224, SHA-256, SHA-384, and SHA-512 — that have digest lengths of 224, 256, 384, and 512 bits, respectively. SHA-2 processes messages in 512-bit blocks for the SHA-224 and SHA-256 variants, and in 1,024-bit blocks for SHA-384 and SHA-512.
  • SHA-3: Published in 2015, SHA-3 includes SHA3-224, SHA3-256, SHA3-384, and SHA3-512, which produce digests of 224, 256, 384, and 512 bits, respectively. SHAKE128 and SHAKE256 are also variants of SHA-3.
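
The digest lengths above can be confirmed with hashlib, which implements SHA-1, the SHA-2 variants, and SHA-3 (including the SHAKE functions):

```python
import hashlib

data = b"CISSP"

# Digest sizes in bits for a representative of each family.
sizes = {
    "sha1":     hashlib.sha1(data).digest_size * 8,      # legacy only
    "sha256":   hashlib.sha256(data).digest_size * 8,
    "sha512":   hashlib.sha512(data).digest_size * 8,
    "sha3_256": hashlib.sha3_256(data).digest_size * 8,
}
assert sizes == {"sha1": 160, "sha256": 256, "sha512": 512, "sha3_256": 256}

# The SHAKE variants are extendable-output functions: the caller
# chooses the digest length at call time.
assert len(hashlib.shake_128(data).hexdigest(32)) == 64  # 256-bit output
```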

Warning The SHA-1 digest algorithm is now considered to be obsolete; a SHA-2 or SHA-3 variant should be used instead. Similarly, the MD4 and MD5 digest algorithms are obsolete and should not be used for security purposes.

HMAC

The Hash-based Message Authentication Code (HMAC) further extends the security of hash algorithms such as MD5 and SHA-1 through the concept of a keyed digest. HMAC incorporates a previously shared secret key and the original message into a single message digest. Thus, even if an attacker intercepts a message, modifies its contents, and calculates a new message digest, the result doesn't match the receiver's hash calculation because the modified message's hash doesn't include the secret key.
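
The keyed-digest concept maps directly onto Python's standard hmac module (the key and message here are illustrative):

```python
import hashlib
import hmac

secret_key = b"previously shared secret"
message = b"Transfer $100 to account 42"

# Sender computes a keyed digest over the message.
mac = hmac.new(secret_key, message, hashlib.sha256).hexdigest()

# Receiver, holding the same secret key, recomputes and compares
# (in constant time) to verify authenticity and integrity.
assert hmac.compare_digest(
    mac, hmac.new(secret_key, message, hashlib.sha256).hexdigest())

# An attacker who modifies the message cannot produce a matching
# digest without the secret key.
tampered = b"Transfer $999 to account 666"
forged = hmac.new(b"wrong key", tampered, hashlib.sha256).hexdigest()
assert not hmac.compare_digest(mac, forged)
```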

Understand Methods of Cryptanalytic Attacks

Attackers employ a variety of methods in their attempts to crack a cryptosystem. The following sections provide a brief overview of the most common methods.

Crossreference This section covers Objective 3.7 of the Security Architecture and Engineering domain in the CISSP Exam Outline (May 1, 2021).

Brute force

In a brute-force (or exhaustion) attack, the cryptanalyst attempts every possible combination of key patterns, sometimes using rainbow tables and specialized or scalable computing architectures. This type of attack can be very time-intensive (up to several hundred million years) and resource-intensive, depending on the length of the key, the speed of the attacker’s computer, and the life span of the attacker.

Technicalstuff A rainbow table is a precomputed table used to reverse cryptographic hash functions in a specific algorithm. Examples of password-cracking programs that use rainbow tables include Ophcrack and RainbowCrack.
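
Exhaustion and precomputation can be sketched against a weak, unsalted hash. The 4-digit PIN below is a hypothetical example; realistic keyspaces make this approach infeasible for strong keys.

```python
import hashlib

# The attacker has obtained this unsalted hash of a 4-digit PIN.
target = hashlib.sha256(b"7351").hexdigest()

def brute_force(target_hash):
    # Exhaustion: try every possible key pattern in the (tiny) keyspace.
    for pin in range(10000):
        candidate = f"{pin:04d}".encode()
        if hashlib.sha256(candidate).hexdigest() == target_hash:
            return candidate.decode()
    return None

assert brute_force(target) == "7351"

# A rainbow-table-style attack trades the loop for a one-time
# precomputed lookup, amortized across many cracked hashes.
table = {hashlib.sha256(f"{p:04d}".encode()).hexdigest(): f"{p:04d}"
         for p in range(10000)}
assert table[target] == "7351"
```

Salting defeats the precomputed table because each salt value would require its own table.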

Ciphertext only

In a ciphertext-only attack, the cryptanalyst obtains the ciphertext of several messages, all encrypted by using the same encryption algorithm, but they don’t have the associated plaintext. The cryptanalyst attempts to decrypt the data by searching for repeating patterns and using statistical analysis. Certain words in the English language, such as the and or, occur frequently, for example. This type of attack is generally difficult and requires a large sample of ciphertext.

Known plaintext

In a known-plaintext attack, the cryptanalyst has obtained the ciphertext and corresponding plaintext of several past messages, which they use to decipher new messages.

Frequency analysis

Frequency analysis is a method of attack in which an attacker examines ciphertext in an attempt to correlate commonly used words such as the and and to discover the encryption key or the algorithm in use.
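
Frequency analysis can be demonstrated against a toy Caesar cipher: count letter frequencies in the ciphertext and assume the most common letter corresponds to a common plaintext letter such as e.

```python
from collections import Counter

def caesar(text, shift):
    # Shift each letter by `shift` positions; a toy cipher for illustration.
    return "".join(
        chr((ord(c) - 97 + shift) % 26 + 97) if c.isalpha() else c
        for c in text.lower())

plaintext = "the enemy attacks at dawn and the defenders retreat"
ciphertext = caesar(plaintext, 7)

# Guess that the most frequent ciphertext letter stands for 'e',
# the most common letter in English text.
letters = [c for c in ciphertext if c.isalpha()]
most_common = Counter(letters).most_common(1)[0][0]
guessed_shift = (ord(most_common) - ord("e")) % 26

# Reversing the guessed shift recovers the plaintext.
assert caesar(ciphertext, -guessed_shift) == plaintext
```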

Chosen ciphertext

In a chosen-ciphertext attack, the cryptanalyst selects a sample of ciphertext (or plaintext) and obtains the corresponding plaintext (or ciphertext). Several types of chosen-text attacks exist, including

  • Chosen plaintext: The cryptanalyst chooses plaintext to be encrypted, and the corresponding ciphertext is obtained.
  • Adaptive chosen plaintext: The cryptanalyst chooses plaintext to be encrypted; then, based on the resulting ciphertext, they choose another sample to be encrypted.
  • Chosen ciphertext: The cryptanalyst chooses ciphertext to be decrypted, and the corresponding plaintext is obtained.
  • Adaptive chosen ciphertext: The cryptanalyst chooses ciphertext to be decrypted; then, based on the resulting plaintext, they choose another sample to be decrypted.

Implementation attacks

Implementation attacks attempt to exploit some weakness in the cryptosystem, such as vulnerability in a protocol or algorithm.

Side channel

A side-channel attack is one in which the attacker observes one or more physical characteristics of a system to discover its secrets. Against a cryptosystem, a side-channel attack attempts to learn more about the system, usually to obtain an encryption key. Several methods are used in a side-channel attack, including

  • Remanence: Examination of the contents of deleted files
  • Acoustics: Examination of the sound produced during computation
  • Power analysis: Analysis of the consumption of electric power during computation
  • Electromagnetic: Examination of the electromagnetic radiation or electric fields during computation
  • Timing: Analysis of the time required to perform various computations

Tip A side-channel attack can allow an attacker to learn about a cryptosystem through observation and inference.
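
The timing channel in particular is easy to create by accident: a naive equality check returns as soon as a byte differs, so comparison time leaks how many leading bytes of a guess are correct. Constant-time comparison functions close that channel. A sketch:

```python
import hmac

def naive_equal(a, b):
    # Returns at the first mismatch -- comparison time depends on how
    # many leading bytes of the attacker's guess are correct.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

secret = b"s3cr3t-token"

# Both comparisons give the same boolean answer...
assert naive_equal(secret, b"s3cr3t-token")
assert not naive_equal(secret, b"x3cr3t-token")

# ...but only hmac.compare_digest runs in (nearly) constant time,
# denying the attacker any timing information.
assert hmac.compare_digest(secret, b"s3cr3t-token")
assert not hmac.compare_digest(secret, b"guess-token!")
```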

Fault injection

Fault injection refers to techniques used to stress a system to see how it will behave. When applying fault injection to a cryptosystem, an attacker may be attempting to see whether the cryptosystem can be tricked into malfunctioning (for example, revealing plaintext when an unusual key value, such as null, is entered or a buffer overflow attack is executed) or to trick it into revealing secrets about the cryptosystem.

You could consider fault injection to be a form of fuzzing. In most cases, a cryptosystem is just a program running an algorithm, and that program may have flaws if its inputs are not sanitized properly.

This topic is a specific case in the larger field of software security. If this topic floats your boat, you’ll want to bookmark this page and head over to Chapter 10.

Timing

A timing (or replay) attack occurs when a session key is intercepted and used against a later encrypted session between the same two parties. Replay attacks can be countered by incorporating a time stamp in the session key.
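
Incorporating a time stamp into an authenticated session token can be sketched as follows (make_token and accept are hypothetical helpers; real protocols such as Kerberos also use nonces and sequence numbers):

```python
import hashlib
import hmac
import time

KEY = b"session key shared by both parties"
MAX_AGE = 30  # seconds a message remains valid

def make_token(message: bytes, now: float) -> bytes:
    # Bind the current time into the authenticated message.
    stamped = message + b"|" + str(int(now)).encode()
    tag = hmac.new(KEY, stamped, hashlib.sha256).hexdigest().encode()
    return stamped + b"|" + tag

def accept(token: bytes, now: float) -> bool:
    message, stamp, tag = token.rsplit(b"|", 2)
    expected = hmac.new(KEY, message + b"|" + stamp,
                        hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expected):
        return False                       # forged or altered token
    return now - int(stamp) <= MAX_AGE     # stale tokens are replays

t0 = time.time()
token = make_token(b"withdraw 100", t0)
assert accept(token, t0 + 5)          # fresh token: accepted
assert not accept(token, t0 + 3600)   # replayed an hour later: rejected
```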

Man in the middle

A man-in-the-middle (MITM) attack involves an attacker intercepting messages between two parties on a network and potentially modifying the original message.

A related technique, the meet-in-the-middle attack, targets double encryption: the attacker encrypts known plaintext with each possible key on one end, decrypts the corresponding ciphertext with each possible key on the other end, and then compares the results in the middle. Although commonly classified as a brute-force attack, this kind of attack may also be considered to be an analytic attack because it involves some differential analysis.

Pass the hash

Pass the hash is an authentication-bypass attack in which an attacker steals password hashes and uses them to authenticate to a system that uses NTLM authentication. To employ a pass-the-hash attack, the attacker must first obtain a system's password hashes, generally through another attack.

If an attacker is able to obtain password hashes for a system but cannot successfully execute a pass-the-hash attack, the attacker can also use a rainbow table or employ brute-force password cracking techniques to obtain plaintext passwords, which can be used to log in to a target system.
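
The fallback described here, cracking stolen unsalted hashes against a wordlist, can be sketched as follows (the usernames, hashes, and wordlist are hypothetical; salting and slow password hashes such as bcrypt defeat this approach):

```python
import hashlib

# Stolen unsalted password hashes from a hypothetical system.
stolen = {
    "alice": hashlib.md5(b"sunshine").hexdigest(),
    "bob":   hashlib.md5(b"correcthorse").hexdigest(),
}

wordlist = [b"password", b"sunshine", b"letmein", b"correcthorse"]

# Hash each candidate once, then look up every stolen hash -- the
# essence of dictionary and rainbow-table attacks on unsalted hashes.
lookup = {hashlib.md5(w).hexdigest(): w.decode() for w in wordlist}
cracked = {user: lookup[h] for user, h in stolen.items() if h in lookup}

assert cracked == {"alice": "sunshine", "bob": "correcthorse"}
```

Because every user's "sunshine" hashes to the same value without a salt, one precomputed table cracks all accounts at once.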

Kerberos exploitation

Kerberos is a cryptosystem used for authentication and access control in distributed environments. Microsoft Active Directory, among other environments, uses Kerberos.

Attackers may choose to attack Active Directory servers, particularly the Key Distribution Center (KDC) service account (KRBTGT), to gain broad access to an environment. A successful attack can give the attacker the ability to forge valid ticket-granting tickets, thus giving them access to virtually all network resources. Such an attack is called a golden ticket attack.

A golden ticket attack can be difficult to detect except through inference by observing the behavior of authenticated users. Although prevention through techniques such as effective vulnerability management and access governance is essential, we know that we cannot stop all attacks at their initial stages; thus, we also need to detect them through techniques such as user and entity behavior analytics (UEBA).

Ransomware

If you’ve read this entire section, you may wonder why ransomware is included in cryptosystem attacks. That’s a good question, and here’s the answer: Ransomware is not so much an attack on a cryptosystem; the cryptosystem itself is the attack weapon. We thought this section was a good place to mention this topic. (ISC)2 also mentions ransomware in section 3.7 of the CBK, so we’re sort of obligated to discuss it here anyway.

Ransomware is an attack on a system in which the attacker, after somehow successfully gaining user-level or administrative-level access on a system, encrypts data on the system and displays a message to the user, informing them of the attack and demanding a ransom if the user wants to recover their encrypted data. Variants of ransomware will also upload the plaintext data to the attacker’s server before encryption and then threaten to publish the stolen data. Further variants also inform identifiable people that their personal information has been stolen.

The best mitigations against ransomware include

  • Robust asset management and vulnerability management: Organizations that keep all systems up to date and patched can dramatically reduce the likelihood of a successful ransomware attack.
  • Robust antimalware: Antimalware that employs both signature and heuristics-based techniques can significantly reduce the chances of a successful attack.
  • Application whitelisting: Ransomware is generally unable to execute successfully on a target system that uses application whitelisting.
  • Strict access control: Ransomware can succeed where users have broad access to network shares. Limitations on write access to network shares can dramatically reduce the potency of a ransomware attack.
  • Intrusion prevention systems: Because most ransomware depends on command-and-control traffic to obtain encryption keys, effective intrusion prevention can block ransomware from executing.
  • Enterprise backup: Instead of paying ransoms to recover encrypted data (which, according to the Federal Bureau of Investigation, results in recovery only about half the time), organizations can recover data from backups.

Organizations that are concerned about ransomware should perform threat modeling and other forms of risk analysis to determine what measures should be employed to reduce the probability and effect of a ransomware attack.

Apply Security Principles to Site and Facility Design

Securely designed and built software running on securely designed and built systems must be operated in securely designed and built facilities. Otherwise, an adversary with unrestricted access to a system and its installed software will inevitably succeed in compromising your security efforts. Astute organizations involve security professionals during the design, planning, and construction of new or renovated locations and facilities. Proper site- and facility-requirements planning during the early stages of construction helps ensure that a new building or data center is adequate, safe, and secure, which can help an organization avoid costly situations later.

Crossreference This section covers Objective 3.8 of the Security Architecture and Engineering domain in the CISSP Exam Outline (May 1, 2021).

The principles of Crime Prevention through Environmental Design (CPTED), published in 1971, have been widely adopted by security practitioners in the design of public and private buildings, offices, communities, and campuses. CPTED focuses on designing facilities by using techniques such as unobstructed areas, creative lighting, and functional landscaping, which naturally deter crime through positive psychological effects. By making it difficult for a criminal to hide, gain access to a facility, escape a location, or otherwise perpetrate an illegal and/or violent act, such techniques may cause a would-be criminal to decide against attacking a target or victim and create an environment that’s perceived as being safer for people who use the area regularly. CPTED consists of three basic strategies:

  • Natural access control: Uses security zones (defensible space) to limit or restrict movement and differentiate between public, semiprivate, and private areas that require different levels of protection. Natural access control can be accomplished by limiting points of entry into a building and using structures such as sidewalks and lighting to guide visitors to main entrances and reception areas. Target hardening complements natural access controls by using mechanical and/or operational controls, such as window and door locks, alarms, security guards, guard dogs, picture identification requirements, and visitor sign-in/sign-out procedures.
  • Natural surveillance: Reduces criminal threats by making intruder activity more observable and easier to detect. Natural surveillance can be accomplished by maximizing visibility and activity in strategic areas, such as by placing windows to overlook streets and parking areas, landscaping to eliminate hidden areas and create clear lines of sight, installing open railings on stairways to improve visibility, and using numerous low-intensity lighting fixtures to eliminate shadows and reduce security-camera glare or blind spots (particularly at night).
  • Territorial reinforcement: Creates a sense of pride and ownership, which causes intruders to stand out and encourages people to report suspicious activity instead of ignoring it. Territorial reinforcement is accomplished through maintenance activities (picking up litter, cleaning up graffiti, repairing broken windows, and replacing light bulbs), assigning people to be responsible for an area or space, placing amenities (such as benches and water fountains) in common areas, and displaying prominent signage where appropriate. It can also include scheduled activities, such as corporate-sponsored beautification projects and company picnics.

Location, location, location! Although, to a certain degree, this bit of conventional business wisdom may be less important to profitability in the age of e-commerce, it’s still a critical factor in physical security. Important factors in considering a location include

  • Climatology and natural disasters: Although an organization is unlikely to choose a geographic location solely based on the likelihood of hurricanes or earthquakes, these factors must be considered when designing a safe and secure facility. Related factors may include flood plains, the location of evacuation routes, and the adequacy of civil and emergency preparedness.
  • Local hazards: Are high-risk conditions or activities nearby, such as hazardous-materials storage, railway freight lines, or flight paths for the local airport? Is the area heavily industrialized (so that air and noise pollution, including vibration, might affect your systems)?
  • Crime rate: Consider whether a location being considered is in or near a high-crime area and whether features or conditions could invite criminal elements.
  • Visibility: Will your employees and facilities be targeted for crime, terrorism, vandalism, or social unrest? Is the site near another high-visibility organization that may attract undesired attention? Is your facility located near a government or military target? Keeping a low profile is generally best because you avoid unwanted and unneeded attention; avoid external building markings when possible.
  • Accessibility: Consider local traffic patterns, convenience to airports, proximity to emergency services (police, fire, and medical facilities), and availability of adequate housing. Will on-call employees have to drive for an hour to respond when your organization needs them?
  • Utilities: It is important to understand where the facility is located in the power grid, and whether electrical power is stable and clean. Also determine whether fiber optic cable is already in place to support current and future telecommunications requirements. Finally, determine whether electric utility and telecommunications facilities feature geographic diversity, in which multiple feeds originating from different locations enter the building.
  • Joint tenants: Will you have full access to all necessary environmental controls? Can (and should) physical security costs and responsibilities be shared between joint tenants? Are other tenants potential high-visibility targets? Do other tenants take security as seriously as your organization does?

Tip If your organization already occupies a facility and no pre-occupancy security assessment was performed, consider performing an assessment to identify and document any possible risks. Although it's unlikely that your organization is going to relocate because of the presence of one or more risks, knowing about them is better than not.

Design Site and Facility Security Controls

The CISSP candidate must understand the various threats to physical security; the elements of site- and facility-requirements planning and design; and various physical security controls, including access controls, technical controls, environmental and life safety controls, and administrative controls. In addition, you must know how to support the implementation and operation of these controls, as covered in this section.

Crossreference This section covers Objective 3.9 of the Security Architecture and Engineering domain in the CISSP Exam Outline (May 1, 2021).

Many physical and technical controls should be considered during the initial design of a secure facility to reduce costs and improve the overall effectiveness of these controls. Building design considerations include

  • Exterior walls: Ideally, exterior walls should be able to withstand high winds (tornadoes and hurricanes/typhoons) and reduce electronic emanations that can be detected and used to re-create high-value data (such as government or military data). If possible, the use of exterior windows should be avoided throughout the building, particularly on lower levels. Metal bars over windows or reinforced windows on lower levels may be necessary. Any windows should be fixed (meaning that you can’t open them), shatterproof, and sufficiently opaque to conceal inside activities.
  • Interior walls: Interior walls adjacent to secure or restricted areas must extend from the floor to the ceiling (through raised flooring and drop ceilings) and must comply with applicable building and fire codes. Walls adjacent to storage areas (such as closets containing janitorial supplies, paper, media, or other flammable materials) must meet minimum fire ratings, which are typically higher than for other interior walls. Ideally, bulletproof walls should protect the most sensitive areas.
  • Security zones: Access controls to sensitive areas within work facilities should be enacted so that only authorized personnel may access them. This consideration will influence floor plans so that interior security zones are protected.
  • Floors: Flooring (both slab and raised) must be capable of bearing loads in accordance with local building codes (typically, 150 pounds per square foot). Additionally, raised flooring must have a nonconductive surface and be grounded properly to reduce safety risks.
  • Ceilings: Weight-bearing and fire ratings must be considered. Drop ceilings may temporarily conceal intruders and small water leaks; conversely, stained drop-ceiling tiles can reveal leaks while temporarily impeding water damage.
  • Doors: Doors and locks must be sufficiently strong and well designed to resist forcible entry, and they need a fire rating equivalent to that of adjacent walls. Emergency exits must remain unlocked from the inside and should also be clearly marked, as well as monitored or alarmed. Electronic lock mechanisms and other access control devices should fail open (unlock) in the event of an emergency to permit people to exit the building. Many doors swing out to facilitate emergency exiting; thus, door hinges are located on the outside of the room or building. These hinges must be secured to prevent an intruder from easily lifting hinge pins and removing the door. Magnetic locks should be inspected to identify signs of tampering.
  • Lighting: Exterior lighting for all physical spaces and buildings in the security perimeter (including entrances and parking areas) should be sufficient to provide safety for personnel, as well as to discourage prowlers and casual intruders.
  • Wiring: All wiring, conduits, and cable runs must comply with building and fire codes, and must be properly protected. Plenum cabling must be used below raised floors and above drop ceilings, because PVC-clad cabling releases toxic chemicals when it burns.

    Technicalstuff A plenum is the vacant area above a drop ceiling or below a raised floor. A fire in these areas can spread very rapidly, carrying smoke and noxious fumes to other areas of a burning building. For this reason, non-PVC-coated cabling, known as plenum cabling, must be used in these areas in most jurisdictions.

  • Electricity and heating/cooling/air conditioning (HVAC): Electrical load and heating, cooling, and air conditioning must be planned carefully to ensure that sufficient power is available in the right locations and that proper climate ranges (temperature and humidity) are maintained.
  • Pipes: Locations of shutoff valves for water, steam, or gas pipes should be identified and marked appropriately. Drains should have positive flow, carrying drainage away from the building.
  • Fire detection and prevention: Inert gas fire suppression should be used in all rooms where IT equipment resides; wet-pipe and dry-pipe sprinkler systems should be avoided in those locations where local fire codes permit. Advanced smoke detectors can provide earlier warning of precombustion activities.
  • Lightning strikes: Approximately 10,000 fires are started every year by lightning strikes in the United States alone, despite the fact that only 20 percent of all lightning ever reaches the ground. Lightning can heat the air in immediate contact with the stroke to 54,000° Fahrenheit (F), which translates to 30,000° Celsius (C), and lightning can discharge 100,000 amperes of electrical current. Now that’s an inrush!
  • Magnetic fields: Monitors and magnetic-based storage media such as backup tape and hard disk drives can be permanently damaged or erased by magnetic fields.
  • Sabotage/terrorism/war/theft/vandalism: Both internal and external threats must be considered. A heightened security posture is also prudent during certain other disruptive situations, including labor disputes, corporate downsizing, hostile terminations, bad publicity, demonstrations/protests, and civil unrest.
  • Equipment failure: Equipment failures are inevitable. Maintenance and support agreements, ready spare parts, and redundant systems can mitigate the effects.
  • Loss of communications and utilities: This category includes voice and data communications, water, and electricity. Loss of communications and utilities may happen because of any of the factors discussed in the preceding items, as well as human error.
  • Vibration and movement: Causes may include earthquakes, landslides, and explosions. Equipment may also be damaged by sudden or severe vibrations, falling objects, or equipment racks tipping over. More seriously, vibrations or movement may weaken structural integrity, causing a building to collapse or otherwise be unusable.
  • Severe weather: This category includes hurricanes, tornadoes, high winds, severe thunderstorms and lightning, rain, snow, sleet, and ice. Such forces of nature may cause fires, water damage and flooding, structural damage, loss of communications and utilities, and hazards to personnel.
  • Personnel loss: This category includes illness, injury, death, transfer, labor disputes, resignations, and terminations. The negative effects of a personnel loss can be mitigated through good security practices, such as documented procedures, job rotations, cross-training, and redundant functions.

Tip Although much of the information in this section may seem to be common sense, the CISSP exam asks very specific and detailed questions about physical security, and many candidates lack practical experience in fighting fires, so don’t underestimate the importance of physical security — in real life and on the CISSP exam!

Wiring closets, server rooms, and more

Wiring closets, intermediate distribution facilities (IDFs), server rooms, data centers, and media and evidence storage facilities contain high-value equipment and/or media that is critical to ongoing business operations or support of investigations. Physical security controls often found in these locations include

  • Strong access controls: Typically, this category includes the use of key cards, as well as a PIN pad or biometric. Only those personnel who have a need to know should be authorized to access high-security areas.
  • Fire suppression: Often, you’ll find inert gas fire suppression instead of water sprinklers, because water can damage computing equipment in case of discharge.
  • Video surveillance: Cameras are fixed at entrances to wiring closets, distribution frames, and data center entrances, as well as in the interiors of those facilities, to observe the goings-on of both authorized personnel and intruders.
  • Visitor log: All visitors, who generally require a continuous escort, are required to sign a visitor log.
  • Asset check-in/check-out log: All personnel are required to log the introduction and removal of any equipment and media.

Tip In many jurisdictions, visible notices are required when video surveillance is employed. These notices can serve as deterrent controls.

Restricted and work area security

High-security work areas often employ physical security controls above and beyond those used in ordinary work areas. In addition to key card access control systems and video surveillance, additional physical security controls may include

  • Multifactor key card entry: In addition to using key cards, employees may be required to use a PIN pad or biometric device to access restricted areas.
  • Mantraps: A set of interlocked double doors or turnstiles can be used to prevent tailgating.
  • Security guards: Guards are present at ingress and egress points, and roam within the facility to be on the alert for unauthorized personnel or unauthorized activities.
  • Guard dogs: Guard dogs provide additional deterrence against unauthorized entry and also assist in the capture of unauthorized personnel in a facility.
  • Security walls and fences: Restricted facilities may employ one or more security walls and fences to keep unauthorized personnel away from facilities. General height requirements for fencing are listed in Table 5-5.
  • Security lighting: Restricted facilities may have additional lighting to expose and deter any would-be intruders.
  • Security gates, crash gates, and bollards: These controls limit the movement of vehicles near a facility to reduce vehicle-borne threats.
  • Sally ports: These controlled entries provide better control for admitting authorized personnel into a work facility while making it more difficult for unauthorized people to enter.

TABLE 5-5 General Fencing Height Requirements

  3–4 feet (1 meter): Deters casual trespassers
  6–7 feet (2 meters): Too high to climb easily
  8 feet (2.4 meters) plus three-strand barbed wire: Deters determined intruders

Work-area security also makes us think of various safety issues, all of which are important to the security professional, although one or more of the following may be managed by facilities or other personnel:

  • First aid: From simple kits for simple matters to those for more complicated situations such as broken bones, first aid kits ensure treatment, comfort, and even life safety for personnel on the job.
  • Emergency food and water: Supplies of emergency food and water can be vital for the survival of workers occupying a building when disasters of various types occur.
  • Automated external defibrillators: These portable devices can save a person’s life. They’re kept on hand in many workplaces, including stores and commercial aircraft.
  • Evacuation signage and drills: This aspect of security involves exit signs, mustering stations, and personnel trained in ensuring the safe exit of all personnel in the event of an emergency.
  • Active assailant response: Simple changes in building design and training for personnel can greatly influence the survivability of an active-assailant situation, in which one or more people are attempting to harm others in a workplace or school.

Utilities and heating, ventilation, and air conditioning

Environmental and life safety controls such as utilities and heating, ventilation, and air conditioning (HVAC) are necessary for maintaining a safe and acceptable operating environment for computers, equipment, and personnel.

HVAC systems maintain the proper environment for computers and personnel. HVAC-requirements planning involves making complex calculations based on numerous factors, including the average BTUs (British Thermal Units) produced by the estimated computers and personnel occupying a given area, the size of the room, insulation characteristics, and ventilation systems.
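To illustrate the arithmetic involved, here is a minimal, hypothetical sketch of a cooling-load estimate. The watts-to-BTU conversion (1 watt of IT load produces roughly 3.412 BTU per hour of heat) is a standard figure, but the per-occupant heat value is a simplifying assumption, and real HVAC planning must also account for insulation, ventilation, and room characteristics:

```python
# Rough cooling-load estimate for a server room (illustrative sketch only;
# the per-person heat figure is an assumption, and real HVAC planning
# involves insulation, ventilation, solar gain, and local codes).

WATTS_TO_BTU_HR = 3.412   # 1 watt of IT load ~ 3.412 BTU/hr of heat
BTU_PER_PERSON = 400      # assumed sensible heat output per occupant

def cooling_load_btu_hr(it_load_watts: float, occupants: int) -> float:
    """Estimate the heat (BTU/hr) the HVAC system must remove."""
    return it_load_watts * WATTS_TO_BTU_HR + occupants * BTU_PER_PERSON

# Example: 20 kW of rack equipment plus 2 technicians
load = cooling_load_btu_hr(20_000, 2)
print(f"{load:,.0f} BTU/hr (~{load / 12_000:.1f} tons of cooling)")
```

Running this for a 20 kW room yields roughly 69,000 BTU/hr, on the order of six tons of cooling, which shows why dense equipment rooms need dedicated HVAC rather than ordinary office air conditioning.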

The ideal temperature range for computer equipment is between 50°F and 80°F (10°C and 27°C). At temperatures as low as 100°F (38°C), magnetic storage media can be damaged.

Remember The ideal temperature range for computer equipment is between 50°F and 80°F (10°C and 27°C).

The ideal humidity range for computer equipment is between 40 and 60 percent. Higher humidity causes condensation and corrosion. Lower humidity increases the potential for static electricity.
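As a minimal sketch of how these thresholds might be enforced, the check below flags out-of-range readings using the temperature and humidity ranges given above. The function name and alert wording are our own; a real monitoring system would poll sensors continuously and notify facilities staff:

```python
# Range check against the environmental targets in the text:
# 50-80 F for temperature, 40-60 percent relative humidity.
# Illustrative sketch; function and message wording are assumptions.

TEMP_RANGE_F = (50.0, 80.0)      # ideal for computer equipment
HUMIDITY_RANGE = (40.0, 60.0)    # percent relative humidity

def check_environment(temp_f: float, humidity_pct: float) -> list[str]:
    alerts = []
    if not TEMP_RANGE_F[0] <= temp_f <= TEMP_RANGE_F[1]:
        alerts.append(f"temperature {temp_f} F out of range {TEMP_RANGE_F}")
    if humidity_pct < HUMIDITY_RANGE[0]:
        alerts.append("low humidity: static electricity risk")
    elif humidity_pct > HUMIDITY_RANGE[1]:
        alerts.append("high humidity: condensation/corrosion risk")
    return alerts

print(check_environment(72.0, 45.0))   # [] -- within range
print(check_environment(95.0, 30.0))   # temperature and humidity alerts
```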

Doors and side panels on computer equipment racks should be kept closed (and locked, as a form of physical access control) to ensure proper airflow for cooling and ventilation. When possible, empty spaces in equipment racks (such as a half-filled rack or gaps between installed equipment) should be covered with blanking panels to reduce hot and cold air mixing between the hot side or hot aisle (typically, the power-supply side of the equipment) and the cold side or cold aisle (typically, the front of the equipment). Such mixing of hot and cold air can reduce the efficiency of cooling systems.

Heating and cooling systems should be maintained properly, and air filters should be cleaned regularly to reduce dust contamination and fire hazards.

Most gas-discharge fire suppression systems automatically shut down HVAC systems before discharging, but a separate emergency power-off switch should be installed near exits to facilitate manual shutdown in an emergency.

Ideally, HVAC equipment should be dedicated, controlled, and monitored. If the systems aren’t dedicated or independently controlled, proper liaison with the building manager is necessary to ensure that everyone knows who to call when problems occur. Monitoring systems should alert the appropriate personnel when operating thresholds are exceeded.

Environmental issues

Water damage (and damage from liquids in general) can be caused by many things, including pipe breakage, firefighting, leaking roofs, spilled drinks, flooding, and tsunamis. Wet computers and other electrical equipment pose a potentially lethal hazard.

Both preventive and detective controls are used to ensure that water in unwanted places does not disrupt business operations or destroy expensive assets. Common features include

  • Water diversion: Barriers of various types help prevent water from entering sensitive areas.
  • Water detection alarms: Sensors that detect the presence of water can alert personnel of the matter and provide valuable time before damage occurs.

Contaminants in the air, unless filtered out by HVAC systems, can be irritating or harmful to personnel and to equipment. A buildup of carbon dioxide or carbon monoxide can also be injurious and even cause death. Air quality sensors can detect particulates, contaminants, CO2, and CO, and alert facilities personnel.

Fire prevention, detection, and suppression

Threats from a fire can be potentially devastating and lethal. Proper precautions, preparation, and training not only help limit the spread of fire and damage, but also (and more important) save lives.

Remember Saving human lives is the first priority in any life-threatening situation.

Other hazards associated with fires include smoke, explosions, building collapse, release of toxic materials or vapors, and water damage.

For a fire to burn, it requires three elements: heat, oxygen, and fuel. These three elements are sometimes referred to as the fire triangle, which is depicted in Figure 5-11. Fire suppression and extinguishing systems fight fires by removing one of these three elements or by temporarily breaking the chemical reaction among these three elements, separating the fire triangle. Fires are classified according to fuel type, as listed in Table 5-6.

Schematic illustration of a fire needs these three elements to burn.

© John Wiley & Sons, Inc.

FIGURE 5-11: A fire needs these three elements to burn.

TABLE 5-6 Fire Classes and Suppression/Extinguishing Methods

  Class A (common combustibles, such as paper, wood, furniture, and clothing): Water or soda acid
  Class B (burnable fuels, such as gasoline or oil): CO2 or soda acid
  Class C (electrical fires, such as computers or electronics): CO2 (Note: The most important step in fighting a fire in this class is turning off the electricity first.)
  Class D (special fires, such as combustible metals): May require total immersion or other special techniques
  Class K or F (cooking oils or fats): Water mist or fire blankets

Tip You must be able to describe Class A, B, and C fires and their primary extinguishing methods. Class D and K (or F) fires are far less common in computing environments (unless your server room happens to be located directly above the deep-fat fryers of a local bar and hot-wings restaurant).

Fire detection and suppression systems are some of the most essential life safety controls for protecting facilities, equipment, and (most important) human lives. The three main types of fire detection systems are

  • Heat-sensing: These devices sense either temperatures exceeding a predetermined level (fixed-temperature detectors) or rapidly rising temperatures (rate-of-rise detectors). Fixed-temperature detectors are more common and exhibit a lower false-alarm rate than rate-of-rise detectors.
  • Flame-sensing: These devices sense the flicker (pulsing) of flames or the infrared energy of a flame. These systems are relatively expensive but provide extremely rapid response time.
  • Smoke-sensing: These devices detect smoke, which is one of the byproducts of fire. The four types of smoke detectors are
    • Photoelectric: These detectors sense variations in light intensity.
    • Beam: Similar to the photoelectric type, these detectors sense when smoke interrupts beams of light.
    • Ionization: These detectors sense disturbances in the normal ionization current of radioactive materials.
    • Aspirating: These detectors draw air into a sampling chamber and sense minute amounts of smoke.

Remember The three main types of fire detection systems are heat-sensing, flame-sensing, and smoke-sensing.

The two primary types of fire suppression systems are

  • Water sprinkler: Water extinguishes fire by removing the heat element from the fire triangle, and it’s most effective against Class A fires. Water is the primary fire-extinguishing agent for all business environments. Although water can potentially damage equipment, it’s one of the most effective, inexpensive, readily available, and least harmful (to humans) extinguishing agents available. The four variations of water sprinkler systems are

    • Wet-pipe (or closed-head): Most commonly used and considered the most reliable. Pipes are always charged with water and ready for activation. Typically, a fusible link in the nozzle melts or ruptures, opening a gate valve that releases the water flow. Disadvantages include accidental flooding caused by nozzle or pipe failure and by pipes freezing in cold weather.
    • Dry-pipe: No standing water in the pipes. At activation, a clapper valve opens, air is blown out of the pipe, and water flows. This type of system is less efficient than the wet-pipe system but reduces the risk of accidental flooding; the time delay provides an opportunity to shut down computer systems (or remove power) if conditions permit.
    • Deluge: Operates similarly to a dry-pipe system but is designed to deliver large volumes of water quickly. Deluge systems typically are not used for information processing areas.
    • Preaction: Combines wet- and dry-pipe systems. Pipes are initially dry. When a heat sensor is triggered, the pipes are charged with water, and an alarm is activated. Water isn’t actually discharged until a fusible link melts (as in wet-pipe systems). This system is recommended for information processing areas because it reduces the risk of accidental discharge by permitting manual intervention.

    Remember The four main types of water sprinkler systems are wet-pipe, dry-pipe, deluge, and preaction.

  • Gas discharge: Gas discharge systems may be portable (such as a CO2 extinguisher) or fixed (beneath a raised floor). These systems are typically classified according to the extinguishing agent that’s employed. These agents include
    • Carbon dioxide (CO2): CO2 is a colorless, odorless gas that extinguishes fire by removing the oxygen element from the fire triangle. CO2 is most effective against Class B and C fires. Because this gas removes oxygen, it is potentially lethal and therefore is best suited for use in unmanned areas or on a delay (including manual override) in staffed areas.

      CO2 is also used in portable fire extinguishers, which should be located near all exits and within 50 feet (15 meters) of any electrical equipment. All portable fire extinguishers (CO2, water, and soda acid) should be clearly marked (listing the extinguisher type and the fire classes it can be used for) and periodically inspected. Additionally, all personnel should receive training in the proper use of fire extinguishers.

    • Soda acid: Includes a variety of chemical compounds that extinguish fires by removing the fuel element (suppressing the flammable components of the fuel) of the fire triangle. Soda acid is most effective against Class A and B fires. It is not used for Class C fires because of the highly corrosive nature of many of the chemicals used.
    • Inert gas-discharge: Gas-discharge systems suppress fire by separating the elements of the fire triangle; they are most effective against Class B and C fires. Inert gases don’t damage computer equipment, don’t leave liquid or solid residue, mix thoroughly with the air, and spread extremely quickly. But in concentrations higher than 10 percent, the gases are harmful if inhaled, and some types degrade into toxic chemicals (hydrogen fluoride, hydrogen bromide, and bromine) when used on fires that burn at temperatures above 900°F (482°C).

      Halon used to be the gas of choice in gas-discharge fire suppression systems. But because of Halon’s ozone-depleting characteristics, the Montreal Protocol of 1987 prohibited the further production and installation of Halon systems (beginning in 1994) and encouraged the replacement of existing systems. Acceptable replacements include FM-200 (most effective), CEA-410 or CEA-308, NAF-S-III, FE-13, Argon or Argonite, and Inergen.

      Remember Halon is an ozone-depleting substance. Acceptable replacements include FM-200, CEA-410 or CEA-308, NAF-S-III, FE-13, Argon or Argonite, and Inergen.

Power

General considerations for electrical power include having one or more dedicated feeders from one or more utility substations or power grids, as well as ensuring that adequate physical access controls are implemented for electrical distribution panels and circuit breakers. An emergency power-off switch should be installed near major systems and exit doors to shut down power in case of fire or electrical shock. Additionally, a backup power source should be established, such as a diesel or natural-gas power generator, along with an uninterruptible power supply (UPS). Backup power should be provided for critical facilities and systems, including emergency lighting, fire detection and suppression, mainframes and servers (and certain workstations), HVAC, physical access control systems, and telecommunications equipment.

Warning Although natural gas can be a cleaner alternative than diesel for backup power, in terms of air and noise pollution, it’s generally not used for emergency life systems (such as emergency lighting and fire protection systems) because the fuel source (natural gas) can’t be locally stored, so the system relies instead on an external fuel source that must be supplied by pipelines.

Protective controls for electrostatic discharge include the following:

  • Maintain proper humidity levels (40 to 60 percent).
  • Ensure proper grounding.
  • Use antistatic flooring, antistatic carpeting, and floor mats.
  • Forbid crepe-soled shoes. (Okay, we’re mostly kidding on this one.)

Protective controls for electrical noise include the following:

  • Install power-line conditioners.
  • Ensure proper grounding.
  • Use shielded cabling.

A UPS is perhaps the most important protection against electrical anomalies because it provides clean power to sensitive systems and a temporary power source during electrical outages (blackouts, brownouts, and sags). The UPS must supply power long enough for the protected systems to be shut down properly.

Remember A UPS shouldn’t be used as a backup power source. A UPS — even a building UPS — is designed to provide temporary power, typically for 5 to 30 minutes, to give a backup generator time to start or to allow a controlled, proper shutdown of protected systems.
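To make the sizing question concrete, here is a back-of-the-envelope runtime sketch. The battery capacity, load, and efficiency figures are illustrative assumptions; real UPS runtime curves are nonlinear and batteries degrade with age, so actual sizing should be verified against the manufacturer's specifications:

```python
# Back-of-the-envelope UPS runtime estimate (illustrative sketch;
# vendor runtime curves are nonlinear and battery capacity degrades
# with age, so verify real sizing with the manufacturer).

def ups_runtime_minutes(battery_wh: float, load_watts: float,
                        efficiency: float = 0.9) -> float:
    """Minutes of runtime: usable stored energy divided by the load."""
    return (battery_wh * efficiency) / load_watts * 60

# Example: 1,500 Wh of battery capacity feeding a 2,000 W load
print(f"{ups_runtime_minutes(1500, 2000):.1f} minutes")
```

A result in the tens of minutes is consistent with the 5-to-30-minute window noted above: enough time for a generator to start or for a controlled shutdown, but not a substitute for backup power.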

Sensitive equipment can be damaged or affected by various electrical hazards and anomalies, including

  • Electrostatic discharge (ESD): The ideal humidity range for computer equipment is 40 to 60 percent. Higher humidity causes condensation and corrosion. Lower humidity increases the potential for static electricity. A static charge of as little as 40 volts can damage sensitive circuits, and 2,000 volts can cause a system shutdown. The minimum discharge that can be felt by humans is 3,000 volts, and electrostatic discharges of more than 25,000 volts are possible. If you can feel it, it’s a problem for your equipment!

    Remember The ideal humidity range for computer equipment is 40 to 60 percent. Also, remember that it’s not the volts that kill; it’s the amps!

  • Electrical noise: This category includes electromagnetic interference (EMI) and radio frequency interference (RFI). Electromagnetic interference is generated by the different charges among the three electrical wires (hot, neutral, and ground) and can be common-mode noise (caused by hot and ground) or traverse-mode noise (caused by a difference in power between the hot and neutral wires). Radio frequency interference is caused by electrical components, such as fluorescent lighting and electric cables. A transient is a momentary line-noise disturbance.
  • Electrical anomalies: These anomalies include the ones listed in Table 5-7.

TABLE 5-7 Electrical Anomalies

  Blackout: Total loss of power
  Fault: Momentary loss of power
  Brownout: Prolonged drop in voltage
  Sag: Short drop in voltage
  Inrush: Initial power rush
  Spike: Momentary rush of power
  Surge: Prolonged rush of power
  Voltage drop: Decrease in electric voltage

Tip You may want to come up with some meaningless mnemonic for the list in Table 5-7, such as Bob Frequently Buys Shoes In Shoe Stores Verbosely. You need to know these terms for the CISSP exam.
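Another way to internalize the table is to notice that each term is defined by two questions: did voltage go away, drop, or rush, and was the event momentary or prolonged? The sketch below encodes that pattern as a study aid; the one-second duration cutoff is our own illustrative assumption, not a standards-based threshold:

```python
# Mnemonic helper for Table 5-7: classify an electrical event from the
# direction of the voltage change and its duration. The 1-second cutoff
# between "momentary" and "prolonged" is an illustrative assumption.

def classify_event(voltage_change: str, duration_s: float) -> str:
    momentary = duration_s < 1.0          # assumed cutoff
    if voltage_change == "loss":
        return "fault" if momentary else "blackout"
    if voltage_change == "drop":
        return "sag" if momentary else "brownout"
    if voltage_change == "rush":
        return "spike" if momentary else "surge"
    raise ValueError("voltage_change must be loss, drop, or rush")

print(classify_event("drop", 0.2))   # sag
print(classify_event("rush", 30.0))  # surge
```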

Warning Surge protectors and surge suppressors provide only minimal protection for sensitive computer systems, and they’re commonly (and dangerously) used to overload an electrical outlet or as a daisy-chained extension cord. The protective circuitry in most of these units costs less than $1 (compare the cost of a low-end surge protector with that of a 6-foot extension cord), and you get what you pay for. These glorified extension cords provide only minimal spike protection. True, a surge protector provides more protection than nothing at all, but don’t be lured into complacency by these units; check them regularly for proper use and operation, and don’t accept them as being viable alternatives to a UPS.