Chapter 5
IN THIS CHAPTER
Adopting secure design principles
Understanding security models
Choosing the right controls and countermeasures
Using security capabilities in information systems
Assessing and mitigating vulnerabilities
Deciphering cryptographic concepts and fundamentals
Getting physical with physical security design concepts
Security must be part of the design of information systems, as well as of the facilities that house information systems and workers; these topics are covered in the Security Architecture and Engineering domain. This domain represents 13 percent of the CISSP certification exam.
It is a natural human tendency to build things without first considering their design or security implications. A network engineer who is building a new network may just start plugging cables into routers and switches without thinking about the overall design — much less any security or privacy considerations. Similarly, a software engineer assigned to write a new program is apt to begin coding without planning the program’s architecture or design.
When we observe the outside world and the consumer products available in it, we sometimes see egregious usability and security flaws that make us wonder how the people or organizations responsible were ever allowed to participate in the products' design and development.
The engineering processes that require the inclusion of secure design principles include the following:
The application development life cycle also includes security considerations that are nearly identical to the security engineering principles discussed here. Application development is covered in Chapter 10.
Design principles and concepts associated with security architecture and engineering include the following:
These principles and concepts are discussed in detail in the remainder of this section.
Threat modeling is a type of risk analysis used to identify security defects in the design phase of an information system or business process. Threat modeling is most often applied to software applications, but it can be used for OSes, devices, and business processes with equal effectiveness.
Threat modeling is typically attack-centric; it is most often used to identify the vulnerabilities in an information system that an attacker could exploit.
Threat modeling is most effective when performed during the design phase of an information system, application, or process. When threats and their mitigation are identified during the design phase, much effort is saved by the avoidance of fixes in a completed system.
Although there are different approaches to threat modeling, the typical steps are
Threat identification is the first step in threat modeling. Threats are those actions that an attacker may be able to perform successfully if corresponding vulnerabilities are present in the application, system, or process.
For software applications, two mnemonics are used as a memory aid during threat modeling:
Although these mnemonics themselves don’t contain threats, they do assist the person performing threat modeling by serving as reminders of the basic threat categories (STRIDE: Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege) and of how identified threats can be rated (DREAD: Damage, Reproducibility, Exploitability, Affected users, and Discoverability).
After threats have been identified, threat modeling continues through the creation of diagrams that illustrate attacks on an application or system. An attack tree can be developed, outlining the steps required to attack a system. Figure 5-1 illustrates an attack tree for a mobile banking application.
FIGURE 5-1: Attack tree for a mobile banking application.
When you’re performing a threat analysis on a complex application or a system, it is likely that many similar elements will represent duplications of technology. Reduction analysis is an optional step in threat modeling that prevents duplication of effort. It doesn’t make sense to spend a lot of time analyzing different components in an environment if all of them have the same technology and configuration.
Here are typical examples:
As in routine risk analysis, the next step in threat analysis is enumerating potential measures to mitigate the identified threat. Because the nature of threats varies widely, remediation may consist of carrying out one or more of the following tasks for each risk:
The principle of least privilege states that people should have the capability to perform only the tasks (or access only the data) required to perform their primary jobs — no more.
Giving a person more privileges and access than required increases risk and invites trouble. Offering the capability to perform more than the job requires may become a temptation that results, sooner or later, in an abuse of privilege.
Giving a user full permissions on a network share rather than just read and modify rights to a specific directory, for example, opens the door not only to abuse of those privileges (such as reading or copying other sensitive information on the network share), but also to costly mistakes (such as accidentally deleting a file — or the entire directory!). As a starting point, organizations should approach permissions with a “deny all” mentality and add needed permissions as required.
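Here's a tiny Python sketch of that deny-all starting point (the user name, share path, and permission names are made-up examples): a request is allowed only if a matching grant explicitly exists, so everything else is denied by default.

```python
GRANTS = {
    ("tjones", r"\\fileserver\hr\policies"): {"read", "modify"},
}

def is_allowed(user: str, resource: str, action: str) -> bool:
    # Deny all by default; permit only what has been explicitly granted.
    return action in GRANTS.get((user, resource), set())

print(is_allowed("tjones", r"\\fileserver\hr\policies", "read"))    # True
print(is_allowed("tjones", r"\\fileserver\hr\policies", "delete"))  # False
print(is_allowed("tjones", r"\\fileserver\finance", "read"))        # False
```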
The concept of need to know states that only people with a valid business justification should have access to specific information or functions. In addition to having a need to know, a person must have an appropriate security clearance level to be granted access. Conversely, a person with the appropriate security clearance level but without a need to know should not be granted access.
One of the most difficult challenges in managing need to know is the use of controls that enforce the concept. Information owners need to be able to distinguish genuine need from curiosity and proceed accordingly.
Several important concepts associated with need to know and least privilege include
Aggregation: When people transfer between jobs and/or departments within an organization (see the section on job rotations later in this chapter), they often need different access and privileges to do their new jobs. Far too often, organizational security processes do not adequately ensure that access rights that a person no longer requires are revoked. Instead, people accumulate privileges, and over a period of many years, they can have far more access and privileges than they need. This process is known as aggregation, and it’s the antithesis of least privilege.
Privilege creep and accumulation of privileges are other terms commonly used in this context.
Defense in depth is a strategy for resisting attacks. A system that employs defense in depth has two or more layers of protective controls designed to protect the system or data stored there.
An example defense-in-depth architecture would consist of a database protected by several components, such as
All the layers listed here help protect the database. In fact, each by itself offers nearly complete protection. But when considered together, all these controls offer a varied (in effect, deeper) defense — hence, the term defense in depth.
True defense in depth employs heterogeneous, versus homogeneous, protection. Employing two back-to-back firewalls of the same make and model, for example, constitutes a poor implementation of defense in depth: a security flaw in one of the firewalls is likely to be present in the other one. A better example of defense in depth would be back-to-back firewalls of different makes (such as one made by Cisco and the other made by Palo Alto Networks); a security flaw in one is unlikely to be present in the other.
The concept of secure defaults encompasses several techniques, including
These techniques ensure that the design of new information systems includes inherent security in all phases of development and implementation. When the techniques are performed correctly, little or no retrofit to a system will be required after it is tested by security specialists who use techniques such as threat modeling and penetration testing.
Fail securely is a concept that describes the result of the failure of a control or safeguard. A control or safeguard is said to fail securely if its failure does not result in a reduction in protection. Consider a door that is used to control personnel access to a secure location. If the mechanism used to admit authorized personnel to the secure location fails, the door should remain locked, meaning that it is secure and continues to block unauthorized access.
Fail securely replaces the terms fail open and fail closed. These two older terms were sometimes confusing, depending on the context of a control. In some examples, failing open was secure, but in other examples, failing closed was secure. The confusion was not unlike the use of a double negative, such as a security door that is not secure in certain circumstances. Conversations that included fail open and fail closed often digressed into discussions of the meaning of the terms and whether failing open or failing closed was good or bad. Fortunately, fail securely came to the rescue, helping us better understand the context of a conversation.
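Here's a minimal Python sketch of the fail-securely pattern applied to the door example (the badge-check function is a hypothetical stand-in for real reader hardware): if the control itself breaks, the outcome is still "locked."

```python
def badge_is_authorized(badge_id: str) -> bool:
    raise ConnectionError("badge reader offline")  # simulated control failure

def should_unlock(badge_id: str) -> bool:
    try:
        return badge_is_authorized(badge_id)
    except Exception:
        return False  # fail securely: any error leaves the door locked

print(should_unlock("4411"))  # False: the failure doesn't reduce protection
```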
The concept of separation of duties (SoD, or segregation of duties and responsibilities) ensures that no single person has complete authority and control of a critical system or process. SoD is discussed further in Chapter 9.
It is often said that complexity is the enemy of security and, conversely, that simplicity is the friend of security. These adages reflect the realization that more complex environments are inherently more difficult to secure, and the security posture of such an environment is harder to understand because of the higher number of components.
In information security, simplicity often calls for consistency of approach to system and data protection. Elegance of design is another way to think about simplicity. In security, less is more: Given two identical environments, the one with a simple yet effective design will be easier for engineers to understand than a complex architecture.
Security engineers and specialists often call on the KISS (Keep It Simple, Stupid) principle. No, we’re not calling you or anyone stupid. We didn’t make up this principle, but we do see it cited often.
The concept of zero trust has been around for a long time but is now gaining a lot of favor. Zero trust (ZT) is a popular buzzword these days, although it is not always well understood. We want you to be buzzword-compliant, so read on to find out more.
Zero trust is an about-face to the earlier notion that all devices within an organization’s network were considered to be trustworthy. Organizations have been compromised countless times because of this fateful assumption, often because attackers found it way too easy to attack trusted systems and endpoints; they usually gained carte blanche access to other systems because the compromised system was considered to be trustworthy.
Zero trust is not a product, tool, or technique; it’s a design principle that is implemented in different ways to ensure that systems retain their security and integrity. Here are some examples of zero trust in action:
Privacy (as we discuss more fully in Chapter 3) includes measures not only to protect information about people, but also to ensure the proper uses of personal information. Focusing on proper use here, the principle of privacy by design ensures that information systems have several capabilities, including
Since the passage of recent privacy laws (generally starting with the European General Data Protection Regulation [GDPR]), it’s not enough for organizations simply to protect personal information. Now organizations must build structures that provide visibility into and control of the uses of personal information so that organizations do not run afoul of these new laws.
We’ll further explain some of the preceding terms. Organizations are realizing that the consequences of failing to use and protect personal information properly are climbing rapidly, with potential fines that can wipe out an organization’s profitability. New privacy laws incentivize organizations to remove personal information from their databases as soon as that information is no longer needed. The rights of data subjects to opt out and to be forgotten can compel organizations to build mechanisms to remove them from their records. Techniques that organizations can use include
Although pseudonymization has many uses, it should be distinguished from anonymization. Pseudonymization may provide only limited protection for the identity of data subjects because it may still allow identification by indirect means: where a pseudonym is used, it may be possible to identify the data subject by analyzing the underlying or related data. When done properly, however, these two techniques constitute the effective removal of a data subject from an organization’s records.
The concept trust but verify was made popular in the 1980s, when President Ronald Reagan enacted a treaty with the Soviet Union that included provisions for each country to not only enforce the limitation of nuclear armaments, but also inspect the other’s nuclear arsenal to confirm compliance with the treaty.
In information security, the principle means that certain controls or mechanisms should be examined or tested periodically to ensure that they comply with policies or requirements. Although examining or testing a system is an operational activity performed on a system after it has been designed and implemented, the design of a system should permit it to be examined. Here are a couple of examples:
This is a fundamental truth that is not universally understood: Cloud providers do not take care of information security — not all of it, anyway. More breaches and information leaks than we can count have occurred because organizations and people did not understand this concept (and because of lack of training and plain old sloppiness).
Better cloud service providers — and by this, we mean Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) — have developed specific documents known as shared responsibility matrices, often in visual form, so that their customers have a clearer idea of what security controls are taken care of by the service provider and what controls are the responsibility of the customer. Sometimes, however, specific service providers don’t provide clear guidance, in which case a skilled information security specialist needs to examine the characteristics of the service and discern the responsibility boundaries. However you get to this clear determination, it’s critical that organizations understand precisely what they should be doing with regard to security and privacy and what the service provider is supposed to be doing.
Figures 5-2 and 5-3 show typical shared responsibility matrices from Amazon Web Services (AWS) and Microsoft Azure, respectively. Note that the matrices visually depict the areas in which AWS and Azure provide security and those in which customers are required to provide security.
FIGURE 5-2: AWS shared responsibility matrix.
FIGURE 5-3: Azure shared responsibility matrix.
Examples of what shared responsibility means at various levels for different cloud services include the following:
Security models help us understand complex security mechanisms in information systems by illustrating concepts that can be used to analyze an existing system or design a new one.
Models are used to express access control requirements in a theoretical or mathematical framework and precisely describe or quantify real access control systems. Several important access control models include
These models are discussed in the following sections.
The Biba integrity model (sometimes referred to as Bell-LaPadula upside down) was the first formal integrity model. Biba is a lattice-based model that addresses the first goal of integrity: ensuring that modifications to data aren’t made by unauthorized users or processes. (See Chapter 3 for a complete discussion of the three goals of integrity.) Biba defines the following two properties: the Simple Integrity Property (no read down: a subject can’t read an object of lower integrity) and the *-Integrity Property (no write up: a subject can’t modify an object of higher integrity).
The Bell-LaPadula model was the first formal confidentiality model of a mandatory access control system. (We discuss mandatory and discretionary access controls in Chapter 7.) It was developed for the U.S. Department of Defense (DoD) to formalize a multilevel security policy. As we discuss in Chapter 3, the DoD classifies information based on sensitivity at three basic levels: Confidential, Secret, and Top Secret. To access classified information (and systems), a person must have access (a clearance level equal to or exceeding the classification of the information or system) and need to know (a legitimate need to access the information to perform a required job function). The Bell-LaPadula model implements the access component of this security policy.
Bell-LaPadula is a state machine model that addresses only the confidentiality of information. The basic premise of the model is that information can’t flow downward — that is, that information at a higher level is not permitted to be copied or moved to a lower level. Bell-LaPadula defines the following two properties: the Simple Security Property (no read up: a subject can’t read an object of higher sensitivity) and the *-Property (no write down: a subject can’t write information to an object of lower sensitivity).
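Here's a tiny Python sketch of how a reference monitor might enforce these two properties (the numeric level ordering and function names are illustrative, not from any real system):

```python
LEVELS = {"Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(clearance: str, classification: str) -> bool:
    # Simple Security Property: no read up
    return LEVELS[clearance] >= LEVELS[classification]

def can_write(clearance: str, classification: str) -> bool:
    # *-Property: no write down
    return LEVELS[clearance] <= LEVELS[classification]

print(can_read("Secret", "Top Secret"))   # False: reading up is denied
print(can_write("Top Secret", "Secret"))  # False: writing down is denied
```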
Bell-LaPadula also defines two additional properties that give it the flexibility of a discretionary access control model:
An Access Matrix model, in general, provides object access rights (read/write/execute) to subjects in a discretionary access control (DAC) system. An access matrix consists of access control lists (columns) and capability lists (rows). See Table 5-1 for an example.
TABLE 5-1 An Access Matrix Example
| Subject/Object | Directory: H/R | File: Personnel | Process: LPD |
|---|---|---|---|
| Thomas | Read | Read/Write | Execute |
| Lisa | Read | Read | Execute |
| Harold | None | None | None |
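Here's a minimal Python sketch of Table 5-1 as a data structure: each subject's row acts as a capability list, and each object's column acts as an access control list. (The rights mirror the table; the lookup function is illustrative.)

```python
ACCESS_MATRIX = {
    "Thomas": {"Directory: H/R": {"read"},
               "File: Personnel": {"read", "write"},
               "Process: LPD": {"execute"}},
    "Lisa":   {"Directory: H/R": {"read"},
               "File: Personnel": {"read"},
               "Process: LPD": {"execute"}},
    "Harold": {},  # no rights to any object
}

def has_right(subject: str, obj: str, right: str) -> bool:
    # Unknown subjects or objects simply have no rights (default deny).
    return right in ACCESS_MATRIX.get(subject, {}).get(obj, set())

print(has_right("Thomas", "File: Personnel", "write"))  # True
print(has_right("Lisa", "File: Personnel", "write"))    # False
```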
A DAC system is one in which the owners of specific objects (typically, files and/or directories) can adjust access permissions at their discretion. No central administrator is needed to adjust permissions. The underlying OS enforces these access rights by permitting or denying access to specific objects.
A Mandatory Access Control (MAC) system is one that is controlled by a central administrator who determines access rights to objects. The OS enforces these access rights by permitting or denying access to specific objects.
Take-Grant systems specify the rights that a subject can transfer to or from another subject or object. These rights are defined through four basic operations: create, revoke, take, and grant.
The Clark-Wilson integrity model establishes a security framework for use in commercial activities, such as the banking industry. Clark-Wilson addresses all three goals of integrity and identifies special requirements for inputting data based on the following items and procedures:
The Clark-Wilson integrity model is based on the concept of a well-formed transaction, in which a transaction is sufficiently ordered and controlled that it maintains internal and external consistency.
An Information Flow model is a type of access control model based on the flow of information rather than on imposing access controls. Objects are assigned a security class and value, and their direction of flow — from one application to another or from one system to another — is controlled by a security policy. This model type is useful for analyzing covert channels through detailed analysis of the flow of information in a system, including the sources of information and the paths of flow.
A Noninterference model ensures that the actions of different objects and subjects aren’t seen by (and don’t interfere with) other objects and subjects on the same system.
Designing and building secure software is critical to information security, but the systems that software runs on must themselves be securely designed and built. Selecting appropriate controls is essential to designing a secure computing architecture. Numerous systems security evaluation models exist to help you select the right controls and countermeasures for your environment.
Various security controls and countermeasures that should be applied to security architecture, as appropriate, include defense in depth, system hardening, implementation of heterogeneous environments, and designing system resilience. Often, these controls are enacted based upon high-level requirements that are usually determined by the context or use of a system. When baseline controls are chosen and implemented, the risk management life cycle (discussed in Chapter 3) will, over time, determine the need for additional controls as well as changes to existing controls.
Examples of contexts and uses of information systems include
Evaluation criteria provide a standard for quantifying the security of a computer system or network. These criteria include the Trusted Computer System Evaluation Criteria (TCSEC), Trusted Network Interpretation (TNI), European Information Technology Security Evaluation Criteria (ITSEC), and the Common Criteria.
The Trusted Computer System Evaluation Criteria (TCSEC), commonly known as the Orange Book, is part of the Rainbow Series developed for the U.S. DoD by the National Computer Security Center (NCSC). It’s the formal implementation of the Bell-LaPadula model. The evaluation criteria were developed to achieve the following objectives:
The four basic control requirements identified in the Orange Book are
Covert channel analysis: TCSEC requires covert channel analysis that detects unintended communication paths not protected by a system’s normal security mechanisms. A covert storage channel conveys information by altering stored system data. A covert timing channel conveys information by altering a system resource’s performance or timing.
A systems or security architect must understand covert channels and how they work to prevent the use of covert channels in the system environment.
These classes are further defined in Table 5-2.
TABLE 5-2 TCSEC Classes
| Class | Name | Sample Requirements |
|---|---|---|
| D | Minimal protection | These requirements are reserved for systems that fail evaluation. |
| C1 | Discretionary protection (DAC) | The system doesn’t need to distinguish between individual users and types of access. |
| C2 | Controlled access protection (DAC) | The system must distinguish between individual users and types of access; object reuse security features are required. |
| B1 | Labeled security protection (MAC) | Sensitivity labels are required for all subjects and storage objects. |
| B2 | Structured protection (MAC) | Sensitivity labels are required for all subjects and objects; trusted path requirements apply. |
| B3 | Security domains (MAC) | Access control lists are specifically required; system must protect against covert channels. |
| A1 | Verified design (MAC) | Formal top-level specification is required; configuration management procedures must be enforced throughout the entire system life cycle. |
| Beyond A1 | | Self-protection and reference monitors are implemented in the Trusted Computing Base, which is verified to source-code level. |
Major limitations of the Orange Book include the following:
Part of the Rainbow Series, like TCSEC (discussed in the preceding section), Trusted Network Interpretation (TNI) addresses confidentiality and integrity in trusted computer/communications network systems. Within the Rainbow Series, it’s known as the Red Book.
Part I of the TNI is a guideline for extending the system protection standards defined in the TCSEC (the Orange Book) to networks. Part II of the TNI describes additional security features such as communications integrity, protection from denial of service, and transmission security.
Unlike TCSEC, the European Information Technology Security Evaluation Criteria (ITSEC) addresses confidentiality, integrity, and availability, as well as evaluating an entire system, defined as a target of evaluation (TOE) rather than a single computing platform.
ITSEC evaluates functionality (security objectives, or why; security-enforcing functions, or what; and security mechanisms, or how) and assurance (effectiveness and correctness) separately. The 10 functionality (F) classes and 7 evaluation (E) (assurance) levels are listed in Table 5-3.
TABLE 5-3 ITSEC Functionality (F) Classes and Evaluation (E) Levels Mapped to TCSEC Levels
| (F) Class | (E) Level | Description |
|---|---|---|
| NA | E0 | Equivalent to TCSEC level D |
| F-C1 | E1 | Equivalent to TCSEC level C1 |
| F-C2 | E2 | Equivalent to TCSEC level C2 |
| F-B1 | E3 | Equivalent to TCSEC level B1 |
| F-B2 | E4 | Equivalent to TCSEC level B2 |
| F-B3 | E5 | Equivalent to TCSEC level B3 |
| F-B3 | E6 | Equivalent to TCSEC level A1 |
| F-IN | NA | TOEs with high integrity requirements |
| F-AV | NA | TOEs with high availability requirements |
| F-DI | NA | TOEs with high integrity requirements during data communication |
| F-DC | NA | TOEs with high confidentiality requirements during data communication |
| F-DX | NA | Networks with high confidentiality and integrity requirements |
The Common Criteria for Information Technology Security Evaluation (usually called Common Criteria) is an international effort to standardize and improve existing European and North American evaluation criteria. The Common Criteria has been adopted as an international standard in ISO/IEC 15408. The Common Criteria defines eight evaluation assurance levels, which are listed in Table 5-4.
System certification is a formal methodology for comprehensive testing and documentation of information system security safeguards, both technical and nontechnical, in a given environment by using established evaluation criteria (such as the TCSEC).
TABLE 5-4 The Common Criteria
| Level | TCSEC Equivalent | ITSEC Equivalent | Description |
|---|---|---|---|
| EAL0 | N/A | N/A | Inadequate assurance |
| EAL1 | N/A | N/A | Functionally tested |
| EAL2 | C1 | E1 | Structurally tested |
| EAL3 | C2 | E2 | Methodically tested and checked |
| EAL4 | B1 | E3 | Methodically designed, tested, and reviewed |
| EAL5 | B2 | E4 | Semiformally designed and tested |
| EAL6 | B3 | E5 | Semiformally verified design and tested |
| EAL7 | A1 | E6 | Formally verified design and tested |
Accreditation is official, written approval of the operation of a specific system in a specific environment, as documented in the certification report. Accreditation is normally granted by a senior executive or designated approving authority (DAA), a term used in the U.S. military and government. This DAA is normally a senior official, such as a commanding officer.
System certification and accreditation must be updated when any changes are made in the system or environment, and they must be revalidated periodically, typically every three years.
The certification and accreditation process has been formally implemented in U.S. military and government organizations as the Defense Information Technology Security Certification and Accreditation Process (DITSCAP) and National Information Assurance Certification and Accreditation Process (NIACAP), respectively. U.S. government agencies that use cloud-based systems and services are required to undergo FedRAMP or Cybersecurity Maturity Model Certification (CMMC) certification and accreditation processes (described in this chapter). These important processes are used to make sure that a new or changed system has the proper design and operational characteristics and is suitable for a specific task.
The Defense Information Technology Security Certification and Accreditation Process (DITSCAP) formalizes the certification and accreditation process for U.S. DoD information systems through four distinct phases:
The National Information Assurance Certification and Accreditation Process (NIACAP) formalizes the certification and accreditation process for U.S. government national security information systems. NIACAP consists of four phases — definition, verification, validation, and post accreditation — that generally correspond to the DITSCAP phases. Additionally, NIACAP defines three types of accreditation:
The Federal Risk and Authorization Management Program (FedRAMP) is a standardized approach to assessments, authorization, and continuous monitoring of cloud-based service providers. This program represents a change from controls-based security to risk-based security.
The Cybersecurity Maturity Model Certification (CMMC) is an assessment program used to evaluate the security of service providers that provide information system-related services to U.S. government agencies. CMMC is aligned with the NIST SP 800-171 (“Protecting Controlled Unclassified Information in Nonfederal Systems and Organizations”) standard.
The Director of Central Intelligence Directive 6/3 is the process used to protect sensitive information that’s stored on computers used by the U.S. Central Intelligence Agency.
Basic concepts related to security architecture include the Trusted Computing Base, Trusted Platform Module, secure modes of operation, open and closed systems, protection rings, security modes, and recovery procedures.
A Trusted Computing Base (TCB) is the entire complement of protection mechanisms within a computer system (including hardware, firmware, and software) that’s responsible for enforcing a security policy. A security perimeter is the boundary that separates the TCB from the rest of the system.
Access control is the ability to permit or deny the use of an object (a passive entity, such as a system or file) by a subject (an active entity, such as a person or process).
A reference monitor is a system component that enforces access controls on an object. Stated another way, a reference monitor is an abstract machine that mediates all access to an object by a subject.
A Trusted Platform Module (TPM) performs sensitive cryptographic functions on a physically separate, dedicated microprocessor. The TPM specification was written by the Trusted Computing Group and is an international standard (ISO/IEC 11889 Series).
A TPM generates and stores cryptographic keys and performs the following functions:
Common TPM uses include ensuring platform integrity, full disk encryption, password and cryptographic key protection, and digital rights management.
Security modes are used in MAC systems to enforce different levels of security. Techniques and concepts related to secure modes of operation include
An open system is a vendor-independent system that complies with a published and accepted standard. This compliance with open standards promotes interoperability between systems and components made by different vendors. Additionally, open systems can be independently reviewed and evaluated, which facilitates the identification of bugs and vulnerabilities and the rapid development of solutions and updates. Examples of open systems include the Linux OS, the OpenOffice desktop productivity suite, and the Apache web server.
A closed system uses proprietary hardware and/or software that may not be compatible with other systems or components. Source code for software in a closed system normally isn’t available to customers or researchers. Examples of closed systems include the Microsoft Windows OS, the Oracle database management system, and Apple’s iTunes.
Virtually all of today’s OSes are multiprocessing — that is, several processes can occupy system memory and be processing simultaneously. OSes employ a means of process isolation so that each process is prevented from accessing memory allocated to all other processes. Although process isolation is automatic and usually considered to be effective, some species of malware have been able to exploit OS kernel weaknesses and access memory allocated to other processes. For this reason, it’s often wise to employ obfuscation techniques or encryption to hide sensitive data in memory, such as encryption keys, and to deallocate or overwrite those memory locations when such data is no longer needed.
Encryption and decryption can be thought of as being forms of access control, wherein data is converted to ciphertext with an encryption key; any person who is in possession of the correct decryption key may access the plaintext form of this information, but any person who lacks the decryption key may not access it. Encryption and decryption concepts are discussed later in this chapter.
The concept of protection rings implements multiple concentric domains with increasing levels of trust near the center. The most privileged ring is identified as Ring 0 and normally includes the OS security kernel. Additional system components are placed in the appropriate concentric ring according to the principle of least privilege and to provide isolation, so that a breach of a component in one protection ring does not automatically provide access to components in more privileged rings. The MIT Multics OS (whose ashes gave rise to Unix) implemented the concept of protection rings in its architecture, as did Novell NetWare. Figure 5-4 depicts an operating system protection ring model.
FIGURE 5-4: Protection rings provide layers of defense in a system.
A system’s security mode of operation describes how a system handles stored information at various classification levels. Several security modes of operation, based on the classification level of information being processed on a system and the clearance level of authorized users, have been defined. These designations, typically used for U.S. military and government systems, include
Security modes of operation generally come into play in environments that contain highly sensitive information, such as government and military environments. Most private and education systems run in multilevel mode, meaning they contain information at all sensitivity levels. See Chapter 3 for more on security clearance levels.
A hardware or software failure can potentially compromise a system’s security mechanisms. Security designs that protect a system during a hardware or software failure include
In this section, we discuss the techniques used to identify and fix vulnerabilities in systems. We will also briefly discuss techniques for security assessments and testing, which are fully explored in Chapter 8.
Unless detected (and corrected) by an experienced security analyst, many weaknesses may be present in a system and permit exploitation, attack, or malfunction. These vulnerabilities include
Race conditions: Software code in multiprocessing and multiuser systems, unless it’s very carefully designed and tested, can result in critical errors that are difficult to find. A race condition is a flaw in a system where the output or result of an activity in the system is unexpectedly tied to the timing of other events. The term race condition comes from the idea of two events or signals racing to influence an activity.
The most common race condition is the time-of-check-to-time-of-use bug, caused by changes in a system between the checking of a condition and the use of the results of that check. Two programs that both try to open a file for exclusive use may open the file, even though only one should be able to do so.
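Here's a short Python sketch of a time-of-check-to-time-of-use bug and one common fix (the file path is a made-up example): the vulnerable version leaves a window between the check and the use, and the safer version collapses check and use into a single atomic operation.

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "report.txt")

# Vulnerable: the file system can change between the check and the use.
if not os.path.exists(path):       # time of check
    with open(path, "w") as f:     # time of use: an attacker can win this race
        f.write("data")

os.remove(path)

# Safer: check and use happen in one atomic call; O_EXCL makes os.open
# fail with FileExistsError if someone else created the file first.
fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
with os.fdopen(fd, "w") as f:
    f.write("data")
```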
The design vulnerabilities often found on endpoints involve defects in client-side code in browsers and applications. The defects most often found include the following:
Other weaknesses may be present in client systems. For a complete understanding of application weaknesses, consult https://owasp.org.
Identifying weaknesses like the preceding examples requires using one or more of the following techniques:
Design vulnerabilities found on servers are the same as for client-based systems, discussed in the preceding section. The terms client and server have to do only with perspective. In both cases, software is running on a system.
Database management systems are nearly as complex as the OSes on which they reside. Vulnerabilities in database management systems include
Database security defects can be identified through manual examination or automated tools. Mitigation may be as easy as changing access permissions or as complex as redesigning the database schema and related application software programs.
Cryptographic systems are especially apt to contain vulnerabilities, for the simple reason that people focus on the cryptographic algorithm but fail to implement it properly. Like any powerful tool, a cryptographic system is useless at best and dangerous at worst if the operator doesn’t know how to use it.
Following are some ways in which a cryptographic system may be vulnerable:
These and other vulnerabilities in cryptographic systems can be detected and mitigated through peer reviews of cryptosystems, assessments by qualified external parties, and application of corrective actions to defects.
Industrial control systems (ICSes) represent a wide variety of means for monitoring and controlling machinery of various kinds, including power generation, distribution, and consumption; natural gas and petroleum pipelines; municipal water, irrigation, and waste systems; traffic signals; manufacturing; and package distribution.
Other related terms in common use include
Weaknesses in industrial control systems include the following:
These vulnerabilities can be mitigated through a systematic process of establishing good controls, testing control effectiveness, and applying corrective action when controls are found to be ineffective.
The U.S. National Institute of Standards and Technology (NIST) defines three cloud computing service models as follows:
NIST defines four cloud computing deployment models:
Major public cloud service providers such as AWS, Azure, Google Cloud Platform, and Oracle Cloud Platform provide customers not only virtually unlimited compute and storage at scale, but also security capabilities that often exceed the capabilities of the customers themselves. These services do not mean that cloud-based systems are inherently secure, however. The shared responsibility model is used by public cloud service providers to clearly define which aspects of security the provider is responsible for and which aspects the customer is responsible for. SaaS models place the most responsibility on the cloud service provider, typically including securing the following:
The customer is always ultimately responsible for the security and privacy of its data. Additionally, identity and access management (IAM) is typically the customer’s responsibility.
In a PaaS model, the customer is typically responsible for the security of its applications and data, as well as identity and access management.
In an IaaS model, the customer is typically responsible for the security of its applications and data, run-time software and middleware, and OSes. The cloud service provider is typically responsible for the security of networking and the data center (although cloud service providers generally do not provide firewalls). Virtualization, server, and storage security may be managed by either the cloud service provider or the customer.
Distributed systems are systems with components scattered throughout physical and logical space. Often, these components are owned and/or managed by different groups or organizations, sometimes in different countries. Some components may be privately used, and others represent services available to the public (such as Google Maps). Vulnerabilities in distributed systems include
All these weaknesses can be present in simpler environments. These weaknesses and other defects can be detected through the use of security scanning tools or manual techniques, and corrective actions can be taken to mitigate those defects.
The security of Internet of Things (IoT) devices and systems is a rapidly evolving area of information security. IoT sensors and devices collect large amounts of both potentially sensitive data and seemingly innocuous data, and they are used to control physical systems and environments. Under certain circumstances, however, practically any data that is collected can be used for nefarious purposes, and devices can be subverted to affect physical environments. As a result, security must be a critical design consideration for IoT devices and systems; it encompasses not only securing the data stored on the systems, but also the ways in which the data is collected, transmitted, processed, and used.
Many networking and communications protocols are commonly used in IoT devices, including the following:
The security of these various protocols and their implementations must be carefully considered in the design of secure IoT devices and systems.
Microservices represent a variety of software-based services running on systems in a distributed environment. Using older technology terms, you could consider microservices to be like software program subroutines that run on various systems and are written in various computer languages. Put another way, you could think of microservices as being a more mature form of mashups, which are web applications that use content from various sources displayed through a single user interface.
Microservices are generally developed and deployed with a DevOps or DevSecOps model; typically, they communicate by using standard message-based protocols such as HTTP.
Vulnerabilities in microservices appear in several ways, including the following:
It’s imperative that microservices environments be fully included in all traditional IT service management processes so that they are actively managed, protected, and monitored, just like all other types of server and endpoint OSes, subsystems, software, and source code.
Containers are relatively new innovations in virtualization environments. Instead of running multiple instantiations of software programs in their own virtual OS machines, programs are run in isolated containers within a single OS instance. The practice of building and managing containers is known as containerization.
For the purposes of information security, you can think of containerization as being like virtualization. Vulnerabilities can exist in several layers, including
Serverless computing is a cloud-native development model in which virtual infrastructure (such as virtual machines or containers) is abstracted from developers, allowing them to build and run applications without having to manage the underlying infrastructure. Serverless applications are deployed using container services such as Kubernetes that launch automatically on demand. When an event triggers code to run, the cloud service provider dynamically allocates resources, and when the code finishes executing, those resources are released. This system brings cost and resource efficiencies while also freeing developers from routine tasks, such as application scaling and server provisioning. The term serverless computing is something of a misnomer, as a server OS and infrastructure indeed exist, but they are abstracted away from the customer and provisioned, scaled, and managed by the service provider.
Using serverless applications requires a paradigm shift in how organizations approach security. Instead of building security around the application infrastructure, the developers need to build security around the functions within the applications hosted by the cloud service provider. There are two major security areas of serverless cloud infrastructure that require special attention: secure coding and identity and access management.
Vulnerabilities in a serverless environment are the same as those in software of every other type. Software developers must be trained in secure software development, and tooling must be used to identify source-code defects that must be fixed before the software is placed into production. The serverless environment must be actively monitored for security events that could be signs of intrusion.
A serverless environment must include hardened authentication controls to prevent successful intrusions by attackers. The security of the most hardened software is all for naught if the administrative interface is exposed to the Internet with simple authentication and credentials such as admin/admin.
Embedded systems encompass the wide variety of systems and devices that are Internet-connected. Mainly, we’re talking about devices that are not human-connected in the computing sense. Examples of such devices include
These devices often run embedded systems, which are specialized OSes designed to run on devices that lack computerlike human interaction through a keyboard or display. The devices still have an OS that is very similar to that found on endpoints such as laptops and mobile devices.
Design defects in this class of devices include
Because the majority of these devices cannot be altered, mitigation of these defects typically involves isolating these devices on separate, heavily guarded networks that have tools in place to detect and block attacks.
High-performance computing (HPC) refers to the use of supercomputers or grid computing to solve problems that require computationally intensive processing. Topics addressed by HPC include weather forecasting and climatology, quantum mechanics, oil and gas exploration, seismology, and cryptanalysis.
HPC systems are generally characterized by having large numbers of CPUs and large amounts of memory, facilitating a high number of floating-point operations per second. Historically, HPC systems used specialized operating systems, but increasingly, Linux is used.
HPC environments use some form of parallel processing, in which computational tasks are distributed across large numbers of processors. Either a single program will execute across multiple threads, or several programs communicate by using some form of inter-process communication.
Edge computing refers to the architecture of a highly distributed environment in which computing resources are deployed near the edges of the environment, close to where data is acquired from outside. Edge computing is all about server placement in a network to reduce latency and improve performance.
The vulnerabilities in an edge computing environment are virtually the same as in any other, including
Virtualization is the practice of implementing multiple instances of OSes in a single hardware platform. Virtualization makes the use of computing hardware more efficient and flexible. But organizations must be mindful of certain risks associated with virtualization, including
Virtual desktop infrastructure is the practice of implementing centrally stored and managed desktop OSes that execute on individual endpoints. This practice can reduce the cost of endpoint management, as well as prevent information leakage by keeping sensitive data on central servers. Endpoints assume the role of terminals, and all processing and data manipulation is performed on servers.
Web-based systems contain many components, including application code, database management systems, OSes, middleware, and the web-server software itself. These components may, individually and collectively, have security design or implementation defects. Some of those defects are
http://bank.com/transfer?tohackeraccount:amount=99999.99

These vulnerabilities can be mitigated in three main ways:
Mobile systems include OSes and applications on smartphones, tablets, phablets, smart watches, and wearables. The most popular OS platforms for mobile systems are Apple iOS, Android, and Windows 10.
The vulnerabilities of mobile systems include
In a managed corporate environment, the use of an MDM system can mitigate many or all of these risks. But individual users must do the right thing by using strong security settings.
Cryptography (from the Greek kryptos, meaning hidden, and graphia, meaning writing) is the science of encrypting and decrypting communications to make them incomprehensible to all but the intended recipient.
Cryptography can be used to achieve several goals of information security:
Cryptography has evolved into a complex science (some people say an art), presenting many great promises and challenges in the field of information security. The basics of cryptography include various terms and concepts, the individual components of the cryptosystem, and the classes and types of ciphers.
A plaintext message is a message in its original readable format or a ciphertext message that has been properly decrypted (unscrambled) to produce the original readable plaintext message.
A ciphertext message is a plaintext message that has been transformed (encrypted) into a scrambled message that’s unintelligible. This term doesn’t apply to messages from your boss, which may also happen to be unintelligible.
Encryption (or enciphering) is the process of converting plaintext communications to ciphertext. Decryption (or deciphering) reverses that process, converting ciphertext to plaintext. (See Figure 5-5.)
FIGURE 5-5: Encryption and decryption.
Traffic on a network can be encrypted via end-to-end or link encryption.
With end-to-end encryption, packets are encrypted once at the original encryption source and then decrypted only at the final decryption destination. The advantages of end-to-end encryption are speed and overall security. For the packets to be routed properly, however, only the data is encrypted, not the routing information.
Link encryption requires each node (such as a router) to have separate key pairs for its upstream and downstream neighbors. Packets are encrypted and decrypted, and then re-encrypted at every node along the network path.
The following example, as shown in Figure 5-6, illustrates link encryption:
FIGURE 5-6: Link encryption.
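Here's a simplified Python sketch of the idea, assuming the third-party cryptography package is installed (the three-hop path and shared link keys are illustrative): the whole packet is encrypted on every link and briefly exposed inside every node.

```python
from cryptography.fernet import Fernet

link_keys = [Fernet(Fernet.generate_key()) for _ in range(3)]  # one key per link

packet = b"payload plus routing information"
for link in link_keys:
    on_the_wire = link.encrypt(packet)   # the entire packet is encrypted here
    packet = link.decrypt(on_the_wire)   # ...and decrypted inside the next node

print(packet == b"payload plus routing information")  # True at the destination
```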
The advantage of using link encryption is that the entire packet (including routing information) is encrypted. But link encryption has two disadvantages:
A cryptosystem is the hardware or software implementation that transforms plaintext to ciphertext (encrypting it) and back to plaintext (decrypting it).
An effective cryptosystem must have the following properties:
The encryption and decryption process is efficient for all possible keys within the cryptosystem’s keyspace.
A keyspace is the range of all possible values for a key in a cryptosystem.
Cryptosystems are typically composed of two basic elements:
Key clustering occurs when identical ciphertext messages are generated from a plaintext message with the same encryption algorithm but different encryption keys. Key clustering indicates a weakness in a cryptographic algorithm because it statistically reduces the number of key combinations that must be attempted in a brute-force attack.
Ciphers are cryptographic transformations. The two main classes of ciphers used in symmetric key algorithms are block and stream (see “Cryptographic Methods,” later in this chapter), which describe how the ciphers operate on input data.
Block ciphers operate on a single fixed block (typically, 128 bits) of plaintext to produce the corresponding ciphertext. When a given key is used in a block cipher, the same plaintext block always produces the same ciphertext block. Advantages of block ciphers compared with stream ciphers are
Block ciphers are typically implemented in software. Examples of block ciphers include AES, DES, Blowfish, Twofish, and RC5.
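Here's a quick Python demonstration of that same-block, same-ciphertext property, assuming the third-party cryptography package is installed. (ECB mode is used only because it makes the property easy to see; that very property is why ECB shouldn't be used to protect real data.)

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)  # a random 256-bit AES key
encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()

block = b"16-byte block..."   # exactly one 128-bit block
c1 = encryptor.update(block)  # encrypt the block once...
c2 = encryptor.update(block)  # ...and then encrypt it again
print(c1 == c2)               # True: same key + same block = same ciphertext
```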
Stream ciphers operate in real time on a continuous stream of data, typically bit by bit. Stream ciphers generally work faster than block ciphers and require less code to implement. But the keys in a stream cipher are generally used only once (see the nearby sidebar “A disposable cipher: The one-time pad”) and then discarded, which makes key management a serious challenge. When a stream cipher is used, the same plaintext bit or byte produces a different ciphertext bit or byte every time it is encrypted. Stream ciphers are typically implemented in hardware.
Examples of stream ciphers include Salsa20 and RC4.
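By contrast, here's a quick Python sketch of a stream cipher, using ChaCha20 (a successor to Salsa20) from the same third-party cryptography package: because the keystream keeps advancing, a repeated plaintext byte encrypts differently each time.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

key, nonce = os.urandom(32), os.urandom(16)  # ChaCha20 key and nonce sizes
encryptor = Cipher(algorithms.ChaCha20(key, nonce), mode=None).encryptor()

c1 = encryptor.update(b"A")  # encrypt the same plaintext byte twice
c2 = encryptor.update(b"A")
print(c1 == c2)              # almost certainly False: the keystream moved on
```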
The two basic types of ciphers are substitution and transposition. Both are involved in the process of transforming plaintext into ciphertext.
Substitution ciphers replace bits, characters, or character blocks in plaintext with alternate bits, characters, or character blocks to produce ciphertext. A classic example of a substitution cipher is one that Julius Caesar used: He swapped letters of the message with other letters from the same alphabet. In a simple substitution cipher using the standard English alphabet, a cryptovariable (key) is added modulo 26 to the plaintext message. In modulo 26 addition, the remainder is the final result for any sum equal to or greater than 26. A basic substitution cipher in which the word “boy” is encrypted by adding three characters using modulo 26 math produces the following result:
| Plaintext | b | o | y |
|---|---|---|---|
| Numeric value | 2 | 15 | 25 |
| Substitution value (add) | +3 | +3 | +3 |
| Modulo 26 result | 5 | 18 | 2 |
| Ciphertext | e | r | b |
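Here's a short Python function that reproduces this modulo 26 arithmetic, using the same a=1 through z=26 numbering as the table:

```python
def substitution_encrypt(plaintext: str, key: int = 3) -> str:
    """Encrypt letters by modulo 26 addition, numbering a=1 through z=26."""
    ciphertext = []
    for ch in plaintext.lower():
        value = ord(ch) - ord("a") + 1        # b=2, o=15, y=25, and so on
        shifted = (value + key - 1) % 26 + 1  # wrap any sum of 26 or more
        ciphertext.append(chr(shifted + ord("a") - 1))
    return "".join(ciphertext)

print(substitution_encrypt("boy"))  # erb, matching the worked example above
```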
A substitution cipher may be
A more modern example of a substitution cipher is the S-boxes (substitution boxes) employed in the Data Encryption Standard (DES) algorithm. The S-boxes in that algorithm produce a nonlinear substitution (6 bits in, 4 bits out).
Transposition ciphers rearrange bits, characters, or character blocks in plaintext to produce ciphertext. In a simple columnar transposition cipher, a message might be read horizontally but written vertically to produce the ciphertext, as in the following example,
THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG
written in nine columns as
THEQUICKB
ROWNFOXJU
MPSOVERTH
ELAZYDOG
and then transposed (encrypted) vertically as
TRMEHOPLEWSAQNOZUFVYIOEDCXROKJTGBUH
The original letters of the plaintext message are the same; only the order has been changed to achieve encryption.
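Here's a short Python sketch that reproduces this columnar transposition (spaces removed, nine columns, read out column by column):

```python
def columnar_transposition(plaintext: str, columns: int = 9) -> str:
    """Write the message in rows of `columns` letters, read it out by column."""
    text = plaintext.replace(" ", "")
    rows = [text[i:i + columns] for i in range(0, len(text), columns)]
    return "".join(row[col]              # read down each column...
                   for col in range(columns)
                   for row in rows
                   if col < len(row))    # ...skipping past the short last row

print(columnar_transposition("THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG"))
# TRMEHOPLEWSAQNOZUFVYIOEDCXROKJTGBUH, matching the example above
```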
DES performs permutations through the use of P-boxes (permutation boxes) to spread the influence of a plaintext character over many characters so that they’re not easily traced back to the S-boxes used in the substitution cipher.
Other types of ciphers include
The cryptographic life cycle is the sequence of events that occurs throughout the use of cryptographic controls in a system. These steps include
These steps are not altogether different from the selection, implementation, examination, and correction of any other type of security control in a network and computing environment. Like virtually any other components in a network and computing environment, components in a cryptosystem must be examined periodically to ensure that they are still effective and being operated properly.
Cryptographic methods include symmetric, asymmetric, elliptic curve, and quantum cryptography.
Symmetric key cryptography — also known as symmetric algorithm, secret key, single key, and private key cryptography — uses a single key to encrypt and decrypt information. Two parties (for our example, Thomas and Richard) can exchange an encrypted message by using the following procedure:
For an attacker (Harold) to read the message, he must do one of the following things:
The following list includes the main disadvantages of symmetric systems:
Symmetric systems also have many advantages:
Symmetric key algorithms include DES, Triple DES (3DES), Advanced Encryption Standard (AES), International Data Encryption Algorithm (IDEA), and Rivest Cipher 5 (RC5).
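Here's a minimal Python sketch of the single-key idea, assuming the third-party cryptography package is installed (Fernet is an AES-based symmetric construction): Thomas and Richard hold the same secret key, and securely exchanging that key is the classic challenge of symmetric systems.

```python
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()  # one key, somehow exchanged securely

ciphertext = Fernet(shared_key).encrypt(b"Meet me at the usual place")  # Thomas
plaintext = Fernet(shared_key).decrypt(ciphertext)                      # Richard
print(plaintext)  # b'Meet me at the usual place'
```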
In the early 1970s, the U.S. National Bureau of Standards (NBS, the predecessor of NIST) solicited vendors to submit encryption algorithm proposals to be evaluated by the National Security Agency in support of a national cryptographic standard. The new encryption standard would be used for private-sector and sensitive but unclassified government data. In 1974, IBM submitted a 128-bit algorithm known as Lucifer. After some modifications (the key was shortened to 56 bits, and the S-boxes were changed), the IBM proposal was endorsed by the National Security Agency and formally adopted as the DES. It was published in Federal Information Processing Standard (FIPS) PUB 46 in 1977 (updated and revised in 1988 as FIPS PUB 46-1) and American National Standards Institute (ANSI) X3.92 in 1981.
The DES algorithm is a symmetric (or private) key cipher consisting of an algorithm and a key. The algorithm is a 64-bit block cipher based on a 56-bit symmetric key. (The 64-bit key consists of 56 key bits plus 8 parity bits; alternatively, you can think of it as being 8 bytes, with each byte containing 7 key bits and 1 parity bit.) During encryption, the original message (plaintext) is divided into 64-bit blocks. Operating on a single block at a time, the algorithm splits each 64-bit plaintext block into two 32-bit blocks. Under control of the 56-bit symmetric key, 16 rounds of transpositions and substitutions are performed on each block to produce the ciphertext output.
The four distinct modes of operation (the mode of operation defines how the plaintext/ciphertext blocks are processed) in DES are Electronic Code Book, Cipher Block Chaining, Cipher Feedback, and Output Feedback.
The Triple Data Encryption Standard (3DES) effectively extended the life of the DES algorithm. In 3DES implementations, a message is encrypted by using one key, encrypted by using a second key, and then encrypted again by using either the first key or a third key.
In May 2002, NIST announced the Rijndael Block Cipher as the new standard to implement the Advanced Encryption Standard (AES), which replaced DES as the U.S. government standard for encrypting sensitive but unclassified data. AES was subsequently approved for encrypting classified U.S. government data up to the top secret level (using 192- or 256-bit key lengths).
The Rijndael Block Cipher, developed by Dr. Joan Daemen and Dr. Vincent Rijmen, uses variable block and key lengths (128, 192, or 256 bits) and 10 to 14 rounds. It was designed to be simple, resistant to known attacks, and fast. It can be implemented in either hardware or software and has relatively low memory requirements.
Until recently, the only known successful attacks against AES were side-channel attacks, which don’t attack the encryption algorithm directly; instead, they attack the system on which the encryption algorithm is implemented. Side-channel attacks using cache-timing techniques are most common against AES implementations. In 2009, a theoretical related-key attack against AES was published. The attack method is considered to be theoretical because although it reduces the mathematical complexity required to break an AES key, it is still well beyond the computational capability available today.
The Blowfish algorithm operates on 64-bit blocks, employs 16 rounds, and uses variable key lengths of up to 448 bits. The Twofish algorithm, a finalist in the AES selection process, is a symmetric block cipher that operates on 128-bit blocks, employing 16 rounds with variable key lengths up to 256 bits. Both Blowfish and Twofish were designed by Bruce Schneier (and others) and are freely available in the public domain. (Neither algorithm has been patented.) To date, no known successful cryptanalytic attacks have been made against either algorithm.
Dr. Ron Rivest, Dr. Adi Shamir, and Dr. Len Adleman invented the RSA (Rivest, Shamir, Adleman) algorithm and founded the company RSA Data Security. The Rivest ciphers are a series of symmetric algorithms that include the following:
Note: RC1 was never published, and RC3 was broken during development.
The International Data Encryption Algorithm (IDEA) Cipher evolved from the Proposed Encryption Standard and the Improved Proposed Encryption Standard, developed in 1990. IDEA is a block cipher that operates on 64-bit plaintext blocks by using a 128-bit key. IDEA performs eight rounds on 16-bit sub-blocks and can operate in four distinct modes similar to DES. The IDEA Cipher provides stronger encryption than RC4 and 3DES, but because it was patented, it never achieved wide adoption. (The patents expired in various countries between 2010 and 2012.) It is currently used in some software applications, including Pretty Good Privacy email.
Asymmetric key cryptography (also known as asymmetric algorithm cryptography or public-key cryptography) uses two separate keys: one key to encrypt and a different key to decrypt information. These keys are known as public and private key pairs. When two parties want to exchange an encrypted message by using asymmetric key cryptography, they follow these steps, shown in Figure 5-7:
Image courtesy of authors
FIGURE 5-7: Sending a message using asymmetric key cryptography.
Only the private key can decrypt the message; thus, an attacker (Harold) who possesses only the public key can’t decrypt the message. Not even the original sender can decrypt the message. This use of an asymmetric key system is known as a secure message. A secure message guarantees the confidentiality of the message.
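A hedged sketch of the secure message format, using RSA with OAEP padding from the Python cryptography package (the message and the Thomas/Richard roles are illustrative):

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

richard_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
richard_public = richard_private.public_key()   # Richard publishes this freely

# Thomas (or anyone, including Harold) can encrypt with the public key...
ciphertext = richard_public.encrypt(b"For Richard only", oaep)

# ...but only Richard's private key can decrypt the result.
assert richard_private.decrypt(ciphertext, oaep) == b"For Richard only"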
If the sender wants to guarantee the authenticity of a message (or, more correctly, the authenticity of the sender), they can digitally sign the message with this procedure, shown in Figure 5-8:
Image courtesy of authors
FIGURE 5-8: Verifying message authenticity using asymmetric key cryptography.
An attacker can also verify the authenticity of the message, of course. This use of an asymmetric key system is known as an open message format because it guarantees only authenticity, not confidentiality.
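The open message format can be sketched the same way. Here the sender signs a hash of the message with their private key, and anyone holding the public key can verify it (a hedged illustration, not a complete protocol):

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

thomas_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"I, Thomas, approve this transaction"

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = thomas_private.sign(message, pss, hashes.SHA256())

# Anyone (Richard or Harold) can verify with Thomas's public key;
# verify() raises InvalidSignature if the message was altered.
thomas_private.public_key().verify(signature, message, pss, hashes.SHA256())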
If the sender wants to guarantee both the confidentiality and authenticity of a message, they can do so by using this procedure, shown in Figure 5-9:
Image courtesy of authors
FIGURE 5-9: Encrypting and signing a message using asymmetric key cryptography.
If an attacker intercepts the message, they can apply the sender’s public key, but then they have an encrypted message that they can’t decrypt without the intended recipient’s private key. Thus, both confidentiality and authenticity are assured. This use of an asymmetric key system is known as a secure and signed message format.
A public key and a private key are mathematically related, but theoretically, no one can compute or derive the private key from the public key. This property of asymmetric systems is based on the concept of a one-way function. A one-way function is a problem that you can easily compute in one direction but not in the reverse direction. In asymmetric key systems, a trapdoor (private key) resolves the reverse operation of the one-way function.
A trapdoor one-way function is like a lock box that is supplied to the user in an opened configuration. Any user may place an item inside the box and then close the lid, which latches the lock closed as it does so. Only the person who has the key can then open the box to obtain the item inside. In this analogy, the lock box itself is the public key, and the key that opens the box is the private key.
Because of their complexity, asymmetric key systems are more commonly used for key management or digital signatures than for encryption of bulk information. Often, a hybrid system is employed, using an asymmetric system to securely distribute the secret keys of a symmetric key system that’s used to encrypt the data.
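A hedged sketch of such a hybrid scheme: the bulk data travels under a fast symmetric key, and only that small key is protected asymmetrically (the names and message are illustrative):

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Encrypt the bulk data with a fast symmetric session key...
session_key = Fernet.generate_key()
bulk_ciphertext = Fernet(session_key).encrypt(b"a large document " * 1000)

# ...then encrypt only the small session key with the recipient's public key.
wrapped_key = recipient.public_key().encrypt(session_key, oaep)

# The recipient unwraps the session key and decrypts the bulk data.
recovered_key = recipient.decrypt(wrapped_key, oaep)
plaintext = Fernet(recovered_key).decrypt(bulk_ciphertext)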
The main disadvantage of asymmetric systems is their lower speed. Because of the types of algorithms used to achieve one-way (trapdoor) functions, very large keys are required. (A 128-bit symmetric key has strength equivalent to that of a 2,304-bit asymmetric key.) Those large keys in turn require more computational power, causing a significant loss of speed (up to 10,000 times slower than a comparable symmetric key system).
Asymmetric systems also have many significant advantages, including
Asymmetric key algorithms include RSA, Diffie-Hellman Key Exchange, El Gamal, Merkle-Hellman (Trapdoor) Knapsack, and Elliptic Curve, which we talk about in the following sections.
The RSA algorithm is a key transport algorithm based on the difficulty of factoring a number that’s the product of two large prime numbers (each typically 1,024 bits or larger). Two users (Thomas and Richard) can securely transport symmetric keys by using RSA as follows:
Dr. Whitfield Diffie and Dr. Martin Hellman published a paper titled “New Directions in Cryptography” that detailed a new paradigm for secure key exchange based on discrete logarithms. Diffie-Hellman is described as a key agreement algorithm. Two users (Thomas and Richard, who have never met) can exchange symmetric keys by using Diffie-Hellman as follows and as depicted in Figure 5-10:
Image courtesy of authors
FIGURE 5-10: Diffie-Hellman key exchange is used to generate a symmetric key for two users.
Diffie-Hellman is vulnerable to man-in-the-middle attacks, in which an attacker intercepts the public keys during the initial exchange and substitutes their own, establishing a separate session key with each party so that the attacker can decrypt the session. (You can read more about these attacks in the section “Man-in-the-middle” later in this chapter.) A separate authentication mechanism is necessary to protect against this type of attack, ensuring that the two parties communicating in the session are in fact the legitimate parties.
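A hedged sketch of the exchange, using the Python cryptography package’s classic finite-field Diffie-Hellman primitives (parameter generation is slow at this key size; the Thomas/Richard names are ours):

from cryptography.hazmat.primitives.asymmetric import dh

# Both parties agree on public parameters (a large prime and a generator).
parameters = dh.generate_parameters(generator=2, key_size=2048)

thomas = parameters.generate_private_key()
richard = parameters.generate_private_key()

# Only the public halves cross the network; each side combines its own
# private key with the other's public key and derives the same secret.
secret_1 = thomas.exchange(richard.public_key())
secret_2 = richard.exchange(thomas.public_key())
assert secret_1 == secret_2

Note that nothing in this exchange proves who is on the other end, which is exactly the gap a man-in-the-middle exploits.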
El Gamal is an unpatented, asymmetric key algorithm based on the discrete logarithm problem used in Diffie-Hellman (discussed in the preceding section). El Gamal extends the functionality of Diffie-Hellman to include encryption and digital signatures.
The Merkle-Hellman (Trapdoor) Knapsack, published in 1978, employs a unique approach to asymmetric cryptography. It’s based on the problem of determining what items, in a set of items that have fixed weights, can be combined to obtain a given total weight. Knapsack was broken in 1982.
The elliptic curve discrete logarithm problem is far more difficult to solve than conventional discrete logarithm problems or factoring the product of large prime numbers. (A 160-bit EC key is equivalent to a 1,024-bit RSA key.) The use of smaller keys means that Elliptic Curve is significantly faster than other asymmetric algorithms (and many symmetric algorithms) and can be widely implemented in various hardware applications, including wireless devices and smart cards.
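For a sense of the practical difference, here’s a hedged sketch that signs with a 256-bit elliptic curve key via the Python cryptography package; an RSA key of comparable strength would be roughly an order of magnitude larger.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# A 256-bit EC key offers strength comparable to a much larger RSA key.
key = ec.generate_private_key(ec.SECP256R1())
signature = key.sign(b"small key, comparable strength", ec.ECDSA(hashes.SHA256()))

# verify() raises InvalidSignature if the message or signature is altered.
key.public_key().verify(signature, b"small key, comparable strength",
                        ec.ECDSA(hashes.SHA256()))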
Quantum computing is an emerging technology that uses the properties of quantum states to perform computation. Although quantum computing is still in its infancy, it may someday pose a significant threat to encryption: a sufficiently powerful quantum computer may eventually be able to break the most advanced encryption in very short periods of time.
Realizing that quantum computing may eventually be used to break cryptosystems, cryptographers are revisiting the designs of their cryptosystems and developing new ways to ensure that they can resist quantum-computing cryptanalysis.
A public key infrastructure (PKI) is an arrangement whereby a designated authority stores encryption keys or certificates associated with users and systems. (A certificate is an electronic document that uses the public key of an organization or person to establish identity, and a digital signature to establish authenticity.) A PKI enables secure communications by integrating digital signatures, digital certificates, and the other services necessary to ensure confidentiality, integrity, authentication, nonrepudiation, and access control.
Like physical keys, encryption keys must be safeguarded. Most successful attacks against encryption exploit some vulnerability in key management functions rather than some inherent weakness in the encryption algorithm. Following are the major functions associated with managing encryption keys:
A cryptoperiod is the length of time that an encryption key can be considered valid. Various factors influence the length of the cryptoperiod, including the length of the key and the strength of the encryption algorithm. When an encryption key has reached the end of its cryptoperiod, it should be discarded, and a new key should be generated. This process may require deciphering existing ciphertext and re-encrypting it with the new key.
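That re-encryption step can be sketched with the Python cryptography package’s MultiFernet helper, which decrypts with any of its keys and re-encrypts with the newest (a minimal illustration, assuming the data was Fernet-encrypted to begin with):

from cryptography.fernet import Fernet, MultiFernet

old_key, new_key = Fernet.generate_key(), Fernet.generate_key()
token = Fernet(old_key).encrypt(b"long-lived record")

# rotate() decrypts with any listed key and re-encrypts with the first,
# so old ciphertext is migrated to the new key without data loss.
rotated = MultiFernet([Fernet(new_key), Fernet(old_key)]).rotate(token)
assert Fernet(new_key).decrypt(rotated) == b"long-lived record"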
Message authentication guarantees the authenticity and integrity of a message by ensuring that
Checksums, CRC values, and parity checks are examples of basic message authentication and integrity controls. More-advanced message authentication is performed by using digital signatures and message digests.
The Digital Signature Standard (DSS), published by NIST in Federal Information Processing Standard (FIPS) 186-4, specifies three acceptable algorithms: the RSA Digital Signature Algorithm; the Digital Signature Algorithm (DSA), which is based on a modified El Gamal algorithm; and the Elliptic Curve Digital Signature Algorithm (ECDSA).
A digital signature is a simple way to verify the authenticity (and integrity) of a message. Instead of encrypting a message with the intended receiver’s public key, the sender encrypts it with their own private key. The sender’s public key properly decrypts the message, authenticating the originator of the message. This process is known as an open message format in asymmetric key systems, as we discuss in the section “Asymmetric” earlier in this chapter.
To repudiate is to deny. Nonrepudiation means that an action (such as an online transaction, email communication, and so on) or occurrence can’t be easily denied. Nonrepudiation is closely related to identification, authentication, and accountability. It’s difficult for a user to deny sending an email message that was digitally signed with that user’s private key, for example. Likewise, it’s difficult to deny responsibility for an enterprise-wide outage if the accounting logs positively identify you (from username and strong authentication) as the poor soul who inadvertently issued the write-erase command on the core routers two seconds before everything dropped!
It’s often impractical to encrypt a message with the receiver’s public key to protect confidentiality and then encrypt the entire message again by using the sender’s private key to protect authenticity and integrity. Instead, a representation of the encrypted message is encrypted with the sender’s private key to produce a digital signature. The intended recipient decrypts this representation by using the sender’s public key and then independently calculates the expected results of the decrypted representation by using the same known one-way hashing algorithm. If the results are the same, the integrity of the original message is assured. This representation of the entire message is known as a message digest.
To digest means to reduce or condense something, and a message digest does precisely that. (Conversely, indigestion means to expand, like gases. How do you spell relief?) A message digest is a condensed representation of a message; think Reader’s Digest. Ideally, a message digest has the following properties:
Message digests are produced by using a one-way hash function. There are several types of one-way hashing algorithms (digest algorithms), including MD5, SHA-2 variants, and HMAC.
A one-way hashing algorithm produces a hashing value (or message digest) that can’t be reversed; that is, it can’t be decrypted. In other words, no trapdoor exists for a one-way hashing algorithm. The purpose of a one-way hashing algorithm is to ensure integrity and authentication.
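Python’s standard library makes the one-way property easy to see. In this small sketch (the messages are only examples), even a one-character change produces a completely different SHA-256 digest:

import hashlib

digest_1 = hashlib.sha256(b"Pay Richard $100").hexdigest()
digest_2 = hashlib.sha256(b"Pay Richard $900").hexdigest()

# Digests are fixed-length, and small input changes scramble the output;
# there is no function that recovers the message from the digest.
print(len(digest_1) == len(digest_2) == 64)   # True (256 bits as hex)
print(digest_1 != digest_2)                   # True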
MD (Message Digest) is a family of one-way hashing algorithms developed by Dr. Ron Rivest that includes MD (obsolete), MD2, MD3 (not widely used), MD4, MD5, and MD6:
Like MD, SHA (Secure Hash Algorithm) is another family of one-way hash functions. The SHA family of algorithms was designed by the U.S. National Security Agency and published by NIST. The family includes SHA-1, SHA-2, and SHA-3:
The Hashed Message Authentication Code (HMAC) further extends the security of the MD5 and SHA-1 algorithms through the concept of a keyed digest. HMAC incorporates a previously shared secret key and the original message into a single message digest. Thus, even if an attacker intercepts a message, modifies its contents, and calculates a new message digest, the result doesn’t match the receiver’s hash calculation, because the attacker doesn’t know the secret key that the digest must include.
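A hedged sketch using Python’s standard hmac module; the shared secret and message are, of course, only examples:

import hashlib
import hmac

secret = b"previously-shared-secret"
message = b"Transfer $100 to Richard"
mac = hmac.new(secret, message, hashlib.sha256).digest()

# The receiver recomputes the keyed digest; without the secret, an
# attacker who alters the message can't produce a matching MAC.
# compare_digest() does a constant-time comparison to avoid timing leaks.
expected = hmac.new(secret, message, hashlib.sha256).digest()
print(hmac.compare_digest(mac, expected))   # True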
Attackers employ a variety of methods in their attempts to crack a cryptosystem. The following sections provide a brief overview of the most common methods.
In a brute-force (or exhaustion) attack, the cryptanalyst attempts every possible combination of key patterns, sometimes using rainbow tables and specialized or scalable computing architectures. This type of attack can be very time-intensive (up to several hundred million years) and resource-intensive, depending on the length of the key, the speed of the attacker’s computer, and the life span of the attacker.
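Here’s a toy illustration of exhaustion, brute-forcing a four-letter “key” by trying every combination. Each additional character multiplies the work by 26, which is why realistic key lengths put this attack out of reach:

import hashlib
import itertools
import string

# The "unknown key" an attacker is trying to recover (toy scale only).
target = hashlib.sha256(b"zzzz").hexdigest()

for guess in itertools.product(string.ascii_lowercase, repeat=4):
    candidate = "".join(guess)
    if hashlib.sha256(candidate.encode()).hexdigest() == target:
        print("found:", candidate)
        break   # 26**4 = 456,976 candidates; trivial here, hopeless at scale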
In a ciphertext-only attack, the cryptanalyst obtains the ciphertext of several messages, all encrypted by using the same encryption algorithm, but they don’t have the associated plaintext. The cryptanalyst attempts to decrypt the data by searching for repeating patterns and using statistical analysis. Certain words in the English language, such as “the” and “or,” occur frequently, for example. This type of attack is generally difficult and requires a large sample of ciphertext.
In a known-plaintext attack, the cryptanalyst has obtained the ciphertext and corresponding plaintext of several past messages, which they use to decipher new messages.
Frequency analysis is a method of attack in which an attacker examines ciphertext in an attempt to correlate commonly used words such as “the” and “and” to discover the encryption key or the algorithm in use.
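A hedged sketch of the first step of frequency analysis, counting letter frequencies in a Caesar-shifted sample (the ciphertext here is “THE QUICK BROWN FOX” shifted by three; a real attack needs far more ciphertext):

from collections import Counter

ciphertext = "WKH TXLFN EURZQ IRA"   # "THE QUICK BROWN FOX" shifted by 3
counts = Counter(c for c in ciphertext if c.isalpha())

# English text is dominated by letters such as E, T, and A; lining up
# the most frequent ciphertext letters against them suggests the shift.
print(counts.most_common(3))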
In a chosen-ciphertext attack, the cryptanalyst selects a sample of ciphertext (or plaintext) and obtains the corresponding plaintext (or ciphertext). Several types of chosen-ciphertext attacks exist, including
Implementation attacks attempt to exploit some weakness in the cryptosystem, such as vulnerability in a protocol or algorithm.
A side-channel attack is an attack in which the attacker is observing one or more characteristics of a system to discover its secrets. In an attack against a cryptosystem, a side-channel attack attempts to learn more about the cryptosystem, usually to obtain an encryption key. Several methods are used in a side-channel attack, including
Fault injection refers to techniques used to stress a system to see how it will behave. When applying fault injection to a cryptosystem, an attacker may be attempting to see whether the cryptosystem can be tricked into malfunctioning (for example, revealing plaintext when an unusual key value, such as null, is entered or a buffer overflow attack is executed) or to trick it into revealing secrets about the cryptosystem.
You could consider fault injection to be a form of fuzzing. In most cases, a cryptosystem is just a program running an algorithm, and that program may have flaws if its inputs are not sanitized properly.
This topic is a specific case in the larger field of software security. If this topic floats your boat, you’ll want to bookmark this page and head over to Chapter 10.
A replay attack occurs when a session key is intercepted and used against a later encrypted session between the same two parties. Replay attacks can be countered by incorporating a time stamp in the session key.
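A hedged sketch of that time-stamp countermeasure: each message carries a timestamp bound into a keyed digest, so a captured token replayed later fails the freshness check (the function names, secret, and 30-second window are illustrative):

import hashlib
import hmac
import time

secret = b"session-secret"

def make_token(message):
    ts = str(int(time.time())).encode()
    mac = hmac.new(secret, ts + b"." + message, hashlib.sha256).hexdigest()
    return ts + b"." + mac.encode()

def accept(token, message, max_age_seconds=30):
    ts, mac = token.split(b".")
    fresh = int(time.time()) - int(ts) <= max_age_seconds
    expected = hmac.new(secret, ts + b"." + message, hashlib.sha256).hexdigest()
    return fresh and hmac.compare_digest(mac, expected.encode())

token = make_token(b"withdraw $100")
print(accept(token, b"withdraw $100"))   # True now; False if replayed later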
A man-in-the-middle (MITM) attack involves an attacker intercepting messages between two parties on a network and potentially modifying the original message.
In a meet-in-the-middle attack, an attacker encrypts known plaintext with each possible key on one end, decrypts the corresponding ciphertext with each possible key on the other end, and then compares the results in the middle. Although commonly classified as a brute-force attack, this kind of attack may also be considered an analytic attack because it involves some differential analysis.
Pass the hash is an authentication-bypass attack in which an attacker steals password hashes and uses them to authenticate to a system that uses NTLM authentication. To employ a pass-the-hash attack, the attacker must first obtain a system’s password hashes, generally through another attack.
If an attacker is able to obtain password hashes for a system but cannot successfully execute a pass-the-hash attack, the attacker can also use a rainbow table or employ brute-force password cracking techniques to obtain plaintext passwords, which can be used to log in to a target system.
Kerberos is a cryptosystem used for authentication and access control in distributed environments. Microsoft Active Directory, among other environments, uses Kerberos.
Attackers may choose to attack Active Directory servers, particularly the Key Distribution Center service account (known as KRBTGT), to gain broad access to an environment. A successful attack can give the attacker the ability to forge valid ticket-granting tickets, thus giving them access to virtually all network resources. Such an attack is called a golden ticket attack.
A golden ticket attack can be difficult to detect except through inference by observing the behavior of authenticated users. Although prevention through techniques such as effective vulnerability management and access governance is essential, we know that we cannot stop all attacks at initial stages; thus, we also need to detect them through techniques such as user and entity behavior analytics (UEBA).
If you’ve read this entire section, you may wonder why ransomware is included in cryptosystem attacks. That’s a good question, and here’s the answer: Ransomware is not so much an attack on a cryptosystem as an attack in which the cryptosystem itself is the weapon. We thought this section was a good place to mention this topic. (ISC)2 also mentions ransomware in section 3.7 of the CBK, so we’re sort of obligated to discuss it here anyway.
Ransomware is an attack on a system in which the attacker, after somehow successfully gaining user-level or administrative-level access on a system, encrypts data on the system and displays a message to the user, informing them of the attack and demanding a ransom if the user wants to recover their encrypted data. Variants of ransomware will also upload the plaintext data to the attacker’s server before encryption and then threaten to publish the stolen data. Further variants also inform identifiable people that their personal information has been stolen.
The best mitigations against ransomware include
Organizations that are concerned about ransomware should perform threat modeling and other forms of risk analysis to determine what measures should be employed to reduce the probability and effect of a ransomware attack.
Securely designed and built software running on securely designed and built systems must be operated in securely designed and built facilities. Otherwise, an adversary with unrestricted access to a system and its installed software will inevitably succeed in compromising your security efforts. Astute organizations involve security professionals during the design, planning, and construction of new or renovated locations and facilities. Proper site- and facility-requirements planning during the early stages of construction helps ensure that a new building or data center is adequate, safe, and secure, which can help an organization avoid costly situations later.
The principles of Crime Prevention through Environmental Design (CPTED), published in 1971, have been widely adopted by security practitioners in the design of public and private buildings, offices, communities, and campuses. CPTED focuses on designing facilities by using techniques such as unobstructed areas, creative lighting, and functional landscaping, which naturally deter crime through positive psychological effects. By making it difficult for a criminal to hide, gain access to a facility, escape a location, or otherwise perpetrate an illegal and/or violent act, such techniques may cause a would-be criminal to decide against attacking a target or victim and create an environment that’s perceived as being safer for people who use the area regularly. CPTED consists of three basic strategies:
Location, location, location! Although, to a certain degree, this bit of conventional business wisdom may be less important to profitability in the age of e-commerce, it’s still a critical factor in physical security. Important factors in considering a location include
The CISSP candidate must understand the various threats to physical security; the elements of site- and facility-requirements planning and design; and various physical security controls, including access controls, technical controls, environmental and life safety controls, and administrative controls. In addition, you must know how to support the implementation and operation of these controls, as covered in this section.
Many physical and technical controls should be considered during the initial design of a secure facility to reduce costs and improve the overall effectiveness of these controls. Building design considerations include
Wiring: All wiring, conduits, and cable runs must comply with building and fire codes, and must be properly protected. Plenum cabling must be used below raised floors and above drop ceilings, because PVC-clad cabling releases toxic chemicals when it burns.
A plenum is the vacant area above a drop ceiling or below a raised floor. A fire in these areas can spread very rapidly, carrying smoke and noxious fumes to other areas of a burning building. For this reason, non-PVC-coated cabling, known as plenum cabling, must be used in these areas in most jurisdictions.
Wiring closets, intermediate distribution facilities (IDFs), server rooms, data centers, and media and evidence storage facilities contain high-value equipment and/or media that is critical to ongoing business operations or support of investigations. Physical security controls often found in these locations include
High-security work areas often employ physical security controls above and beyond those used in ordinary work areas. In addition to key card access control systems and video surveillance, additional physical security controls may include
TABLE 5-5 General Fencing Height Requirements

Height | General Effect
3–4 feet (1 meter) | Deters casual trespassers
6–7 feet (2 meters) | Too high to climb easily
8 feet (2.4 meters) plus three-strand barbed wire | Deters determined intruders
Work-area security also makes us think of various safety issues, all of which are important to the security professional, although one or more of the following may be managed by facilities or other personnel:
Environmental and life safety controls such as utilities and heating, ventilation, and air conditioning (HVAC) are necessary for maintaining a safe and acceptable operating environment for computers, equipment, and personnel.
HVAC systems maintain the proper environment for computers and personnel. HVAC-requirements planning involves making complex calculations based on numerous factors, including the average BTUs (British Thermal Units) produced by the estimated computers and personnel occupying a given area, the size of the room, insulation characteristics, and ventilation systems.
The ideal temperature range for computer equipment is between 50°F and 80°F (10°C and 27°C). At temperatures as low as 100°F (38°C), magnetic storage media can be damaged.
The ideal humidity range for computer equipment is between 40 and 60 percent. Higher humidity causes condensation and corrosion. Lower humidity increases the potential for static electricity.
Doors and side panels on computer equipment racks should be kept closed (and locked, as a form of physical access control) to ensure proper airflow for cooling and ventilation. When possible, empty spaces in equipment racks (such as a half-filled rack or gaps between installed equipment) should be covered with blanking panels to reduce hot and cold air mixing between the hot side or hot aisle (typically, the power-supply side of the equipment) and the cold side or cold aisle (typically, the front of the equipment). Such mixing of hot and cold air can reduce the efficiency of cooling systems.
Heating and cooling systems should be maintained properly, and air filters should be cleaned regularly to reduce dust contamination and fire hazards.
Most gas-discharge fire suppression systems automatically shut down HVAC systems before discharging, but a separate emergency power-off switch should be installed near exits to facilitate manual shutdown in an emergency.
Ideally, HVAC equipment should be dedicated, controlled, and monitored. If the systems aren’t dedicated or independently controlled, proper liaison with the building manager is necessary to ensure that everyone knows who to call when problems occur. Monitoring systems should alert the appropriate personnel when operating thresholds are exceeded.
Water damage (and damage from liquids in general) can be caused by many things, including pipe breakage, firefighting, leaking roofs, spilled drinks, flooding, and tsunamis. Wet computers and other electrical equipment pose a potentially lethal hazard.
Both preventive and detective controls are used to ensure that water in unwanted places does not disrupt business operations or destroy expensive assets. Common features include
Contaminants in the air, unless filtered out by HVAC systems, can be irritating or harmful to personnel and to equipment. A build-up of carbon dioxide or carbon monoxide can also be injurious and even cause death. Air quality sensors can be used to detect particulates, contaminants, CO2, and CO, and to alert facilities personnel.
Threats from a fire can be potentially devastating and lethal. Proper precautions, preparation, and training not only help limit the spread of fire and damage, but also (and more important) save lives.
Other hazards associated with fires include smoke, explosions, building collapse, release of toxic materials or vapors, and water damage.
For a fire to burn, it requires three elements: heat, oxygen, and fuel. These three elements are sometimes referred to as the fire triangle, which is depicted in Figure 5-11. Fire suppression and extinguishing systems fight fires by removing one of these three elements or by temporarily breaking the chemical reaction among these three elements, separating the fire triangle. Fires are classified according to fuel type, as listed in Table 5-6.
© John Wiley & Sons, Inc.
FIGURE 5-11: A fire needs these three elements to burn.
TABLE 5-6 Fire Classes and Suppression/Extinguishing Methods

Class | Description (Fuel) | Extinguishing Method
A | Common combustibles, such as paper, wood, furniture, and clothing | Water or soda acid
B | Burnable fuels, such as gasoline or oil | CO2 or soda acid
C | Electrical fires, such as computers or electronics | CO2 (Note: The most important step in fighting a fire in this class is turning off the electricity first.)
D | Special fires, such as combustible metals | May require total immersion or other special techniques
K (or F) | Cooking oils or fats | Water mist or fire blankets
Fire detection and suppression systems are some of the most essential life safety controls for protecting facilities, equipment, and (most important) human lives. The three main types of fire detection systems are
The two primary types of fire suppression systems are
Water sprinkler: Water extinguishes fire by removing the heat element from the fire triangle, and it’s most effective against Class A fires. Water is the primary fire-extinguishing agent for all business environments. Although water can potentially damage equipment, it’s one of the most effective, inexpensive, readily available, and least harmful (to humans) extinguishing agents available. The four main types of water sprinkler systems are wet-pipe, dry-pipe, deluge, and preaction.
Carbon dioxide (CO2): CO2 is a colorless, odorless gas that extinguishes fire by removing the oxygen element from the fire triangle. CO2 is most effective against Class B and C fires. Because this gas removes oxygen, it is potentially lethal and therefore is best suited for use in unmanned areas or on a delay (including manual override) in staffed areas.
CO2 is also used in portable fire extinguishers, which should be located near all exits and within 50 feet (15 meters) of any electrical equipment. All portable fire extinguishers (CO2, water, and soda acid) should be clearly marked (listing the extinguisher type and the fire classes it can be used for) and periodically inspected. Additionally, all personnel should receive training in the proper use of fire extinguishers.
Inert gas-discharge: Gas-discharge systems suppress fire by separating the elements of the fire triangle; they are most effective against Class B and C fires. Inert gases don’t damage computer equipment, don’t leave liquid or solid residue, mix thoroughly with the air, and spread extremely quickly. But in concentrations higher than 10 percent, the gases are harmful if inhaled, and some types degrade into toxic chemicals (hydrogen fluoride, hydrogen bromide, and bromine) when used on fires that burn at temperatures above 900°F (482°C).
Halon used to be the gas of choice in gas-discharge fire suppression systems. But because of Halon’s ozone-depleting characteristics, the Montreal Protocol of 1987 prohibited the further production and installation of Halon systems (beginning in 1994) and encouraged the replacement of existing systems. Acceptable replacements include FM-200 (most effective), CEA-410 or CEA-308, NAF-S-III, FE-13, Argon or Argonite, and Inergen.
General considerations for electrical power include having one or more dedicated feeders from one or more utility substations or power grids, as well as ensuring that adequate physical access controls are implemented for electrical distribution panels and circuit breakers. An emergency power-off switch should be installed near major systems and exit doors to shut down power in case of fire or electrical shock. Additionally, a backup power source should be established, such as a diesel or natural-gas power generator, along with an uninterruptible power supply (UPS). Backup power should be provided for critical facilities and systems, including emergency lighting, fire detection and suppression, mainframes and servers (and certain workstations), HVAC, physical access control systems, and telecommunications equipment.
Protective controls for electrostatic discharge include the following:
Protective controls for electrical noise include the following:
A UPS is perhaps the most important protection against electrical anomalies because it provides clean power to sensitive systems and a temporary power source during electrical outages (blackouts, brownouts, and sags). This power supply must be sufficient to shut down the protected systems properly.
Sensitive equipment can be damaged or affected by various electrical hazards and anomalies, including
Electrostatic discharge (ESD): The ideal humidity range for computer equipment is 40 to 60 percent. Higher humidity causes condensation and corrosion. Lower humidity increases the potential for static electricity. A static charge of as little as 40 volts can damage sensitive circuits, and 2,000 volts can cause a system shutdown. The minimum discharge that can be felt by humans is 3,000 volts, and electrostatic discharges of more than 25,000 volts are possible. If you can feel it, it’s a problem for your equipment!
The ideal humidity range for computer equipment is 40 to 60 percent. Also, remember that it’s not the volts that kill; it’s the amps!
TABLE 5-7 Electrical Anomalies

Electrical Event | Definition
Blackout | Total loss of power
Fault | Momentary loss of power
Brownout | Prolonged drop in voltage
Sag | Short drop in voltage
Inrush | Initial power rush
Spike | Momentary rush of power
Surge | Prolonged rush of power
Voltage drop | Decrease in electric voltage