Chapter 6 deals with two sides of the same coin: identity management and access control. The essence of information risk mitigation is ensuring that only the right people and processes can read, view, use, change, or remove any of our sensitive information assets, or use any of our most important information-based business processes. We also require the ability to prove who or what touched what information asset and when, and what happened when they did. We'll see how to authenticate that a subject user (be that a person or a software process) is who they claim to be; use predetermined policies to decide if they are authorized to do what they are attempting to do; and build and maintain accounting or audit information that shows us who asked to do what, when, where, and how. Chapter 6 combines decades of theory-based models and ideas with cutting-edge how-to insight; both are vital to an SSCP on the job.
At the heart of all information security (whether Internet-based or not) is the same fundamental problem. Information is not worth anything if it doesn't move, get shared with others, and get combined with other information to make decisions happen. But to keep that information safe and secure, and to meet all of our company's CIANA+PS needs, we usually cannot share that information with just anybody! The flip side of that also tells us that in all likelihood, any one person will not have a valid “need to know” for all of the information our organization has or uses. Another way to think about that is that if you do not know who is trying to access your information, you have no basis for deciding whether to grant or deny their attempt.
Each one of the elements of the CIANA+PS security paradigm—which embraces confidentiality, integrity, availability, nonrepudiation, authentication, privacy, and safety—has this same characteristic. Each element must look at the entire universe of people or systems, and separate out those we trust with access to our information from those we do not, while at the same time deciding what to let those trusted people or systems do with the information we let them have access to.
What do we mean by “have access to” an object? In general, access to an object can consist of being able to do one or more of the following kinds of functions:
This brings us right to the next question: who, or what, is the thing that is attempting to access our information, and how do we know that they are who they claim to be? It used to be that this identity question focused on people, software processes or services, and devices. The incredible growth in Web-based services complicates this further, and we've yet to fully understand what it will mean with Internet of Things (IoT) devices, artificial intelligences, and robots of all kinds joining our digital universal set of subjects—that is, entities requesting access to objects.
Our organization's CIANA+PS needs are at risk if unauthorized subjects—be they people or processes—can execute any or all of those functions in ways that disrupt our business logic:
The increasing use of remote collaboration technologies in telemedicine highlights these concerns. If clinical workers cannot trust that the patient they think they are treating, the lab results they believe they are reading, and the rest of that patient's treatment record have all been protected from unauthorized access, they lose the confidence they need to know that they are diagnosing and treating the right patient for the right reasons.
One more key ingredient needs to be added as we consider the vexing problems of managing and validating identities and protecting our information assets and resources from unauthorized use or access: the question of trust. In many respects, that old adage needs to be updated: it's not what you know, but how you know how much you can trust what you think you know, that becomes the heart of identity and access management concerns.
Identity management actually starts with the concept of an entity, which is a person, object, device, or software unit that can be uniquely and unambiguously identified. Each entity, whether it be human or nonhuman, can also have many different identities throughout the existence of that entity. Many of these identities are legitimate, lawful, and in fact necessary to create and use; some may not be.
Let's illustrate with a human being. Most human names are quite common, even at the level of the full form of all parts of that name. On top of this, many information systems such as credit reporting agencies carry different versions of the names of individuals, through errors, abbreviations, or changes in usage. This can happen if I choose to use my full middle name, just its initial, or omit it entirely on an account application or transaction. To resolve the ambiguity, it often takes far more data:
For any given human, that can amount to a lot of data—far too much data to make for a useful, practical way to control access to a facility or an information system. This gives rise to defining an identity as:
Note the distinctions here: entities are created, identities are assigned. And the process of granting permissions to enter, use, view, modify, learn from, or enjoy the resources or assets controlled by the identity-granting organization is known as entitlement. To add one further definition, a credential is a document or dataset that attests to the correctness, completeness, and validity of an identity's claim to be who and what that identity represents, at a given moment in time, and for the purposes associated with that identity.
We need a way to associate an identity, in clear and unambiguous ways, with exactly one such person, device, software process or service, or other subject, whether a part of our system or not. In legal terms, we need to avoid the problems of mistaken identity, just because of a coincidental similarity of name, location, or other information related to two or more people, processes, or devices. It may help if we think about the process of identifying such a subject:
So how do we create an identity? It's one thing for your local government's office of vital records to issue a birth certificate when a baby is born, or a manufacturer to assign a MAC address to an Internet-compatible hardware device. How do systems administrators manage identities?
The identity management lifecycle describes the series of steps in which a subject's identity is initially created, initialized for use, modified as needs and circumstances change, and finally retired from authorized use in a particular information system. These steps are typically referred to as provisioning, review, and revocation of an identity:
Provisioning happens throughout the life of an identity within a particular system. Initially, it may require a substantial proofing effort, as examined earlier. Once the new identity has been created and enabled within the system, it will probably need changes (other than revocation or temporary suspension) to meet the changing needs of the organization and the individual, and some of these changes can and should be allocated to the identity holder themselves to invoke on an as-needed basis. Password changes, updates to associated physical addresses, phone numbers, security challenge questions, and other elements are often updated by the end users (entities using the assigned identities) directly, without requiring review, approval, or action by security personnel or access control administrators. Organizations may, of course, require that some of these changes be subject to approval or require other coordinated actions to be taken.
The initial creation of an identity can also be performed right at the moment when an entity first requests access to a system's resources. This just-in-time identity (JIT identity, not JITI) is quite common on websites, blogs, and email systems where the system owner does not require strong identity proofing as part of creating a new identity. To support this, standardized identity assurance levels (IALs) have been created and are in widespread use, reflecting the degree of proofing required to support the assertion of an identity by an applicant. These levels are:
Just-in-time identity can also play a powerful role in privilege management, in which an identity that has elevated privileges associated with it is not actually granted use of these privileges until the moment they are needed. Systems administrators, for example, generally do not require root or superuser privileges to read their internal email or access trouble ticket systems. A common example of this is the super user do or sudo command in Unix and Linux systems, or the User Account Control feature on Windows-based systems. These provide ways to enforce security policies that prevent accidental and some malicious attempts to perform specific operations such as installing new software, often by requiring a second, specific confirmation by the end user.
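As a sketch of the just-in-time idea (and only a sketch; this is not how sudo or User Account Control are implemented), an application written in Python could hold elevated privileges in reserve and honor them only for a short window after a fresh, explicit confirmation. The class, privilege names, and time window below are all invented for illustration:

```python
import time

ELEVATION_WINDOW_SECONDS = 300   # how long a fresh confirmation remains valid

class Session:
    """Illustrative just-in-time elevation: privileges exist but stay dormant."""
    def __init__(self, user, baseline_privileges):
        self.user = user
        self.baseline = set(baseline_privileges)
        self.elevated_until = 0.0            # no elevation by default

    def confirm_elevation(self, reauthenticated: bool) -> None:
        # Grant temporary elevation only after a specific, second confirmation.
        if reauthenticated:
            self.elevated_until = time.time() + ELEVATION_WINDOW_SECONDS

    def can_perform(self, action: str, needs_elevation: bool) -> bool:
        if not needs_elevation:
            return action in self.baseline
        # Elevated actions require both the privilege and a current confirmation.
        return action in self.baseline and time.time() < self.elevated_until

session = Session("admin01", {"read_mail", "install_software"})
print(session.can_perform("install_software", needs_elevation=True))   # False: not yet confirmed
session.confirm_elevation(reauthenticated=True)
print(session.can_perform("install_software", needs_elevation=True))   # True, for a short window
```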
The identity management lifecycle is supported by a wide range of processes and tools within a typical IT organization. At the simplest level, operating systems have built-in features that allow administrators to create, maintain, and revoke user identities and privileges. Most OS-level user creation functions can also create roaming profiles, which can allow one user identity to have access privileges on other devices on the network, including any tailoring of those privileges to reflect the location of the user's device or other conditions of the access request. What gets tricky is managing access to storage, whether on local devices or network shared storage, when devices and users can roam around. This can be done at the level of each device using built-in OS functions, but it becomes difficult if not impossible to manage as both the network and the needs for control grow. At some point, the organization needs to look at ways to manage the identity lifecycle for all identities that the organization needs to care about. This will typically require the installation and use of one or more servers to provide the key elements of identity and access control.
SSCPs often need to deal with the “triple-A” of identity management and access control, which refers to authentication, authorization, and accounting. As stated earlier, these are all related to identities and are part of how our systems decide whether to grant access (and with which privileges) or not—so in that sense they sit on the edge between the two sides of our CIANA+PS coin. Let's take a closer look at each of these important functions.
Authentication is where everything must start. Authentication is the act of examining or testing the identity credentials provided by a subject that is requesting access and, based on information in the access control list, either granting access, denying it, or requesting additional credential information before making an access determination:
Once an identity has been authenticated, the access control system determines just which capabilities that identity is allowed to exercise. Authorization requires a two-step process:
At this point, the access request has been granted in full; the user or requesting subject can now go do what it came to our systems to do. Yet, in the words of arms control negotiators during the Cold War, trust, but verify. This is where our final A comes into play. Accounting gathers data from within the access control process to monitor the lifecycle of an access, from its initial request and permissions being granted through the interactions by the subject with the object, to capturing the manner in which the access is terminated. This provides the audit trail by which we address many key information security processes, each of which needs to ascertain (and maybe prove to legal standards) who did what to which information, using which information:
Obviously, it's difficult if not impossible to accomplish many of those tasks if the underlying audit trail wasn't built along the way, as each access request came in and was dealt with.
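As a minimal sketch of building that trail as you go, using only Python's standard logging module (the field names are illustrative, not a prescribed audit format), each access decision can be written as a structured record at the moment it is made:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("access.audit")

def record_access_decision(subject, obj, action, decision, reason):
    """Append one audit record per access decision."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "subject": subject,      # who or what asked
        "object": obj,           # what they asked about
        "action": action,        # what they tried to do
        "decision": decision,    # granted or denied
        "reason": reason,        # which policy or rule applied
    }
    audit_log.info(json.dumps(event))

record_access_decision("backup_task", "/finance/ledger.db", "read",
                       "granted", "role: backup-operator")
```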
Before we see how these AAA functions are implemented in typical information systems, we need to look further into the idea of permissions or capabilities.
Access control is all about subjects and objects (see Figure 6.1). Simply put, subjects try to perform an action upon an object; that action can be reading it, changing it, executing it (if the object is a software program), or doing anything else to the object. Subjects can be people, software processes, devices, or services provided by other Web-based systems—anything that requests access to an object in our system, whether data or metadata, for whatever purpose. Objects can be collections of information, or the processes, devices, or people that hold that information and act as gatekeepers to it. This subject-object relationship is fundamental to your understanding of access control. It is a one-way relationship: objects do not do anything to a subject. Don't be fooled into thinking that two subjects interacting with each other is a special case of a bidirectional access control relationship. It is simpler, more accurate, and much more useful to see this as two one-way subject-object relationships. It's also critical to see that every task is a chain of these one-way access control relationships.
As an example, consider the access control system itself as an object. It is a lucrative target for attackers who want to get past its protections and into the soft underbellies of the information assets, networks, and people behind its protective moat. In that light, hearing these functions referred to as datacenter gatekeepers makes a lot of sense. Yet the access control system is a subject that makes use of its own access control tables, and of the information provided to it by requesting subjects. (You, at sign-on, are a subject providing a bundle of credential information as an object to that access control process.)
Let's think about a simple small office/home office (SOHO) LAN environment, with an ISP-provided modem, a Wi-Fi router, and peer-to-peer file and resource sharing across the half a dozen devices on that LAN. The objects on this LAN would include:
Note that third item: on a typical Windows 10 laptop, with 330 GB of files and installed software on a 500 GB drive, that's only half a million files—and each of those, and each of the 100,000 or so folders in that directory space, is an object. Those USB drives, and any cloud-based file storage, could add similar numbers of objects for each computer; mobile phones using Wi-Fi might not have quite so many objects on them to worry about. A conservative upper bound might be 10 million objects.
What might our population of subjects be, in this same SOHO office?
That same Windows 10 laptop, by the way, shows 8 apps, 107 background processes, 101 Windows processes, and 305 services currently able to run—loaded in memory, available to Windows to dispatch to execute, and almost every one of them connected by Windows to events so that hardware actions (like moving a mouse) or software actions (such as an Internet Control Message Protocol packet hitting our network interface card) will wake them up and let them run. That's 521 pieces of executing code. And as if to add insult to injury, the one live human who is using that laptop has caused 90 user identities to be currently active. Many of these are associated with installed services, but each is yet another subject in its own right.
Multiply that SOHO situation up to a medium-sized business, with perhaps 500 employees using its LANs, VPNs, and other resources available via federated access arrangements, and you can see the magnitude of the access control management problem.
Next, let's talk layers. No, not layers in the TCP/IP or OSI 7-layer reference model sense! Instead, we need to look at how permissions layer onto each other, level by level, much as those protocols grow in capability layer by layer.
Previously, you learned the importance of establishing an information classification system for your company or organization. Such systems define broad categories of protection needs, typically expressed in a hierarchy of increasing risk should the information be compromised in some way. The lowest level of such protection is often called unclassified, or suitable for public release. It's the information in press releases or in content on public-facing webpages. Employees are not restricted from disclosing this information to almost anyone who asks. Next up this stack of classification levels might be confidential information, followed by secret or top secret (in military parlance). Outside of military or national defense marketplaces, however, we often have to deal with privacy-related information, as well as company proprietary data.
For example, the US-CERT (Computer Emergency Readiness Team) has defined a schema for identifying how information can or cannot be shared among the members of the US-CERT community. The Traffic Light Protocol (TLP) can be seen at www.us-cert.gov/tlp and appears in Figure 6.2. It exists to make sharing of sensitive or private information easier to manage so that this community can balance the risks of damage to the reputation, business, or privacy of the source against the needs for better, more effective national response to computer emergency events.
Note how TLP defines not only the conditions for use of information classified at the different TLP levels, but also any restrictions on how a recipient of TLP-classified information can then share that information with others.
Each company or organization has to determine its own information security classification needs and devise a structure of categories that support and achieve those needs. They all have two properties in common, however, which are called the read-up and write-down problems:
Shoulder-surfing is a simple illustration of the read-up problem, because it can allow an unauthorized person to masquerade as an otherwise legitimate user. A more interesting example of the read-up problem was seen in many login or sign-on systems, which would first check the login ID, and if that was correctly defined or known to the system, then solicit and check the password. This design inadvertently confirms the login ID is legitimate; compare this to designs that take both pieces of login information, and return “user name or password unknown or in error” if the input fails to be authenticated.
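The difference between those two login designs is easy to see in code. This is a minimal sketch; the hard-coded credential dictionary and helper functions stand in for whatever credential store a real system would use:

```python
# Illustrative only: a toy credential "store."
_CREDENTIALS = {"alice": "correct horse battery staple"}

def user_exists(user_id):
    return user_id in _CREDENTIALS

def password_matches(user_id, password):
    return _CREDENTIALS.get(user_id) == password

def weak_login(user_id, password):
    # Flawed design: checking the user ID first confirms which IDs are valid.
    if not user_exists(user_id):
        return "unknown user name"          # leaks that the ID is invalid
    if not password_matches(user_id, password):
        return "incorrect password"         # leaks that the ID is valid
    return "authenticated"

def better_login(user_id, password):
    # Both inputs are evaluated together, and the answer never says which one failed.
    if user_exists(user_id) and password_matches(user_id, password):
        return "authenticated"
    return "user name or password unknown or in error"

print(weak_login("alice", "guess"))     # tells the caller the account exists
print(better_login("alice", "guess"))   # gives nothing away
```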
Writing classified or proprietary information to a thumb drive, and then giving that thumb drive to an outsider, illustrates the write-down problem. Write-down also can happen if a storage device is not properly zeroized or randomized prior to its removal from the system for maintenance or disposal.
Having defined our concepts about subjects and objects, let's put those read-up and write-down problems into a more manageable context by looking at privileges or capabilities. Depending on whom you talk with, these may be called privileges, permissions, or capabilities: a subject is granted or defined to have permission to perform certain functions on certain objects. The backup task (as subject) can read and copy a file, and update its metadata to show the date and time of the most recent backup, but it does not (or should not) have permission to modify the contents of the file in question, for example. Systems administrators and security specialists determine broad categories of these permissions and the rules by which new identities are allocated some permissions and denied others.
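One common way to picture those grants is as a matrix of subjects, objects, and permitted functions. The sketch below is purely illustrative, with invented subject, object, and function names:

```python
# (subject, object) -> the set of functions that subject may perform on that object.
PERMISSIONS = {
    ("backup_task", "payroll.dat"): {"read", "copy", "update_metadata"},
    ("payroll_app", "payroll.dat"): {"read", "write"},
}

def is_permitted(subject, obj, function):
    """Default deny: permit only what the matrix explicitly grants."""
    return function in PERMISSIONS.get((subject, obj), set())

print(is_permitted("backup_task", "payroll.dat", "read"))    # True
print(is_permitted("backup_task", "payroll.dat", "write"))   # False: backup may not modify contents
```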
Let's take a closer look at CIANA+PS, in particular the two key components of confidentiality and integrity. Figure 6.3 illustrates a database server containing proprietary information and an instance of a software process that is running at a level not approved for proprietary information. (This might be because of the person using the process, the physical location or the system that the process is running on, or any number of other reasons.) Both the server and the process act as subjects and objects in their different attempts to request or perform read and write operations to the other. As an SSCP, you'll need to be well acquainted with how these two different models approach confidentiality and integrity:
The first model is the Bell-LaPadula model, developed by David Bell and Leonard LaPadula for the Department of Defense in the 1970s, as a fundamental element of providing secure systems capable of handling multiple levels of security classification. Bell-LaPadula emphasized protecting the confidentiality of information—that information in a system running at a higher security classification level must be prevented from leaking out into systems running at lower classification levels. Shown in Figure 6.3(a), Bell-LaPadula defines these controls as:
The discretionary security property requires that systems implementing Bell-LaPadula protections use an access matrix to enforce discretionary access control.
Remember that in our examples in Figure 6.3, the process is both subject and object, and so is the server! This makes it easier to see that the higher-level subject can freely read from a lower-level object (or be written into by a lower-level subject); neither case exposes the sensitive information to something (or someone) with no legitimate need to know. Secrets stay in the server.
Data integrity, on the other hand, isn't preserved by Bell-LaPadula; clearly, the lower-security-level process could disrupt operations at the proprietary level by altering data that it cannot read. The other important model, developed some years after Bell-LaPadula, was expressly designed to prevent this. Its developer, Kenneth Biba, emphasized data integrity over confidentiality; quite often the non-military business world is more concerned about preventing unauthorized modification of data by untrusted processes, than it is about protecting the confidentiality of information. Figure 6.3(b) illustrates Biba's approach:
Quarantine of files or messages suspected of containing malware payloads offers a clear example of the need for the “no-read-down” policy for integrity protection. Working our way down the levels of security, you might see that “business vital proprietary,” privacy-related, and other information would be much more sensitive (and need greater integrity protection) than newly arrived but unfiltered and unprocessed email traffic. Allowing a process that uses privacy-related data to read from the quarantined traffic could be hazardous! Once the email has been scanned and found to be free from malware, other processes can determine if its content is to be elevated (written up) by some trusted process to the higher level of privacy-related information.
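To make the contrast between the two models concrete, here is a minimal sketch of both rule sets over one simplified hierarchy of levels. It ignores categories, compartments, and Bell-LaPadula's discretionary access matrix, and the level names are only illustrative:

```python
# Simplified level hierarchy; real systems add categories and compartments.
LEVELS = {"unclassified": 0, "confidential": 1, "proprietary": 2, "secret": 3}

def bell_lapadula_allows(subject_level, object_level, action):
    """Confidentiality: no read up, no write down."""
    s, o = LEVELS[subject_level], LEVELS[object_level]
    if action == "read":
        return s >= o          # simple security property: no read up
    if action == "write":
        return s <= o          # *-property: no write down
    return False

def biba_allows(subject_level, object_level, action):
    """Integrity: no read down, no write up."""
    s, o = LEVELS[subject_level], LEVELS[object_level]
    if action == "read":
        return s <= o          # no read down
    if action == "write":
        return s >= o          # no write up
    return False

# The lower-level process and the proprietary server of Figure 6.3:
print(bell_lapadula_allows("confidential", "proprietary", "read"))   # False: secrets stay in the server
print(biba_allows("proprietary", "confidential", "read"))            # False: don't trust lower-integrity data
```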
As you might imagine, a number of other access models have been created to cope with the apparent and real conflicts between protecting confidentiality and assuring the integrity of data. You'll probably encounter Biba and Bell-LaPadula on the SSCP exam; you may or may not run into some of these others:
All of these models provide the foundational theories and concepts on which access control systems and technologies are designed and operated. Let's now take a look at other aspects of how we need to think about managing access control.
Role-based access control (RBAC) grants specific privileges to subjects regarding specific objects or classes of objects based on the duties or tasks a person (or process) is required to fulfill. Several key factors should influence the ways that role-based privileges are assigned:
Role-based access has one strategic administrative weakness. Privilege creep, the unnecessary, often poorly justified, and potentially dangerous accumulation of access privileges no longer strictly required for the performance of one's duties, can inadvertently put an employee and the organization in jeopardy. Quality people take on broader responsibilities to help the organization meet new challenges and new opportunities; and yet, as duties they previously performed are picked up by other team members, or as they move to other departments or functions, they often retain the access privileges their former jobs required. To contain privilege creep, organizations should review each employee's access privileges in the light of their currently assigned duties, not only when those duties change (even temporarily!) but also on a routine, periodic basis.
Attribute-based access control (ABAC) systems combine multiple characteristics (or attributes) about a subject, an object, or the environment to authorize or restrict access. ABAC uses Boolean logic statements to build as complex a set of rules to cover each situation as the business logic and its information security needs dictate. A simple example might be the case of a webpage designer who has limited privileges to upload new webpages into a beta test site in an extranet authorized for the company's community of beta testers but is denied (because of their role) access to update pages on the production site. Then, when the company prepares to move the new pages into production, it may need the designer's help in doing so and thus need to grant the designer temporary access to the production environment. Although this could be done by a temporary change in the designer's subject-based RBAC access privileges, it may be clearer and easier to implement with a logical statement such as:
IF (it's time for move to production) AND (designer-X) is a member of (production support team Y) THEN (grant access to a, b, c…)
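Expressed as attributes and Boolean logic, that rule fits in a few lines. Here is a minimal sketch in Python; the attribute names and data sources are invented stand-ins for a real attribute store:

```python
from datetime import datetime

# Invented attribute sources standing in for a central attribute store.
production_window = {"start": datetime(2024, 6, 1, 18, 0), "end": datetime(2024, 6, 1, 22, 0)}
team_membership = {"production support team Y": {"designer-X", "release-mgr-Q"}}

def may_update_production(subject_id, now):
    in_window = production_window["start"] <= now <= production_window["end"]
    on_team = subject_id in team_membership["production support team Y"]
    # IF (it's time for move to production) AND (subject is on team Y) THEN grant access.
    return in_window and on_team

print(may_update_production("designer-X", datetime(2024, 6, 1, 19, 30)))  # True
print(may_update_production("designer-X", datetime(2024, 6, 2, 9, 0)))    # False: outside the window
```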
Attribute-based access control can become quite complex, but its power to tailor access to exactly what a situation requires is often worth the effort. As a result, it is sometimes known as externalized, dynamic, fine-grained, or policy-based access control or authorization management.
Subject-based access control looks at characteristics of the subject that are not normally expected to change over time. For example, a print server (as a subject) should be expected to have access to the printers, the queue of print jobs, and other related information assets (such as the LAN segment or VLAN where the printers are attached); you would not normally expect a print server to access payroll databases directly! As to human subjects, these characteristics might be related to age, their information security clearance level, or their physical or administrative place in the organization. For example, a middle school student might very well need separate roles defined as a student, a library intern, or a software developer in a computer science class, but because of their age, in most jurisdictions they cannot sign contracts. The webpages or apps that the school district uses to hire people or contract with consultants or vendors, therefore, should be off limits to such a student.
Object-based access control uses characteristics of each object or each class of objects to determine what types of access requests will be granted. The simplest example of this is found in many file systems, where objects such as individual files or folders can be declared as read-only. More powerful OS file structures allow a more granular approach, where a file folder can be declared to have a set of attributes based on classes of users attempting to read, write, extend, execute, or delete the object. Those attributes can be further defined to be inherited by each object inside that folder, or otherwise associated with it, and this inheritance should happen with every new instance of a file or object placed or created in that folder.
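A toy sketch of object-based attributes and that inheritance, assuming a made-up folder tree rather than any particular file system: an object either declares its own attributes or inherits them from the nearest enclosing folder that does.

```python
# Toy object tree; a real file system keeps these attributes in its own metadata.
attributes = {
    "/finance":                {"read_only": False, "classification": "proprietary"},
    "/finance/2024-plan.xlsx": None,   # None means "inherit from the parent folder"
    "/press":                  {"read_only": True, "classification": "public"},
}

def effective_attributes(path):
    """Walk up the tree until an object with explicit attributes is found."""
    while path:
        attrs = attributes.get(path)
        if attrs is not None:
            return attrs
        path = path.rsplit("/", 1)[0]   # move to the parent folder
    return {}

print(effective_attributes("/finance/2024-plan.xlsx"))  # inherits the proprietary, writable settings
```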
Rule-based access control (RuBAC), as the name suggests, uses systems of formally expressed rules that direct the access control system in granting or denying access to objects. These rules can be as simple or as complex as the organization's security policies might require and are normally constructed using Boolean logic or other set theory constructs. Elegant in theory, RuBAC can be hard to scale to large enterprises with many complex, overlapping use cases and conditions; they can also be hard to maintain and debug as a result. One common use for RuBAC is to selectively invoke it for special cases, such as for the protection of organizational members or employees (and the organization's data and systems) when traveling to or through higher-risk locations.
Risk-based access control (which so far does not have an acronym commonly associated with it) is more of a management approach to overall access control system implementation and use. As you saw in previous chapters, the actual risk context that an organization or one of its systems faces can change on a day-to-day basis. Events within the organization and in the larger marketplaces and communities it serves can suggest that the likelihood of previously assessed risks might dramatically increase (or decrease) and do so quickly. Since all security controls introduce some amount of process friction (such as additional processing time, identity and authorization challenges, or additional reviews by managers), a risk-based access control system provides separate adjustable sensitivity controls for different categories of security controls. When risk managers (such as an organization's chief information security officer or chief risk officer) decide an increase is warranted, one such control might increase the frequency and granularity of data backups, while another might lower the threshold on transactions that would trigger an independent review and approval. Risk-based access control systems might invoke additional attributes to test or narrow the limits on acceptable values for those attributes; in some cases, such risk-based decision making might turn off certain types of access altogether.
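One way to sketch that adjustable sensitivity is as a single risk posture that several otherwise independent controls consult; the posture names and thresholds below are invented purely for illustration:

```python
# Invented postures and thresholds; a real deployment ties these to its own risk assessments.
RISK_POSTURES = {
    "normal":   {"backup_interval_hours": 24, "review_threshold_usd": 50_000, "allow_remote_admin": True},
    "elevated": {"backup_interval_hours": 6,  "review_threshold_usd": 10_000, "allow_remote_admin": True},
    "severe":   {"backup_interval_hours": 1,  "review_threshold_usd": 1_000,  "allow_remote_admin": False},
}

current_posture = "elevated"   # adjusted by the CISO or CRO as conditions change

def needs_independent_review(transaction_amount_usd):
    return transaction_amount_usd >= RISK_POSTURES[current_posture]["review_threshold_usd"]

def remote_admin_permitted():
    # At the highest posture, this type of access is turned off altogether.
    return RISK_POSTURES[current_posture]["allow_remote_admin"]

print(needs_independent_review(25_000))   # True at the elevated posture
print(remote_admin_permitted())           # True until the posture reaches "severe"
```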
One question about access control remains: now that your system has authenticated an identity and authorized its access, what capabilities (or privileges) does that subject have when it comes to passing along its privileges to others? The “write-down” problem illustrates this issue: a suitably cleared subject is granted access to read a restricted, proprietary file; creates a copy of it; and then writes it to a new file that does not have the restricted or proprietary attribute set. Simply put, mandatory (or nondiscretionary) access control uniformly enforces policies that prohibit any and all subjects from attempting to change, circumvent, or go around the constraints imposed by the rest of the access control system. Specifically, mandatory or nondiscretionary access prevents a subject from:
Discretionary access control, on the other hand, allows the systems administrators to tailor the enforcement of these policies across their total population of subjects. This flexibility may be necessary to support a dynamic and evolving company, in which the IT infrastructure as well as individual roles and functions are subject to frequent change, but it clearly comes with some additional risks.
Connecting to a network involves performing the right handshakes at all of the layers of the protocols that the requesting device needs services from. Such connections either start at Layer 1 with physical connections, or start at higher layers in the TCP/IP protocol stack. Physical connections require a cable, a fiber, or a wireless link, and for all practical purposes, such physical connections are local in nature: you cannot really plug in a Cat 6 cable without being there to do it. By contrast, remote connections are those that skip past the Physical layer and start the connection process at higher layers of the protocol stack. These might also be called logical connections, since they assume the physical connection is provided by a larger network, such as the Internet itself.
Let's explore these two ideas by seeing them in action. Suppose you're sitting at a local coffee house, using your smartphone or laptop to access the Internet via their free Wi-Fi customer network. You start at the Physical layer (via the Wi-Fi), which then asks for access at the Data Link layer. You don't get Internet services until you've made it to Layer 3, probably by using an app like your browser to supply the “free Wi-Fi” password and your email address or customer ID as part of the logon process. At that point, you can start doing the work you want to do, such as checking your email, using various Transport layer protocols or Application layer protocols like HTTPS. The connection you make to your bank or email server is a remote connection, isn't it? You've come to their access portal by means of traffic carried over the Internet, not via a physical connection of your own to their systems.
Network access control is a fundamental and vital component of operating any network large or small. Without network access control, every resource on your network is at risk of being taken away from you and used or corrupted by others. The Internet connectivity you need, for business or pleasure, won't be available if your neighbor is using it to stream their own videos; key documents or files you need may be lost, erased, corrupted, or copied without your knowledge. “Cycle-stealing” of CPU and GPU time on your computers and other devices may be serving the needs of illicit crypto-currency miners, hackers, or just people playing games. You lock the doors and windows of your house when you leave because you don't want uninvited guests or burglars to have free and unrestricted access to the network of rooms, hallways, storage areas, and display areas for fine art and memorabilia that make up the place you call home. (You do lock up when you leave home, don't you?) By the same token, unless you want to give everything on your network away, you need to lock it up and keep it locked up, day in and day out.
Network access control (NAC) is the set of services that give network administrators the ability to define and control what devices, processes, and persons can connect to the network or to individual subnetworks or segments of that network. It is usually a distributed function involving multiple servers within a network. A set of NAC protocols define ways that network administrators translate business CIANA+PS needs and policies into compliance filters and settings. Some of the goals of NAC include:
At its heart, network access control is a service provided to multiple devices and other services on the network; this establishes many client-server relationships within most networks. It's important to keep this client-server concept in mind as we dive into the details of making NAC work.
A quick perusal of that list of goals suggests that an organization needs to define and manage all of the names of people, devices, and processes (all of which are called subjects in access control terms) that are going to be allowed some degree of access to some set of information resources, which we call objects. Objects can be people, devices, files, or processes. In general, an access control list (ACL) is the central repository of all the identities of subjects and objects, as well as the verification and validation information necessary to authenticate an identity and to authorize the access it has requested. By centralized, we don't suggest that the entire ACL has to live on one server, in one file; rather, for a given organization, one set of cohesive security policies should drive its creation and management, even if (especially if!) it is physically or logically segmented into a root ACL and many subtree ACLs.
Network access control is an example of the need for an integrated, cohesive approach to solving a serious problem. Command and control of the network's access control systems is paramount to keeping the network secure. Security operations center (SOC) dashboards and alarm systems need to know immediately when attempts to circumvent access control exceed previously established alarm limits so that SOC team members can investigate and respond quickly enough to prevent or contain an intrusion.
IEEE 802.1X provides a port-based standard by which many network access control protocols work, and does this by defining EAP over LAN (EAPOL), an encapsulation of the Extensible Authentication Protocol (EAP). EAPOL was initially created for use in Ethernet (wired) networks, but was later extended and clarified to support wired and wireless device access control, as well as the Fiber Distributed Data Interface (ISO standard 9314-2). Further extensions provide for secure device identity and point-to-point encryption on local LAN segments.
This standard has been implemented in every version of Microsoft Windows since Windows XP, in Apple Macintosh systems, and in most distributions of Linux.
EAPOL defines a four-step authentication handshake, the steps being initialization, initiation, negotiation, and authentication. We won't go into the details here, as they are beyond the scope of what SSCPs will typically encounter (nor are they detailed on the exam), but it's useful to know that this handshake needs to use what the standard calls an authenticator service. This authenticator might be a RADIUS client (more on that in a minute) or almost any other IEEE 802.1X-compatible authenticator, many of which can function as RADIUS clients.
Let's look a bit more closely at a few key concepts that affect the way NAC as systems, products, and solutions is often implemented.
Remote Authentication Dial-In User Service (RADIUS) provides the central repository of access control information and the protocols by which access control and management systems can authenticate, authorize, and account for access requests. Its name reflects its history, but don't be fooled—RADIUS is not just for dial-in, telephone-based remote access to servers, either by design or use. It had its birth at the National Science Foundation, whose NSFNet was seeing increasing dial-up customer usage and requests for usage. NSF needed the full AAA set of access control capabilities—authentication, authorization, and accounting—and in 1991 asked industry and academia to propose ways to integrate its collection of proprietary, in-house systems. From those beginnings, RADIUS has developed to the point where commercial and open source server products exist and have been incorporated into numerous architectures. These server implementations support building, maintaining, and using that central access control list that we discussed earlier.
Without going into the details of the protocols and handshakes, let's look at the basics of how endpoints, network access servers, and RADIUS servers interact and share responsibilities:
RADIUS also supports roaming, which is the ability of an authenticated endpoint and user to move from one physical point of connection into the network to another. Mobile device users, mobile IoT, and other endpoints “on the move” typically cannot tolerate the overhead and wall-clock time consumed to sign in repeatedly, just because the device has moved from one room or one hotspot to another.
RADIUS, used by itself, has some known security issues. Most of these are overcome by encapsulating the RADIUS access control packet streams within more secure transport, much as HTTPS (and PKI) provides a more secure use of HTTP. When this is not sufficient, organizations need to look to other AAA services such as Terminal Access Controller Access-Control System Plus (TACACS+) or Microsoft's Active Directory.
Once a requesting endpoint and user subject have been allowed access to the network, other access control services such as Kerberos and Lightweight Directory Access Protocol (LDAP) are used to further protect information assets themselves. For example, as a student you might be granted access to your school's internal network, from which other credentials (or permissions) control your use of the library, entry into online classrooms, and so forth; they also restrict your student logon from granting you access to the school's employee-facing HR information systems.
A further set of enhancements to RADIUS, called Diameter, attempted to deal with some of the security problems pertaining to mobile device network access. Diameter has had limited deployment success in the 3G (third-generation) mobile phone marketplace, but inherent incompatibilities still remain between Diameter and network infrastructures that fully support RADIUS.
The Terminal Access Controller Access Control System (TACACS, pronounced “tack-axe”) grew out of early Department of Defense network needs for automating authentication of remote users. By 1984, it started to see widespread use in Unix-based server systems; Cisco Systems began supporting it and later developed a proprietary version called Extended TACACS (XTACACS) in 1990. Neither of these were open standards. Although they have largely been replaced by other approaches, you may see them still being used on older systems.
TACACS+ was an entirely new protocol based on some of the concepts in TACACS. Developed by the Department of Defense as well, and then later enhanced, refined, and marketed by Cisco Systems, TACACS+ splits the authentication, authorization, and accounting into separate functions. This provides systems administrators with a greater degree of control over and visibility into each of these processes. It uses TCP to provide a higher-quality connection, and it also provides encryption of its packets to and from the TACACS+ server. It can define policies based on user type, role, location, device type, time of day, or other parameters. It integrates well with Microsoft's Active Directory and with LDAP systems, which means it provides key functionality for single sign-on (SSO) capabilities. TACACS+ also provides greater command logging and central management features, making it well suited for systems administrators to use to meet the AAA needs of their networks.
The most critical step in implementing, operating, and maintaining identity and access management (IAM) systems is perhaps the one that is often overlooked or minimized. Creating the administrative policy controls that define information classification needs, and linking those needs to effective job descriptions for team members, managers, and leaders alike, has to precede serious efforts to plan and implement identity and access management. As you saw in Chapters 3 and 4, senior leaders and managers need to establish their risk tolerance and assess their strategic and tactical plans in terms of information and decision risk. Typically, the business impact analysis (BIA) captures leadership's deliberations about risk tolerance and risk as it is applied to key objectives, goals, outcomes, processes, or assets. The BIA then drives the vulnerability assessment processes for the information architecture and the IT infrastructure, systems, and apps that support it.
Assuming your organization has gone through those processes, it's produced the information classification guidelines, as well as the administrative policies that specify key roles and responsibilities you'll need to plan for as you implement an IAM set of risk mitigation controls:
Two more major decisions need to be made before you can effectively design and implement an integrated access control strategy. Each reflects in many ways the decision-making and risk tolerance culture of your organization, while coping with the physical realities of its information infrastructures. The first choice is whether to implement a centralized or decentralized access control system:
The next major choice that needs to be made reflects whether the organization is delegating the fine-grained, file-by-file access control and security policy implementation details to individual users or local managers, or is retaining (or enforcing) more global policy decisions with its access control implementation:
Having made those decisions, based on your organization's administrative security policies and information classification strategies, and with roles and responsibilities assigned, you're ready to start your IAM project.
Almost every device on your organization's networks (and remember, a device can be both subject and object) has an operating system and other software (or firmware) installed on it. For example, Microsoft Windows operating systems provide policy objects, which are software and data constructs that the administrators use to enable, disable, or tune specific features and functions that the OS provides to users. Such policies can be set at the machine, system, application, user, or device level, or for groups of those types of subjects. Policy objects can enforce administrative policies about password complexity, renewal frequency, allowable number of retries, lockout upon repeated failed login attempts, and the like. Many Linux distributions, as well as Apple's operating systems, have very similar functions built into the OS. All devices ship from the factory with most such policy objects set to “wide open,” you might say, allowing the new owner to be the fully authorized systems administrator they need to be when they first boot up the device. As administrator/owners, we're highly encouraged to use other built-in features, such as user account definitions and controls, to create “regular” or “normal” user accounts for routine, day-to-day work. You then have the option of tailoring other policy objects to achieve the mix of functionality and security you need.
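As a sketch of what one such policy looks like once it is written down for a machine to enforce, consider password complexity and lockout rules. The settings and names below are invented; they are not the actual Windows, macOS, or Linux policy object identifiers:

```python
# Invented policy settings; actual OS policy objects use their own names and stores.
PASSWORD_POLICY = {
    "min_length": 12,
    "require_mixed_case": True,
    "require_digit": True,
    "max_failed_attempts": 5,    # lock the account after this many failures
}

def password_meets_policy(candidate: str) -> bool:
    if len(candidate) < PASSWORD_POLICY["min_length"]:
        return False
    if PASSWORD_POLICY["require_mixed_case"] and not (
        any(c.islower() for c in candidate) and any(c.isupper() for c in candidate)
    ):
        return False
    if PASSWORD_POLICY["require_digit"] and not any(c.isdigit() for c in candidate):
        return False
    return True

def should_lock_out(failed_attempts: int) -> bool:
    return failed_attempts >= PASSWORD_POLICY["max_failed_attempts"]

print(password_meets_policy("Tr0ub4dour&Horse"))  # True
print(should_lock_out(5))                         # True: lock the account
```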
For a small home or office LAN, using the built-in capabilities of each device to implement a consistent administrative set of policies may be manageable. But as you add functionality, your “in-house sysadmin” job jar starts to fill up quickly. That new NAS or personal cloud device probably needs you to define per-user shares (storage areas) and specify what each family user can do with each of them. And you certainly don't want the neighbors next door to be able to see that device, much less the existence of any of the shares on it! If you're fortunate enough to have a consistent base of user devices—everybody in the home is using a Windows 10 or macOS Mojave laptop, and they're all on the same make and model of smartphone—then you can think through the set of policy object settings once and copy (or push) them to each laptop or phone. At some point, keeping track of all of those settings overwhelms you. You need to centralize. You need a server that can help you translate administrative policies into technical policies, and then have that server treat all of the devices on your network as clients.
Before we look at a client-server approach to IAM, let's look at one more built-in feature in the current generation of laptops, tablets, smartphones, and phablets, which you may (or may not) wish to utilize “straight from the shrink wrap.”
As organizations scale out their IT and OT infrastructures to include other resources and organizations, they need to be able to automate and control how these systems exchange identity, authentication, and authorization information with each other. RADIUS, as we've seen, provides one approach to delivering these sets of functions. Let's take a brief look at some of the others, which you may need to delve into more fully if your organization is using any of them.
The Lightweight Directory Access Protocol (LDAP) is based on the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) X.500 standard, which came into effect in 1988. This is actually a set of seven protocols that together provide the interfaces and handshakes necessary for one system to query and retrieve information from another system's directory of subjects and objects. DAP, the original protocol, was constructed around the OSI 7-layer protocol stack, and thus other implementations of the same functions were needed to support TCP/IP's protocol stack; LDAP has proven to be the most popular in this respect. It works in conjunction with other protocol suites, notably X.509, to establish trust relationships between clients and servers (such as confirmation that the client reached the URL or URI that they were intending to connect with). We'll look at this process in more detail in Chapter 12, “Cross-Domain Challenges.”
The Security Assertion Markup Language (SAML), published by OASIS, is a community-supported open standard for using extensible markup language (XML) to make statements or assertions about identities. These assertions are exchanged between security domains, such as access control systems belonging to separate organizations, or between access control systems and applications platforms. SAML 2.0, the current version, was published in 2005, with a draft errata update released in 2019. As an XML-based language, SAML is human-readable; SAML assertions flow via HTTP (preferably HTTPS) over the Internet. These assertions provide information about an identity and any conditions that may apply to it.
XACML, the Extensible Access Control Markup Language, was designed to support SAML's basic authentication processes, and as such the two languages (and the rules for using them) are very strongly related. XACML is not bound to SAML, however; it can just as easily be used with other access control systems, especially when security policies require a finer level of detail than normally supported by SAML alone.
The IETF created the Open Authorization (OAuth) framework as a way for third-party applications to obtain limited access to an HTTP service. It's built around four basic roles:
OAuth eliminated the need to send usernames and passwords to the third-party system. OAuth 2.0, the current version (since 2012), has been made transport-dependent: it uses TLS, which forces it to be used via HTTPS rather than insecure HTTP. The access tokens are encrypted during transit as an additional precaution.
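Here is a sketch of the token exchange at the heart of the OAuth 2.0 authorization-code flow, assuming the widely used Python requests library and a placeholder authorization server URL. Note that the client authenticates with its own credentials and handles only the short-lived code and the resulting token, never the resource owner's password:

```python
import requests  # third-party HTTP library, assumed to be installed

TOKEN_URL = "https://auth.example.com/oauth/token"   # placeholder authorization server

def exchange_code_for_token(auth_code, client_id, client_secret, redirect_uri):
    """Swap a one-time authorization code for an access token (RFC 6749, section 4.1.3)."""
    response = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "authorization_code",
            "code": auth_code,
            "redirect_uri": redirect_uri,
        },
        auth=(client_id, client_secret),   # the client proves its own identity
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["access_token"]

# token = exchange_code_for_token("SplxlOBeZQQYbYS6WxSbIA", "my-client-id",
#                                 "my-client-secret", "https://app.example.com/callback")
```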
The System for Cross-domain Identity Management (SCIM) provides protocols used by cloud-based systems, applications, and services to streamline and automate the provisioning of identities across multiple applications. Larger-scale applications platforms, such as Salesforce, Microsoft's O365, and Slack, use SCIM. In a deployment built around Microsoft's Active Directory, for example, the initial user provisioning process creates a specific type of AD record, which has a SCIM connector associated with it. This gives that new user access to all SCIM-enabled applications. When the user no longer requires any of those access privileges (such as when they leave the organization), one action in the central directory to terminate that user's ID terminates their access to all of those SCIM applications.
As mentioned at the start of this chapter, authentication of a subject's claim to an identity may require multiple steps to accomplish. We also have to separate this problem into two categories of identities: human users, and everything else. First, let's deal with human users. Traditionally, users have gained access to systems by using or presenting a user ID (or account ID) and a password to go with it. The user ID or account ID is almost public knowledge—there's either a simple rule to assign one based on personal names or they're easily viewable in the system, even by nonprivileged users. The password, on the other hand, was intended to be kept secret by the user. Together, the user ID and password are considered one factor, or subject-supplied element in the identity claim and authentication process.
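As a sketch of how that shared secret is typically stored and checked on the system side, using only Python's standard library, the system keeps a random salt and a slow, salted hash of the password rather than the password itself:

```python
import hashlib
import hmac
import os

def make_password_record(password: str):
    """Store a random salt and a slow, salted hash; never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def password_is_valid(password: str, salt: bytes, stored_digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored_digest)   # constant-time comparison

salt, digest = make_password_record("correct horse battery staple")
print(password_is_valid("correct horse battery staple", salt, digest))  # True
print(password_is_valid("wrong guess", salt, digest))                   # False
```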
In general, each type of factor is something that the user has, knows, or is; this applies to single-factor and multifactor authentication processes:
Use of each factor is subject to false positive errors (acceptance of a presented factor that is not the authentic one) and false negative errors (rejection of authentic factors), and each factor can be something that legitimate users forget or misplace (such as a password, or a second-factor authentication device or card left at home). As you add more factors to user sign-on processes, you add complexity and costs. User frustration can also increase with additional factors being used, leading to attempts to cheat the system.
There is also a potential privacy concern with all of these factors. In order for authentication systems to work, the system has to have a reference copy of the documents, the information, or the biometric measurements. Access to these reference copies needs to be controlled and accounted for, for any number of legal and ethical reasons. It might seem obvious that the reference copies be stored in an encrypted form, and then have the endpoint device that accepts this information encrypt it for transmission to the identity management system for comparison with the encrypted copies on file. This may make it difficult or impossible to determine whether the endpoint's data has an acceptable amount of error in it (the document was not properly aligned with the scanner, or the finger was not aligned the same way on the fingerprint reader). As an SSCP, you do not need to know how to solve these problems, but you should be aware of them and take them into consideration as you plan for identity authentication.
All of the foregoing applies whether your systems are using single-factor or multifactor authentication processes.
Multifactor authentication requires the use of more than one factor in authenticating the legitimacy of the claimed identity. The underlying presumption is that with more factors being checked, the likelihood that the subject's claim to the identity is invalid decreases.
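A common second factor, something the user has, is a time-based one-time password (TOTP) generated by an authenticator app. The sketch below follows the published RFC 6238 construction using only the standard library; the shared secret is an example value. Both ends compute the code independently from that secret and the current time, so an intercepted code is useless within seconds:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(shared_secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 over a time counter)."""
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // interval
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # example shared secret; the server verifies by computing the same value
```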
Three cautions may be worth some attention at this point with regard to the use of built-in biometric and image identification systems in the current generations of laptops, phablets, and smartphones.
First, these may be challenging to scale, if your organization needs to allow for roaming profiles (which enable the same user to log on from different devices, perhaps even in different locations around the world).
Second, there's the risk that a third party could compel or inveigle your user into using the biometrics to complete an access attempt. Legally, a growing number of jurisdictions have the authority to compel someone to unlock devices in their possession, such as when crossing borders. Pickpockets, too, have been known to steal someone's smartphone, immediately try to unlock it, and turn and point the camera at its owner to complete the photo-based authentication challenge. Although many businesses may never have to worry about these concerns, the one that you work for (or help create) just might.
Finally, we must consider that as with any instrumentation or control system and process, errors do happen. The false negative, false rejection, or Type 1 error, happens when a legitimate, trusted access request by a subject is denied in error. Type 2 errors, also known as false acceptance or false positive errors, occur when an unauthorized or unrecognized subject is mistakenly allowed access. Biometric authentication technologies, for example, must frequently cope with errors induced by their users' physical health, ambient noise, lighting, or weather conditions, or electrical noise that affects the sensors at the endpoint device. The important question becomes how much error in today's measurements you can tolerate, when compared to the on-file (baseline) biometric data, before you declare that the readings do not match the baseline:
Figure 6.4 illustrates the general concept of FAR and FRR, showing how increasing the sensitivity of a sensor (such as a biometric device) may lower the false acceptance rate but raise the false rejection rate. Note that FAR and FRR apply equally to any sensor or measurement technology; errors are present in everything.
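FAR and FRR are simply rates computed from labeled genuine and impostor attempts, which makes the trade-off easy to see numerically. This is a minimal sketch with invented counts; a real evaluation would sweep the sensor's sensitivity threshold:

```python
def far(false_accepts: int, impostor_attempts: int) -> float:
    """False acceptance rate: impostors wrongly admitted (Type 2 errors)."""
    return false_accepts / impostor_attempts

def frr(false_rejects: int, genuine_attempts: int) -> float:
    """False rejection rate: legitimate users wrongly turned away (Type 1 errors)."""
    return false_rejects / genuine_attempts

# Invented counts at two sensitivity settings of the same sensor:
print(far(8, 1_000), frr(5, 1_000))    # looser setting: FAR 0.8%, FRR 0.5%
print(far(1, 1_000), frr(30, 1_000))   # stricter setting: FAR 0.1%, FRR 3.0%
```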
This visualization of FAR and FRR is often over-emphasized, and caution is advised when talking about a crossover error point or rate measure. This figure is a control's-eye view, you might say: it only sees the risk management issue from the point of view of the device and its operator. Organizational managers need to look at a much larger trade space, one that looks at all of the access control techniques being used and then considers FAR and FRR in aggregate. Figure 6.5 illustrates this problem.
From management's perspective, every step increase in FRR introduces friction into legitimate, necessary, value-creating activities: work done by employees or members of the organization, or business conducted with customers, prospects, or suppliers, can take longer. Since FAR and FRR are inversely related, every step increase in protecting the organization against intruders gaining access due to errors in the IAM systems causes friction that hinders those legitimate user activities. The intrusion might happen, think these managers; the friction will happen, every time, as soon as the risk-based sensitivity control is turned up.
As a result, the sweet spot in the trade space is generally not at the CER point; instead, it lies somewhere to one side of it, reflecting the organization's larger sense of risk tolerance and costs. Managers may also shift the preferred FAR-versus-FRR operating point depending on the asset or process being protected.
In the vast majority of IT infrastructures, companies and organizations turn to server-based identity management and access control systems. They scale much more easily than node-by-node, device-by-device attempts at solutions, and they often provide significantly greater authentication, authorization, and accounting functions in the bargain. Although seemingly more complex, they are actually much easier to configure, operate, maintain, and monitor. Let's take a closer look.
Conceptually, an identity management and access control system provides a set of services to client processes, using a centralized repository to support authentication of identity claims and grant or deny access, and accounting for successful and unsuccessful attempts at access. Different systems designs may use one server or multiple servers to perform those functions. These servers can of course either be dedicated hardware servers, be job streams that run on hardware servers along with other jobs (such as print sharing or storage management), or be running on virtual machines in a public, private, or hybrid cloud environment. In any case, careful attention must be paid to how those servers are connected to each other, to the rest of your networks and systems, and to the outside world.
In particular, notice that different access control systems are modeled around different transmission protocols. As you saw in Chapter 5, UDP and TCP offer systems designers very different error detection and correction capabilities. RADIUS is an example of an access control system built around UDP, so its basic flow of control and data is exposed to datagram loss or error unless the application compensates. TACACS+, and systems based on its design, are built around TCP, which provides better control over error detection and retransmission.
On the other hand, different access control designs provide different mixes of authentication, authorization, and accountability functionality. RADIUS implementations tend to provide richer accounting of access activities than TACACS, for example.
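The transport difference can be seen in a short sketch. This is not real RADIUS or TACACS+ traffic, just plain sockets aimed at a placeholder address, but it shows why a UDP-based protocol must carry its own timeout-and-retry logic while a TCP-based one inherits retransmission from the transport layer.

```python
# A minimal sketch (not real RADIUS or TACACS+ traffic) contrasting the two
# transports. The server address is a documentation placeholder and is
# assumed to be unreachable here.
import socket

AAA_SERVER = ("192.0.2.10", 1812)   # RADIUS authentication conventionally uses UDP 1812

# UDP (RADIUS-style): fire a datagram, then retry if no reply arrives in time.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.settimeout(2.0)
for attempt in range(3):
    udp.sendto(b"access-request (placeholder payload)", AAA_SERVER)
    try:
        reply, _ = udp.recvfrom(4096)
        break                        # got an answer; stop retrying
    except socket.timeout:
        print(f"UDP attempt {attempt + 1}: no reply, retrying")
udp.close()

# TCP (TACACS+-style): once connect() succeeds, loss and reordering are
# handled by the transport layer rather than by the AAA application.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.settimeout(2.0)
try:
    tcp.connect(("192.0.2.10", 49))  # TACACS+ conventionally uses TCP port 49
    tcp.sendall(b"authentication START (placeholder payload)")
except OSError as err:
    print("TCP connection failed:", err)
finally:
    tcp.close()
```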
Server-based IAM systems (integrated or not) may also make use of multiple information repositories, as well as multiple servers performing some or all of the AAA tasks. This is particularly helpful in enterprise architectures, where an organization might have business units in multiple locations around the globe. Performance, reliability, and availability would dictate a local IAM server and repository, which synchronizes with the repositories at other locations as often as business logic requires it to.
As organizations grow more complex in their information needs, they usually need more powerful ways to bring together different aspects of their identity management and access control systems. A typical mid-sized company might need any number of specific platforms for logically separated tasks, such as human resources management, finance and accounting, customer relations management, and inventory. In the past, users had to first sign on to their local client workstation, then sign on to the corporate intranet, and then present yet another set of credentials to access and use each software platform and the data associated with it. Each application might have been built by a different vendor, and each might use a different approach to end-user identification, authentication, and access authorization. When the business further expands and needs to share information resources or provide (limited subsets of) platform access to partners, clients, or vendors, its identity and access management functions become more complicated. We need to share authorization information across related but separate applications, platforms, and systems, including systems that aren't under our direct control or management.
One approach is to use a directory system as the repository for identity authentication and access authorization information (or credentials), and then ensure that each time an application needs to validate an access request or operation, it uses that same set of credentials. This requires a server for that repository and an interface by which client systems can request such services. The International Telecommunication Union (ITU) first published the X.500 Directory Specification in the late 1980s, and since then it has become the standard used by almost all access control and identity management systems. It included a full-featured Directory Access Protocol (DAP), which needed all of the features of the OSI 7-layer protocol stack. Broader use of X.500 concepts by TCP/IP implementations was spurred by the development of the Lightweight Directory Access Protocol (LDAP) at the University of Michigan.
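A minimal sketch of the directory-as-repository idea, using the third-party ldap3 package: an application binds to the shared directory and pulls the attributes it needs for an access decision. The server name, bind DN, password, and directory layout are hypothetical placeholders.

```python
# A minimal sketch of an application consulting a shared LDAP directory for
# identity attributes, using the third-party ldap3 library. All names below
# are hypothetical placeholders.
from ldap3 import Server, Connection, ALL

server = Server("ldaps://directory.example.com", get_info=ALL)
conn = Connection(server,
                  user="cn=svc-iam,ou=service,dc=example,dc=com",
                  password="CHANGE-ME",
                  auto_bind=True)

# Look up one user's entry and the group memberships that drive access decisions.
conn.search(search_base="ou=people,dc=example,dc=com",
            search_filter="(uid=jdoe)",
            attributes=["cn", "mail", "memberOf"])

for entry in conn.entries:
    print(entry.cn, entry.mail, entry.memberOf)

conn.unbind()
```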
Single sign-on (SSO) was the first outgrowth of needing to allow one user identity, with one set of authenticated credentials, to access multiple, disparate systems to meet organizational needs. SSO is almost taken for granted in the IT world; cloud-based service providers that do not support an SSO capability often find themselves at a competitive disadvantage. Critics observe, however, that if the authentication servers are not working properly (or aren't available), then the SSO request fails and the user can do nothing. This may prompt some organizations to ensure that each major business platform they depend on has its own sign-on capability, supported by a copy of the central authentication server and its repository. SSO implementations also require the SSO server to internally store the authenticated credentials and reformat or repackage them to meet the differing needs of each platform or application as required. Because of this, SSO is sometimes called reduced sign-on.
We started this chapter with the need to separate the entity from the identities (yes, plural) that it uses and to further separate those identities from the credentials that attest to the validity of those identities in particular contexts (such as accessing resources). In recent years it's become apparent that web-based identity systems must carry these concepts forward in their implementations and use.
OpenID Connect, developed by the OpenID Foundation, provides for a separation of the entity layer from the identity layer. OpenID Connect builds on the standards and toolkits provided by its predecessor, OpenID 2.0, and has moved to JSON-based technologies (specifically JSON itself and the JSON Web Token, or JWT), which make OpenID Connect both more interoperable with other systems and more developer-friendly.
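For a sense of what a relying party actually does with an OpenID Connect ID token, here is a minimal sketch using the third-party PyJWT package: verify the JWT's signature and its standard claims before trusting anything inside it. The token string, public key, issuer URL, and client ID are placeholders, not values from any real provider.

```python
# A minimal sketch of ID token validation at a relying party, using the
# third-party PyJWT package. All values below are placeholders.
import jwt  # PyJWT

id_token = "<base64url-encoded JWT from the provider>"
provider_public_key = "<provider's RSA public key in PEM form>"

try:
    claims = jwt.decode(
        id_token,
        provider_public_key,
        algorithms=["RS256"],                    # never accept "none"
        audience="my-client-id",                 # must match this application
        issuer="https://login.example.com",      # must match the provider
    )
    print("Authenticated subject:", claims["sub"])
except jwt.InvalidTokenError as err:
    print("Reject the sign-on attempt:", err)
```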
A number of third-party solutions now provide identity as a service (IDaaS): cloud-hosted, subscription-based identity management and access control capabilities. Some of these product offerings are positioned toward larger organizations, with 500 or more users' worth of identity and access information needing to be managed. When the vendors in question have well-established reputations in the identity and access management marketplace, using IDaaS may be a worthwhile alternative to developing and fielding your own in-house solutions (even if your chosen server architectures end up being cloud-based). This marketplace is almost 10 years old at this writing, so there should be a rich vein of lessons learned to pore over as you and your organization consider such an alternative.
IDaaS should not be confused with digital identity platforms, such as those provided by signing in with a Microsoft, Google, or other account. These digital identity platforms can provide alternate ways to authenticate a user, but you should be cautious: you're trusting that the digital identity platform has done its job of proofing the identity information provided by the user to the degree that your information security needs require.
Generally speaking, a federated system is one built up from stand-alone systems that collaborate with each other in well-defined ways. In almost every industry, federations of businesses, nonprofit or civic organizations, and government agencies are created to help address shared needs. These federations evolve over time as needs change, and many of them fade away when needs change again. Federated identity management and access control systems can serve the needs of those organizational federations when they require identities to be portable across the frontiers between their organizations and their IT infrastructures.
Federated identity management systems provide mechanisms for sharing identity and access information, which makes identity and access portable, allowing properly authorized subjects to access otherwise separate and distinct security domains. Federated access uses open standards, such as the OASIS Security Assertion Markup Language (SAML), and technologies such as OAuth, OpenID, various security token approaches, Web service specifications, Windows Identity Foundation, and others. Microsoft's Active Directory Federated Services (ADFS), which runs on Windows Server systems, can provide SSO capabilities across a mixed environment of Windows and non-Windows systems, for example, by using SAML 2.0-compliant federation services as partners. Federated access systems typically use Web-based SSO for user access (which is not to be confused with SSO within an organization's systems). Just as individual platform or system access is logically a subset of SSO, SSO is a subset of federated access.
One outgrowth of federated IAM approaches has been to emphasize the need for better, more reliable ways for entities to be able to assert their identity as a part of an e-business transaction or operation. Work to develop an identity assurance framework is ongoing, and there are efforts in the US, UK, and a few other nations to develop standards and reference models to support this.
We've now got most of the building blocks in place to look more closely at what happens in the Session layer, or Layer 5, of the OSI 7-Layer internetworking model. From a user's perspective, a session consists of all the activities that they conduct with a server, between (and including) their initial connection or login to that server and ending with (and including) their termination of that session.
Looking at this in integrated identity and access management terms, a session is defined as the set of interactions between a subject and the system resources it has been granted access to, beginning when that access is authenticated and authorized and ending when the session is terminated by the subject or the system, or expires.
You'll note that this definition leaves the purpose of the session, or the accomplishment of the user's purposes, goals, or objectives, purely to the application layer to address. Session management helps create a session, keeps it running safely and securely, and tears it down when the session is ended or expires; what happens during the session is not, in general, the session manager's concern. It can also provide for tracking and keeping secure the various service requests made by a subject.
Session management requires coordination between the IAM elements on all servers and client devices involved in the session; normally, these will use X.509 certificates as a way of passing standardized data regarding identities. (We'll look at these and most of the other cryptologic functions needed in session management in more detail in Chapter 7, “Cryptography”). Briefly, the steps in session management are as follows:
Many systems use cookies to keep track of the conduct of a session, and these often store the session ID as part of one or more cookies. Attackers who can capture a session cookie as it is transmitted to the subject's device, or recover it from temporary storage on that device, may be able to resubmit that session ID, which allows the attacker to impersonate the subject and reconnect to the server. This session replay attack is quite common, thanks to less-than-secure implementations of session management functions by many web apps. It rates second place on the Open Web Application Security Project's (OWASP's) Top Ten list of vulnerabilities, where it's shown as broken or vulnerable authentication and session management.
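Much of the defense against session replay comes down to token hygiene. The sketch below, using only the Python standard library, generates an unpredictable session ID and marks its cookie so that it travels only over TLS, is hidden from page scripts, and expires quickly; the 15-minute lifetime is an illustrative assumption, not a recommendation from this chapter.

```python
# A minimal sketch of session-token hygiene aimed at the replay problem just
# described. The 15-minute lifetime is an illustrative choice only.
import secrets
from http.cookies import SimpleCookie

session_id = secrets.token_urlsafe(32)   # ~256 bits of unpredictability

cookie = SimpleCookie()
cookie["session_id"] = session_id
cookie["session_id"]["secure"] = True        # sent over TLS only
cookie["session_id"]["httponly"] = True      # not readable by page scripts
cookie["session_id"]["samesite"] = "Strict"  # resist cross-site replay
cookie["session_id"]["max-age"] = 15 * 60    # short-lived; force re-authentication

# This is the header a web app would emit; the server still has to bind the
# ID to the authenticated subject and invalidate it at logoff or expiry.
print(cookie.output())
```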
Named after the three-headed watchful guard dog of Greek mythology, this approach for indirect authentication (using the services of a third party) has become a de facto standard across much of the industry. Microsoft has made it a mainstay of its systems, and it is used by many Linux platforms as well. It is built into many applications platforms and other products and provides a broad base for single sign-on (SSO) capabilities. It also uses encryption processes (which we'll look at in Chapter 7), but it uses them to provide authentication in a different way than other systems. Kerberos uses a ticket as its access control token, which when combined with a session key provides the authentication to the servers controlling the resources the subject seeks to access. This can be summarized in its six-step process model, shown here:
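As a paraphrase of that exchange (not a reproduction of the figure), the six steps can be outlined as follows; the message names are those used in RFC 4120.

```python
# A paraphrase of the canonical Kerberos exchange; message names follow
# RFC 4120. All wording here is a summary for orientation only.
STEPS = [
    "1. AS-REQ : the client asks the Authentication Server (AS) in the KDC "
    "for a ticket-granting ticket (TGT), naming the user principal",
    "2. AS-REP : the AS returns the TGT plus a session key, protected so that "
    "only the legitimate user (via their long-term key) can recover it",
    "3. TGS-REQ: the client presents the TGT and an authenticator to the "
    "Ticket-Granting Service (TGS), asking for a ticket to a specific service",
    "4. TGS-REP: the TGS returns a service ticket and a new client/service "
    "session key",
    "5. AP-REQ : the client presents the service ticket and a fresh "
    "authenticator to the resource server it wants to use",
    "6. AP-REP : (optional mutual authentication) the service proves its own "
    "identity back to the client; the session can now proceed",
]
for step in STEPS:
    print(step)
```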
One of the distinctive features of Kerberos is that it provides its own key distribution center (KDC), and this means that the KDC can be a single point of failure in federated systems using Kerberos. The KDC must also be kept secured and isolated from any other non-Kerberos-related tasks. (Sadly, exploits against print services, turned on by default in many as-shipped copies of Windows Server and other server systems, provide a common backdoor into some Kerberos and other SSO architectures.) As with other SSO architectures, options exist to provide redundant or distributed support to critical single-point functions like the KDC, but at added complexity (and potentially bringing other risks to the system as a result).
Earlier we looked at the common definition of a credential as a formal authentication that an identity is what it claims to be. More formally, security professionals talk about credentials as being a binding (or encapsulation) of all the relevant data about an identity and an authenticator, which is created to support the subject's assertion that their claim to that identity is valid and meets or exceeds the authentication criteria of the system(s) the subject wants to use that identity with. Credentials can take many forms:
Credentials have their own lifecycle. They are usually generated by a credential service provider (CSP), while credential management systems (CMSs) facilitate the transfer and use of credentials in providing access. The credential lifecycle, first published by NIST as part of the US government's Federal Identity, Credential, and Access Management (FICAM) Architecture in 2009, consists of the following steps:
As identity as a service became more widespread, NIST, ISO, and others worked to update these process models to better reflect the use of cryptography in the generation, sharing, authentication, and use of credentials. The cryptographic details will be looked at in Chapter 7; as a credential management process, Figure 6.6, taken from NIST SP 800-63, outlines the updated model.
This figure shows the same entity—the subject or user—taking on three different roles or functions throughout the process. Let's start on the left side of the figure with the applicant. As an applicant, the user enrolls with a credential services provider such as LinkedIn, Facebook, or Office 365. That CSP issues the applicant their authenticator for the credential being issued to them. This enables the applicant to change roles, morphing into a subscriber of that CSP. At some time later, the user desires to use that credential to connect to a service provider, such as their online bank or a merchant. The user now takes on the role of being a claimant.
This server, known as a relying party, takes the authentication information provided by the claimant and passes it to a verifier for third-party verification. That verification service interrogates the CSP, which returns validation that the authentication and credential are properly bound and may provide other attributes if need be. The verifier passes this back to the relying party, which can now make the final decision to establish a secure session with the subscriber-claimant.
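The sketch below walks through that flow with hypothetical stand-in functions for the CSP, the verifier, and the relying party; a real deployment would carry these exchanges over a protocol such as SAML or OpenID Connect rather than direct function calls.

```python
# A minimal sketch of the verification flow just described. The function and
# variable names are hypothetical stand-ins for the roles in the model.

CSP_RECORDS = {"subscriber-042": {"authenticator": "otp-device-9f3",
                                  "attributes": {"email": "user@example.com"}}}

def csp_validate(credential_id, authenticator):
    """Credential service provider: is this authenticator bound to this credential?"""
    record = CSP_RECORDS.get(credential_id)
    if record and record["authenticator"] == authenticator:
        return {"valid": True, "attributes": record["attributes"]}
    return {"valid": False}

def verifier_check(claim):
    """Verifier: interrogate the CSP on behalf of the relying party."""
    return csp_validate(claim["credential_id"], claim["authenticator"])

def relying_party_session(claim):
    """Relying party: open a session only if the verifier confirms the binding."""
    result = verifier_check(claim)
    return "session established" if result["valid"] else "access denied"

# The claimant presents the credential it was issued as a subscriber.
claim = {"credential_id": "subscriber-042", "authenticator": "otp-device-9f3"}
print(relying_party_session(claim))
```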
As you might suspect, there are many details below the level of this model that make it all work, but we won't go into those here.
One of the key considerations in federating access between or across systems is the way that trust relationships do or do not transfer. One example might be a humanitarian relief operation that involves a number of nonprofit, nongovernmental organizations (NGOs) from different countries, sharing a consolidated planning, coordination, and information system platform operated by a major aid agency. Some of the NGOs might trust aid agency employees with shared access to their information systems; others might not. There might also be local organizations, working with some of the NGOs, who are not known to the international aid agency; even host nation government agencies might be a part of this puzzle. The aid agency might wish to grant only a limited set of accesses to some of the NGOs and their staff and maybe no access at all to a few of the NGOs. This demonstrates several types of trust relationships:
As the complexity of the relationships between organizations, their systems and platforms, and the domains of user subjects (and objects) associated with those platforms increases, trust relationships can start to matrix together sometimes in convoluted ways. This could quickly overwhelm efforts by each organization's systems administrators to manage locally. Federated approaches to identity and access management are not by themselves simple, but they can be easier to manage, especially when the social or organizational context and trust relationships are not simple and straightforward. Federated systems also allow for much quicker, cleaner disconnects, such as when the relief operation ends or when one agency's systems are found to be less secure than can be tolerated by others in the federation.
Solutions to situations like this might contain elements of the following:
This last needs some explanation and discussion.
The number-one priority in keeping any system secure, safe, and available is detecting when an intrusion or an attack is underway. This demands that the security team and the total set of end users, administrators, and support staff be able to recognize when something out of the ordinary is happening. Discriminating between business normal, abnormal but reasonable, and possibly suspicious behaviors or events requires organizations and their people to have a deep understanding of their normal business processes, as well as good situational awareness of changes in the real world (internal or external) that are legitimately driving change in their systems' and users' behaviors.
As we've seen throughout this chapter, users can be both human and nonhuman entities; entities, we've also seen, are capable of having multiple identities associated with them (and not all of those are necessarily either legitimate or properly authenticated for our systems).
Modeling techniques are often used to take many observations of similar events (such as user behaviors) and attempt to deduce or infer a set of general statements about those behaviors. Modeling can also be done by predicting future system behaviors and then observing subsequent events to see if the predictions hold true.
Over the last few years, user and entity behavioral analysis (UEBA) has become one of the major innovations in the security profession. It's being used in detecting new types and patterns of financial crime, smuggling, racketeering, and many other disciplines; its use in information systems security has made UEBA something of a household word. UEBA systems operationalize an organization's approach to separating out the possibly suspicious from the abnormal (or the normal) events by applying machine learning techniques to three distinct but related sets of knowledge: typologies of known or suspected threat behaviors, typologies of allowed (business-normal) behaviors, and the accumulated history of user and entity session activity observed on the organization's own systems.
The first two of these knowledge bases are curated content—as collections of patterns or templates, they are subject to ongoing review and assessment. Human analysts and modelers (both from security and other functional areas of the organization) can choose to add newly detected examples of either type of behavior to their respective collections. The threat typologies are often derived in the first instance from current CVE entries and from the attack patterns documented in MITRE's ATT&CK framework. Vendors of UEBA-based access control and security systems generally do this as part of making their systems and services more valuable to customers. Allowed typologies must be generated by people or organizations with deep, broad, and current understanding of how their business logic and systems are supposed to be put to use. This should reflect how they were designed to be used and how their users are coached, trained, and guided to use them. In-house analysts will often add new patterns (to any or all of these knowledge bases) to reflect known or anticipated changes in market dynamics, the overall risk context of the organization, or changes in the organization's products and services.
When a UEBA system is first installed, it has to undergo a period of initial training. This training has to happen whether the learning components of that UEBA system are part of its artificial intelligence or its natural (human-hosted) intelligence capabilities. Once that training period is over, the same operational data gathering system that feeds the user session history knowledge base can now use the knowledge base as part of its real-time monitoring, analysis, and incident detection activities. These features come together in integrated UEBA-enabled security systems, as shown in Figure 6.7.
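At its core, the idea is simple enough to sketch: learn a per-user baseline during the training period, then score new session events against it. The toy example below uses invented users, features, and thresholds; commercial UEBA products apply far richer features and statistical or machine-learning models.

```python
# A deliberately tiny sketch of the UEBA idea: learn a per-user baseline from
# a training period, then score new session events against it. The users,
# fields, and thresholds are invented for illustration.
from collections import defaultdict

training_sessions = [
    {"user": "analyst01", "hour": 9,  "country": "US"},
    {"user": "analyst01", "hour": 10, "country": "US"},
    {"user": "analyst01", "hour": 14, "country": "US"},
]

baseline = defaultdict(lambda: {"hours": set(), "countries": set()})
for s in training_sessions:
    baseline[s["user"]]["hours"].add(s["hour"])
    baseline[s["user"]]["countries"].add(s["country"])

def score_event(event):
    """Return the reasons this event deviates from the user's baseline."""
    profile, reasons = baseline[event["user"]], []
    if event["country"] not in profile["countries"]:
        reasons.append("unusual source country")
    if all(abs(event["hour"] - h) > 2 for h in profile["hours"]):
        reasons.append("unusual time of day")
    return reasons

# A remote login at 03:00 from an unfamiliar country should stand out.
print(score_event({"user": "analyst01", "hour": 3, "country": "RO"}))
```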
Suppose this system is being used to monitor and protect the security information and event management (SIEM) system in your organization. Your SOC team, like many other functional groups in the organization, probably has some members who must be able to perform a variety of privileged and nonprivileged SIEM-related tasks remotely; the SOC manager or CISO, for example, might need to be able to gain real-time insight and provide real-time direction while at home or on travel status as part of an incident response process. In this example, your SOC manager, Emba, has traveled to a distant city to attend an information security conference; he gets an urgent SMS notification from the SIEM that several indicators of compromise (IoCs) have been detected and that human analysts have confirmed them as highly likely to be correct. Emba logs in remotely, and this triggers the UEBA watchdogs into action.
This hypothetical example indicates the challenges that even the most advanced security techniques and the most proficient, skillful, and knowledgeable security analysts must face every day: the most stressing situations are highly dynamic and often quite unique. They may have elements—small sets of behaviors—that can be defined, studied, or modeled in advance. But the whole attack will come as a series of surprises, reflecting the attacker's skill and preparation as much as the defenders' misperceptions and mistaken choices during each step of the attack.
UEBA is an approach to adding capability to your identity management and access control systems, to your network security, and to every other aspect of your organization's business processes. It is another of those rapidly evolving “hot topics” in information systems security. More than 20 of the industry-leading vendors providing these types of capabilities offer purchase, lease, or subscription plans that bring this type of security within the price reach of smaller businesses and organizations. As a security professional, you don't have to become a machine learning expert to put these systems to work; instead, your security-focused operational analysis and troubleshooting perspectives and skills can help you get the organization started on a UEBA pathway.
From some perspectives, the normal conventions for designing and implementing network security implicitly or explicitly assume that once a subject has been granted access to the network, they are trusted to do what they were granted access to do. This is a little bit like registering as a hotel guest, and the key card you're given lets you use the elevator to access the floors the guest rooms are on or go into the fitness center. Your key card will not, however, let you into other guests' rooms or into areas restricted to the staff. Even in the hotel, the question must be asked: do you have legitimate business on floors where your room is not located?
Zero trust network design and access control reflect the need to counter the more advanced persistent threats and the increasing risk of data exfiltration associated with many of them. This shifts the security focus from the perimeter to step-by-step, node-by-node movement and action within the organization's information infrastructure. Instead of large, easily managed networks or segments, zero trust designs seek to micro-segment the network. Fully exploiting the capabilities of attribute-based access control, the zero trust approach promises to more effectively contain a threat, whether an outsider or insider, and thus limit the possibility of damage or loss. In August 2020, NIST published SP 800-207, Zero Trust Architecture, to focus attention and bring together ongoing development of the concept. It establishes a working definition of zero trust architecture (ZTA) as a concept, and some general deployment models applying it to workflows, security processes, and systems design and implementation for industrial control, enterprise information systems, and other domains. ISO 27001 has not been updated to reflect zero trust architectures specifically, but many experts are quick to point out that implementing ZTA is fully consistent with existing ISO standards such as 27001.
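The operational habit that distinguishes zero trust is evaluating every request against policy, using attributes of the subject, the device, the resource, and the context. The sketch below is a minimal illustration of such a policy decision point; the attributes and rules are invented for the example.

```python
# A minimal sketch of evaluating every request against attribute-based policy,
# rather than trusting the subject once at login. The attributes and policy
# below are illustrative assumptions only.

def policy_decision(subject, device, resource, context):
    """Return (allow, reason); called for each request, never 'once at login'."""
    if not subject["mfa_passed"]:
        return False, "multifactor authentication not completed"
    if not device["compliant"]:
        return False, "device failed posture check (patches, malware definitions)"
    if resource["segment"] not in subject["allowed_segments"]:
        return False, "request crosses a micro-segment boundary"
    if context["geo"] not in subject["usual_geos"]:
        return False, "unusual location; step-up authentication required"
    return True, "allowed for this request only"

request = dict(
    subject={"mfa_passed": True, "allowed_segments": {"finance"},
             "usual_geos": {"US"}},
    device={"compliant": True},
    resource={"segment": "finance"},
    context={"geo": "US"},
)
print(policy_decision(**request))
```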
You might at first think that zero trust architectures, and their attitude of “never trust, always verify,” are incompatible with federated identity management and access control systems. Federated systems seem to encourage us to make one giant, trusting community of collaboration and sharing, with which we can break down the walls between companies, departments, and people; how can zero trust play a role in this? It does this by increasing the levels of decision assurance within the organization. Zero trust architectures add to the CIANA+PS payback via:
Never trust, always authenticate access requests fully, and always track and account for all activity, authorized or not. Analyze and assess those accounting records; seek the anomalies and investigate them.
This may sound like rampant paranoia, but the truth is, the advanced persistent threats are not just “out there” somewhere. They are probably already in your systems. Perhaps now's the time to replace “trust, but verify” with constant vigilance as your watchwords.
Two major themes tie everything in this chapter together with Chapter 5's deep dive into network architectures and the protocol stacks that make them work. The first of those themes is the need to systematically and rigorously manage and control the creation, maintenance, and use of identities as they relate to subjects claiming the right to access our systems and our information. Identities are not, of course, the subjects themselves, any more than you are your name. Nor are you the information needed to authenticate your claim that you are you when you try to access your online bank or your employer's information system. That brings us to the second of those themes, which involves the "triple A" of authenticating a claim to an identity by a subject, authorizing that subject's access to an object, and keeping detailed accounting records of every activity involved in that process and in the subject's use of the object.
Three forces have come together to make the SSCP's job even more demanding when it comes to this combined set of topics we call identity and access management. Organizations have grown in complexity internally and externally, as they take on both temporary and long-term relationships with partners, vendors, clients, and others in their markets. This combines with the natural tendency to want more data, better data, to support more decisions made more quickly, resulting in ever more complex patterns of information access and use within the organization and across its federated ecosystem of other, hopefully like-minded organizations and individuals. Finally, we have to acknowledge the growing sophistication of the advanced persistent threat actors, and their willingness and ability to take months to infiltrate, scout out valuable information assets to steal a copy of, and then complete their attack by exfiltrating their prize. All three of these trends are forcing us to take on more complex, powerful, flexible approaches to network security, identity management, and access control.
Explain the advantages and disadvantages of single sign-on architectures. Initially, the design of systems and platform applications required users to present login credentials each time they attempted to use each of these different systems. This is both cumbersome and frustrating for users and difficult to manage from an identity provisioning and access control perspective. Single sign-on (SSO) allows users to access an organization's systems by signing on only once: they present their authentication credentials one time. It uses an integrated identity and access management (IAM) systems approach to bring together all information about all subjects (people or processes) and all objects (people, processes, and information assets, including networks and computers) into one access control list or database. SSO then generates a ticket or token, which is the authorization of that subject's access privileges for that session. This can be implemented with systems like XTACACS, RADIUS, Microsoft Active Directory, and a variety of other products and systems, depending on the degree of integration the organization needs. SSO eliminates the hassle of using and maintaining multiple, platform-specific or system-specific sign-on access control lists; it does bring the risk that once into the system, users can access anything, including things outside of the scope, purview, or needs of their authorized duties and privileges. Properly implemented access control should provide that next level of "need to know" control and enforcement.
Explain why we need device authentication for information security, and briefly describe how it works. Access to company or organizational information assets usually requires physical and logical access, typically via the Physical, Data Link, and Network layers of a protocol stack such as TCP/IP. The CIANA+PS needs of the organization will dictate what information needs what kinds of protection, and in most cases, this means that only trusted, authorized subjects (people, processes, or devices) should be authorized to access this information. That requires that the subject first authenticate its identity. Device authentication depends on some hardware characteristic, such as a MAC address, and may also depend on authentication of the software, firmware, or data stored on the device; this ensures that trusted devices that do not have required software updates or malware definition file updates, for example, are not allowed access. Further constraints might restrict even an authorized device from attempting to access the system from new, unknown, and potentially untrustworthy locations, times of day, etc. The authentication process requires the device to present such information, which the access control system uses to either confirm the claimed identity and authorize access, request additional information, or deny the request.
Compare and contrast single sign-on and federated access. SSO, by itself, does not bridge one organization's access control systems with those of other organizations, such as strategic partners, subcontractors, or key customers; this requires a federated identity and access management approach. Just as individual platform or system access is logically a subset of SSO, SSO is a subset of federated access. Federated identity management systems provide mechanisms for sharing identity and access information, which makes identity and access portable, allowing properly authorized subjects to access otherwise separate and distinct security domains. Federated access uses open standards, such as the OASIS Security Assertion Markup Language (SAML), and technologies such as OAuth, OpenID, various security token approaches, Web service specifications, Windows Identity Foundation, and others. Federated access systems typically use Web-based SSO for user access.
Explain what is meant by the evolution of identity and its impact on information security. Traditionally, identity in information systems terms was specific to human end users needing access to systems objects (such as processes, information assets, or other users); this was user-to-applications access, since even a system-level application (such as a command line interpreter) is an application program per se. This has evolved to consider applications themselves as subjects, for example, and in Web service or service-oriented architectures (SOA), this involves all layers of the protocol stack. Privacy and the individual civil rights of users also are driving the need to provide a broad, integrated approach to letting users manage the information about themselves, particularly the use of personally identifying information (PII) as part of identity and access management systems. Fortunately, this evolution is occurring at a time when open and common standards and frameworks, such as the Identity Assurance Framework, are becoming more commonly used and are undergoing further development. The concept of identity will no doubt continue to evolve as we embrace both the Internet of Things and greater use of artificial intelligence systems and robots.
Describe what internetwork trust architectures are and how they are used. When two or more organizations need their physically and logically separate networks to collaborate together, this requires some form of sharing of identity and access control information. Internetwork trust architectures are the combination of systems, technologies, and processes used by the two organizations to support this interorganizational collaboration. This will typically require some sort of federated access system.
Explain what a zero trust network is and its role in organizational information security. Zero trust network design and access control reflect the need to counter the more advanced persistent threats and the increasing risk of data exfiltration associated with many of them. This shifts the security focus from the perimeter to step-by-step, node-by-node movement and action within the organization's information infrastructure. Instead of large, easily managed networks or segments, zero trust designs seek to micro-segment the network. Fully exploiting the capabilities of attribute-based access control, the zero trust approach promises to more effectively contain a threat, whether an outsider or insider, and thus limit the possibility of damage or loss. It's sometimes called the "never trust, always verify" approach, and for good reason.
Explain how one-way, two-way, and transitive trust relationships are used in a chain of trust. It's simplest to start with one-way trust: node A is the authoritative source of trusted information about a topic, and since the builders of node B know this, node B can trust the information it is given by node A. This would require that the transmission of information from node A to B meets nonrepudiation and integrity requirements. Two-way trust is actually the overlap of two separate one-way trust relationships: node A is trusted by node B, which in turn is trusted by node A. Now, if node C trusts node B, then transitivity says that node C also trusts node A. This demonstrates a simple chain of trust: node A is trusted by B, which is trusted by C. This chain of trust concept is fundamental to certificates, key distribution, integrated and federated access control, and a host of other processes critical to creating and maintaining the confidentiality, integrity, authorization, nonrepudiability, and availability of information.
One-way and two-way trust are most often applied to domains of users: organization A trusts its users and trusts the users of its strategic partner B, but organization B does not have the same level of trust for organization A's users. This often happens during mergers, temporary partnerships or alliances, or the migration of subsets of an organization's users from one set of platforms to another.
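These relationships can be modeled as a small directed graph, with each edge meaning "trusts," and the chain of trust recovered by walking the graph. The node names and edges below are arbitrary and assume trust is treated as transitive.

```python
# A minimal sketch of one-way, two-way, and transitive trust, modeled as a
# directed graph. "A": {"B"} means A trusts B (one-way); two-way trust is
# simply both edges.
TRUSTS = {
    "C": {"B"},          # C trusts B ...
    "B": {"A"},          # ... and B trusts A, so transitively C can trust A
    "A": {"B"},          # A and B trust each other (two-way)
}

def reachable_trust(start):
    """All nodes trusted directly or through a chain, assuming transitivity."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for trusted in TRUSTS.get(node, set()):
            if trusted not in seen:
                seen.add(trusted)
                stack.append(trusted)
    return seen

print("C ultimately trusts:", reachable_trust("C"))   # {'B', 'A'}
```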
Explain the role of third-party connections in trust architectures. In many trust architectures, either one of the parties is the anchor of the trust chain, and thus issues trust credentials for others in the architecture to use, or a trusted third party, not actually part of the architecture per se, is the provider of this information. One such role is that of a credential service provider (CSP), which (upon request) generates and provides an object or data structure that establishes the link between an identity and its associated attributes, to a subscriber to that CSP. Other examples of third parties are seen in the ways that digital certificates and encryption keys are generated, issued, and used.
Describe the key steps in the identity management or identity provisioning lifecycle. In an information systems context, an identity is a set of credentials associated with (or bound to) an individual user, process, device, or other entity. The lifecycle of an identity reflects the series of events as the entity joins the organization, needs to be granted access to its information systems, and how those needs change over time; finally, the entity leaves the organization (or no longer exists), and the identity needs to be terminated to reflect this. Typically, these steps are called provisioning, review, and revocation. Provisioning creates the identity and distributes it throughout the organization's identity and access control systems and data structures, starting with management's review and approval of the access request, the identifying information that will be used, and the privileges requested. Pushing the identity out to all elements of the organization's systems may take a few minutes to a number of hours; often, this is done as part of overnight batch directory and integrated access management system updates. Review should be frequent and be triggered by changes in assigned roles as well as changes in organizational needs. Privilege creep, the accumulation of access privileges beyond that strictly required, should be avoided. When the employee (or entity) is no longer required by the organization to have access—when they are fired or separated from the organization, for example—their identity should first be blocked from further use, and then finally removed from the system after any review of their data or an audit of their access accounting information.
Explain the role of authentication, authorization, and accounting in identity management and access control terms. These three processes (the “AAA” of access control) are the fundamental functions of an access control system. Authentication examines the identity credentials provided by a subject that is requesting access, and based on information in the access control list, either grants (accepts) access, denies it, or requests additional credential information, such as an additional identification factor. Next, the access control system authorizes (grants permission to) the subject, allowing the subject to have access to various other objects in the system. Accounting is the process of keeping logs or other records that show access requests, whether those were granted or not, and a history of what resources in the system that subject then accessed. Accounting functions may also be carried out at the object level, in effect keeping a separate set of records as to which subjects attempted access to a particular object, when, and what happened as a result. Tailoring these three functions allows the SSCP to meet the particular CIANA+PS needs of the organization by balancing complexity, cost, and runtime resource utilization.
Explain the role of identity proofing in identity lifecycle management. Proofing an identity is the process of verifying the correctness and the authenticity of the supporting information used to demonstrate that a person (or other subject) is in fact the same entity that the supporting information claims that they are. For example, many free email systems require an applicant to provide a valid credit or debit card, issued in the applicant's name, as part of the application process. This is then tested (or "proofed") against the issuing bank, and if the card is accepted by that bank, then at least this one set of supporting identity information has been found acceptable. The greater the degree of information security required, the greater the trust that must be placed in the identity (and in your ability to authenticate it), and therefore the greater the rigor required in proofing that identity. For individual (human) identities, a growing number of online identity proofing systems provide varying levels of trust and confidence to systems owners and operators that job applicants, customers, or others seeking access to their systems are who (or what) they claim to be.
Compare and contrast discretionary and nondiscretionary access control policies. Mandatory (also called nondiscretionary) policies are rules that are enforced uniformly across all subjects and objects within a system's boundary. Such policies constrain subjects granted access from (1) passing information about such objects to any other subject or object; (2) attempting to grant or bequeath their own privileges to another subject; (3) changing any security attribute on any subject, object, or other element of the system; (4) granting or choosing the security attributes of newly created or modified objects (even if the subject created or modified them); and (5) changing any of the rules governing access control. Discretionary access policies are also uniformly enforced on all subjects and objects in the system, but depending on those rules, such subjects or objects may be able to do one or more of the tasks that are prohibited under a mandatory policy.
Explain the different approaches that access control systems use to grant or deny access. Role-based access control (RBAC) systems operate with privileges associated with the organizational roles or duties assigned, typically to individual people. For example, a new employee working in the human resources department would not be expected to need access to customer-related transaction histories. Similarly, chief financial officers (CFOs) may have to approve transactions above a certain limit, but they probably should not be originating transactions of any size (using separation of duties to preclude a whaling attack, for example). Attribute-based access control systems look at multiple characteristics (or attributes) of a subject, an object, or the environment to authorize or restrict access. Under such a policy, CFOs might be blocked from authorizing major transactions outside of certain hours, on weekends, or if logged on from an IP address in a possibly untrustworthy location. Subject-based access control is focused on the requesting subject and applying roles or attributes as required to grant or deny access. Subject-based and object-based access control systems associate attributes and constraint checking against them with each subject and with each object, respectively.
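The contrast can be sketched in a few lines: a role-based check consults only the subject's assigned role, while an attribute-based check layers additional conditions (transaction size, time of day, source network) on top of it. The roles, limits, and hours below are invented for illustration.

```python
# A minimal sketch contrasting role-based and attribute-based decisions.
# The roles, limits, and hours are invented for illustration only.

ROLE_PERMISSIONS = {
    "hr_clerk": {"read_personnel_file"},
    "cfo": {"approve_transaction"},
}

def rbac_allows(role, action):
    """Role-based: the decision depends only on the subject's role."""
    return action in ROLE_PERMISSIONS.get(role, set())

def abac_allows(subject, action, resource, environment):
    """Attribute-based: role is just one attribute among several."""
    if not rbac_allows(subject["role"], action):
        return False
    if action == "approve_transaction":
        if resource["amount"] > 1_000_000:         # beyond single-approver limit
            return False
        if not 8 <= environment["hour"] <= 18:     # outside business hours
            return False
        if environment["network"] != "corporate":  # untrusted location
            return False
    return True

print(abac_allows({"role": "cfo"}, "approve_transaction",
                  {"amount": 250_000},
                  {"hour": 10, "network": "corporate"}))
```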
Describe the different privileges that access control systems can authorize to subjects. Subjects attempt to do something with or to an object, learn something about it, or request a service from it. Access control has to compare the privileges already assigned to the subject with the conditions, constraints or other factors pertaining to the object and type of access requested, to determine whether to grant access or deny it. These privileges may involve requests to read data from it, or read metadata kept in the system about the object; modify its contents, or the metadata; delete or extend it (that is, request that additional systems resources, such as space in memory or in storage, be allocated to it); load it as an executable process or thread for execution by a CPU; assign privileges or attributes to it; read, change, or delete access control system criteria, conditions, or rules associated with the object; pass or grant permissions to the object; copy or move it to another location; or even ask for historical information about other access requests made about that object. In systems that implement subject ownership of objects, passing ownership is also a privilege to control. Each of these kinds of operations may be worth considering as a privilege that the access control system can either grant or deny.
Describe the key attributes of the reference monitor in access control systems. In abstract or conceptual terms, the reference monitor is a subject (a system, machine, or program) that performs all of the functions necessary to carry out the access control for an information system. Typically, it must be resistant to tampering, must always be invoked when access is requested or attempted, and must be small enough to allow thorough analysis and verification of its functions, design, and implementation in hardware, software, and procedures. It can be placed within hardware, operating systems, applications, or anywhere we need it to be, as long as such placement can meet those conditions. The security kernel is the reference monitor function within an operating system; the trusted computing base is the hardware and firmware implementation of the reference monitor (and other functions) in a processor or motherboard.
Explain how Biba and Bell-LaPadula, as access control models, contribute to information security. Each of these models is focused on a different information security attribute or characteristic. Bell-LaPadula was designed to meet the Department of Defense's need for systems that could handle multiple levels of classified information; it focuses on confidentiality by restricting "read up"—that is, accessing information at a higher level than the process is cleared for—and "write-down" of classified information into a process or environment at a lower security level. Biba is focused on protecting data integrity, and so it restricts higher-level tasks from reading data from lower-integrity tasks (to prevent the higher-level task from being contaminated with incorrect data or malware), while permitting lower-level tasks to read from higher-integrity ones.
Explain Type 1 and Type 2 errors and their impacts in an identity management and access control context. Type 1 errors are false negatives, also called false rejections, which incorrectly identify a legitimate subject as an intruder; this can result in delays or disruptions that keep users from getting their otherwise legitimate work done. Type 2 errors are false positives or false acceptances, in which unknown subjects, or authorized users or subjects exceeding their privileges, are incorrectly allowed access to systems or objects. Type 2 errors can allow unauthorized subjects (users or tasks) to access system information resources, take action, exfiltrate data, or do other harm.
Explain the roles of remediation and quarantine in network access control. Network access control systems can be programmed to inspect or challenge (interrogate) devices that are attempting to connect to the network, checking for deficiencies such as software updates not applied, malware definitions not current, or other conditions. Systems with otherwise legitimate, trusted credentials that fail these checks can be routed to remediation servers, which only allow the user access to and execution/download of the required fixes. For network access control, quarantine (often implemented via captive portals) is similar in concept but deals with client systems attempting an HTTP or HTTPS connection that fail such tests. These are restricted to a limited set of webpages that provide instructions on how to remediate the client's shortcomings.
Describe the use of TACACS, RADIUS, and other network access control technologies. Network access control systems use authentication methods to validate that a subject (device or user) is whom or what they claim to be and that they are authorized to conduct access requests to sets of systems resources, and to account for such access requests, authorization, and resource use. Different access control technologies do these “AAA” tasks differently, achieving different levels of information security. Access control systems need a database of some sort that contains the information about authorized subjects, their privileges, and any constraints on access or use; this is often called an access control list (ACL). (Keep separate in your mind that routers and firewalls are often programmed with filter conditions and logic, as part of access control, by means of ACLs contained in the router's control memory. Two kinds of ACLs, two different places, working different aspects of the same overall problem.)
Terminal Access Controller Access Control System (TACACS) was an early attempt to develop network access capabilities, largely for Unix-based systems. (The “terminal” meant either a “dumb” CRT-keyboard terminal, a very thin client, or a remote card reader/printer job entry system.) XTACACS, or extended TACACS, was a Cisco proprietary extension to TACACS. TACACS+ grew out of both efforts, as an entirely new set of protocols that separate the authentication, authorization, and accounting functions, which provides greater security and control.
Remote Authentication Dial-In User Service (RADIUS) started with trying to control access to hosts by means of dial-in connections, typically using dumb terminals and thin clients. It works with (not in place of) a network access control server, which maintains the ACL information, to validate the request, deny it, or ask for additional information from the requestor. RADIUS has continued to be popular and effective, especially as it supports roaming for mobile end-user devices. An enhanced successor to RADIUS, called Diameter, is widely used in telecommunications carrier networks but has gained little momentum in the general enterprise marketplace.