Defense is a set of strategies, management is about making decisions, and mitigation is a set of tactics chosen to implement those decisions. Integrated information risk management is about protecting what's important to the organization. It's about what to protect and why; risk mitigation addresses how. Some outcomes, processes, or assets are by their nature much more critical to organizational success (and survival!) than others. By the same token, some threats pose more danger to the organization and its vulnerabilities than others do. The CIANA+PS set of cybersecurity needs prevails, and the SSCP can fill many important roles in shaping an effective integrated information defense strategy, as you'll see in this chapter. We'll also borrow from NIST Special Publication 800-37 Rev. 2, Risk Management Framework (RMF) for Information Systems and Organizations, to look at what leadership and management have to do to start the risk management process.
We are also going to challenge your thinking about defense—specifically about an idea called defense in depth. Some cybersecurity systems vendors claim that defense in depth is “dead,” whereas many others in the industry continue to strongly recommend it. You’ll see that the difference between whether defense in depth is very much alive and well, or on its way to the scrap heap, can be found in one word: integrated. Let’s see what this means.
Let's face it: your organization's systems, its information, and its very existence are in danger. Your money, your capital equipment, supplies, and inventory all are at risk of theft or damage. Your information assets are at risk of being stolen, and your trade secrets, customer information, and business practices, even your talented people, are "up for grabs" if hackers can get into your files. Perhaps most important, your organization's reputation for honesty, reliability, and quality could be ruined by hostile action from threat actors such as these, or simply by your own failure to deal quickly and prudently with accidental or natural disruptions to your business activities.
The key to risk and risk management is simple: it's about making decisions in reliable ways and using the CIANA+PS attributes to help you know when the decision you're about to make is a reliable one…and when it is a blind leap into the dark. From the SSCP's perspective, information security is necessary because it enables more decisions to be made on time and on target. Reliable decision making is as much about long-range planning as it is about incident response. This means that you can rely on the following:
Each element of that basic decision cycle must be reliable if you want to count on your decisions being the right decisions at the right time. Each of those elements has its own CIANA+PS set of needs. By controlling or managing information risk, the SSCP helps the organization manage its decision risk, and thereby manage its overall exposure to risk while it decides and acts to achieve its goals and objectives.
The good news is that neither the SSCP nor the organizational leaders and stakeholders she works for have to figure out how to do all of this from scratch. Universities, businesses, and governments have for generations been compiling "lessons learned" about organizational management and leadership, especially on topics such as risk management. The dawn of the computer age highlighted the need to bring even more talent and expertise to bear, to find even better ways to manage and mitigate information risk and decision risk. Since the 1990s, governments, private business, and academia have been collaborating to develop what organizations large and small need to be able to deal with information systems risk. They've produced risk management frameworks as well as highly technical standards, practices, and recommendations for the nitty-gritty work of hardening your information systems and defending them in prudent and effective ways.
A risk management framework is a management tool kit that you can use to bring these kinds of risks (and others) under control. One such framework, published by the U.S. Department of Commerce, National Institute of Standards and Technology, provides what it calls "a system life cycle approach for security and privacy." We'll use its overall approach to introduce the concepts of risk, defensive strategies, and responses; then we'll look more closely at how organizations manage risk and attain some control over it.
But first, let’s look at what we mean by risk and, more specifically, information risk.
Let’s start by giving a formal definition of what we mean by risk. A risk is the possibility that an event can occur that can disrupt or damage the organization’s planned activities, assets, or processes, which may impact the organization’s ability to achieve some or all of its goals and objectives. Risks are further classed as either threats or hazards, based on whether the action is taken by a human (or human organization) with intent, or happens because of accident, acts of nature, or system failures due to wear and tear. Separating risk this way, into threats caused by threat actors and hazards, aligns our thinking about information systems risk with standard business and insurance terminology. Note that risk management still must embrace both threats and hazards, of course.
Let’s take our definition apart, piece by piece, and see how we can operationalize it—turn it into something we can make part of day-to-day, task-level operational steps we must accomplish to achieve it. We’ll do that from the inside out, as shown in Figure 3.1.
FIGURE 3.1 Vulnerability leads to failure, which leads to impact
Start by recognizing that vulnerabilities exist in everything we do, in everything we build—even in each of us. A bulletproof vest cannot stop heavy machine gun fire; structure fires can melt "fireproof" document safes, or even burn so hot that the safe itself is consumed. Parts wear out; mechanisms overheat; anything that runs on electricity can (and will) have that electrical supply fail. Humans make errors as we design, build, and use these systems. And to add insult to injury, the physical world is a noisy place—data gets corrupted, messages get garbled, and the result is often that what we thought we said is not what others think we meant.
Fortunately, none of these weaknesses leads to failure on a nonstop basis. Risks deal in possibilities—in "if something happens." We talk about a vulnerability becoming an event when something goes wrong—when a part fails, when a message doesn't get through, when a person makes a mistake, or when somebody exploits that vulnerability to cause an unwanted or poorly anticipated event to actually occur. Vulnerabilities by themselves do not cause harmful or disruptive events; it is only when some action is taken (or a required action is not taken) that such an event occurs. Even then, not all events that occur are events of interest to information security professionals. Two events in 2017 illustrate this difference. (We'll examine more recent events, in greater detail, in Chapter 12, "Cross-Domain Challenges.")
Our first example is a classic "non-zero-day" exploit gone horribly viral in scale. On September 7, 2017, Equifax announced that millions of individual consumer credit report files might have been subject to an "unauthorized disclosure" due to a breach in the security of the company's systems. Since that initial announcement, continued reporting has shown that more than 148 million consumers worldwide might have had their credit history, government identification, and other private data stolen from Equifax by the attackers. In the weeks that followed Equifax's announcement, an all-too-familiar sequence of events was revealed.
First, a security researcher in Shanghai, China, discovered an exploitable vulnerability in the Apache Struts web software, used by Equifax and many others; he immediately reported his discovery to the Apache Software Foundation, which published the vulnerability and a fix on March 6, 2017. One day later, the vulnerability showed up in Metasploit, one of the most popular exploitation tool suites used by black hat and white hat hackers alike. By March 10, reconnaissance probes by hackers started to hit the Equifax servers. By early May, attackers had exploited this vulnerability to gain access to multiple database applications served by many Equifax web pages, and then systematically "exfiltrated" (that is, stole) data from Equifax. Data from a U.S. Government Accountability Office (GAO) report indicates that hackers ran nearly 9,000 unauthorized queries over 76 days, many of which simply blended in with "normal" activity levels, and used standard encryption protocols to further disguise this traffic.
Equifax detected these exfiltrations on July 30, waited a day to verify that these were in fact unauthorized accesses leading to a data breach, and then shut down the affected servers. Equifax waited until September 7 to report the data losses to the U.S. Federal Trade Commission and to the public, claiming it had no legal obligation to report sooner.
What were the outcomes of the Equifax data breach? Equifax did spend, by some reports, up to $200 million on improving its systems security measures, and its board of directors did ask the chief executive officer and chief information security officer to retire early—with up to $500 million in retirement and severance benefits intact. As of March 2018, actual claims by consumers totaled $275 million; these are expected to rise to at least $600 million before all claims have been fully resolved.
By contrast, consider numerous data systems failures that have caused significant losses to the companies involved. Delta Airlines, for example, had to cancel hundreds of flights in January 2017 due to multiple systems crashes at its Atlanta, Georgia, operations center; this was after its datacenter crashed the previous August when its (supposedly) uninterruptible electrical power systems failed. This cost Delta more than $150 million and inconvenienced tens of thousands of travelers. Yet, by all reports, this event was not of interest to IT security professionals; it was simply a cascading set of errors leading to otherwise preventable failures.
Not all risks are information security risks, even if they impact the availability of information systems to support decision making. In retrospect, we see that deciding whether an event is a security concern is largely a judgment call we may have to make. Two important questions must be asked about such failures or risk occurrences when they become incidents:
These answers suggest that if something we do, use, or depend on can fail, no matter what the cause, then we can start to look at the how of those failures—but we let those frequencies, probabilities, and possible impacts guide us to prioritize which risks we look at first, and which we can choose to look at later.
We care about risks because when they occur (when they become an incident), they disrupt our plans. Incidents disrupt us in two ways:
Consider, for example, a simple daily set of activities like driving to work. As you back your car out of the driveway, a sudden noise and an impact suggest that you've run over something (hopefully not someone!). You stop the car, get out, and look; you find a child's bicycle had been left in the driveway behind your car. The damage to bicycle and car is minor but not zero; money, time, and effort are required to fix them. You've got to decide when and how to get those repairs done, in ways that don't completely disrupt your plans for the day. And you're probably both upset (why didn't you look more carefully first?) and relieved (no one got hurt).
Most of the time, we think of risks as “bad news.” Things break; opportunities are lost; systems or property is damaged; people get hurt or killed. If we stop and think about this, we see that risk can be good news but still disruptive to our plans. An unexpected opportunity appears (a surprising offer of a dream job, halfway across the country), but to take advantage of it, you must divert resources to do what’s necessary. From an information security perspective, however, you are best off thinking of risk as a negative impact only.
The occurrence of a risk, therefore, takes our preplanned, previously evaluated, deliberated decisions and action plans and tosses them aside. And it does this because either new information (the bicycle behind your car, the new job) was not anticipated as you put your original decisions together, or your decision process was not set up to deal with that new information without derailing your train of thought.
Everything people and human organizations do is a series of steps, and each step is a series of substeps; our lives and our businesses run on layers upon layers of tasks and subtasks. You go to work in the morning, but that in itself requires steps (like waking up), and all of those steps contain or are made up of substeps. Businesses get things done in step-by-step ways; these business processes are often chained together, the results of one process becoming the inputs to the next. Within each process we often find many subprocesses, as well as many decision points that affect the way each particular process is applied to each particular set of input conditions, day in and day out. This is why sometimes we say that all work is actually decision work, since even the simplest task you do requires you to decide “Should I do this next?” before starting; “Am I doing this right?” while you’re doing it; and “Did I finish it correctly?” when you think it’s time to stop.
Each step in a process is a decision. That’s the number one most powerful lesson of cybernetics, the study of control systems. The most powerful lesson from 10,000 years of warfare and conflict between nations is that first, you defeat the way your adversary thinks. By defeating his strategy, you may not even need to engage his armies on the field—or you will spend far less effort in actual combat if you must nonetheless! By outthinking your opponent in this way, you are much more able to win through to your own goals, often at much lower costs to you. We call this getting inside the opponent’s decision cycle. And for the same 10,000 years, this same lesson has shaped the way marketplaces work and thus shapes the way that businesses compete with one another.
Every one of those decisions, large or small, is an opportunity for somebody or something to “mess with” what you had planned and what you want and need to accomplish:
Decision assurance, then, consists of protecting the availability, reliability, and integrity of the four main components of the decision process:
From our CIANA+PS perspective, integrity and availability affect all four components of every decision we make (including the ones we have machines make on our behalf). Whether confidentiality is required for a particular decision, its inputs, its decision logic, or the actions or communications that are the result of making the decision, is something that the decision maker needs to decide about as well.
One of the most powerful decision assurance tools that managers and leaders can use at almost any organizational level is to “sanity-check” the inputs, the thinking, and the proposed actions with other people before committing to a course of action. “Does this make sense?” is a question that experience suggests ought to be asked often but isn’t. For information security specialists, checking your facts, your stored knowledge, your logic, and your planning with others can take many different forms:
It’s important to remember that most of what makes human organizations (and individual efforts) successful is our ability to recognize, think, and decide at many levels—some of which we are not consciously aware of. “Common sense,” for example, teaches us many “lessons learned” from experience: you don’t leave your car unlocked with packages on the seats if you want to come back and find those packages still in the car. You don’t leave your house or apartment unlocked when you go on vacation. You don’t leave the default user IDs of “admin” and “password” enabled on your laptops, phones, routers, switches, modems, and firewalls. (You don’t, do you?)
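One way to turn that last bit of common sense into routine practice is to automate the check. Below is a minimal sketch in Python, assuming a hypothetical device inventory and an illustrative list of factory-default credential pairs; none of these names or values comes from any real product:

```python
# Hypothetical hygiene check: flag devices still using factory-default
# credentials. The inventory and the credential pairs shown here are
# illustrative placeholders, not data from any real system or vendor.
DEFAULT_CREDENTIALS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

inventory = [
    {"device": "office-router", "user": "admin", "password": "password"},
    {"device": "nas-01", "user": "backup", "password": "Xq9!v#22TqL"},
]

def find_default_credentials(devices):
    """Return the names of devices whose login pair matches a known default."""
    return [d["device"] for d in devices
            if (d["user"], d["password"]) in DEFAULT_CREDENTIALS]

if __name__ == "__main__":
    for name in find_default_credentials(inventory):
        print(f"WARNING: {name} still uses factory-default credentials")
```

Even a simple scripted check like this moves "common sense" from something we hope people remember into documented, repeatable practice.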
Risk management includes this kind of prudent use of common sense. Risk management recognizes that before we can do anything else, we need to make sure that the blindingly obvious safety precautions, the common-sense computing hygiene measures, have already been put in place and are being used conscientiously. (We’ll look at these starting in Chapter 4, “Operationalizing Risk Mitigation.”)
The one drawback to common sense, as Voltaire said, is that it isn’t so common. Sometimes this is because what we call “common sense” turns out to be that we’ve made decisions intuitively, without consciously thinking them through. Other times, we’ve probably read or heard the “lessons learned” by others, but right at the moment we make a decision, we’re not using them explicitly. If those lessons lead to writing a problem-solving checklist (like a fault isolation diagram), then “common sense” becomes documented, common practice for us. As you’ve seen in earlier chapters, due care is applying common sense to ensure that the right processes, with the right steps and safeguards, have been put in place to achieve a set of goals. Due diligence is the follow-through that continually verifies those processes are still working right and that they are still necessary and sufficient to meet what’s needed.
Common sense can and often does suggest that there are still reasonable, prudent actions that we can take to make sure that an appropriate set of information security measures are in place and effective. Information security best practices suggest a good minimum set of “when in doubt” actions to ensure that the organization:
This "safe computing," or computing hygiene, standard is a proven place for any organization to start. If you don't have at least this much going for your information security program, you're just asking for trouble!
You are going to need to go beyond common sense in dealing with information risks, and that means you’ll need to manage those risks. This means you must augment your guesswork and intuition with informed judgment, measurement, and accountability.
Risk, as we stated earlier, is about a possible occurrence of an event that leads to loss, harm, or disruption. Individuals and organizations face risk, and are confronted by its possibilities of impact, in four basic ways, as Figure 3.2 illustrates. Three observations are important here, so important that they are worth considering as rules in and of themselves:
FIGURE 3.2 Four faces of risk, viewed together
Risk management, then, is trading off effort and resources now to reduce the possibility of a risk occurring later, and if it does occur, in limiting the damage it can cause to us or those things, people, and objectives we hold important. The impact or loss that can happen to us when a risk goes from being a possibility to a real occurrence—when it becomes an incident—is often looked at first in terms of how it affects our organization’s goals, objectives, systems, and our people. This provides four ways of looking at risk, no one of which is the one best right way. All of these perspectives have something to reveal to us about the information risks our organization may be facing.
Think back to the Ishikawa, or fishbone, diagram we introduced in Chapter 1, “The Business Case for Decision Assurance and Information Security.” The “tail” and “head” of the fishbone and the central left-to-right arrow of the backbone demonstrate the outcomes-based viewpoint. The major inputs of materials, methods, measurements, people, and machines are assets. The environment is where external threats (natural, accidental, or deliberate) can strike from. Internal threats can be visualized as the failure of any connecting arrow to “deliver the goods”—make good on the promised on-time, on-target delivery of a service, set of information, materials, labor, or other outcomes to the steps in the process that need them.
When we make an estimate, we are predicting a future outcome of a set of choices. A computer, for example, has a known purchase value, but to estimate its useful life, we have to make assumptions about how often it is used, how routine maintenance and repairs are done, and how often such machines break down under comparable use. Those assumptions, plus that purchase value, form the basis of the estimate from which we can then calculate the useful life.
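To make that arithmetic concrete, here is a minimal sketch of such an estimate as a simple straight-line calculation. All of the figures (purchase value, salvage value, assumed useful life) are hypothetical, chosen only to show how assumptions plus purchase value turn into an estimate:

```python
# Worked example: turning assumptions plus purchase value into a
# useful-life estimate. All figures are hypothetical illustrations.
purchase_value = 2400.00      # what we paid for the computer
salvage_value = 200.00        # assumed resale/scrap value at end of life
useful_life_years = 4         # assumed, based on usage and repair history

annual_depreciation = (purchase_value - salvage_value) / useful_life_years
for year in range(1, useful_life_years + 1):
    book_value = purchase_value - annual_depreciation * year
    print(f"End of year {year}: book value ${book_value:,.2f}")
# By the end of year 4, book value equals the assumed salvage value ($200).
```

Change any of the assumptions (heavier use, deferred maintenance) and the estimate changes with them; that sensitivity to assumptions is exactly why estimates must be revisited as conditions change.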
By calling these the faces of risk, we highlight the need for you as the SSCP to be conscious of how you look at things and how you perceive a situation. And that, of course, depends a lot on where you stand.
This face of risk looks at why people or organizations do what they do or set out to achieve their goals or objectives. The outcomes of achieving those goals or objectives are the tangible or intangible results we produce, the harvest we reap. Passing the SSCP examination and earning your SSCP credential is an objective, yes, and the achievement of it is an outcome in and of itself. But in doing so, you enable or enhance your ability to be a more effective information security practitioner, which can enable you to achieve other, more strategic goals. A severe illness or injury could disrupt your ability to study for, take, and pass the examination; a family emergency could lead you to abandon that objective altogether. These are risks to the outcome (or objective) itself, and they are largely independent of the ways in which you had planned to achieve the outcome.
Here’s a hypothetical example: Search Improvement Engineering (SIE) is a small software development company that makes and markets web search optimization aids targeted to mobile phone users. SIE’s chief of product development wants to move away from in-house computers, servers, and networks and start using cloud-based integrated development and test tools instead; this, she argues, will reduce costs, improve overall product quality and sustainability, and eliminate risks of disruption that owning (and maintaining) their own development computer systems can bring. The outcome is to improve software product quality, lower costs, and enable the company to make new products for new markets. This further supports the higher-level outcomes of organizational survival, financial health, growth, and expansion. One outcomes-based risk would be the disclosure, compromise, or loss of control over SIE’s designs, algorithms, source code, or test data to other customers operating on the cloud service provider’s systems. (We’ll look at how to evaluate and mitigate that risk in later chapters.)
Everything we want to achieve or do requires us to take some action; action requires us to make a decision. Even if it’s only one action that flows from one decision, that’s a process. In organizational terms, a business process takes a logical sequence of purpose, intention, conditions, and constraints and structures them as a set of systematic actions and decisions in order to carry them out. This business logic, and the business processes that implement it, also typically provide indicators or measurements that allow operators and managers to monitor the execution of the process, assess whether key steps are working correctly, signal completion of the process (and thus perhaps trigger the next process), or issue an alarm to indicate that attention and action are required. When a task (a process step) fails to function properly, this can either stop the process completely or lead to erroneous results.
If we look further at our hypothetical SIE, we see that the company has several major sets of business processes. Human resources management processes support hiring, training, and providing salary and benefits for workers; financial processes ensure that bills are paid and invoices are issued, both of which are accurately reflected in the accounting ledgers ("the books," as the chief financial officer calls them). Software development processes define, track, and manage how customer needs and market research ideas translate into new functional requirements for products and the development and testing of those products. Customer relationship management (CRM) processes bring everything from "Who is a customer?" to "What do they like to buy from us?" together with credit rating, market share, and many other factors to help SIE know how important one customer is versus another. Process-based risks to this last set of processes could be that complaints or concerns from important customers aren't recognized quickly, properly investigated, and acted on in ways that help customers decide to stay with SIE for their search optimization software needs.
Note that in this example, the outcome of using the processes is where we feel the impact of the risk becoming an incident—but it is the process that we’re focused on as we investigate “what can go wrong” as we wonder “Why are customers leaving us?”
Broadly speaking, an asset is anything that the organization (or the individual) has, owns, uses, or produces as part of its efforts to achieve some of its goals and objectives. Buildings, machinery, or money on deposit in a bank are examples of hard, or tangible assets. The people in your organization (including you!), the knowledge that is recorded in the business logic of your business processes, your reputation in the marketplace, the intellectual property that you own as patents or trade secrets, and every bit of information that you own or use are examples of soft, or intangible assets. Assets are the tools you use to perform the steps in your business processes; without assets, the best business logic cannot do anything.
Lots of information risk management books start with information assets—the information you gather, process, and use, and the business logic or systems you use in doing that—and information technology assets—the computers, networks, servers, and cloud services in which that information moves, resides, and is used. The unstated assumption in nearly all cases is that if the information asset or IT asset exists, it must therefore be important to the company or organization, and therefore, the possibility of loss or damage to that asset is a risk worth managing. This assumption may or may not still hold true. Assets also lose value over time, reflecting their decreasing usefulness, ongoing wear and tear, obsolescence, or increasing costs of maintenance and ownership. A good example of an obsolete IT asset would be a mainframe computer purchased by a university in the early 1970s for its campus computer center, perhaps at a cost of over a million dollars. By the 1990s, the growth in personal computing and network capabilities meant that students, faculty, and staff needed far more capabilities than that mainframe computer center could provide, and by 2015, it was probably far outpaced by the capabilities in a single smartphone connected to the World Wide Web and its cloud-based service provider systems. Similarly, an obsolete information asset might be the paper records of business transactions regarding products the company no longer sells, services, or supports. At some point, the law of diminishing returns says that it costs more to keep it and use it than the value you receive or generate in doing so.
These are two sides of the same coin, really. Threat actors (intentional human individuals or organizations) or hazards can cause damage and destruction leading to loss. Vulnerabilities are weaknesses within systems, processes, assets, and so forth that are points of potential failure. When (not if) they fail, they result in damage, disruption, and loss. Typically, threats or threat actors exploit (make use of) vulnerabilities. Hazards can originate from natural causes, such as storms or earthquakes, or from accidental or unintentional actions; system failures due to wear and tear, for example, are unintended. Threats are deliberate actions taken, contemplated, or instigated by humans. Such intentional attackers have purposes, goals, or objectives they seek to accomplish; Mother Nature or a careless worker does not intend to cause disruption, damage, or loss.
As an example, consider a typical small office/home office (SOHO) IT network, consisting of a modem/router, a few PCs or laptops, and maybe a network attached printer and storage system. A thunderstorm can interrupt electrical power; the lack of a backup power supply is a weakness or vulnerability that the thunderstorm unintentionally exploits. By contrast, the actions of the upstairs neighbors or passers-by who try to “borrow some bandwidth” and make use of the SOHO network’s wireless connection will most likely degrade service for authorized users, quite possibly leading to interruptions in important business or personal tasks. This is deliberate action, taken by threat actors, that succeeds perhaps by exploiting poorly configured security settings in the wireless network, whether its intention was hostile (e.g., willful disruption) or merely inconsiderate.
Think back to what we just discussed about process-based risks. It’s quite common for an organization to have some of its business processes contain steps for which there are no easy, affordable alternative ways to get results when that step fails to function properly. These steps are said to be “on the critical path” from start to finish, and thus a set of processes containing such a critical step is a critical path in and of itself. Almost without exception, critical paths and the critical steps on them are vulnerabilities in the business logic and the business processes that the company depends upon.
It's perhaps natural to combine the threat-based and vulnerability-based views into one perspective, since they both end up looking at vulnerabilities to see what impacts can possibly disrupt an organization's information systems. The key question that the threat-based perspective asks, at least for human threat actors, is why. What is the motive? What possible advantage can the attacker gain by exploiting this vulnerability? What overall gain might an attacker achieve by attacking our information systems at all? Many small businesses (and some quite large ones) do not realize that a successful incursion into their systems by an attacker may only be a step in that attacker's larger plan for disruption, damage, or harm to others.
Note that whether you call this a “threat-based” or a “vulnerability-based” approach or perspective, you end up taking much the same action: you identify the vulnerabilities on the critical path to your high-priority objectives, and then decide what to do about them in the face of a possible threat becoming a reality and turning into an incident.
Imagine for a moment a typical walled city in medieval Europe. Within the city was the castle, sitting on higher ground and surrounded by a moat, trenches, and a wall of its own. When threatened by an attacking army, farmers and villagers in the surrounding area retreated inside the city’s walls, and if the attackers breached the walls, they’d further retreat inside the castle keep itself. This layered defense had both static elements, such as the walls, moat, and trenches, as well as dynamic elements (troops could be moved about within the city). The assets being defended (the people, their livestock, food supplies, etc.) could be moved inward layer by layer as the threat increased. Watchmen, captains of the guard, and other officials would use runners to carry messages to the city’s leaders, who’d send messages back to each element of the defense.
Continued advances in warfighting technology, of course, meant that static walls of stone quickly became obsolete. Yet this layered defense concept, when combined with an active, flexible command, control, and communications architecture, still dominates our thinking when we look to implement information risk management and mitigation strategies. As well it should. We use a layered or "top-down" approach when we design, build, and operate a business and the processes and systems that support it. Why not use that same "layers upon layers" perspective to look at how to defend it, preserve it, and keep it safe?
We see by now that several ideas interact with each other, as we look to what the SSCP can do to help the organization achieve the right mix of information security, performance, and cost. Let’s start by examining how our process for designing our information defense systems mirrors the way we design, build, and operate our organization’s business processes and the IT systems that serve its needs.
Consider a layered or structural approach to your organization’s information security needs. Whether you are trying to ensure that new business objectives can be developed, launched, and operated successfully, or you’re just trying to protect the data and systems in use today, you can look at the organization, the risks it faces, and your opportunities to secure and defend it in a layered fashion, as Figure 3.3 illustrates. From the inner, most vital center of the organization on out, you might see these layers as follows:
FIGURE 3.3 The layered view
As SSCPs, we have to defend those layers of function against risk; failure to do so exposes the organization to unmanaged risks, which leaves us unable to predict what might go wrong or to plan how to respond when it does. Natural systems (such as the immune system in our bodies) and human-built systems have long recognized a few key principles when planning for defense:
Note how these concepts apply equally, whether you are considering nonintentional threats, such as “acts of Nature,” accidents, or deliberate, hostile attacks on your organization, its assets, and its interests. For example:
These layers of function may take physical, logical, and administrative forms throughout every human enterprise:
We no doubt used a top-down systems engineering approach when we designed our business, our business logic and its processes, and its IT infrastructures; let’s apply the same process to designing the defense of those layers of systems and functions. In doing so, let’s borrow a page or two from our history books and notice what the number one critical failing of most defenses (layered or not) turns out to be.
Classical "defense-in-depth" thinking (that is, old-fashioned ideas that probably don't work anymore) taught that each layer protected what was inside from what was outside. Oftentimes it was not very successful at defending against the threats from within—such as a trusted insider who had revealed to outsiders information about critical weaknesses in that defense, or a saboteur who had created such an exploitable weakness for an outside attacking force to take advantage of. More to the point, the classical approach was point by point; it looked at a specific weakness, chose a control and applied it, and in doing so often ignored a system-level need for integrated awareness, command, and control. We might say that a current defense-in-depth system is "classical" to the degree that it implements pointwise due care but fails to address system-level due diligence needs.
This lack of systems thinking encourages three critical failures on our part. We’re far too willing to ignore “blind spots” in our defenses; to blindly trust in our systems, processes, and people; and then not check up on them to see if they’re actually working correctly. This three-part peril is what kills most classical defense-in-depth approaches.
Let’s take a closer look at Figure 3.3. It portrays four apparently separate sets of processes (and the services that support them), each supporting a distinct set of stakeholders or users of the system. These four functional pathways only seem to come together when they cross the gates into the core business processes and data at the center. This suggests that the system is designed to isolate the flow of activity along these outward, radial paths; that there is no lateral movement possible for an investor-user, for example, to access employee-facing functions or data. It is not until the software (or people-powered) processes that serve investor service requests reach the core that cross-channel connections between data and services may be allowed to happen.
This represents good partitioning of a system; it isolates major use cases (customers, investors, etc.) into their own separate lanes or channels for access. The individual gates that control access and the flow of data and service requests might be routers, firewalls, or other isolation or access control enforcement devices. Taken together, these limit the capability of an attacker to use falsified credentials to enter along one pathway, search laterally (at the same level of privilege and access) for other data or process assets they might exploit, and then use those exploits for their own malicious purposes.
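As an illustration of that "separate lanes" idea, here is a minimal sketch of a gate's access check, reduced to a channel-to-asset authorization table. The channel and asset names are hypothetical, and a real gate would of course also authenticate credentials; the point is simply that a request arriving on one channel cannot wander laterally into another channel's assets:

```python
# Minimal sketch of the "separate lanes" partitioning in Figure 3.3:
# each user channel may reach only the assets mapped to it, and any
# lateral request is refused at the gate. All names are hypothetical.
ALLOWED_ASSETS = {
    "customer": {"order_history", "support_tickets"},
    "investor": {"quarterly_reports", "shareholder_votes"},
    "employee": {"payroll", "benefits", "support_tickets"},
}

def gate_check(channel: str, asset: str) -> bool:
    """Return True only if this channel is authorized for this asset."""
    return asset in ALLOWED_ASSETS.get(channel, set())

# An investor credential cannot be used to wander into employee data:
assert gate_check("investor", "quarterly_reports")
assert not gate_check("investor", "payroll")
```

In a production system, this mapping would live in the policies enforced by each router, firewall, or access control service rather than in application code, but the design intent is the same.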
What’s not shown on the diagram is any integration of those gates into a cohesive security-focused command, control, and communications system, sometimes referred to as C3. Chapter 5, “Communications and Network Security,” will look at this in greater depth.
In everyday life, we have many tactics, techniques, and procedures for keeping ourselves and those we care for safe and sound. We make sure our homes have proper smoke alarms in them; we have doors and windows we can lock. We trust in these components and even in the overall design of our home and the emergency response systems in our neighborhoods to take care of us. But how often do we verify that this trust is well placed? Do we check the batteries in the smoke alarms, or check that all of the windows and doors are secured before we go to bed each night? Do we have our family do “fire drills” to make sure that each family member knows what to do if and when the alarms go off?
This might lead you to think that the weakest link in any proactive, integrated defense system is actually the one you haven’t recently verified is still working properly—and you’d be right to think so! Our organizations will develop requirements for information security, and as SSCPs we’ll do our part to use those requirements to build in features and procedures to keep our systems safe. Those requirements must include how we plan to verify that what we built, installed, and trained people to use is actually doing the job we trust it to do. That verification is not just done at “acceptance testing” time, when we turn the systems over to the users; it must be continuous. Chapter 4, “Operationalizing Risk Mitigation,” will delve into this topic in greater depth and show you how to design and carry out both acceptance testing and ongoing monitoring and assessment activities.
This is a very important question! Legally, the doctrines of due care and due diligence provide a powerful framework in which to view how organizations, their leaders and managers, their stakeholders, and all of their employees or members have to deal with the total set of responsibilities they have agreed to fulfill. Due care and due diligence are two burdens that you willingly take on as you step into a leadership, managerial, or other responsible role in an organization. And a piece of these burdens flows down to each member of that organization, and that includes customers, suppliers, or other outsiders who deal with it.
What does it mean, in a business sense, to fulfill your responsibilities? Suppose you want to open a retail business. You go to friends or family and ask them to invest money or other resources in your business. When you accept those investments, you and your investors agree that you will use them prudently, properly, legally, and effectively to set up and operate the business to achieve the goals you’ve agreed to with the investors.
You take due care of those responsibilities, and your investors’ expectations and investments, when you set up the business, its business logic and processes, and all of its facilities, equipment, people, and supplies so that it can operate. The burden of due care requires you not only to use common sense, but also to use best practices that are widely known in the marketplace or the domain of your business. Since these represent the lessons learned through the successes or failures of others, you are being careful when you consider these; you are perhaps acting recklessly when you ignore them.
As a business leader, owner, or stakeholder, you exercise due diligence by inspecting, auditing, monitoring, and otherwise ensuring that the business processes, people, and systems are working correctly and effectively. This means you must check that those processes and people are doing what they were set up to do and that they are performing these tasks correctly. More than that, you must also verify that they are achieving their share of the business’s goals and objectives in efficient and effective ways—in the best ways possible, in fact!
Everybody in the organization has a piece of the due care and due diligence burden to carry—including the customers! Consider your relationship with your bank; you would be careless indeed if you never checked your bank balance or looked at transactions (online or on a periodic statement) and verified that each one was legitimate. In fact, under many banking laws, if the customer fails to provide timely notice to the bank of a possible fraudulent transaction, this can relieve the bank of its responsibilities to resolve it (and to reimburse the customer for any loss they suffered).
Because the concepts of due care and due diligence first developed in business communities, we often think that this means that government officials somehow do not have these same burdens of responsibilities, either in law or in practice. This is not true! It is beyond the scope of this book to go into this further, but as an SSCP, you do need to be aware that everyone has a share of these burdens. By being willing to be a certified professional, you step up and accept the burden of due care by pledging to do the best job possible in designing, building, operating, and maintaining information security systems. You accept the burden of due diligence by accepting the need to ensure that such systems continue to work effectively, correctly, and efficiently, by means of monitoring their actions, analyzing the log data they produce, and keeping the organization’s leadership and management properly informed.
Preparedness means we have to assume that some attackers will win through to their targets and that some damage will happen. Even for natural threats, such as earthquakes or hurricanes, all it takes is one “perfect storm” to wipe out our business completely—if we are not prepared for it. So how do we limit our risk—that is, not keep all of our eggs in one basket to be smashed by a single hazardous event? How do we contain it, perhaps by isolating damage so that a fire in one building does not spread to others?
We should always start with a current set of priorities for our goals and objectives. Many organizations (and most human beings!) do the things they do and have the things they have because of decisions that they made quite some time ago. “We’ve always done it this way,” or “It’s always been my dream to own a big house on the beach” may have been our goals; the question is, are these still our most important goals today?
By focusing on today’s priorities, we can often find tasks we are doing that no longer matter. Sometimes the hardest question for people or organizations to answer is, “Why are we doing this particular business process?” In large, established organizations, history and momentum have a lot to do with how business gets done; “We’ve always done it this way” can actually be a good practice, when you can be sure that the process in question is the best way to reach your organization’s goal or target outcome. But market conditions change, technologies evolve, people grow and learn, and more often than not, processes become outmoded, unproductive, or otherwise obsolete.
Even our sense of the threats we face, or of the vulnerabilities inherent to who we are (as a business) or what we do, is subject to change.
Thus, the first step in defense is to know yourself (as an individual or as a business) right now. Know who and what you want to become. Prioritize what it takes to achieve today’s plan and not fall back on yesterday’s strategies. On the basis of that knowledge, look at what you need, what you have to do, and what obstacles or threats have to be faced today and in the near term—and if outcomes, objectives, processes, or assets you currently have don’t serve those priorities, then those are probably not worthy of extensive efforts to mitigate risks against them.
Recall that a risk management framework is a set of concepts, tools, processes, and techniques that help organize information about risk. As you’ve no doubt started to see, the job of managing risks to your information is a set of many jobs, layered together. More than that, it’s a set of jobs that changes and evolves with time as the organization, its mission, and the threats it faces evolve.
Let's start by taking a quick look at NIST Special Publication 800-37 Rev. 2, Risk Management Framework (RMF) for Information Systems and Organizations: A System Life Cycle Approach for Security and Privacy. Published in December 2018, NIST SP 800-37 Rev. 2 establishes a broad, overarching perspective on what it calls the fundamentals of information systems risk management. Organizational leadership and management must address these areas of concern, shown conceptually in Figure 3.4:
You can see that there’s an expressed top-down priority or sequence here. It makes little sense to worry about your IT supply chain (which might be a source of malware-infested hardware, software, and services) if leadership and stakeholders have not first come to consensus about risks and risk management at the broader, strategic level. (You should also note that in NIST’s eyes, the big-to-little picture goes from strategic, to operational, to tactical, which is how many in government and the military think of these levels. Business around the world, though, sees it as strategic, to tactical, to day-to-day operations.)
FIGURE 3.4 NIST RMF areas of concern
The RMF goes on by specifying seven major phases (which it calls steps) of activities for information risk management:
It is tempting to think of these as step-by-step sets of activities—for example, once all risks have been categorized, you then start selecting which are the most urgent and compelling to make mitigation decisions about. Real-world experience shows, though, that each step in the process reveals things that may challenge the assumptions we just finished making, causing us to reevaluate what we thought we knew or decided in that previous step. It is perhaps more useful to think of these steps as overlapping sets of attitudes and outlooks that frame and guide how overlapping sets of people within the organization do the data gathering, inspection, analysis, problem solving, and implementation of the chosen risk controls. Figure 3.5 shows that there’s a continual ebb and flow of information, insight, and decision between and across all elements of these “steps.”
FIGURE 3.5 NIST RMF phased approach
Although NIST publications are directive in nature for U.S. government systems, and indirectly provide strong guidance to the IT security market in the United States and elsewhere, many other information risk management frameworks are in widespread use around the world. For example, the International Organization for Standardization publishes ISO Standard 31000:2018, Risk Management Guidelines, in which the same concepts are arranged in slightly different fashion. First, it suggests that three main tasks must be done (and in broad terms, done in the order shown):
Three additional, broader functions support or surround these central risk mitigation tasks:
As you can see in Figure 3.6, the ISO RMF also conveys a sense that on the one hand, there is a sequence of major activities, but on the other hand, these major steps or phases are closely overlapping.
FIGURE 3.6 ISO 31000:2018 Conceptual RMF
It’s wise to bear in mind that each major section of these RMFs gives rise to more detailed guidance, instructions, and “lessons learned” advice. For example, NIST Special Publication 800-61 Rev. 2, Computer Security Incident Handling Guide, looks more in-depth at what happens when an information risk actually occurs and becomes an incident. Its phases of Preparation, Detection, Analysis, Containment, Eradication, Recovery, and Post-Incident Activities parallel those found in the RMF, which looks at the larger picture of information risk management. We’ll explore these in greater detail in Chapter 10, “Incident Response and Recovery.”
As an SSCP, you’ll have two major opportunities to help your organization or your business keep its information and information systems safe, secure, and reliable, as these risk management frameworks suggest. At one level, you’ll be working as a technical specialist to help implement information risk controls. You’ll be doing the day-to-day operational tasks that treat risk, ensuring that the chosen risk treatment procedures are delivering the required level of safety and security; you’ll also be part of the team that responds when risk treatments fail. As you continue to grow in your profession and gain experience and insight, you’ll be able to offer technical insight and informed opinion to your managers. It’s important, then, to see how the technical, operational details that deliver information security and decision assurance, day by day, fit within the context of the management decisions that create the risk management plans that you and others carry out.
For the SSCP exam, you’ll need to have a broad awareness of the existence of standards such as these, but you won’t need to be conversant with their details. You will, however, need to be able to keep track of the context the question or issue comes up in, and be able to recognize when to shift your thinking from bigger-picture “information risk management” to more detailed, finer-grain “information security incident response” and back again.
To help you in that shift of thinking, we’ll split the managerial and leadership portions of risk management and mitigation off from the technical, operational, and administrative where it makes sense. The rest of this chapter, for example, will show how SSCPs support leadership and management as they prepare the organization to manage its risks, perform its information risk assessments, and use them to develop the business impact analysis (BIA). An effective BIA provides a solid transition from understanding the risks to mitigating them. We will briefly outline the remaining steps, but use Chapter 4 to get into the technical, administrative, and operational details of risk mitigation.
We’ll also translate the somewhat bureaucratic language that is used in the NIST RMF, and in ISO 31000:2018, into the sort of terms you’re more likely to hear and use within the workplace.
So let’s get started!
The Project Management Institute and many other organizations talk about the basic cycle of making decisions, taking steps to carry out those decisions, monitoring and assessing the outcomes, and taking further actions to correct what’s not working and strengthen or improve what is.
One important idea to keep in mind is that these cycles of Plan, Do, Check, Act (PDCA) don’t just happen one time—they repeat, they chain together in branches and sequels, and they nest one inside the other, as you can see in Figure 3.7. Note too that planning is a forward-looking, predictive, thoughtful, and deliberate process. We plan our next vacation before we put in for leave or make hotel and travel arrangements; we plan how to deal with a major disruption due to bad weather before the tornado season starts!
The SSCP applies this framework at the daily operational level. What must you accomplish today? How will you do it? What will you need? Then, do those tasks. Check to see if you did them correctly and that you got the desired outcomes as a result. If not, take corrective action if you can, or seek help and guidance if you cannot.
FIGURE 3.7 PDCA cycle diagram (simple), with subcycles
We’ll see this PDCA cycle in action here as we look at risk assessment and the decisions that come from it; Chapter 4 will then show PDCA in action as we look at ways to mitigate selected risks. Let’s take a closer look at these four steps:
As with many theoretical or "school-house" models, PDCA looks simple in concept and suggests clean, well-defined breakpoints between each of its four elements. In reality, these four steps flow into and out of one another; sometimes checking will lead right back to some "modified doing," or day-to-day urgencies may dictate that we "get doing" before the planning is done, or before we've checked the actions we took under the last version of the plan. For you as an SSCP, it's important to recognize these separate "thought models" for dealing with situations and to notice when you are doing without having actually planned what to do—which would, after all, be somewhat risky behavior in itself.
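One way to see the shape of that cycle, and how checking can loop back into doing or replanning, is a small sketch like the following. The task names and stand-in functions are purely hypothetical; real PDCA cycles are carried out by people and processes, not by a few lines of code:

```python
# Minimal sketch of a Plan-Do-Check-Act loop. Check can send us back to
# Act (corrective action) and then around to Plan again; the cycle
# repeats rather than running once. All names are hypothetical.
def pdca(plan, do, check, act, max_cycles=3):
    for cycle in range(1, max_cycles + 1):
        tasks = plan()                 # forward-looking, deliberate step
        results = do(tasks)            # carry out the planned tasks
        problems = check(results)      # did we get the desired outcomes?
        if not problems:
            print(f"Cycle {cycle}: outcomes met, no corrective action needed")
            return results
        act(problems)                  # correct what we can, then replan
    print("Escalate: corrective actions alone did not close the gap")

# Example usage with trivial stand-in functions:
pdca(plan=lambda: ["patch servers"],
     do=lambda tasks: {t: "done" for t in tasks},
     check=lambda results: [],         # pretend everything checked out
     act=lambda problems: None)
```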
Risk assessment is a systematic process of identifying risks to achieving organizational priorities. There are many published handbooks, templates, and processes for doing risk assessment, and they all have several key elements that you should not lose sight of while trying to implement the chosen framework of the day.
At the heart of a risk assessment process must be the organizational goals and objectives, suitably prioritized. Typically, the highest priorities are existential ones—ones that relate to the continued existence and health of the organization. These often involve significant threats to continued operation or significant and strategic opportunities for growth. Other priorities may be vitally important in the near term, but other options may be available if the chosen favorite fails to be successful. The “merely nice to have” objectives may fall lower in the risk assessment process. This continual reevaluation of priorities allows the risk assessment team to focus on the most important, most compelling risks first.
The next major element of risk assessment is to thoroughly examine and evaluate the processes, assets, systems, information, and other elements of the organization as they relate to or support achieving these prioritized goals and objectives. This linkage of “what” and “how” with “why” helps narrow the search for system elements or process steps that, if they fail or are vulnerable to exploitation, could put these goals in jeopardy.
Most risk assessment processes summarize their findings in some form of BIA. The BIA relates the costs (in money, time, and resources) that the organization could face if the risk events do occur. It also assesses how frequently each risk might occur. The expected cost of these risks (their costs multiplied by their frequencies and probabilities of occurrence, across the organization) represents the anticipated financial impact of each risk over time; this is a key input to making risk mitigation or control choices.
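Many BIA teams express that expected cost as an annualized loss expectancy (ALE): the single loss expectancy (SLE) for one occurrence, multiplied by the annualized rate of occurrence (ARO). Here is a minimal sketch of that arithmetic; the risk names and dollar figures are invented purely for illustration:

```python
# BIA arithmetic sketch: annualized loss expectancy (ALE) = single loss
# expectancy (SLE) x annualized rate of occurrence (ARO). The risks and
# figures below are hypothetical examples, not real assessment data.
risks = [
    {"name": "ransomware outbreak",   "sle": 250_000.00, "aro": 0.30},
    {"name": "datacenter power loss", "sle": 40_000.00,  "aro": 1.50},
    {"name": "laptop theft",          "sle": 3_500.00,   "aro": 4.00},
]

for risk in risks:
    risk["ale"] = risk["sle"] * risk["aro"]

# Rank risks by expected annual cost, so that mitigation decisions start
# with the most financially significant exposures.
for risk in sorted(risks, key=lambda r: r["ale"], reverse=True):
    print(f'{risk["name"]:<22} ALE = ${risk["ale"]:>10,.2f}')
```

Sorting by ALE gives management a defensible first-cut ordering of which risks deserve mitigation attention (and budget) first.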
Let’s see what it takes to put this kind of risk assessment process and thinking into action.
Preparing the organization to manage its information risk requires that senior leadership, key stakeholders, and others develop and establish key working relationships and processes that focus on risk management in general and on information risk management in particular. These key individuals will need to focus attention on the relationships between organizational priorities on the one hand, and the business processes and information systems that have been built and are being used to meet those priorities on the other. This consensus should align resource allocations with those priorities.
A critical task during this first step is ensuring a common understanding of the organization's context and culture. Doing so involves reaching a consensus on risk appetite, or the willingness of the organization to accept risk, and on how leadership makes decisions about risk. (This is sometimes referred to as the organization's risk tolerance.) It is also important at this point to understand how the organization controls changes to its business processes and systems, particularly to its information technology systems.
We also begin to see at this point that the organization might have a bias, or a customary way of considering risk—that is, does it view risk in terms of outcomes, processes, assets, or threats? Asset-focused risk thinkers, for example, will probably drive the organization to build its risk assessments (and its BIA) in terms of asset values and damages to those assets. Threat-based thinkers, by contrast, may try to drive the assessment conversation more in the direction of threat modeling (which we’ll examine further in Chapter 8). The key is that all perspectives have something of value to contribute at this stage; the wiser organizations will use outcomes, processes, assets, and threats as the points to ponder as they perform their information risk assessments. No one “face of risk” is the most correct.
Risk management frameworks such as NIST SP 800-37r2 and ISO 31000:2018 provide top-down guidance to organizations in setting the organizational attitude and mindset in ways that support building this consensus. These RMFs also provide specific recommendations, often in step-by-step fashion, that organizations large and small can learn from. NIST SP 800-37r2 calls this step “Prepare” as a way to emphasize how important it is to establish a common ground of understanding within the organization. The “movers and shakers” who drive the business forward have to agree, and they have to speak with a common set of words and meanings when they engage with the people who will actually do the hard work of managing and mitigating information risk. ISO 31000:2018 perhaps says this more clearly by focusing on the key outcomes of this step. First, we agree on where the boundaries are—what do we own and operate, and what do we count on outsiders to do on our behalf? Next, we look at context; finally, we must agree on our thresholds for accepting risk and on our willingness to pay to mitigate it.
The SSCP exam does not go into either RMF in great detail; nor, for that matter, would an SSCP be expected to have in-depth expertise in applying every part of an RMF on the job. That said, these RMFs can help the SSCP recognize the context that their day-to-day operational duties support—and maybe help them notice weak spots in the organization’s overall information risk management approach.
What happens when an organization’s information is lost, compromised by disclosure to unauthorized parties, or corrupted? Questions such as these (which reflect the CIANA+PS set of security characteristics) indicate what the organization stands to lose if such a breach of information security happens. Let’s illustrate with a few examples:
Company financial data and price and cost information: Loss or compromise can lead to loss of business, to investors withdrawing their funds, or to loss of business opportunities as vendors and partners go elsewhere. It can also result in civil and criminal penalties.
Details about internal business processes: Loss could lead to failures of business processes to function correctly; compromise could lead to loss of competitive advantage, as others in the marketplace learn how to do your business better.
Risk management information: In the worst case, loss or compromise of your risk management information could provide attackers with valuable technical and operational intelligence insights, making your systems and your organization all the more vulnerable to attack. Additionally, loss or compromise could lead to insurance policies being canceled or premiums being increased, as insurers conclude that the organization cannot adequately fulfill its due diligence responsibilities.
When we view information in such terms—as “What does it cost us if we lose it?”—we decide how vital the information is to us. What this categorization or classification really does is tell us how important it is to protect that information, based on possible loss or impact. We categorize our possible losses in terms of severity of damage, impact, or costs; we also categorize them in terms of the outcomes, processes, and assets they affect or depend on. Finally, we categorize them by threat or by common vulnerabilities. This kind of risk analysis can help us identify critical locations, elements, or objectives that could be putting the entire organization at risk, which in turn focuses our risk analysis further.
Some of us are familiar with simple hierarchical information classification systems used by governments and military services. These often start with “Unclassified” as their lowest level and move up through “For Official Use Only,” “Confidential,” “Secret,” and “Top Secret” as their way of broadly outlining how severely a nation would be impacted if the information was disclosed, stolen, or otherwise compromised. Yet even these cannot stay simple for long. Businesses, private organizations, and the military have another aspect of data categorization in common: the concept of need to know. Also known as least privilege, need to know limits who has access to read, use, or modify data based on whether their job functions require them to do so. Thus, a school’s purchasing department staff have a need to know about suppliers, prices, specific purchases, and so forth, but they do not need to know any of the PII pertaining to students, faculty, or other staff members. Need to know leads to compartmentalization of information approaches, which create procedural boundaries (administrative controls) around such sets of information. (We’ll discuss this more in Chapter 6, “Identity and Access Control.”) All of this leads to needing more powerful ways to group similar sets of information (such as all PII pertaining to students) into groups based on common levels of impacts from compromise and on common security measures that should be used to protect it.
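A minimal sketch, assuming purely hypothetical role and data-set names, of how need to know differs from a simple hierarchical level check: access turns on whether the requester’s duties cover a given compartment of data, not on how “high” their clearance is.

```python
# Need-to-know check: access is granted only when the requester's job
# function covers that compartment of data, regardless of any hierarchical
# "level." Role and data-set names here are illustrative assumptions.

NEED_TO_KNOW = {
    "purchasing": {"supplier pricing", "purchase orders"},
    "registrar":  {"student PII"},
}

def may_access(role: str, data_set: str) -> bool:
    """True only if this role's duties require this compartment of data."""
    return data_set in NEED_TO_KNOW.get(role, set())

print(may_access("purchasing", "supplier pricing"))  # True
print(may_access("purchasing", "student PII"))       # False: no need to know
```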
This process of grouping or categorizing potential impacts from different types of risks leads directly to the three tasks of security classification, categorization, and baselining for all the organization’s information assets. These tasks are defined as follows:
Classifying the organization’s information assets (that is, the data, information, and knowledge itself—and not the media on which it is recorded) provides a powerful way to align risk management, security controls, and job or workflow designs with organizational goals, objectives, and priorities. We’ll see in Chapter 12 how this alignment of security functions, from classification through continuous assessment, is (or should be) a major component of an effective security program.
Categorizing data types with similar security levels and needs provides a consistency check. For example, suppose that a manufacturing company has safety-critical data that relates to four different sources of safety compliance regulations (such as chemical and hazardous materials handling, machinery operation, and autonomous systems). Looking across all of these safety groups might reveal common assumptions in the ways in which workflows and processes were designed; these assumptions may need further assessment and might possibly be home to exploitable vulnerabilities in their implementations.
The security baseline provides a common frame of reference that the organization can use to create requirements, designs, processes, and procedures for labeling, handling, storing, and protecting the various types of data it uses. In this way, changes to the threat landscape, to systems and applications, or to security technologies can be focused on the specific security needs of each data type. This helps the organization spend the right amount of effort to get the right protection, rather than ending up with what might be a blanket overspend to protect everything to the highest possible level of security.
For example, an online pharmacy might use a subset of each customer’s PII, their credit, debit, or other payment card information, information about their insurance carrier, and then the details of each prescription or over-the-counter drug purchased by the customer. Depending upon the country of jurisdiction, each of those types of data may require different protections, such as different levels of encryption, or be subject to audit or inspection at different frequencies. Safety considerations would require that the pharmacy can quickly link any drug recall notices to individual purchases based on drug batch or lot numbers, dates, and so on. Insurance portability and privacy laws may specify other special protection needs. Each type of data may also have different maximum retention periods specified by law. Finally, the pharmacy would need to protect its overall wholesale purchase, inventory, and order history data from inadvertent access, breach, or loss. Just for its primary line of business, five different classification schemes or levels might be needed, each associated with the specific laws, standards, or contracts that dictate the protection required.
This might result in security classification labels for privacy, payment, insurance, safety, and logistics data. They may additionally need classifications for their human resources and financial data, along with company proprietary as a label (and set of handling procedures) for their workflow process knowledge and other trade secrets. Note that these do not ladder up in a simple list from highest to lowest security level; each is a different set, the sets are disjoint (in set theory terms), and each needs its own set of protection and handling procedures. If a single “system high” classification was used instead, then any employee would have access privileges to any data based upon its (one and only) security classification; this would probably violate compliance requirements and might introduce other security risks as well.
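One minimal sketch, with purely illustrative label names, data types, and handling rules (none drawn from any particular law, standard, or contract), of how such disjoint classification sets might be represented so that each carries its own handling procedures:

```python
# Disjoint security classification sets for a pharmacy-like business.
# Labels, data types, and handling rules are illustrative assumptions.

CLASSIFICATION_SCHEMES = {
    "privacy":   {"covers": {"customer PII"},            "handling": "encrypt at rest and in transit"},
    "payment":   {"covers": {"payment card data"},       "handling": "tokenize; payment staff only"},
    "insurance": {"covers": {"carrier and policy data"}, "handling": "encrypt; quarterly access audit"},
    "safety":    {"covers": {"drug batch/lot records"},  "handling": "retain for recall traceability"},
    "logistics": {"covers": {"inventory and orders"},    "handling": "internal use only"},
}

def labels_for(data_type):
    """Every classification label whose scheme covers this data type."""
    return [label for label, scheme in CLASSIFICATION_SCHEMES.items()
            if data_type in scheme["covers"]]

# No single highest-to-lowest ladder exists across these sets, so a lone
# "system high" level cannot substitute for per-set handling procedures.
print(labels_for("drug batch/lot records"))  # ['safety']
```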
For classification programs to be successful, several conditions need to be met. First, each information asset must have an asset owner within the organization, one who understands the business logic for its use and can speak authoritatively about potential impacts should it be compromised in any way. Next, the classification scheme itself needs to balance robustness with simplicity. Too simple, and everything is “top secret” and nobody really understands what that means; too complicated, with too many layers and levels, and it becomes hard to use. Either way, human error will result in mishandling of sensitive data, possibly leading to compromise or loss. Finally, as we’ll see in Chapter 8, “Hardware and Systems Security,” there need to be procedures in place for establishing data retention limits and for proper disposal of information when it is no longer required.
Risk analysis is a complex undertaking and often involves trying to sort out what can cause a risk to become an incident. Root cause analysis, for example, looks for the underlying vulnerability or mechanism of failure that leads to the incident. By contrast, proximate cause analysis asks, “What was the last thing that happened that caused the risk to occur?” (This is sometimes called the “last clear opportunity to prevent” the incident, a term that insurance underwriters and their lawyers often use.) Our earlier example of backing your car out of the driveway, only to run over a child’s bicycle left in the wrong place, illustrates these ideas. You could have looked first, maybe even walked around the car before you got in and started to drive; you had the last clear opportunity to prevent damage, and thus your actions were the proximate cause. (You failed in your due diligence, in other words.) Your child, however, is the one who left the bicycle in the wrong place; the root of the problem may be the failure to help your child learn and appreciate what his responsibility of due care for his bicycle requires. And who was responsible for teaching due care to your child? (A word of advice: don’t say “My spouse.”)
We’ve looked at a number of examples of risks becoming incidents; for each, we’ve identified an outcome that describes what might happen (customers go to our competitors; we must get our car and the bicycle repaired). Outcomes are part of the basis of estimate with which we can make two kinds of risk assessments: quantitative and qualitative.
Quantitative assessments use simple techniques (like counting possible occurrences, or estimating how often they might occur) along with estimates of the typical cost of each loss.
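Later in this chapter these estimates get formal names: the single loss expectancy (SLE), the annual rate of occurrence (ARO), and the annual loss expectancy (ALE). As a minimal sketch of the arithmetic, using purely illustrative dollar figures:

```python
# Quantitative risk arithmetic, using the SLE/ARO/ALE terms defined later
# in this chapter. All figures below are illustrative assumptions.

sle = 40_000.0   # single loss expectancy: total cost of one occurrence
aro = 0.5        # annual rate of occurrence: expected once every two years

ale = sle * aro  # annual loss expectancy: expected yearly loss from this risk
print(f"ALE = ${ale:,.0f} per year")  # ALE = $20,000 per year

# A control is worth considering when its yearly cost (the safeguard value,
# also defined later in this chapter) is less than the ALE reduction it buys.
safeguard_cost = 8_000.0   # assumed yearly cost of the control
residual_ale = sle * 0.1   # assumed ARO of 0.1 once the control is in place
print(f"Net benefit = ${ale - residual_ale - safeguard_cost:,.0f} per year")
```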
Other numbers associated with risk assessment relate to how the business or organization deals with time when its systems, processes, and people are not available to do business. This “downtime” can often be expressed as a mean (or average) allowable downtime, or a maximum downtime. Times to repair or restore minimum functionality, and times to get everything back to normal, are also some of the numbers the SSCP will need to deal with. For example:
These types of quantitative assessments help the organization understand what a risk can do when it actually happens (becomes an incident) and what it will take to get back to normal operations and clean up the mess it caused. One more important question remains: how long to repair and restore is too long? Two more “magic numbers” shed light on this question: the recovery time objective (RTO) and the recovery point objective (RPO), both defined fully near the end of this chapter along with the maximum acceptable outage (MAO).
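A minimal sketch, with assumed hour values, of the consistency rule those definitions imply—every RTO must fit within the MAO it supports:

```python
# Recovery "magic numbers" consistency check. MAO, RTO, and RPO are defined
# near the end of this chapter; the hour values below are assumptions.

mao_hours = 24.0  # maximum acceptable outage for the mission overall

rtos = {  # per-system recovery time objectives, in hours (illustrative)
    "order entry": 4.0,
    "inventory":   12.0,
    "reporting":   36.0,  # deliberately too long, to show the check fire
}

for system, rto in rtos.items():
    status = "ok" if rto <= mao_hours else "VIOLATES MAO"
    print(f"{system:12s} RTO = {rto:5.1f} h  {status}")

# RPO is work at risk: with nightly backups, up to a day's transactions
# may have to be reaccomplished after a restore.
rpo_hours = 24.0
print(f"RPO = {rpo_hours:.0f} h of work that may need to be redone")
```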
We’ll go into these numbers (and others) in greater depth in Chapter 10 as you learn how to help your organization plan for and manage its response to actual information security and assurance incidents. It’s important that you realize that these numbers play three critical roles in your integrated, proactive information defense efforts. All of these quantitative assessments (plus the qualitative ones as well) help you achieve the following:
One final thought about the “magic numbers” is worth considering. The organization’s leadership have their stakeholders’ personal and professional fortunes and futures in their hands. Exercising due diligence requires that management and leadership be able to show, by the numbers, that they’ve fulfilled that obligation and can bring the organization back from the brink of irreparable harm when disaster strikes. Those stakeholders—the organization’s investors, customers, neighbors, and workers—need to trust in the leadership and management team’s ability to meet the bottom line every day. Solid, well-substantiated numbers like these help the stakeholders trust, but verify, that their team is doing its job.
Qualitative assessments focus on an inherent quality, aspect, or characteristic of the risk as it relates to the outcome(s) of a risk occurrence. “Loss of business” could be losing a few customers, losing many customers, or closing the doors and going out of business entirely!
So, which assessment strategy works best? The answer is both. Some risk situations may present us with things we can count, measure, or make educated guesses about in numerical terms, but many do not. Some situations clearly identify existential threats to the organization (the occurrence of the threat puts the organization completely out of business); again, many situations are not as clear-cut. Senior leadership and organizational stakeholders find both qualitative and quantitative assessments useful and revealing.
Qualitative assessment is often used when managers or risk analysts believe that they do not have sufficient data to support a rigorous quantitative analysis. The processes being assessed for impact might be new or unique; the organization or even its industry and market may not have sufficient experience with an activity to make a reliable quantitative estimate of frequency of occurrence or impact. That said, the organization may find that it has more data (from design, testing, simulation, or market research) than it really needs to make a reasonable quantitative estimate.
At this point, the organization or business needs to be building a risk register, a central repository or knowledge bank of the risks that have been identified in its business processes and systems. This register should be a living document, constantly refreshed as the company moves from risk identification through mitigation to the “new normal” of operations after instituting risk controls or countermeasures.
As an internal document, a company’s risk register is a compendium of its weaknesses and should be treated as closely held, confidential, proprietary business information. It provides a would-be attacker, a competitor, or a disgruntled employee with powerful insight into ways that the company might be vulnerable to attack. This need to protect the confidentiality of the risk register becomes even more acute as the register is updated from first-level outcomes- or process-based identification through impact assessments, and then linked (as you’ll see in the next chapter, “Operationalizing Risk Mitigation”) with systems vulnerability or root cause/proximate cause assessments.
There is no single agreed-upon or best format or structure for a risk register, although many vendors provide platforms and systems to assist businesses in organizing all of their risk management information and processes. Those details are beyond the scope of the SSCP exam, but you’ll need to be aware of the role that a risk register should play in planning and conducting information risk management efforts.
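Since no single format is mandated, here is one minimal sketch of what a register entry might capture; every field name and sample value is an illustrative assumption:

```python
# One possible shape for a risk register entry. Field names and the sample
# entry are illustrative assumptions; real registers vary widely.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskRegisterEntry:
    risk_id: str
    description: str
    perspective: str            # outcomes, process, asset, or threat based
    owner: str                  # the accountable asset or process owner
    sle: float                  # single loss expectancy, in dollars
    aro: float                  # annual rate of occurrence
    status: str = "identified"  # identified -> assessed -> mitigated -> accepted
    last_reviewed: date = field(default_factory=date.today)

    @property
    def ale(self) -> float:
        """Annual loss expectancy: SLE multiplied by ARO."""
        return self.sle * self.aro

entry = RiskRegisterEntry(
    risk_id="R-017",
    description="Loss of customer PII via compromised web front end",
    perspective="asset",
    owner="Director of e-commerce",
    sle=250_000.0,
    aro=0.2,
)
print(f"{entry.risk_id}: ALE = ${entry.ale:,.0f}/yr, status = {entry.status}")
```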
Many nations conduct or sponsor efforts to collect and publish information about system vulnerabilities that are commonly found in commercial off-the-shelf (COTS) IT systems and elements or that result from common design or system production weaknesses. In the United States, the Mitre Corporation maintains its database of Common Vulnerabilities and Exposures (CVE) information as a public service; this data is made freely available to anyone, from anywhere, without restriction. Mitre is one of several federally funded research and development corporations (FFRDCs) that research science and technology topics in the national interest; many of its findings are made available as published reports or databases. Mitre operates the National Cybersecurity FFRDC (NCF), which as of this writing is the only federally funded research center for cybersecurity and vulnerability assessment. Its website, https://cve.mitre.org/, has a rich set of information and resources that SSCPs should become familiar with. Also in the United States, the National Institute of Standards and Technology (NIST) operates the National Vulnerability Database at https://nvd.nist.gov/; in the United Kingdom, these roles are filled by the Government Communications Headquarters (GCHQ, roughly equivalent to the U.S. National Security Agency) through its National Cyber Security Centre at www.ncsc.gov.uk.
The business impact analysis (BIA) is where the rubber hits the road, so to speak. Risk management must be a balance of priorities, resources, probabilities, and impacts, as you’ve seen throughout this chapter. All this comes together in the BIA. As its name implies, the BIA is a consolidated statement of how different risks could impact the prioritized goals and objectives of an organization.
The BIA reflects a combination of due care and due diligence in that it combines “how we do business” with “how we know how well we’re doing it.”
There is no one right, best format for a BIA; instead, each organization must determine what its BIA needs to capture and how it has to present it to achieve a mix of purposes:
You must recognize one more important requirement at this point: to be effective, a BIA must be kept up to date. The BIA must reflect today’s set of concerns, priorities, assets, and processes; it must reflect today’s understanding of threats and vulnerabilities. Outdated information in a BIA could at best lead to wasted expenditures and efforts on risk mitigation; at worst, it could lead to failures to mitigate, prevent, or contain risks that could lead to serious damage, injury, or death, or possibly put the organization out of business completely.
At its heart, making a BIA is pretty simple: you identify what’s important, estimate how often it might fail, and estimate the costs to you of those failures. You then rank those possible impacts in terms of which basis for risk best suits your organization, be that outcomes, processes, assets, or vulnerabilities. For all but the simplest and smallest of organizations, however, the amount of information that has to be gathered, analyzed, organized, assessed, and then brought together in the BIA can be overwhelming. The BIA is one of the most critical steps in the information risk management process, end to end; it’s also perhaps the most iterative, the most open to reconsideration as things change, and the most in need of being kept alive, current, and useful. Most of that is well beyond the scope of the SSCP examination, and so we won’t go into the mechanics of the business impact analysis process in any further detail. As an SSCP, however, you’ll be expected to continue to grow your knowledge and skills, thus becoming a valued contributor to your organization’s BIA.
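As a minimal sketch of that identify-estimate-rank loop, with made-up risks and figures, ranking by expected annual impact:

```python
# Ranking risks by expected annual impact, the core sorting step behind
# a BIA. The (name, SLE, ARO) tuples are illustrative assumptions.

risks = [
    ("web front-end breach",   250_000.0, 0.2),
    ("warehouse fire",       1_000_000.0, 0.02),
    ("payroll system outage",   15_000.0, 1.5),
]

ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for name, sle, aro in ranked:
    print(f"{name:24s} ALE = ${sle * aro:>10,.0f}/yr")
```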
Two sets of information provide a rich source of information security requirements for an organization. The first is the legal, regulatory, and cultural context in which the organization must exist. As stated before, failure to fulfill these obligations can put the organization out of existence, and its leaders, owners, stakeholders (and even its employees) at risk of civil or criminal prosecution. The second set of information that should drive the synthesis of information security requirements is the organization’s BIA.
There are typically two major ways that information security requirements take form or are expressed within an organization. The first is to write a system requirements specification (SRS), a formal document used to capture high-level statements of function, purpose, and intent. An SRS also contains important system-level constraints. It guides or directs analysts and developers as they design, build, test, deploy, and maintain an information system; it also drives end-user training activities.
Organizations also write and implement policies and procedures that state what the information security requirements are and what the people in the organization need to do to fulfill them and comply with them:
You might ask which should come first, the SRS or the policies and procedures. Once senior leadership agrees to a statement of need, it’s probably faster to publish a policy and a new procedure than it is to write the SRS, design the system, test it, deliver it, and train users on the right ways to use it. But be careful! It often takes a lot of time and effort for the people in an organization to operationalize a new policy and the procedures that come with it. Overlooking this training hurdle can cause the new policy or procedures to fail.
Four strategic choices exist when we think about how to protect prioritized assets, outcomes, or processes. These choices operate at the strategic level because, by their very nature, they are comparable to “life-or-death” choices for the organization. A strategic risk might force the company to choose between abandoning a market or opportunity and taking on a fundamental, gut-wrenching level of change throughout its ethics, culture, processes, or people, for example. We see such choices almost before we’ve started to think about what the alternatives might cost and what they might gain us. These strategic choices are often used in combination to achieve the desired level of assurance against risk. As an SSCP, you’ll assist your organization in making these choices across the strategic, tactical, and operational levels of planning, decision making, and action. Note that each of these choices is a verb; these are things that you do, actions you perform. This is key to understanding which ones to choose and how to use them successfully. We’ll look at each individually, and then take a closer look at how they combine and mutually reinforce one another for greater protective effect.
There are choices at the strategic and tactical level that seem quite similar and are often mistaken as identical. The best way to keep them separate in your mind might be as follows:
Having identified the risks and prioritized them, what next? What realistic options exist? One (more!) thing to keep in mind is that as you delve into the details of your architecture, and find, characterize, and assess its vulnerabilities against the prioritized set of risks, you will probably find some risks you thought you could and should “fix” that prove far too costly or disruptive to address. That’s okay. Like any planning process, risk management and risk mitigation taken together are a living, breathing, dynamic set of activities. Let these assessments shed light on what you’ve already thought about, as well as what you haven’t seen before.
So what are these strategic choices?
To deter means to discourage or dissuade someone from taking an action because of their fear or dislike of the possible consequences. Deterring an attacker means that you get them to change their mind and choose to do something else instead. Your actions and your posture convince the attacker that what they stand to gain by launching the attack will probably not be worth the costs to them in time, resources, or other damages they might suffer (especially if they are caught by law enforcement!). Your actions do this by working on the attacker’s decision cycle. Why did they pick you as a target? What do they want to achieve? How probable is it that they can complete the attack and escape without being caught? What does it cost them to prepare for and conduct the attack? If you can cast sufficient doubt into the attacker’s mind on one or more of these questions, you may erode their confidence; at some point, the attacker gives up and chooses not to go through with their contemplated or planned attack.
By its nature, deterrence is directed onto an active, willful threat actor. Try as you might, you cannot deter an accident, nor can you command the tides not to flood your datacenter. You do have, however, many different ways of getting into the attacker’s decision cycle, demotivating them, and shaping their thinking so that they go elsewhere:
Deterrence can be passive, active, or a combination of the two. Fences, lighting, and the design of parking areas, access roads, and landscaping tend to be passive deterrence measures; they don’t take actions in response to the presence of an attacker, for example. Active measures give the defender the opportunity to create doubt in the attacker’s mind: Is the guard looking my way? Is anybody watching those CCTV cameras?
To detect means to notice or consciously observe that an event of interest is happening. Notice the built-in limitation here: you have to first decide what set of events to “be on the lookout for” and therefore which events you possibly need to make action decisions about in real time. While you’re driving your car down a residential street, for example, you know you have to be watching for other cars, pedestrians, kids, dogs, and others darting out from between parked cars—but you normally would “tune out” watching the skies to see if an airplane was about to try to land on the street behind you. You also need to decide what to do about false alarms, both the false positives (that alarm when an event of interest hasn’t occurred) and the false negatives (the absence of an alarm when an event is actually happening).
If you think of how many false alarms you hear every week from car alarms or residential burglar alarms in your neighborhood, you might ask why we bother to try to detect that an event of interest might possibly be happening. Fundamentally, you cannot respond to something if you do not know it is happening. Your response might be to prevent or disrupt the event, to limit or contain the damage being caused by it, or to call for help from emergency responders, law enforcement, or other response teams. You may also need to activate alternative operations plans so that your business is not severely disrupted by the event. Finally, you do need to know what actually happened so that you can decide what corrective actions (or remediation) to take—what you must do to repair what was damaged and to recover from the disruption the incident has caused.
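A minimal sketch, with made-up alarm observations, of the bookkeeping behind tuning a detector’s false positives against its false negatives:

```python
# Tallying alarm quality: false positives (alarm, no event) versus false
# negatives (event, no alarm). The (alarm, event) observations are made up.

observations = [  # (alarm_fired, event_actually_happened)
    (True, True), (True, False), (False, False),
    (True, False), (False, True), (False, False),
]

fp = sum(1 for alarm, event in observations if alarm and not event)
fn = sum(1 for alarm, event in observations if event and not alarm)
tp = sum(1 for alarm, event in observations if alarm and event)

print(f"false positives: {fp}, false negatives: {fn}, true positives: {tp}")
# Too many false positives breed alarm fatigue; any false negative is an
# incident nobody responded to. Tuning trades one against the other.
```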
To prevent an attack means to stop it from happening or, if it is already underway, to halt it in its tracks, thus limiting its damage. A thunderstorm might knock out your commercial electrical power (which is an attack, even if a nondeliberate one), but the uninterruptible power supplies keep your critical systems up and running. Heavy steel fire doors and multiple dead-bolt locks resist all but very determined attempts to cut, pry, or force an entry into your building. Strong access control policies and technologies prevent unauthorized users from logging into your computer systems. Fire-resistant construction of your home’s walls and doors is designed to increase the time you and your family have to detect the fire and get out safely before the fire spreads from its source to where you’re sleeping. (We in the computer trades owe the idea of a firewall to this pre-computer-era, centuries-old idea of keeping harm on one side of a barrier from spreading through to the other.)
Preventive defense measures provide two immediate paybacks to the defender: they limit or contain damage to that which you are defending, and they cost the attacker time and effort to get past them. Combination locks, for example, are often rated in terms of how long it would take someone to just “play with the dial” to guess the combination or somehow sense that they’ve started to make good guesses at it. Fireproof construction standards aim to prevent the fire from burning through (or initiating a fire inside the protected space through heat transfer) for a desired amount of time.
Note that we gain these benefits whether we are dealing with a natural, nonintentional threat, an accident, or a deliberate, intentional attack.
To avoid an attack means to change what you do, and how you do it, in such ways as to not be where your attacker is expecting you to be when they try to attack you. This can be a temporary change to your planned activities or a permanent change to your operations. In this way, you can reduce or eliminate the possible disruptions or damages of an attack from natural, accidental, or deliberate causes:
Like everything in risk management and risk mitigation, these basic elements of choice can be combined in a wide variety of ways:
This last point bears some further emphasis. Organizations will often spend substantial amounts of money, time, and effort to put physical and even logical risk management systems into use, only to then put minimal effort into properly defining the who, what, when, where, how, and why of their use, maintenance, and ongoing monitoring. The money spent on a strong, imposing fence around your property will ultimately go to waste without routinely inspecting it and keeping it maintained. (Has part of it been knocked down by frost heave or a fallen tree? Has someone cut an opening in it? You’ll never know if you don’t walk the fence line often.)
This suggests that continuous follow-through is in fact the weakest link in our information risk management and mitigation efforts. We’ll look at ways to improve on this in the remainder of this book.
Every organization, large or small, public or private, faces an almost limitless sea of risks—things that can go wrong or at least occur in unanticipated ways. Risk management is about the possibilities of future events upsetting or disrupting our plans of today, and the systems and business processes we use today. At its heart, risk management is about ensuring that decisions can be made reliably, on time, and on target; thus we see that information security is really about delivering decision assurance; it’s about increasing our confidence that the decisions we make (large or small) are ones we can count on.
Risk management is the responsibility of the organization’s leaders and stakeholders; they have the primary burdens of due care (to ensure that they’re doing business correctly and effectively) and of due diligence (to continuously monitor and assess how well their business is working and whether it could work better). Since we cannot address every risk, and in fact cannot usually address any specific risk perfectly and completely, we’ve seen that risk management is the art of compromise. As SSCPs, we must balance the organization’s tolerance for risk against its ability and willingness to spend money, time, effort, and other assets to contain, control, or limit the impacts those risks might have if they actually occur.
Risk management frameworks can provide us the managerial structures and the organized knowledge of experience that we need to plan and conduct our risk management and mitigation activities. If risk management is making decisions about risk, risk mitigation is carrying out those decisions.
The interplay between management and mitigation, between decision making and implementation, is continuous. We can, however, see that some actions and decisions are strategic, affecting the very survival or long-term success of the organization. Many others are directly involved in day-to-day business operations; and in between, tactical decisions, plans, and actions translate strategic needs and decisions into the world of the day to day.
The bridge between risk management and risk mitigation is the BIA, the business impact analysis. This analysis combines the organizational priorities and an in-depth understanding of business processes, along with their vulnerabilities. In doing so, it provides the starting point for the next set of hard work: implementing, testing, and operationally using the right set of risk mitigation controls, which we’ll explore in Chapter 4.
Explain information risk and its relationship to information systems and decision making. You need information to make any decision, and if you cannot trust in that information’s confidentiality, integrity, and availability when you must make a decision, then your decision is at risk. Information systems implement the processes that gather data, process it, and help you generate new information; risks that cause these processes to suffer a compromise of confidentiality, integrity, or availability are thus information systems risks. These information systems risks further reduce your confidence that you can make on-time, accurate decisions.
Differentiate between outcomes-based, process-based, asset-based, and threat-based views of risk. Each of these provides alternative ways to view, think about, or assess risks to an organization, and they apply equally to information risks or any other kind of risk. Outcomes-based starts with goals and objectives and what kind of risks can impact your ability to achieve them. Process-based looks at your business processes and how different risks can impact, disrupt, or block your ability to run those processes successfully and correctly. Asset-based looks at any tangible asset (hardware, machinery, buildings, people) or intangible asset (knowledge, business know-how, or information of any kind) and asks how risks can decrease the value of the asset or make it lose usefulness to the business. Threat-based, also called vulnerability-based, focuses on how things go wrong—what the root and proximate causes of risks might be—whether natural, accidental, or deliberately caused. Note that threats are intentional acts committed (or contemplated) by humans and human organizations, while hazards are caused by natural events, accidents, or failure due to wear and tear.
Explain why information risk management needs to be integrated and proactive. Information security managers and incident responders need to know the status, state, and health of all elements of the information system, including its risk controls or countermeasures, in order to make decisions about dealing with an incident of interest. The timeliness and integrity of this information is critical to detecting an incident, characterizing it, and containing it before it causes widespread damage or disruption. Integrating all elements of your information risk management systems brings this information together rapidly and effectively to enable timely incident management. To be proactive requires that you think ahead to possible outcomes of risk events, and devise ways to deter, detect, prevent, contain, or avoid the impacts of such events, rather than merely being reactive—waiting until an event happens to learn from it, and only then instituting risk controls for the next time such an event occurs.
Differentiate due care from due diligence for information risk management. Due care and due diligence both aim to strike a prudent, sensible balance between “too little” and “too much” when it comes to implementing any set of responsibilities. Due care requires identifying information risks to high-priority goals, objectives, processes, or assets; implementing controls, countermeasures, or strategies to limit their possible impacts; and operating those controls (and the systems themselves) in prudent and responsible ways. Due diligence requires ongoing monitoring of these controls as well as periodic verification that they still work correctly and that new vulnerabilities or threats, changes in business needs, or changes in the underlying systems have not broken some of these risk control measures.
Know how to conduct an information risk assessment. Start with a prioritized list of outcomes, processes, assets, threats, or a mix of these; it is important to know that you’re assessing possible risks in decreasing order of their importance or concern to leadership and management. The next step is to gather data to help make quantitative and qualitative assessments of the impact of each risk to the organization and its information, should such a risk event occur. Data from common vulnerabilities and exposures registries (national and international) can assist by pointing out things to look for. As part of this, build a risk register, a database or listing of the risks you have identified, your impact assessments, and what you’ve learned about them during your investigation. This combined set of information feeds into the BIA process.
Know what a business impact analysis is, and explain its role in information risk management. The BIA brings together everything that has been learned in the information risk assessment process and organizes it in priority order, typically by impact (largest to smallest, soonest versus later in time, highest-priority business objective, etc.). It combines quantitative and qualitative assessments to characterize the impacts these risks might cause if they became incidents. Typically, the BIA will combine risk perspectives so that it characterizes the impacts of a risk to high-interest goals and objectives as well as to costs, revenues, schedules, goodwill, or other stakeholder interests.
Know the role of a risk register in information risk management. A risk register is a document, database, or other knowledge management system that brings together everything the organization learns about risks, as it’s learned. Ideally it is organized in ways that capture analysis results, management decisions, and updates as controls and countermeasures are implemented and put to use. Like the BIA, it should be a living document or database.
Know the difference between qualitative and quantitative assessments and their use. Quantitative assessments attempt to arithmetically compute values for the probability of occurrence and the single loss expectancy. These assessments typically need significant insight into costs, revenues, usage rates, and many other factors that can help estimate lost opportunities, for example. Qualitative assessments, by contrast, depend on experienced people to judge the level or extensiveness of a potential impact, as well as its frequency of occurrence. Both are valuable and provide important insight; quite often, management and leadership will believe they do not have sufficient data to support a quantitative assessment, or enough knowledge and wisdom in an area of operations to make a qualitative judgment.
Know how to calculate the key elements of a quantitative risk assessment. The single loss expectancy (SLE) is the total of all losses that could be incurred as a result of one occurrence of a risk. Typically expressed in monetary terms, it includes repair and restoration costs for hardware, software, facilities, data, people, loss of customer goodwill, lost business opportunity, or other costs directly attributable to the event. The annual rate of occurrence (ARO) is an estimate of how many times per year a particular risk is considered likely to occur. An ARO of 0.5, for example, says that this risk is expected to occur about once every two years. The annual loss expectancy (ALE) is the SLE multiplied by the ARO, and it represents the yearly expected losses because of this one risk.
Know how to determine the safeguard value. The safeguard value is the total cost that may be incurred to specify or design, acquire, install, operate, and maintain a specific risk mitigation control or countermeasure. You need to first complete vulnerabilities assessments in order to know what to fix, control, or counter, however.
Explain what MAO, RTO, and RPO mean. The maximum acceptable outage (MAO) is the time limit to restore all mission-essential systems and services so as to avoid impact to the mission of the organization. Recovery time objectives (RTOs) are established for each system that supports the organization and its missions. Organizations may set more aggressive needs for recovery, and if so, they may be spending more than is necessary to achieve these shorter RTOs. All RTOs must be shorter than the MAO that they support; otherwise, the MAO cannot be achieved. Recovery point objectives (RPOs) relate to the maximum data loss that the organization can tolerate because of a risk event; they can be expressed as numbers of transactions or in units of time. Either way, the RPO represents work that has to be accomplished again, and is paced by what sort of backup and restore capabilities are in place.
Explain threat modeling and its use in information risk assessment. Threat modeling starts with the premise that all systems have an external boundary that separates what the system owner, builder, and user own, control, or use, from what’s not part of the system (that is, the rest of the world and the Internet). Systems are built by putting together other systems or elements, each of which has its boundary. Thus, there are internal boundaries inside every system. Crossing any boundary is an opportunity to ask security-driven questions—whether this attempt is authorized, for an authorized purpose, at this time, for example. The external boundary of a system is thus called its threat surface, and as you identify every way that something or someone can cross a boundary, you are identifying, characterizing, and learning about (modeling) the threats with respect to that surface. The outermost threat surface can (and should) be known without needing to delve into system internal design and construction, but the real payoff is when, layer by layer, these boundaries are examined for possible trapdoors, Trojan horse “features,” or other easily exploitable weaknesses.
Know the basic choices for limiting or containing damage from risks. The choices are deter, detect, prevent, and avoid. Deter means to convince the attacker that costs they’d incur and difficulties they’d encounter by doing an attack are probably far greater than anticipated gains. Detecting that an attack is imminent or actually occurring is vital to taking any corrective, evasive, or containment actions. Prevention either keeps an attack from happening or contains it so that it cannot progress further into the target’s systems. Avoiding the possible damage from a risk requires terminating the activity that incurs the risk, or redesigning or relocating the activity to nullify the risk.
Know what a risk management framework is and what organizations can gain by using one or tailoring one to their needs. Risk management frameworks (RMFs) are compendiums of guidance based on experience in identifying, characterizing, managing, and mitigating risks to public and private organizations. RMFs are typically created by government agencies or international standards organizations, and they may be directive or advisory for an organization depending on the kind of business it’s in. RMFs provide rich sets of management processes that you can select from and tailor to the needs of your particular business.
Explain the role of organizational culture and context in risk management. Organizations have their own “group personalities,” which may or may not resemble those of their founders or current senior leaders, managers, or stakeholders. How decisions get made, whether quantitative assessments are preferred (or not) over qualitative ones, and how the appetite for risk is determined are just some of the key elements of culture that set the context for information risk management planning and implementation.
Describe the basic steps of the NIST Special Publication 800-37 Rev. 2 RMF. This RMF describes seven major steps to information and privacy risk management: Prepare, Categorize, Select, Implement, Assess, Authorize, and Monitor. As these names, expressed as verbs, suggest, the actions that organizational leadership, management, and security or risk management specialists should take start at the broad cultural or context level, move through understanding information risk impacts, and choose, build, install, and activate new risk controls or countermeasures. Once activated, these controls are assessed for effectiveness, and senior leadership then declares them part of the new operational baseline. Ongoing monitoring ensures due diligence.
Explain what a zero day exploit means. A zero day exploit involves a vulnerability discovered but not reported to the affected system’s builders, vendors, or users, or the information security community at large. Between the time of its discovery and such reporting and notification, attackers who know of the vulnerability can create an exploit with which they can attack systems affected by that vulnerability. The term suggests that the system’s defenders have zero time to prepare for such an exploit, since they are not aware of the vulnerability or the potential for an attack based on it.
Differentiate a hazard from a threat. Accidents and risk events that occur because of natural causes such as weather or earthquakes are known as hazards by insurance and risk managers. These are unintentional events—weather and Nature are not conscious actors that can decide to cause damage to occur. By contrast, a threat is an action taken (or contemplated) by a human being or a human organization. It is intentional; a conscious decision is made to attempt to achieve an outcome or result by making the risk event become reality. These humans are known as threat actors in risk management and security terms.
Differentiate a security classification from a security categorization. Security classification is the process of identifying or estimating the possible impacts or losses an organization might suffer if information of a particular type is compromised. This compromise can relate to its confidentiality, integrity, availability, nonrepudiability, authenticity, privacy, or safety characteristics. Laws, regulations, standards, or contracts which establish compliance requirements for protecting sensitive information, combined with business impact assessments, provide the basis for developing a set of information security classification policies, procedures, and labels. Security categorization is a process which groups together sets of information that have comparable security classification and security protection needs or requirements. Categorization allows for more optimal planning and operation of security processes, and avoids the expense and risks associated with a one-size-fits-all approach that treats all data as if it is classified at the most secure level.
Explain how classification and categorization relate to a security baseline. A security baseline is a matrix or table that relates data sets or types based on their classification, categorization, and required or chosen minimum essential security protection methods. This enables security planners to quickly determine which information types (by classification and category) might be put at risk when a protection method, such as an encryption or access control technology, has been shown to be no longer as secure as the organization requires it to be.