Chapter 12
Cross-Domain Challenges

As an SSCP, you've chosen to be part of one of the most cutting-edge endeavors we know. The previous 11 chapters have explored the technical and social aspects of what it takes to deliver information security. This has given you a solid, broad-based foundation of knowledge about the physical, logical, and administrative controls you'll need to use to help your organization or business cope with information risk mitigation and management. Now, let's apply the knowledge and skills you've gained thus far to some of the more vexing challenges that face many SSCPs during the course of their jobs.

We'll then take several giant steps back from the details and look at a few current issues that are attracting attention and analysis in the cybersecurity field, and see what they suggest about the future. We'll use that as a springboard to consider options for your personal and professional growth, as well as business opportunities you might want to consider pursuing.

All of that will help us put the final task of preparing to take the SSCP exam itself in perspective; we'll also look at all of the places you might go, once you've crossed that Rubicon.

Operationalizing Security Across the Immediate and Longer Term

As we saw in risk assessment and vulnerability management, securing an information infrastructure and the systems it supports can be done either as a top-down, planned, and structured set of processes or as a by-the-each journey of discovery, decision making, and mitigation actions. Both types of approaches are necessary over the long haul; together, they provide the opportunity to build both a stronger security posture and a broader and deeper knowledge base that supports that posture. They do this by encouraging and supporting the use of simple procedures, checklists, or workflows to capture the lessons learned from experience in the form of a better set of steps to use the next time such tasks must be done.

This also has the advantage of bringing together what seem to be separate and distinct security processes, each of which appears to have a separate lifecycle of activity. The touch points they have in common are the data, information, or knowledge that's used in each step, generated by that step, or observed and internalized by the people doing those stepwise tasks.

For example, risk management done as a formal, structured process aims to take all business objectives, then all processes that support those objectives, and all assets needed by those processes, and connect them together with the risks that those objectives, processes, and assets may face. Objectives are prioritized as part of this, so that assets and processes that support low-priority objectives don't get as detailed an assessment as higher-priority ones do. Vulnerability management ought to be driven by that same prioritized set of objectives, processes, and assets, by means of the systems used in those processes.

Similarly, security assessment, whether on a milestone-driven schedule or an ad hoc basis, helps confirm how well the security controls address the vulnerabilities that potentially threaten the most urgent or highest-priority processes. Ongoing monitoring, day after day and around the clock, is then focused on indicators of compromise (IoCs) and other signals that are related to higher-priority systems and associated risks. Finally, compliance reporting generally aligns with these higher-priority objectives, processes, systems, and assets (and perhaps helped establish those priorities in the first place).

As an SSCP, you might find that on any given day your tasks and activities address this entire spectrum of security needs.

  • Deep, retrospective analysis of monitoring or security assessment data can support hunting for active intruders or their hostile agents that have so far escaped notice; analysis can also be part of characterizing and calibrating security controls to meet anticipated changes in risk and threat exposures or in support of a formal audit.
  • Real-time monitoring of security controls, whether by walk-around inspection or using SIEM systems and dashboards, may help detect and prevent an intrusion or be part of ongoing calibration and adjustment of security controls and processes.
  • Real-time and near-real-time monitoring is often used to ensure that newly installed software, systems, or procedural updates are working correctly and that no new security issues were introduced by the updates.
  • Reviewing proposed changes, throughout the change management and systems development lifecycle, provides valuable ways for insights from current security observations of ongoing operational or production system use to feed into an impending change; the converse flow, from inside the development process out to the production floor, is also essential for a stronger security posture.
  • Troubleshooting and investigating user reports, help desk tickets, or other issues can often provide valuable insight regarding risks, vulnerabilities, security controls, threats, and opportunities to strengthen the overall security posture.

Each of these presents opportunities for the organization, its security team members, and all of its members to learn by connecting various experiences together. Simple knowledge management practices can help take tacit knowledge from within the minds of individuals and express it in an explicit, reusable form. They can include:

  • Making checklists
  • Creating workflows and playbooks
  • Building risk registers and security baselines
  • Annotating test plans, procedures, test logs, and post-test assessment reports
  • Keeping logs and notes of the issues captured during security-related testing, operations, investigations, assessments, and deliberations
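As a minimal sketch of what capturing tacit knowledge in an explicit, reusable form can look like, consider a risk register entry modeled as a structured record. The field names here are illustrative assumptions, not any standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class RiskRegisterEntry:
    """One explicit, reusable record of a risk and what was learned about it."""
    risk_id: str
    description: str
    affected_assets: list
    likelihood: str          # e.g., "low", "medium", "high"
    impact: str
    mitigations: list = field(default_factory=list)
    lessons_learned: list = field(default_factory=list)

# Capture experience from an investigation as an explicit, reusable entry.
entry = RiskRegisterEntry(
    risk_id="R-0042",
    description="Phishing email bypasses gateway filter via lookalike domain",
    affected_assets=["corporate email", "user credentials"],
    likelihood="high",
    impact="medium",
)
entry.mitigations.append("Add lookalike-domain detection rule to mail gateway")
entry.lessons_learned.append("Help desk tickets were the first indicator; monitor them")
```

The point is not the particular fields but the habit: every investigation or test leaves behind a record that the next analyst can search, reuse, and extend.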

From an information technology perspective, the SSCP may find that their workplaces, or those of the clients they serve, already have the building blocks of these knowledge management opportunities in place and in use. Connecting those dots together to form a more coherent, living picture—a picture of the security posture, and the programs and processes being used to carry out and support that posture—takes both a shift in management outlook and some procedural guidance to make the most of those opportunities.

First, there may be barriers that impede the flow of knowledge and experience between more formalized systems assessment and management processes, and the day-to-day operational use of those systems. Sometimes, the remedies for this can be as simple as making these connections clearer to users via security awareness, training, and education opportunities. This is the opportunity to help users and line-level managers see the security and operational payback—right to their unit and their workflows—by promptly reporting potential issues and observations. Other times, changes to procedures may be required to strengthen the flow of information from one process to another. Breaking down these stovepipes can be crucial to helping different business units recognize that they might be facing adversaries using related tactics, for example.

Previous chapters have looked at security technologies such as SIEMs and their related deep analytics capabilities; these can provide a powerful set of tools and techniques to bring the real-time and longer-term analysis processes together. SIEMs also often provide for smooth integration and cross-feed with systems development and change management systems, which can help put vulnerability management data to more effective and timely use.

Let's take a look at other opportunities to automate and organize different security-related tasks into a more unified system.

Continuous Assessment and Continuous Compliance

Formal compliance reporting and the security assessment that supports it are often seen as expensive and time-consuming, as they walk through the many different layers of policies, procedures, test and operational data, and other artifacts to reach their findings and conclusions. Because these are nontrivial efforts, they're also prescribed as medicine to be taken no more often than once every six months, or perhaps annually at most. Security testing and assessment for other than compliance purposes can likewise be seen as labor-intensive. Both types of assessment activities, and the audits associated with them, are seen as disruptive to the normal operational rhythms and patterns of the organization; these disruptions erode margins and may actually reduce rather than enhance competitive advantage.

There is perhaps a strong element of chicken-and-egg to this set of perceptions, however. Arguably, if security assessment and the compliance-related analysis processes could be done incrementally and near continuously, across each day, the organization gains in two significant ways.

  • The time to detect an intruder, an IoC, or other security-related anomaly may be significantly reduced, if not brought down to same-day or near-real-time detection of most incidents.
  • The disruption to operational business activities, timelines, and rhythms caused by special-purpose security testing and assessment activities can be reduced or better managed when spread over the entire year.

Many security and compliance assessments share common or comparable test activities, which often use scenarios based on common, stressful types of attacks, intrusions, or attempts to defraud or falsify important data. System test automation, which now provides powerful ways to perform more exhaustive, thorough testing during development, can also be harnessed to plan and control the execution of ongoing, incremental testing of production systems while they are in use. This can range from injection tests using blocked or suspicious identities, uniform resource identifiers (URIs), or URLs, to attempts to submit falsified (or “ghost”) invoices. Tests can also determine if controls are preventing inappropriate privileged account use or lateral movement. By mapping continuous assessment activities to compliance assessment and reporting needs, each day's assessments play their part in demonstrating that all compliance requirements are being met.
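One way to picture this mapping is as a small scheduler that spreads the test scenarios across the calendar, with each result tagged to the compliance requirements it helps demonstrate. The scenario names and control identifiers below are hypothetical stand-ins, not any particular framework's catalog:

```python
import itertools

# Hypothetical test scenarios, each tagged with the compliance
# requirements its results help demonstrate.
SCENARIOS = {
    "blocked-identity-injection": ["AC-3", "AU-2"],
    "ghost-invoice-submission":   ["SI-10"],
    "privileged-account-misuse":  ["AC-6"],
    "lateral-movement-probe":     ["SC-7", "AU-6"],
}

def daily_assessment_plan(scenarios, days):
    """Spread scenarios across days, round-robin, so each day's testing
    is a small, low-disruption slice of the overall assessment."""
    cycle = itertools.cycle(scenarios)
    return {day: next(cycle) for day in range(1, days + 1)}

def compliance_coverage(plan, scenarios):
    """Aggregate which requirements the plan's scenarios exercise."""
    covered = set()
    for scenario in plan.values():
        covered.update(scenarios[scenario])
    return covered

plan = daily_assessment_plan(SCENARIOS, days=7)
coverage = compliance_coverage(plan, SCENARIOS)
```

A week of small daily tests, in this toy schedule, already exercises every requirement in the set—illustrating how incremental testing can accumulate into continuous compliance evidence.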

SDNs and SDS

Unless you're a jack of all cybersecurity trades supporting a SOHO client base, you're probably not going to be installing network and systems hardware; you won't be directly establishing the physical elements that define the network, establish its subnetworks, and place those segments, devices, and connections into the workplaces where they'll be used. Even then, most of your security work will be with the graphical user interfaces and web-based management interfaces that almost all network devices now provide. If you're familiar with these already, you're already experienced in using software to define and manage a network and its security posture.

Software-Defined Networks

Software-defined networks (SDNs) are physical and virtual networks that are created, managed, and maintained by using data to model the configuration and arrangement of network devices and their interconnections; SDN applications provide an organized and integrated set of tools and techniques with which networks can be quickly defined, instantiated (that is, have an instance of a definition brought into existence, turned on, and made operational), secured, and torn down when no longer needed. SDNs are typically built up in layers:

  • The physical network connection fabric provides the supporting infrastructure, which includes a collection of servers that can run VMs of the software and firmware that is normally run in switches, routers, gateways, security appliances, and other network systems elements. The physical supporting infrastructure is built once, typically with a mesh architecture for greater cross-connectivity and redundancy.
  • Specialized gateway servers can provide the interfaces from this physical network fabric to external networks, ISP connections to the Internet, and to other physical subsystems for ICS or OT purposes.
  • A virtual connection fabric is defined, typically as a set of script files and supporting parameter sets; when invoked, these cause VM images of the hardware elements of the network to be mounted on the various server elements, and the LAN connections between them to be established between those VMs.
  • Virtual applications servers are then “installed” on the virtual connection fabric, and their connections to the virtual network are established.
  • Virtual network access control is established to define and control access by end users, administrators, and other entities to the specific applications servers, devices, or services that are authorized to use these resources.

At this point, a fully software-defined collection of VMs has been instanced and activated and is available to support the organization's end users and their processing needs. Total elapsed time to deploy such an SDN and the applications hosts it supports can be a matter of minutes.
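At its heart, this layered build-up is network-as-data: a definition file plus an interpreter that instantiates it. The toy sketch below shows the idea; the element names and the `instantiate` logic are illustrative inventions, not any vendor's SDN API:

```python
# A toy SDN definition: the network is just data until instantiated.
SDN_DEFINITION = {
    "virtual_fabric": [
        {"element": "router",   "image": "vrouter-vm", "name": "core-r1"},
        {"element": "switch",   "image": "vswitch-vm", "name": "dist-s1"},
        {"element": "firewall", "image": "vfw-vm",     "name": "edge-fw1"},
    ],
    "app_servers": [
        {"name": "web-01", "connects_to": "dist-s1"},
        {"name": "db-01",  "connects_to": "dist-s1"},
    ],
    "access_control": {
        "web-01": ["end_users", "admins"],
        "db-01":  ["admins"],
    },
}

def instantiate(definition):
    """Walk the layers in order -- fabric VMs first, then app servers,
    then access control -- mirroring the build-up described above."""
    running = []
    for vm in definition["virtual_fabric"]:
        running.append(("fabric", vm["name"]))      # mount VM image on a server
    for app in definition["app_servers"]:
        running.append(("app", app["name"]))        # attach to virtual fabric
    for host, principals in definition["access_control"].items():
        running.append(("acl", host, tuple(principals)))
    return running

deployment = instantiate(SDN_DEFINITION)
```

Because the whole network exists as a definition like this, tearing it down and rebuilding it—or standing up a second identical copy for testing—is just a matter of rerunning the interpreter.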

SDNs are often used with cloud-hosted systems, where they can take advantage of the pooled resources, load management, and service monitoring and measuring capabilities common to most clouds (whether public, private, or any form of hybrid).

Software-Defined Security

Software-defined security (SDS) extends the SDN concept by focusing on all of the configuration items, controls, services, indicators, and devices that provide security capabilities for the network, its hosted applications, its endpoints, and its users. SDS applications and systems provide the means to integrate the command and control of all of these security settings and configurations across a deployed network architecture; they also bring together all of the monitoring, alarm, reporting, and other security-related indicators and information to close the feedback loop, so to speak, for the security analyst.

The real power of an SDS approach comes from using it to create, manage, and use enterprise-spanning scripts to configure all devices (in a given, logically related set) to implement the same security policies. Firewall settings, port remapping, and other network access control features can now be managed as a group of controls, rather than a one-at-a-time device-level update or adjustment.

For example, consider a small business with five locations in different parts of a large metropolitan area, with each location having its own LAN, and these LANs brought together via VPN connections to establish an enterprise-wide private network. Implementing a common change to all five locations—such as updating the guest Wi-Fi service availability hours at each site—requires each firewall and router at each site to have its settings updated. The corporate-level central security team can remotely log into each of these devices and make these changes, even if the organization is not using an SDN or does not have SDS capabilities in place. SDS tools, however, would allow one action to be taken by the central security administrator, which would (in scripted, workflow fashion) reach out to each device, make the changes, gather telemetry to confirm each change has been made, and report the results to the administrator.
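That five-site scenario can be sketched as a single scripted action: one change request fanned out to every device, with telemetry gathered to confirm each change. Everything here—device names, the `apply_fn` callback—is illustrative; a real SDS product would do this through its own device drivers and APIs:

```python
# Two managed devices (firewall and router) at each of five sites.
DEVICES = [f"site-{n}-firewall" for n in range(1, 6)] + \
          [f"site-{n}-router" for n in range(1, 6)]

def push_policy_change(devices, change, apply_fn):
    """Fan one change out to every device; collect per-device telemetry
    confirming (or denying) that the change took effect."""
    telemetry = {}
    for device in devices:
        try:
            telemetry[device] = apply_fn(device, change)
        except Exception as err:
            telemetry[device] = {"device": device, "confirmed": False,
                                 "error": str(err)}
    return telemetry

# Simulated apply function standing in for real device-management calls.
def fake_apply(device, change):
    return {"device": device, "applied": change, "confirmed": True}

report = push_policy_change(
    DEVICES,
    change={"guest_wifi_hours": "07:00-19:00"},
    apply_fn=fake_apply,
)
```

The administrator's one action becomes ten confirmed device updates, with a telemetry report showing exactly which devices took the change and which did not.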

SDS provides an easy on-ramp to move to a risk-based access control and security model for an enterprise of almost any size. With it in place, a corporate-level risk officer such as its CISO can make near-real-time adjustments to security controls across the organization, smoothly and harmoniously.

In effect, using SDS and other techniques to facilitate risk-based active management of systems takes what formerly had been a strategic-level planning parameter—the sense of risk tolerance—and transforms it into something that can be adjusted throughout the day, week, month, or year, to meet changing conditions in the threat and risk environment the organization faces, both internally and externally.

SOAR: Strategies for Focused Security Effort

As you've seen in previous chapters, the implementation of security policies starts with administrative controls that define roles, responsibilities, and tactical-level processes; these are broken down into further detail to become procedural-level administrative controls, design and construction details for physical security measures, and the configuration parameters of logical or technical controls. Taken together, this whole set of processes represents an organization's way of doing security; and as with any other way of getting business done, that overall process can be improved in a systematic, stepwise, and measurable fashion.

When we make a process repeatable, we take out variation due to error or personal interpretation and preference. Making the process reliable requires that we:

  • have the right measurement tools and instrumentation in place,
  • at the right places and times during that process and its flow of activities,
  • to determine whether that step in the process is being performed correctly,
  • whether it is producing the correct (required) outcome, and
  • whether that outcome still makes sense given that the surrounding environment, context, and world have moved on since we first designed that process.

(If you think that sounds like due care and due diligence applied to each business process, you're right!)
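In code terms, making a process step measurable means instrumenting it at the right point, with checks on both its execution and its outcome. This is a schematic sketch under those assumptions, not any product's API:

```python
import time

def measured_step(name, action, outcome_check):
    """Run one process step with instrumentation: record when it ran,
    whether it completed, and whether its outcome is the required one."""
    record = {"step": name, "started": time.time()}
    try:
        result = record["result"] = action()
        record["performed_correctly"] = True
        record["outcome_correct"] = outcome_check(result)
    except Exception as err:
        record["performed_correctly"] = False
        record["outcome_correct"] = False
        record["error"] = str(err)
    return record

# Example: verify a (simulated) backup step produced a nonempty archive.
rec = measured_step(
    "nightly-backup",
    action=lambda: ["file1.db", "file2.db"],
    outcome_check=lambda archive: len(archive) > 0,
)
```

The `outcome_check` is where due diligence lives: it's a separate, swappable test of whether the result is still the right result, distinct from whether the step merely ran.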

In the security domain, a relatively new set of capabilities has been brought together by several vendors. Security orchestration, automation, and response (SOAR) systems take the script-driven control and management of security that SDS provides and the broad-spectrum gathering, integrating, and managing of security information that SIEMs can do, and bring them together with a layered approach to planning, organizing, and controlling the many different tasks that security teams have to do in real time and in non–real time.

Breaking that acronym down further reveals that SOAR, as a mindset and as a system or service, enables a security team to mature its security processes while it strengthens the organization's overall security posture.

  • Security, in the context of SOAR, embraces everything in the information security spectrum. It brings together data, plans, procedures, and the operational details of every aspect of security risk management, control, and monitoring; it combines that with continuous monitoring, incident detection and response, and security assessment data, processes, and controls. In short, anything the organization does that touches on information security can be brought under the integrated planning and operational control capabilities of a proper SOAR system.
  • Orchestration refers to the bringing together of many separate and distinct systems, data sources, databases and knowledge bases, and of course all of the distinct and different operational procedures, tools, and systems used by those aspects of an overall security program.
  • Automation of security processes is done by enabling the organization to build out and use layers of workflows, starting from the simplest of low-level tasks and building up more complex sequences into playbooks. For example, a playbook may define, plan, and schedule out a routine systems and networks enumeration and configuration audit, and in doing so it may have dozens of workflows that spell out the details of performing these tasks, coordinating (orchestrating) their activities with other ongoing events, and then integrating their output data streams into the overall SOAR information and knowledge base.
  • Response generally refers to incident response; in SOAR terms, this means focusing the use of its automation and orchestration capabilities to support every aspect of the security incident detection, characterization, containment, eradication, recovery, remediation, and restoration activities that the organization needs to conduct. SOAR's workflows and playbooks provide the active, real-time process flows, checkpoints, and decision points for security analysts, network engineers, and organizational managers to use as they respond to security alarms.
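The workflow-and-playbook layering can be sketched as small functions composed into a named, repeatable sequence. The task names and findings below are invented for illustration; a real SOAR platform would express these as its own workflow definitions:

```python
# Low-level workflows: each performs one narrow task and reports its output.
def enumerate_hosts():
    return ["web-01", "db-01", "edge-fw1"]               # stand-in for a real scan

def audit_config(host):
    return {"host": host, "compliant": host != "db-01"}  # simulated finding

def file_ticket(finding):
    return f"TICKET: remediate {finding['host']}"

# A playbook orchestrates the workflows into one repeatable, measurable run,
# integrating their outputs into a single result set.
def config_audit_playbook():
    findings = [audit_config(h) for h in enumerate_hosts()]
    tickets = [file_ticket(f) for f in findings if not f["compliant"]]
    return {"findings": findings, "tickets": tickets}

result = config_audit_playbook()
```

Each workflow stays simple and testable on its own; the playbook is where orchestration happens, sequencing the workflows and feeding their combined output back into the SOAR knowledge base.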

It's tempting to compare the security operations of a typical enterprise's IT and OT systems environment to other complex but commonplace activities, such as an airline flight. In both cases, regulations, contracts, and standards define compliance requirements, and these often dictate step-by-step procedures that must be followed at critical steps in the operation of that system (or for that flight). Crew members and others on the team take actions to prepare the systems (or the airplane) for use. Checklists are used and completed by the crew, the maintenance engineers, or the ramp support team as they go through these steps. For each flight, each step in each checklist is marked as completed (nowadays, this is often done by means of ruggedized phablets or PDAs used by team members). This provides accountability as well as a rich source of data for troubleshooting if necessary. Every step of the journey, from preflight check, passenger and cargo loading, fuel loading, to engine start, has its own checklist. Taxi and takeoff, climb out, and enroute flight, as well as entering their destination terminal's airspace, preparing to land, landing, and taxi to the gate all have checklists.

These checklists make those operations reliable, repeatable, and measurable. SOAR does much the same across the full spectrum of security activities (or as much of that as an organization chooses to bring under its active management).

As a new member of an enterprise security team, you may find that your first days on the job are planned, orchestrated, and automated by SOAR or SOAR-like workflows and playbooks. As your experience with the organization and its security processes grow, you may have the opportunity to get involved in maintaining these workflows and playbooks, perhaps writing new flows for emerging tasks facing the security analysts, technicians, and incident response team members. SOAR workflows and playbooks are also used to provide reliable, repeatable analytics, reports, and dashboard displays of almost any combination of security and business operational data that managers may need; as you learn more about SOAR (as a system or as a methodology), you'll find other opportunities to grow your professional knowledge and skills too.

A “DevSecOps” Culture: SOAR for Software Development

One of the most surprising changes in the world of software and systems development over the past several decades has to be the incredible reduction in the lag between identifying the need for something new or different in a production software system, and its delivery to the operational end user. Traditionally, software was updated by providing a full and complete new build of a major applications suite, an operating system, or a database engine, and that build had to go through fairly rigorous design and test cycles to ensure that the right changes were made, and that they were made correctly. This protected the operational or production environment from “fixes” that broke something that had been working correctly in the previous version or that introduced an intolerable number of new errors and problems.

Incremental delivery of software updates provides the flexibility to change just one or a small handful of software components, without having to reinstall the entire application. In many environments, these software updates can be done without having to shut down every instance of the app that is running on every host, server, or endpoint. As a result, the cycle time from identifying the need for an urgent fix to getting it coded and tested to getting it installed in the production environment could be shortened to a matter of days if not hours.

The DevOps methodology grew out of these capabilities and the practices that were created for their use. As the name suggests, it's a closer, more continuous integration of development activities with their delivery and use by the end users in their production or operational environments. It facilitates finding multiple opportunities around the clock to deliver incremental updates, which can help software maintainers plan their analysis, coding, and testing activities to feed the development side of that pipeline.

DevOps also prompted the development of better automation for each step of its own internal processes. Continuous integration/continuous delivery (CI/CD) makes use of workflow and other automation processes to capture, control, monitor, enforce, and protect the hand-offs of information and data between each step of the development, test, and delivery process. CI/CD is often closely integrated with organizational change management and control processes, which can make those processes more efficient and effective as well.
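The hand-off control that CI/CD enforces can be pictured as a pipeline of gates, where each stage's output must pass a check before the next stage runs. The stage names and gate checks here are illustrative, not a real CI/CD product's configuration:

```python
def run_pipeline(change, stages):
    """Run a change through ordered stages; stop at the first failed gate.
    Each stage is (name, gate) where gate(change) -> bool."""
    history = []
    for name, gate in stages:
        passed = gate(change)
        history.append((name, passed))
        if not passed:
            break           # hand-off denied; change never reaches production
    return history

STAGES = [
    ("build",         lambda c: c.get("compiles", False)),
    ("unit-test",     lambda c: c.get("tests_pass", False)),
    ("security-scan", lambda c: not c.get("known_vulns", True)),
    ("deploy",        lambda c: True),
]

# A change that compiles and tests clean but carries a known vulnerability
# is stopped at the security gate and never reaches the deploy stage.
history = run_pipeline(
    {"compiles": True, "tests_pass": True, "known_vulns": True}, STAGES)
```

The design choice worth noticing is that the gates are data, not hard-coded steps: adding a security gate to an existing pipeline is one new entry in the stage list, not a rewrite of the process.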

Unfortunately, many organizational DevOps cultures and management processes tended to push security considerations outside of this tight cycle of problem analysis, planning, programming, and delivery. Many different explanations (or excuses) for this exclusion of the security viewpoint from the DevOps decision process and flow have been offered. This has led to the creation of the DevSecOps process model, which (as its name suggests) brings security professionals back into ongoing and continuous involvement with the fast-paced software update and delivery culture that many organizations find valuable and necessary.

DevSecOps is not a standard way of doing secure software development; rather, it is a loosely defined, constantly evolving set of ideas, attitudes, and approaches. Organizations that believe in it are adapting the basics of DevSecOps to their own processes, tailoring them as their experience and circumstances suggest.

As a result, each organization is finding its own best ways to fit DevSecOps and SOAR together, whether as services and systems of tools, as organizational attitudes and cultural values, or both. This provides many experimental laboratories, you might say, for trying out different ways to make continuous improvement in all aspects of the ways an organization can safely, securely, reliably, and affordably deliver its products and services to its customers and the supply chains it's a part of.

Just-in-Time Education, Training, and Awareness

Many organizations have strengthened their security posture and programs—and become more agile and responsive to IT and OT threats and risks—by rethinking their approach to educating and training their employees about information security. Traditionally, the three main strategies for sharpening the skills and knowledge of an organization's human elements saw education, training, and awareness as if these were distinctly different sets of learning that would be delivered to employees; attendance would be taken, or click-through measures accumulated, both to provide compliance reports to auditors and regulators and to identify workers who were avoiding the learning activities or only paying them lip service.

Turning this paradigm around meant recognizing that getting workers at all levels of the organization to care about information security, know what actions to take and when, and have the skill and proficiency to do them quickly and correctly is not an annual classroom-centered event. Instead, new approaches to motivating, teaching, and skills-building focus on providing just the right amount of security insight and knowledge where and when it's needed and across the operational day-to-day of each employee's engagement and participation with the organization. Techniques for this include:

  • Microtraining delivers short bursts of learning, usually as an example of an exception, out-of-limits, or other form of attack vector in progress. Simulated phishing emails, for example, are sent to users of the organization's email system, with immediate feedback to reinforce their phish-spotting skills (when they report it as a phishing attempt) or a short bit of tightly focused refresher training to improve their perceptiveness.
  • Task-based just-in-time security reduces the need for users to remember how to select and use additional security features correctly and efficiently. Deep content scanning of emails or documents as they are being written, for example, can trigger pop-up tips suggesting the use of these features; the suggestions can be reinforced with pass/fail tests of the finished work when the user selects options that commit that work into production (such as posting a document, sending a message, or completing a workflow activation step).
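A simulated-phishing feedback loop like the one described can be sketched in a few lines; the response categories and training assignments are invented for illustration:

```python
def process_phish_response(user, reported, clicked):
    """Give immediate, individualized feedback on a simulated phish:
    reinforce a correct report, or queue focused refresher training."""
    if reported and not clicked:
        return {"user": user, "feedback": "reinforce", "training": None}
    if clicked:
        return {"user": user, "feedback": "remediate",
                "training": "5-minute phish-spotting refresher"}
    # Neither reported nor clicked: the user simply ignored the message.
    return {"user": user, "feedback": "none", "training": None}

results = [
    process_phish_response("alice", reported=True,  clicked=False),
    process_phish_response("bob",   reported=False, clicked=True),
]
```

The key microtraining property is in the timing: feedback is tied to the moment of the user's decision, rather than deferred to an annual training event.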

These techniques and others combine skills-focused training and process-based conceptual learning with a right-now sense of awareness. They help the individual user embrace the attitude that their own actions and choices, right in each moment, can protect the security of the organization, its information, and their own work—and their employment. Contrast that with a once-per-year training event that employees often find difficult to connect to their own individual work and lives, and the potential paybacks become clear.

Supply Chains, Security, and the SSCP

No organization is an island; it takes in things, ideas, people, money, and energy, and uses its business logic and processes to output things, ideas, money, energy, and other stuff, which is sent to or taken in by other organizations and individuals for their use. Organizations are interwoven into many such supply chains, and as a result, they are more often than not part of important critical infrastructures at the local, national, or international level. This is where the most stressing threat cases are aimed: it's where the money is to be made by ransom, data breach, sabotage, disruption, distraction, distortion, and of course destruction. These are supply chain attacks, and they impact every business, every industry, and every delivery of government and community services.

Very, very few organizations exist as information-only virtual entities: either directly or indirectly, they physically touch, shape, and transform the environment, people, lives, material goods, and such. Uber, for example, exemplifies a cutting-edge approach to combining cloud-hosted demand-driven resource pooling techniques to satisfy a wide range of customer needs for taxi services by pairing up riders with independent taxi operators; in a true case of bring-your-own-infrastructure, the riders and drivers alike use whatever smartphones, phablets, navigational, and other systems they want to use, all at the edge of Uber's cloud.

Edge, Fog, IoT, and SCADA have become the “new” patrol zones for the SSCP as the security officer on the lookout; these have become the new risk reduction frontiers for many organizations. That is to say, many in the media and industry press think of these as new sets of challenges, but really, they are not. From the 1960s and the first deployments of computer-controlled networks for telephone, transportation, and manufacturing systems, experts and security thought leaders have been trying to warn us all about these ripe-for-the-plucking risk targets. The events of 2020, and those leading up to that year and since, have made many of us see them in more urgent, sharper focus. Let's take a closer look.

ICS, IoT, and SCADA: More Than SUNBURST

The attacks against SolarWinds' network management software supply chain (the SUNBURST attack) highlighted the vulnerability of IT environments to supply chain attacks; they also brought attention to the potential vulnerabilities that arise when these IT systems interact with OT systems. These interactions often require the use of IP protocols (such as SNMP and ICMP), translated through specialized gateways into customized, proprietary, or legacy protocols on the networks that directly support the ICS or SCADA controllers and devices on the OT side. Increasingly, the Common Industrial Protocol (CIP) is being used to provide a much-needed family of messages and services for monitoring, controlling, and protecting the systems and devices on the operational technology side of the system. It's this mesh of Internet-to-OT protocols, and the layers at which vulnerabilities may exist and be exploited, that is often invisible to—or ignored by—IT-focused organizational security teams.

Extending Physical Security: More Than Just Badges and Locks

We've looked at physical security as part of access control and asset protection in earlier chapters; it's worth taking a moment to recall that the first “A” in CIANA+PS (or the “A” in the old CIA triad) stands for availability, and that systems and information availability hinges on the continued, uninterrupted physical integrity of the electrical power, lighting, and environmental systems and infrastructures that support the IT, OT, and communications systems themselves. All of these infrastructures that enable an organization's IT and OT systems—energy, communications, transportation, and so on—are becoming increasingly overloaded and fragile in many countries and regions around the world.

As an SSCP, you'll probably not be directly involved in engineering, design, or installation of these systems, but you may have opportunities to support ongoing readiness assessments of them, as well as provide for their replacement when activating business continuity and disaster recovery procedures. These opportunities might include:

  • On-site power conditioning and backup capabilities, including fail-over equipment, alternative supplies, parts, and fuel storage for on-site generators, need to be maintained and protected as would any other security control.
  • Internet and public switched telephone services may require redundant or alternate providers, both for load balancing and backup. Many SOHO operators are finding that they need to operate their systems (even if only one laptop in extent) with both an ISP-provided connection and data service over their mobile phone, as a way of maintaining the connectivity they need to work from home or other locations.
  • Physical security protection for remote workers, remotely located IoT or other OT systems, and for the communications links that tie these to the rest of the organization also need to be assessed and possibly improved.

These and other aspects of physical security deserve to be integrated into all aspects of your organization's security program. Make sure they're part of the full lifecycle of risk management, from initial identification and characterization, through design, testing, operational security assessments, and compliance reviews. Along the way, identify opportunities to have these physical controls pay back on their investment by having smarter provisions for continuous monitoring capabilities built in (or added on) and put to use.

As your organization moves toward expanded use of contactless identity technologies (for human and nonhuman entities, as well as packages, devices, and other objects), these will need greater scrutiny by the security team. These contactless solutions rely heavily on IoT technologies, particularly items with low unit cost, which might also lead to choosing systems and elements that are easily hacked, bypassed, or spoofed.
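To make the spoofing risk concrete, here is a minimal Python sketch (all names and keys are illustrative, not any real product's protocol) contrasting a static-ID contactless tag, whose emission an eavesdropper can record and replay, with a challenge-response tag that answers a fresh random challenge with a keyed HMAC:

```python
import hashlib
import hmac
import secrets

# A naive contactless tag emits a fixed ID; anyone who records that
# emission once can replay it forever. A challenge-response tag
# instead answers a fresh random challenge with a keyed HMAC, so a
# recorded answer is useless against the next challenge.
def tag_answer(shared_key: bytes, challenge: bytes) -> str:
    return hmac.new(shared_key, challenge, hashlib.sha256).hexdigest()

def reader_verifies(shared_key: bytes, challenge: bytes, answer: str) -> bool:
    return hmac.compare_digest(tag_answer(shared_key, challenge), answer)

key = b"per-tag-secret-provisioned-at-enrollment"   # hypothetical key
c1 = secrets.token_bytes(16)
a1 = tag_answer(key, c1)
assert reader_verifies(key, c1, a1)       # legitimate tag passes

c2 = secrets.token_bytes(16)              # the next access attempt
assert not reader_verifies(key, c2, a1)   # replaying the old answer fails
```

The catch, of course, is that per-tag keys and HMAC computation cost more than a bare ID chip, which is exactly the low-unit-cost pressure described above.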

All-Source, Proactive Intelligence: The SOC as a Fusion Center

Many organizations and managed security services providers are moving toward operational and procedural architectures that bring together every aspect of what the organization does (its core services), how it does that (its processes), the risks it faces, and the controls and mitigations it has put in place to manage and mitigate those risks. This fusion center approach, as it's often referred to, brings together many disparate processes, and the organizational units that have owned and operated those processes traditionally, and combines them into a more dynamic, real-time oriented nerve center for development, security, operations, safety, and profitability.

Financial services organizations, such as PayPal, put this fusion center process into action when they combine their fraud detection, IT and cyber security, physical security, and customer-facing applications traffic and performance monitoring and control systems together. They also bring in their help desk, complaints processes (internal or external), customer service functions, compliance reporting, and even investor relations activities, and enable and empower these seemingly separate business processes and people to talk with each other, to collaborate with each other on a real-time basis.

Let's face it: each of these process areas, whether in a financial services organization, an aerospace company, or a government agency, has a real-time operational focus on making and delivering products and services to customers. The fusion center brings these islands of process control, surveillance, monitoring, and incident detection and response together into one organization-wide, SCADA-like system.

One stress case for many organizations is that of the insider threat. Whether a current employee or customer is deliberately or unintentionally causing problems, the first line of defense is to detect actions that may indicate abnormal behavior, particularly when that behavior involves the IT and OT systems and assets. Without a nerve center, an all-source fusion center for threat intelligence, monitoring information, and incident detection, getting the right set of people across multiple departments to recognize indicators of a potential insider threat activity can be slow to happen. The fusion center makes that part of everyday workflows.

As an SSCP, you need to be aware of another important and emerging trend here: more organizations are looking to bring people into their fusion centers who have both a good background of security skills, a threat hunting attitude, and an appreciation of the ways in which business gets done in that organization and its marketplaces. Fusion centers are great opportunities for generalists; they are also great opportunities for specialists who want to expand out from their current comfort zone of knowledge, skills, and aptitudes, and take on different types of challenges.

Other Dangers on the Web and Net

Peeling back the layers of the Internet reveals that there actually are three different strata to what we think of as “the Web”: the public-facing surface web, the legitimate but access-controlled and nonindexed private or deep web, and then the dark web, the one with the unsavory if not downright dangerous reputation. As a practicing security professional, you'll need to be able to characterize just what kind of threats and risks your organization is exposed to from each of these layers of the Internet.

Surface, Deep, and Dark Webs

First, let's look at the surface web: the public-facing, declared set of web pages, uniform resource locators (URLs), and uniform resource identifiers (URIs). These public names, such as https://ietf.org, are resolved into IP addresses by the Domain Name System (DNS), and those IP addresses are ultimately resolved into hardware addresses by protocols such as ARP. These resources are exposed to search engines; their spiders and web crawlers can find these pages, extract keywords and search terms of statistical significance, and then index that address (and its corresponding URI or URL) into their databases. All of this makes these information resources public: a book can publish a URL or an IP address (such as 8.8.8.8), and any client device connected to the Internet can find it with a minimum of fuss and bother. If you can find it in one of these mapping services, it's part of the surface web.
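A hedged Python sketch of that first resolution step; it resolves localhost so it runs without network access, but resolving a public name such as ietf.org follows the same path through the system's resolver:

```python
import socket
from urllib.parse import urlparse

# A published URL names a host; DNS (or, here, the local hosts file)
# resolves that name to IP addresses before any HTTP traffic flows.
host = urlparse("https://localhost/index.html").hostname
addrs = {info[4][0] for info in
         socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)}
print(addrs)  # typically includes '127.0.0.1' and/or '::1'
```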

About 10 percent or less of the content on the Internet is estimated to be on the public web.

Beyond this surface web lies another set of web pages and resources, which are protected by firewalls and other techniques from being indexed by search engines. Almost 90 percent of the content on the Internet is in this deep web, protected from public access, but in the main, legitimate and law-abiding content. Internal organizational data, student records, patient data, tax authority databases, and the data warehouses held by corporate and government interests are all in this protected, restricted deep web. (Many of these deep webs in fact use their own local instances of common search engine technologies to make them searchable and exploitable—but to internal, authorized users, of course.)

A final and much smaller set of unpublished web and IP addresses is collectively known as the dark web, because these addresses are almost exclusively associated with activities that their users wish to keep hidden and secret, away from the prying eyes of state intelligence or law enforcement services. Hosted on what are also known as overlay networks, these resources are not accessible by normal, public-facing browsers; instead, onion-routing browsers such as the Tor Browser (built on TOR, The Onion Router) must be used to access them. For example, one such (now defunct) site was silkroad7rn2puhj[.]onion, the home of the infamous Silk Road illicit marketplace.

And this presents the dilemma of the dark web: some of its content is related to political and social activism, which may (or may not!) be about topics that are illegal and unethical (or blasphemous) in one country, culture, or jurisdiction, and yet legal and acceptable in another. Other parts of the dark web are used by undercover law enforcement, intelligence services, journalists, and other investigators who have (presumably) legal and ethical reasons for keeping their activities, their data, and their connections with one another very, very secret. Criminal activities make up the other portion of the dark web.

Of these activities, two stand out as matters for concern for network security experts. The first is the use of the dark web as a marketplace for stolen information, such as identities, credit cards, transaction histories, and even larger corporate databases. This marketplace portion of the dark web also hosts intellectual property, such as confidential and internal market studies or product development concepts, which have been stolen and thinly rebranded as “intelligence reports” from a purportedly innocent researcher.

The more troubling use of the dark web is as a clearinghouse, sanctuary, and forum for malware, attack tools, probes, and targeting tools; groups that provide these as part of service offerings are also prevalent in this corner of the dark web. As of 2021, much of the chatter in these areas of the dark web focuses on ransomware attack technologies, tools, and methods. Actual negotiations between service providers and the groups that use them (such as APTs) disappear from the dark web by moving into private chat room services.

Deep and Dark: Risks and Countermeasures

First, let's sum up the risks of the surface web—that's all of the risks and threat vectors we've covered in most of the preceding chapters. These attacks come to your organization's web pages, whether internal or external, primarily via surface web channels, and we've looked at how firewalls, access control, blocked versus allowed list management, anti-malware, behavioral modeling, and other techniques can and should be applied to limit exposure to these risks by hardening those threat surfaces.

Second, it's important to realize that the very names “deep web” and “dark web” are becoming marketing terms, more so than being useful analytical distinctions; as a result, these terms are losing their precision and utility.

Risk assessments regarding the deep web present a quandary. While much of the deep web is legal and presumably not posing a threat to your organization (your own organization's internal deep web is not overtly or covertly threatening some other organization, is it?), that subset of the deep web we call the dark web is the home of the hostile. With that in mind, let's look at the following perspectives on threats and risks of the deep and dark webs.

It's also worth recognizing that threats from the deep and dark web may represent edge cases for your organization. You may already have a full-spectrum security program in place; an active SOC protecting your operations, assets, and people; and an ongoing program of continuous security assessment. Before diving headlong into the deep and dark, it is worth considering whether the increase in risk mitigation you'll gain is worth the effort.

With that said, let's take the plunge.

Precursors in the Deep and Dark

Let's take a moment to look at some actionable precursors—ones that both warn of a possible increase in the threat of attack or intrusion and offer clues to actions we can take within the organization to be better prepared. (This contrasts with a commonly held notion of a precursor as being useful for “heightening your awareness” of a risk, but not much else.) The mere presence of your organization's name in a dark web discussion forum, or the presence there of any of the publicly facing information your organization provides via the Internet or other communications media, may not in and of itself mean that somebody, somewhere, is actively conspiring to attack your organization, its IT and OT systems, its assets, or its people. Actionable precursors might, however, include more concrete information being shared, discussed, or even requested, such as:

  • Credentials
  • Technical fingerprints of your systems
  • Customer, employee, or vendor data, even if incomplete
  • Detailed organizational diagrams that link individual names, contact information, and job functions together
  • Sensitive data revealed by inference and analysis of otherwise public-facing data
  • Sensitive, restricted, proprietary, or other forms of data
  • Internal drafts, working papers, meeting minutes, and the like, if not disclosed as part of compliance or legal processes
  • Details regarding your organization's supply chain relationships with other organizations, agencies, and persons

For example, the “Cloud of Logs” dark web markets offer terabyte-sized collections of data from various data breaches, keystroke loggers, and other data collection techniques. TrenData reports seeing many of these logs revealing web browsing histories, cookies, and other information that can be used to bypass many fraud detection technologies used in financial and retail organizations. Finding such information that involves your organization, your customers, or your suppliers “out there” in the Cloud of Logs or similar marketplaces could be an advance, tangible warning of an impending attack or an indicator (or IoC) of an ongoing or previously completed but undetected attack against your systems.
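As a hedged illustration (the dump contents and domain names below are invented, and a real threat intelligence feed would be far messier), a first-pass triage of such a leaked-credential marketplace feed might simply flag entries that mention your own domain:

```python
# Hypothetical sample of a leaked credential dump, one "email:password"
# entry per line, as a threat intelligence service might deliver it.
LEAK_SAMPLE = [
    "alice@example.com:hunter2",
    "bob@othercorp.net:passw0rd",
    "carol@example.com:Spring2021!",
]

def matches_for_domain(dump_lines, org_domain):
    """Return the leaked addresses that belong to org_domain."""
    hits = []
    for line in dump_lines:
        email, _, _ = line.partition(":")
        if email.lower().endswith("@" + org_domain.lower()):
            hits.append(email)
    return hits

print(matches_for_domain(LEAK_SAMPLE, "example.com"))
# → ['alice@example.com', 'carol@example.com']
```

Any hit is exactly the kind of tangible, advance warning described above: a cue to reset credentials, review access logs, and look for other indicators of compromise.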

In each case, data you find (or have your threat intelligence services find for you) may present you a timely warning to take a much closer look at the data flows that involve such data, the access controls and other protections you have in place to protect it, and the behaviors of parties that seem to be mentioned in or linked to that data.

Inward Connections from the Deep and Dark

Simply put, your access control systems should not allow any connection to your protected systems and content from an entity whose identity and credentials cannot be confirmed, or whose place, location, or circumstances of origin cannot be verified as trustworthy. Validating all of this information in real time (as part of initial and ongoing authentication and authorization) may require a combination of identity services, behavioral modeling, attribute-based access control, traceback and geolocation of the entity requesting the connection, or more.
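A simplified sketch of such a combined admission decision follows (the attribute names, trusted regions, and risk threshold are illustrative inventions, not any particular product's policy):

```python
# Illustrative attribute-based admission check: identity, credential
# strength, source location, and a behavioral risk score must all
# pass before a connection is admitted. Default is deny.
TRUSTED_REGIONS = {"office-lan", "corp-vpn"}   # hypothetical zones

def admit(identity_verified: bool, mfa_passed: bool,
          source_region: str, risk_score: float) -> bool:
    if not (identity_verified and mfa_passed):
        return False                  # unproven identity: deny
    if source_region not in TRUSTED_REGIONS:
        return False                  # unverifiable origin: deny
    return risk_score < 0.7           # behavioral model can still veto

assert admit(True, True, "corp-vpn", 0.1)
assert not admit(True, True, "tor-exit", 0.1)     # untrusted origin
assert not admit(True, False, "office-lan", 0.1)  # no MFA, no entry
```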

Outbound Connections to the Deep and Dark

Similarly, your own trusted insiders—entities and processes—should not be making connections to locations, systems, and entities that do not meet your security policy constraints or have not otherwise been approved for contact. To do otherwise may lead to exposing assets to an inward threat or sending sensitive data or content to an unauthorized location or entity. Such outbound connections (and data flows) may be indicators of an ongoing attack.
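The outbound case can be sketched the same way; here is a minimal default-deny egress check (the domain lists are placeholders, with the .onion suffix standing in for destinations your policy forbids outright):

```python
# Illustrative egress gate: before any outbound connection, compare
# the destination against policy. Allowed destinations are an
# explicit allowlist; everything else is denied and logged.
BLOCKED_SUFFIXES = (".onion",)                       # never permitted
ALLOWED_DOMAINS = {"partner.example.net", "api.example.com"}

def egress_allowed(dest_host: str) -> bool:
    host = dest_host.lower().rstrip(".")
    if host.endswith(BLOCKED_SUFFIXES):
        return False                  # dark web destination: block
    return host in ALLOWED_DOMAINS    # default-deny all the rest

assert egress_allowed("api.example.com")
assert not egress_allowed("silkroad7rn2puhj.onion")
assert not egress_allowed("unknown-host.example.org")
```

The denials themselves are worth capturing: a trusted process repeatedly trying a blocked destination is precisely the outbound indicator the paragraph above describes.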

DNS and Namespace Exploit Risks

Many opportunities exist for the DNS system and the namespace that it provides to be abused by attackers. These risks boil down to two related sets of threats:

  • Corruption of DNS as a service infrastructure: Attacks on DNS as a service willfully misdirect traffic to imposter websites, enabling fraud, MITM, and other risks.
  • Attacks via DNS against your systems: A variety of techniques send what appears to be legitimate DNS traffic to your systems, servers, or endpoints, which can then target vulnerabilities in those systems themselves.

Two sets of countermeasures offer some amount of remedy to these problems. DNS security extensions (DNSSEC) are measures that can be applied across the DNS infrastructure, from the root-level and top-level domain name servers on down. This requires concerted effort by many parties to achieve. DNS service filtering, firewalls, and services such as deep packet inspection can provide additional protection for an organization's systems and infrastructure, such as restricting inbound or outbound attempts to connect to suspect or known hostile URLs, IP regions, domains, and so on.
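As one hedged example of the kind of heuristic a DNS firewall or deep packet inspection service might apply (the length and entropy thresholds below are illustrative guesses, not tuned production settings): DNS tunneling tends to pack exfiltrated data into long, high-entropy query labels, which a simple entropy test can flag:

```python
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy, in bits per character, of a DNS label."""
    counts = Counter(label)
    n = len(label)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_like_tunnel(qname: str) -> bool:
    # Tunnels encode data in the leftmost label, making it unusually
    # long and random-looking compared to ordinary names like "www".
    label = qname.split(".")[0]
    return len(label) > 30 and label_entropy(label) > 3.5

assert not looks_like_tunnel("www.ietf.org")
assert looks_like_tunnel("q7x2m9zk4vw8r3tn6pj1fy5bc0dhls4e.evil.example")
```

Real DNS firewalls combine heuristics like this with reputation feeds and query-volume analysis; a single test produces too many false positives on its own.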

On Our Way to the Future

Let's face it—most of what we do as SSCPs is still very new; that's the consequence of constant evolution and revolution. Yes, there are fundamental concepts and theoretical models at the core of what we do, which information security professionals have been using since the 1960s. Yet it was in 1965 that Gordon Moore, cofounder of Fairchild Semiconductor and later of Intel, coined what's become known as Moore's law. Every two years, Moore observed, the number of devices we can put on a chip doubles; that's exponential growth in the complexity, interconnectedness, and power of the devices we compute with. Since then, the number of people on the globe has more than doubled, while the number of devices using the Internet has grown from a paltry handful to billions and billions. We've had to adopt new units to cope with the sheer amount of data created every minute, growing from kilobytes to terabytes and now zettabytes. And we're seeing over a million new pieces of malware crop up in the wild every day, according to some cybersecurity threat intelligence sources.
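That doubling compounds faster than intuition suggests; a one-line calculation shows the scale:

```python
# Doubling every two years is a growth factor of 2 ** (years / 2);
# the 60 years from 1965 to 2025 alone give 30 doublings.
def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    return 2 ** (years / doubling_period)

print(growth_factor(60))  # → 1073741824.0, roughly a billion-fold
```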

Along that arrow of growth headed toward the future, we've also seen an equally explosive growth in the challenges that we as SSCPs face when we try to keep our organizations' information and information systems safe, secure, and reliable. We've seen computer crime go from a laughable idea that prosecutors, courts, and legislators would not think about to the central, unifying concept behind national and personal security. The war on organized crime and terrorism, for example, has become fundamentally a war of analytics, information security, and perception management.

With this explosive growth in capability and demand has come a mind-boggling growth in opportunity. Old ideas suddenly find ways to take wing and fly; new ideas spark in the minds of the young, the old, and children, who organize crowdfunding campaigns to launch new digitally empowered, cloud-hosted businesses. Things we used to think were nice ideas but simply had no business case can suddenly become profitable, successful, and worthwhile products. Think of how 3D printers enable the profitable creation of custom-engineered prosthetics for children as well as adults, for example, or stethoscope apps for your smartphone; during the COVID-19 pandemic, they became a hobbyist-driven part of the personal protective equipment supply chain, locally producing face shields and filter masks, in many areas of the globe. What might be next?

Each chapter has identified some ongoing issues, or problems not yet solved, in its subject domain; along the way, we've seen indicators that some of these might be worth keeping an eye on as we move into our future. Many of these are opportunities for the threat actor to find new ways to create mischief for us; then, too, these same issues are opportunities for us, as SSCPs, to create new and more effective ways of combating the threat actors all around us.

Where there's a problem, there's an opportunity. Let's take a second look.

Cloud Security: Edgier and Foggier

Many organizations are using concepts such as service-oriented architectures, hybrid clouds, and other cloud service models toward a more loosely coupled, distributed, and dispersed IT architecture, at much the same time that many of their endpoints and subsystems are moving further into the OT world. The centralized corporate datacenter surrounded by layers of defended network perimeters may not be completely gone yet; but the demand to access, use, and control data-centric processes from anywhere, anytime, needs security approaches that work better than what many organizations currently use. A relatively new term you'll encounter is secure access service edge (SASE), coined by Gartner in 2019 to refer to a variety of offerings that present users with simple, tightly controlled access direct to the resources, applications, and systems they are authorized to use for specific tasks. SASE systems take a variety of approaches and use different technologies to avoid some of the problems with private VPNs. SASE architectures should provide greater security for mobile users—at least, as Gartner describes it and as vendors positioning their systems and services claim. This is a fast-moving concept space, with a lot of technologies and hype intermingled in reports, white papers, and vendor presentations. That said, even the small-time operators of SOHO and SMB systems can glean some valuable ideas from these, while the industry tries to sort this concept out more effectively.

AI, ML, and Analytics: Explicability and Trustworthiness

Several trend lines are merging together, it seems, as we think about how our information and our information security tools are getting smarter. One trend we see is how applied artificial intelligence (AI) is creating many different paradigms for software to interact with other software and, in the process, make the physical hardware that hosts that software take action in ways that perhaps are not quite what we anticipated when we built it. We already have software tools that can “decide” to look for more information, to interact with other tools, and to share data, metadata, rules, and the results of using those rules to form conclusions. In 2018, The Verge reported on a 2016 video made by Google researcher Nick Foster called “The Selfish Ledger.” Playing on ideas from selfish genes (and selfish memes), the video suggests that we're nearly at the point where the collection of data about an individual subject—a person, a company, or a set of abstract ideas—could decide by itself how and when to influence the software and hardware that hosts it to take actions so that the data object can learn, grow, acquire other data, and maybe even learn to protect itself. As a selfish ledger, it could and would do this without regard for any value this might have for the subject of that information. Imagine such selfish ledgers in the hands of the cyber criminals; how could you defend against them?

Another trend is in machine learning (ML), which is a subset of applied AI. ML, as it's called, tends to use meshes of processing elements (which can each be in software, hardware, or both) that look for statistical relationships between input conditions and desired outputs from that mesh; this training takes thousands of sets of inputs and lets the mesh compute its own parameters by which each processing element manipulates its inputs and its memory of previous results to produce and share an output with others in the mesh. The problem is, these meshes cannot explain to us, their builders and users, why or how they computed the answer that they got and the action they then took (or caused to happen) as a result.
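A toy illustration of that explainability problem: the sketch below trains a single logistic unit by gradient descent to reproduce the AND function. The trained unit works, but its learned parameters are just numbers; nothing in them "explains" its decisions in human terms, and that opacity only deepens as the mesh grows to millions of such units.

```python
import math
import random

random.seed(7)  # deterministic run, for the illustration only

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Truth table for AND: ((inputs), desired output)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]
b = 0.0
for _ in range(5000):                       # "training" the unit
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = out - target                  # cross-entropy gradient
        w[0] -= 0.5 * err * x1
        w[1] -= 0.5 * err * x2
        b -= 0.5 * err

def predict(x1: int, x2: int) -> int:
    return round(sigmoid(w[0] * x1 + w[1] * x2 + b))

assert [predict(x1, x2) for (x1, x2), _ in data] == [0, 0, 0, 1]
print(w, b)  # the "explanation" is just three opaque numbers
```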

Analytics, the science of applying statistical and associative techniques to derive meaning from data, is already one of the hottest topics in computing, and both of its major forms are becoming even hotter as organizations seek ways to apply them to information security. Business intelligence (BI) takes this into the domain of making business or other decisions, based on what can be inferred about the data. Many of us see this when online merchants or media channels suggest that other users, like us, have also looked at these products or videos, for example. BI and machine learning drive the transformation of news from broadcasting to narrowcasting, in which the same major news channels show you a different set of headlines, based on what that ML “thinks” you're most likely to favor or respond to. BI looks to what has happened and strives to find connections between events. Predictive intelligence (PI) strives to make analytics-based predictions about possible outcomes of decisions that others might make. Both BI and PI are being applied to end-user (or subject) behavior analysis to determine whether a subject's behavior is “not quite business normal” or is a precursor or indicator of a possible change in behavior in ways that put information security at risk. Applied AI and machine learning techniques figure prominently in BI and PI, particularly when applied to information security problems.
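As a minimal, hypothetical sketch of "not quite business normal" detection (the baseline data is invented, and the three-sigma threshold is a common statistical convention rather than any product's behavior), a behavioral monitor might flag logins far outside a user's historical pattern:

```python
import statistics

# Invented history of one user's login times (hour of day, decimal).
baseline_hours = [8.5, 9.0, 8.75, 9.25, 8.0, 9.5, 8.25]

def is_anomalous(hour: float, history) -> bool:
    """Flag an observation more than 3 standard deviations from
    this subject's own baseline behavior."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return abs(hour - mu) > 3 * sigma

assert not is_anomalous(9.0, baseline_hours)   # mid-morning: normal
assert is_anomalous(3.0, baseline_hours)       # 3 a.m.: investigate
```

Production BI and PI systems build far richer baselines (peer groups, seasonality, many features at once), but the underlying idea is the same: model the normal, then rank the deviations.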

In one respect, this is an age-old problem in new clothes. We've never really known what was going on in someone else's head; we watched their behavior, we tried to correlate it with what they said they'd do, and then we decided whether to continue trusting them. But at least with some people, we could ask them to explain why they did what they did, and that explanation might inform our continuing decision about how far to trust them. When our ML, AI, and other tools cannot explain themselves, how do we trust the decisions they've made or the actions they've taken?

One major worry about the dramatic growth in the capabilities and processing power of AI and ML systems is that our notion of computationally infeasible attacks on cryptographic systems may prove to be an overly optimistic assertion. It was, after all, the birth of the supercomputer, enabling massively parallel attacks by the NSA on Soviet and other cryptosystems, that drove even more growth in supercomputing, massively parallel software architectures, and network systems performance. Constructing a parallel processing system of hundreds of nodes is nearly child's play these days (high schools have been known to build them from “obsolete” PCs, following Oak Ridge National Laboratory's recipe from its Stone SouperComputer project). We're seeing the same approaches used to cobble together huge systems for cryptocurrency mining as well. It's hard not to feel that it's only a matter of time before our public key infrastructure and asymmetric encryption algorithms fall to the cracker's attacks.

Quantum Communications, Computing, and Cryptography

The algorithms have already been published for using the power of quantum computing architectures to break the integer factorization problem at the heart of the RSA encryption process and, with it, our public key infrastructure. All it would take, said mathematician Peter Shor, was a mega-qubit or giga-qubit processor. So far, the largest quantum processor is claimed to have 2,000 qubits (in a quantum annealer, an architecture that cannot run Shor's algorithm), with larger machines predicted to hit the market by 2023.
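A toy RSA example with deliberately tiny primes shows why factoring is the whole ballgame: anyone who factors the public modulus n can rederive the private key exactly as its owner did. Real keys use moduli of 2,048 bits or more, which is precisely what Shor's algorithm threatens.

```python
# Tiny textbook RSA (insecure key size, for illustration only).
p, q = 61, 53
n = p * q                        # public modulus: 3233
phi = (p - 1) * (q - 1)          # 3120, computable only knowing p and q
e = 17                           # public exponent
d = pow(e, -1, phi)              # private exponent (Python 3.8+)

msg = 65
cipher = pow(msg, e, n)          # encrypt with the public key (e, n)
assert pow(cipher, d, n) == msg  # decrypt with the private key (d, n)

# An attacker who factors n = 61 * 53 recovers phi and rederives d;
# Shor's algorithm performs that factoring step efficiently.
cracked_d = pow(e, -1, (61 - 1) * (53 - 1))
assert cracked_d == d
```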

The year 2018 saw a growing debate as to whether scalable, high-capacity, and fast quantum computing would threaten our current cryptologic architectures and key distribution processes. Some even went so far as to suggest that if it could, we might not even be able to recognize when such an attack took place or succeeded. Note that elliptic curve cryptography (ECC) offers no refuge here: Shor's algorithm breaks elliptic curve discrete logarithms as readily as it breaks integer factoring, so the “quantum-proofing” we'll have to implement will more likely come from the lattice-based, hash-based, and code-based algorithms now being evaluated for standardization.

Paradigm Shifts in Information Security?

It's becoming clearer (say a number of commentators) that while the information security industry has paid a lot of attention to the technical aspects of keeping systems and information secure, we haven't made a lot of progress in strengthening the human element of these systems. It may be more than time for several ideas to start to gain traction:

  • Cybercrime as an economic force has become too large and too pervasive to ignore any longer. With gross earnings estimated at anywhere from one to three trillion U.S. dollars in 2021, it threatens to distort financial and insurance markets, disrupt supply chains, and damage—not just disrupt—critical infrastructures. Even so, the vast majority (more than 90 percent, by some surveys) of successful ransomware attacks and other cybercrimes are targeted against individuals rather than corporations or government activities.
  • Human security behaviors need more of our attention, understanding, guidance, and management. To date, we've tended to focus on the attacker: their motives, goals, objectives, and methods of operation. That focus must broaden, especially as we ask our insiders, the people purportedly on our side in the struggle for information security, to take on potentially more complex and demanding security-related awareness, understanding, and actions.
  • Transformational communication paradigms are changing the ways in which our workforce, our customers, our prospective customers, our partners, suppliers, and stakeholders all come together to achieve their objectives. Social media technologies are not the issue here—it's the changes in the ways people think about finding, using, and sharing observations, insights, and data that are. Some businesses and organizations get this, and their market effectiveness and the loyalty of their customers and team members show it. Other organizations haven't gotten here yet. The classical systems geek response offers mobile device management (MDM), and maybe natural language processing (are tweets in natural language, I wonder?), but this seems only to scratch the surface of the possible.
  • Digital nomads are becoming more the norm as virtual workspaces and virtual organizations proliferate. Whether this is because work in many industries is becoming focused on smaller parts of projects, or even atomized into discrete tasks, many talented people pack up their laptop and smartphone and tour the world while working for (or with) multiple businesses and organizations. Hotel and coffee shop Wi-Fi is becoming passé, as Airbnb-like entrepreneurial cafés offer hourly, daily, or weekly options for high-quality connections, comfortable work surroundings, and no bosses! This is as much a BYOI approach to infrastructure as it is becoming a case of OPI—other people's infrastructure.
  • The semantics of data is becoming more integral to systems and business operations. In the last decade, we've seen significant growth in the use of metadata, tags, and other techniques to let packages of data incorporate their own meanings with them; smarter systems act on the meanings, interpret them, and then apply them as part of how that data is put to use. Security information event monitoring, analysis, and modeling systems are only beginning to look at ways to apply these concepts. Semantic analysis of data, and of its metadata, may be a high-payoff approach to dealing with far too many log entries and far too little human analytical power to spot the precursors or indicators.
  • Greater emphasis on safety and privacy are changing the ways in which we think about managing projects. The traditional trio of cost, schedule, and performance still apply, of course, but in many ways, we're starting to see a greater emphasis on product and system safety, as well as on protecting privacy-related information that will be part of the resulting system.

These and other similar mini-trends have a few things in common. First, they focus on the ways that the revolutionary changes in the nature of apps, platforms, tools, and systems have worked hand in hand with the revolutions in interpersonal and interorganizational work and communications patterns. Second, they are part of the pressure to further decentralize, fragment, or uncouple our organizations and our systems, whether we think about that electronically, contractually, or personally. Taken altogether, they strongly suggest that the core competencies of the SSCP and others in the information systems security ecology may have to change as well.

Perception Management and Information Security

The art and science of perception management deals with a plain and simple fact: when it comes to human behavior, what humans perceive, think, and believe about a situation is the reality of that situation. Reality, to us humans, is not the tangible, physical objects or the actions of others around us; it is the nonstop narrative we tell ourselves—the video we watch inside our heads—our interpretation of what our senses have been telling us and our model of what might be happening around us.

For example, consider how different groups of stakeholders and stakeowners might hold any number of beliefs about your organization's information security policies and systems:

  • Customers, prospective clients, and the public have their beliefs as to whether or not our company does a great job of protecting their information.
  • Management and leadership want to believe that their employees understand the need for information security, have taken to heart the training provided, and are working hard to keep the company, their jobs, and shareholder value safe and secure by protecting critical information.
  • Employees often perceive information security programs as being told that “management doesn't trust you.”
  • Regulators believe that few private organizations report truthfully about every information security incident.
  • Information security team members perceive that management isn't interested in taking information security seriously.
  • Departmental managers might believe the company is spending too much on information security and not enough on their department's value-chain-impacting needs.

Think back to what we looked at in Chapter 10, “Incident Response and Recovery,” using this perception management lens, and we might see that the people on the incident response team or the crew in the security operations center are interpreting what the systems are telling them through their own internalized filters, which are based on their beliefs. They believe they did a diligent job of setting alarm limits and programmed the right access control list settings into the firewalls, and thus that what their dashboards and network security systems are telling them must be correct.

Right?

Better-informed use of perception management techniques might find gainful employment on several information security fronts, such as:

  • Presenting security needs, procedures, and techniques to employees, their managers, and leaders as part of gaining improved usefulness from our logical, physical, and administrative controls
  • More effective communication with managers and leaders when escalating issues pertaining to an incident and incident response
  • Better design of incident response procedures, particularly ones involved in the high-stress environment of disaster recovery
  • Better engagement and support from customers, prospective customers, and others, with the controls built into web pages, apps, and even voice-to-voice interactions with the organization and its systems

Widespread Lack of Useful Understanding of Core Technologies

In Chapter 7, “Cryptography,” we saw that an unfortunate number of people look at cryptography as if it were a silver-bullet solution to their problems in meeting CIANA+PS needs—confidentiality, integrity, availability, nonrepudiation, and authentication, plus privacy and safety; if only we could “sprinkle a little more crypto dust” on things, they all would work so much better. The same underinformed beliefs about migrating to “the cloud” (as if there is only one cloud) often lead to ill-considered decisions about migrating to the right set of cloud-hosted services and platforms. Some people think that the Internet needs a “kill switch,” and they believe they know whose finger ought to hover over it. And so on... There could almost be a special section in the local computer bookstore, something like “key IT technologies for lawyers, accountants, and managers,” just like all of the other self-help books focused on business management and leadership. There already are a lot of titles published as if they're aimed at that corner of the bookstore.

It's not that we as systems security specialists need everyone to be full-fledged geeks, steeped in the technologies and proficient in their use. On the contrary: we need to communicate better about these core technologies as they apply to our company's information security needs, to the problems a client is having, or the doubts and fears (or unbridled optimism) of those we work with. And that will require that we make the extra effort to understand how to hear and see the business world and the business logic that our users rely on through their eyes and speak with them about security issues in the language of their business.

This is where your knowledge, skills, and experience have much to offer.

Enduring Lessons

Having gazed into the near-term future and seen some of the tantalizing opportunities that might be awaiting us there, let's return to the present. Our profession has more than 60 years of experience to its credit; it's no surprise that some enduring lessons emerge. We've looked in depth at some of them across many of the preceding chapters, but they're worthy of a last few parting words and some thoughtful reflection at this point.

You Cannot Legislate Security (But You Can Punish Noncompliance)

Well, you can actually legislate better information security for your organization or company. It just doesn't get you very far. You can write all of the administrative controls that you want to; you can get the CEO to sign off on them, and you can push copies of them out to everyone involved. All of that by itself does not change attitudes or habits; none of it changes perceptions or behaviors. (And none of it changes the technical implementation of security policies within our networks, servers, endpoints, platforms, and apps, either.)

If the history of workplace safety is any guide, it will take significant financial incentives, driven into place by insurers, reinsurers, and the investment community, to make serious information security ideas become routine practices. The critical infrastructure attacks seen in 2019–2021, for the most part, occurred with systems that met all required compliance standards and requirements; and yet, the true costs to societies of the impacts of these attacks seems to be something that the owners of these systems are allowed to ignore. Changing this dynamic will have profound effects on the art and practice of information security and assurance.

It's About Managing Our Security and Our Systems

Experience has shown us, rather painfully, that unmanaged systems stay secure through what can only be called dumb luck. Attackers may not have found them, hiding in plain sight among the billions of systems on the Internet, or if they did, a few quick looks around didn't tempt them to try to take control or extract valuable data from them. Managing our systems security processes requires us to manage those systems as collections of assets, as designs, as processes, as places people work, and as capabilities they need to get their work done. We must be part of the management and governance of these systems, and the data that makes them valuable and value-producing. We must manage every aspect of the work we and others do to deliver the CIANA, the privacy, the resilience, the safety, and the continuity of operations that our business needs. If it's an important aspect of achieving the business's goals or the organization's objectives, we must actively manage what it takes to deliver that capability and that aspect of decision assurance.

We have to become more adept at managing information risk. We need to become past masters at translating risk, via vulnerability assessment, into risk mitigation plans and procedures; we then must manage how we monitor those risk mitigation systems in action and manage our interpretation of the alarm bells they ring at 5 minutes before quitting time on a Friday afternoon.

Our organizations' leaders are looking to us—their information security professionals—to also help manage the process that keeps what we monitor, what we do, and how we act to secure those systems aligned with the goals and objectives of the organization. We're the ones they'll look to, to fit the security processes into the organizational culture; that, too, is a management challenge.

People Put It Together

No matter how much automation we put into our systems, no matter how simple or complex they are, around and above all of the technical and physical elements of those systems we find the people who need them, use them, build them, operate them, abuse and misuse them, don't believe them, or aren't critical enough in assessing what the systems are trying to tell them. The people inside our organization, as well as those outsiders we deal with, are both the greatest source of strength, insight, agility, and awareness as well as potentially the greatest source of anomalies, misuse (accidental or deliberate), frustration, or obstructionism. We have to worry about insiders who might turn into attackers, outsiders trying to intrude into our systems, and insiders who simply don't follow the information security and systems use rules and policies we've put in place for everyone's benefit.

What kind of people do our organizations need working for us if we are to really prepare for and achieve operational continuity despite accident, attack, or bad weather? Michael Workman, Daniel Phelps, and John Gathegi, in Information Security for Managers (2013), cited six basic aptitudes and attitudes we need from our people if they are going to help us get work done right, keep that work and the work systems safe and secure, and in doing so, protect the organization's existence and their own jobs. Building this kind of people power requires people who:

  • Know what their roles, duties, and responsibilities are
  • Know the boundaries of those roles, and understand and appreciate the consequences (to themselves and to others) of going beyond those boundaries
  • Understand the policies that apply to their jobs and roles, particularly the ones pertaining to information security
  • Have been trained to do the jobs and tasks we ask them to do, and have demonstrated their proficiency in those tasks as a result
  • Know how and why to monitor for signs of trouble, anomalies, or possible security incidents, as well as know what to do and who to contact if they think they've spotted such events
  • Know how to respond when emergencies or security incidents occur, and then respond as required

As an SSCP working on information security and protecting the organization's IT infrastructure, you have a slice of each of those six “people power preparedness” tasks. Start by thinking about what you would want somebody else in the organization to know, as it pertains to information security, in each of those areas. Jot that down. You're starting to build an awareness, education, and training program! You can be—and should be—one of the security evangelists within your organization. Help set the climate and culture to help encourage better information security hygiene across the organization. Take that evangelism outside as well; work with key customers, partners, stakeholders, and others in your organization's marketplace or context. Keeping that community safer and more secure is in everyone's best interest.

If at this point you're concerned that your own interpersonal skills may not be up to the task, don't worry; go do something about that. Join Toastmasters, which can help even the most poised and confident among us improve our abilities to speak with a group of people. Check with your organization's human resources management group to see what kind of professional development opportunities you can participate in.

Maintain Flexibility of Vision

Whether you're in the early days of a security assessment or down in the details of implementing a particular mitigation control, you'll find that you frequently need to shift not just where your focus and attention are directed but also whether you're zoomed in closely on the details or zoomed far out to see the bigger picture and the surrounding context. This is sometimes referred to as having your own strategic telescope and knowing how and when to use it. Part of this is knowing when to zoom out, or zoom in, by walking around, talking with, and listening to people in the various nooks and crannies of the organization. Hear what the end users are telling you, but also try to read between the lines of what they say. Catch the subtext. Think about it.

Most of what an organization knows about how it does what it does is not explicitly known (that is, written down in some tangible form); often, the people who possess this knowledge inside their own heads don't consciously realize what they know either.

As a point of comparison, consider different strategies in neighborhood security and policing. The world over, police departments know that “beat cops,” the uniformed police officers out walking a patrol pattern (or “beat”) in the neighborhoods, become known elements of the community. They become part of the normal; people become more trusting of them, over time, and look to them as sources of help and security, rather than as the power of authority. As one of your organization's information security team, that same opportunity may present itself to you. As Yogi Berra said, it's amazing what you can see if you only look!

Accountability—It's Personal. Make It So

High-integrity systems only happen when they are designed, built, used, and maintained by people of high integrity. High-integrity systems—the kinds of systems that we are willing to entrust with our lives, our fortunes, and our sacred honor—are all around us; we don't recognize how many of them are part of what makes our day-to-day electronic world keep on working. Key to every step in that journey of a high-integrity system, from initial requirements to ongoing operations, is the accountability of each person involved with that system and its processes.

When we step back from the sharp, demanding edge of high-integrity systems, though, we should not be willing to stop demanding personal and professional accountability from those with whom we work on those systems. What we can always do is model the very accountability we know we need to see in everyone around us.

Avoid waffle-speak; say “I made a mistake” instead of “Mistakes were made.” Help others recover from the mistakes they've made, and work to find ways to prevent others from making the same mistakes. Do your homework; dig for the facts; verify them six ways from Sunday, as they used to say, if the situation demands it. By showing the people around you that you take your professional obligations seriously and that you deliver on those obligations every day, you'll find that you actually can lead by example in ways that count.

Stay Sharp

Keep staying sharp—technically, about systems, about threats, and about the overall geopolitical-economic landscape you're part of. Read well beyond the immediate needs of what you're doing at work and well beyond the kind of subjects and sources you've usually, habitually used for information (or infotainment!). By studying to become an SSCP, you've already built a solid foundation of knowledge and skills; continue that learning process as you build onto that foundation.

You've a wealth of opportunities to feed that growing information security learning habit of yours! Podcasts; e-news subscriptions and blogs that focus on IT issues, information security, compliance, and risk management; books; courses—no matter how you take in new ideas best, there's a channel and a format out there in the marketplace ready to bring you the insights and information you need. Work toward other certifications by studying and gaining experience that builds on your new SSCP and empowers you to dive deeper into topics and technologies that suit you.

That said, don't be afraid to wander away from that foundation now and then! Get playful—think outside the box and stretch your mind. Many of the great newspapers of the world (now in their online forms, of course) are deservedly respected for the reach, breadth, and quality of their explorations into all aspects of science and technology, the arts, cultures around the world, business, and industry, and of course their in-depth analysis of current events in politics and economics. Take risks with ideas; let something new excite you and chase after it for a while, researching and thinking about it.

Play. Play logic games; go orienteering; read whodunits and other mysteries, occasionally challenging yourself to see if you can solve the mystery before the author reveals all in the last chapter. Playfulness engages the fun side of your brain; let that happen at work, too.

Teach. Become an advocate for your profession; participate in local school-age programs that help the next generation keep their world safe and secure too.

Speak; write. Attend information security conferences, meet with others, and network with them. Share ideas, and in doing so, become more of an active member of your community of practitioners.

Beware of two very common fallacies, and work to avoid becoming trapped in their webs. The first of these asserts that there is nothing left to discover: we have all of the answers on Topic X or Subject Y that we'll ever need. It's been done before; it's always been done this way; these are variants of this “nothing new under the sun” fallacy. The other is the drive to oversimplify; it demands almost trivially simple answers to what may seem to be simple questions or situations, but in reality are anything but simple.

Those two common fallacies combine to produce a third and possibly more dangerous temptation: the TL;DR reflex. At the risk of oversimplifying, your adversaries do not act as if anything they encounter, any data they find and copy in a breach, is too long to read. Incident reports and analyses, threat intelligence, market surveys, and even detailed news and commentary about local, regional, and world events may take time for you to read, think about, and relate to other things you've learned and experienced. Take that time. Out-thinking your adversaries while denying them the opportunity to outwit you requires that you think not only faster but smarter, stay better informed, and use more powerful and practiced reasoning and logic.

We talked about this in an earlier chapter, too, but it's worth remembering Kipling's Six Wise Men. Hire these guys! Put them on your mental payroll: Who. What. Why. When. Where. How. Then, put them to work by asking good questions every day, open-ended questions such as “Why do you think that happened that way?” Invite others to think along with you. Engage their curiosity by using your own.

Your Next Steps

It's time to get ready for the next steps toward your SSCP. You've gone through this book, and you've made good use of the review questions in each chapter; you've done thought experiments as you've exercised applying the concepts in each section, drawn from each of the SSCP domains, to real situations you're involved with at work or familiar with from experience.

You've done the practice exams. You've reviewed all of the study materials. You're ready to take the exam.

What's next? Schedule, take, and succeed at the SSCP exam.

You take the SSCP exam in person, typically at a Pearson VUE test center convenient to you. www.isc2.org will have the latest information about the test process and where to go to schedule your test appointment. Pay close attention to the policies about rescheduling your exam—and if you're a no-show for your scheduled appointment, for any reason, you will lose your entire testing fee! As part of supporting COVID-19 public health measures, (ISC)2 did begin limited online proctored testing. Be sure to check with (ISC)2 and your local testing provider to see if this is available to you when you're ready to test.

Be sure to read and heed the test center's policies about food, drink, personal items, and such. You'll have to leave your phone, watch, car keys, and everything else in your pockets in a locker at the test site; you will be monitored during the testing process, and the test is timed, of course.

If you have any conditions that might require special accommodation in testing, contact the testing company well in advance. Speak with them about the accommodation you are requesting, and be prepared to provide them authoritative documentation to support your request. Contact (ISC)2 if you have any questions or need their assistance in this regard.

Schedule your exam for a day that allows you time to relax and unwind for a day or so beforehand; if possible, take the preceding day and the test day itself off from work or other studies and obligations.

If you can, schedule the exam for the time of day when you are at your best for hard, thoughtful, detailed, concentrated work. The exam is quite demanding, so it's to your benefit to remove sources of worry and uncertainty about the test and the test process. Find the test center—drive to it beforehand to get a good idea of the time it will take you on test day to arrive and still have plenty of time to relax for a bit before starting your test.

The night before, get a good night's sleep. Do the things you know work best for you to help you relax and enjoy a carefree, restful slumber. Set your alarm, and set a backup alarm if you need it, and plan to get to the test center early. Go on in, register, and succeed!

When you complete testing, the test center will give you a preliminary statement of the results you achieved. Smile, thank them for their help, and leave; whether you earned a “preliminary pass” or not is not something you need to share with others in the testing center or their staff.

It may take a few days for (ISC)2 to determine whether your preliminary pass is affirmed. If so, congratulations! They'll work with you on converting you from applicant to associate to full membership status, based on your experience and education to date that is applicable to systems security. Welcome to the team!

At the Close

I want to take this final opportunity to thank you for coming on this journey with me. You're here because you value a world where information can enable and empower people to make their dreams become reality; you value truth, accuracy, and integrity. And you're willing to work hard to protect that world from accidents, bad design, failures in the face of Mother Nature, and enemy action large or small. Integrity matters to you. That's a tough row to hoe. It needs our best.

Let's get to it.

Exam Essentials

Note that as Chapter 12 looks back across all previous chapters to integrate many different concepts, so too do these Exam Essentials and Review Questions.

  • Explain the role of workflows and playbooks in information systems security. Workflows and playbooks provide a means to organize and manage the set of tasks associated with each required information security activity. Checklists may provide a starting point for this; workflows then ensure that tasks and decisions on a checklist are structured in a step-by-step, chronological manner, with each element assigned to a person or process to complete it. Playbooks provide a higher-level grouping of workflows; these may be invoked (performed) in any mix of sequential, parallel, or asynchronous fashion as required. When used as part of a workflow automation system such as security orchestration, automation, and response (SOAR), workflows and playbooks become part of a powerful knowledge management capability that can help the organization continuously improve and thus mature its security processes.
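That checklist-to-workflow-to-playbook layering can be sketched in a few lines of Python; the class and task names here are invented for the example, and a real SOAR engine would add scheduling, error handling, and audit logging:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """One checklist item, assigned to a person or automated process."""
    name: str
    owner: str          # person or process responsible
    done: bool = False

@dataclass
class Workflow:
    """An ordered, step-by-step, chronological sequence of tasks."""
    name: str
    tasks: list = field(default_factory=list)

    def run(self):
        for task in self.tasks:     # one step at a time, in order
            task.done = True
        return all(t.done for t in self.tasks)

@dataclass
class Playbook:
    """A higher-level grouping of workflows, invoked as required."""
    name: str
    workflows: list = field(default_factory=list)

    def invoke(self):
        # Sequential here for simplicity; a SOAR engine could invoke
        # workflows in parallel or asynchronously instead.
        return {wf.name: wf.run() for wf in self.workflows}

triage = Workflow("triage", [Task("collect alerts", "SOC analyst"),
                             Task("classify severity", "SOC analyst")])
containment = Workflow("containment", [Task("isolate host", "firewall script")])
pb = Playbook("phishing-response", [triage, containment])
print(pb.invoke())   # {'triage': True, 'containment': True}
```

Capturing the steps as data like this is also what lets lessons learned be folded back in: improving the process is just editing the playbook.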

    Compare and contrast SOAR, SDS, SIEM, and security information monitoring and analysis. Security monitoring describes all activities that gather, assess, and use information from security systems, devices, and sensors to make operational and analysis decisions about the security state of the system(s) being monitored. These can include both IT and OT systems and architectures. Analysis can be trending, pattern matching, behavioral analytics, or any combination of techniques. Security information and event monitoring (SIEM) systems provide tools to bring together data from many different types of security systems and devices, collate and manage that data, and provide analytic and alarm monitoring displays and outputs for action by security administrators. Software-defined networks (SDNs) provide for scripted, managed definition, deployment, and use of virtualized networks, their connection fabric, and the servers and services supported by those networks; this includes the script-driven configuration of security features of such virtualized systems and networks. Software-defined security (SDS) extends the SDN concept by incorporating SIEM or SIEM-like security data management, analytics, and display, which allows a security administrator to see the security state of all systems on the network and then direct changes to all of the virtual devices and systems directly. SOAR extends SDS by providing hierarchical layers of planning and scripting for all aspects of security data management, monitoring, analysis, control, and response.
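The SIEM idea of collating events from many devices and raising an alarm can be illustrated with a toy correlation rule; the event fields and the threshold are invented for this example, not taken from any real product:

```python
from collections import Counter

def correlate_failed_logins(events, threshold=3):
    """Flag source IPs with repeated failed logins across devices.

    Each event is a dict, shaped the way a SIEM might normalize
    records collected from firewall, VPN, and server logs.
    """
    failures = Counter(e["src_ip"] for e in events
                       if e["type"] == "auth_failure")
    return [ip for ip, count in failures.items() if count >= threshold]

events = [
    {"src_ip": "203.0.113.9", "type": "auth_failure", "device": "vpn"},
    {"src_ip": "203.0.113.9", "type": "auth_failure", "device": "mail"},
    {"src_ip": "203.0.113.9", "type": "auth_failure", "device": "web"},
    {"src_ip": "198.51.100.4", "type": "auth_success", "device": "vpn"},
]
print(correlate_failed_logins(events))   # ['203.0.113.9']
```

No single device here sees more than one failure; only the collated view reveals the pattern, which is precisely the value a SIEM adds.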

    Explain the role of continuous security assessment and continuous compliance in information security operations. Continuous assessment refers to planning and conducting a broad spectrum of security data gathering, testing, audit, or analysis tasks throughout the year. By breaking the overall systems (or organizational) security assessment into small and frequent steps, each step reveals something about whether the overall security posture is working well and still capable of meeting the current threat space. Contrast this with a major, end-to-end assessment activity, which might be performed only once a year (or less often), which can be much more disruptive to production activities and which reveals anything meaningful about security only during or right after the assessment milestone concludes. Once a continuous assessment process is in place, it can be used to incrementally validate that compliance requirements—as they can be assessed by that specific incremental activity—are being met and maintained. Both, together, can provide more rapid discovery of security issues, while reducing the impact to production operations of major but infrequent end-to-end audits or assessments.

    Explain the relationship of physical security to information security operations. Physical security measures and activities support all aspects of the CIANA+PS characteristics of information security operations, by their design and use. Availability, for example, is supported by physical systems that protect and assure electrical power, environmental protection, and physical access control; all of these must also be provided when BC/DR plans call for alternate processing locations, additional remote work access, etc. Safety is assured in part by physical access control, systems integrity protection mechanisms, and physical protections for availability. Confidentiality, nonrepudiation, privacy, and authenticity are also supported by physical security operations that enhance and enforce access control, detect, or protect against remote passive surveillance, and protect the IT and OT supply chains that the organization depends upon.

    Apply the concept of all-source intelligence as a fusion center activity to information security operations. All-source intelligence is a lifecycle of collecting insights and information from external sources (news media, social networks, threat intelligence services, industry and trade-related security and risk management working groups, law enforcement channels, and others) and internal sources (network and systems security data, human resources, payroll, sales and marketing, internal help desks or suggestion systems), and blending it together with current and recent business operations to provide a more complete awareness of the events, risks, threats, and activities that the organization's systems, data, and people are involved with. As a fusion activity, it cross-correlates all of these sources of information, looking for potentially small and easily overlooked indicators in one aspect of the organization's systems or processes that, when correlated with others from other parts of the organization, may indicate an evolving or incipient threat situation. Integrating this into information security operations is often done by bringing the people, monitoring systems, and supervisory (or incident response) controls together into one common work area (virtual, physical, or a combination of both) to identify, investigate, and take actions as needed. This breaks down intra-organizational barriers to collaboration and data sharing, and generally leads to a stronger, more agile, and more responsive security program.
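The cross-correlation at the heart of a fusion center can be sketched very simply; the source names and flagged entities below are invented for illustration, and a real fusion activity would weight sources and track indicators over time:

```python
from collections import Counter

def fuse_indicators(sources):
    """Cross-correlate weak indicators from independent sources.

    `sources` maps a source name (network monitoring, HR, threat
    intelligence, ...) to the set of entities it has flagged. An
    entity flagged by only one source is easily overlooked; one
    flagged by several may indicate an incipient threat.
    """
    tally = Counter()
    for flagged in sources.values():
        tally.update(flagged)
    return {entity for entity, hits in tally.items() if hits >= 2}

sources = {
    "network":      {"host-17", "host-22"},
    "hr":           {"user-ann"},
    "threat_intel": {"host-17"},
}
print(fuse_indicators(sources))  # {'host-17'}
```

Individually, each source's flag might be dismissed as noise; it is the fusion across organizational silos that surfaces host-17 as worth investigating.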

    Explain the information security operational implications of operational technology use. As organizations incorporate greater levels of remote working, Bring Your Own (BYO) devices and infrastructure, and other smart technologies such as IoT and robotics into their business processes, they increasingly expose their IT systems and information assets to security vulnerabilities in these OT elements and systems. Smart building technologies, physical access controls, and other technologies that are not purely IT-oriented are examples of vulnerable threat surfaces often overlooked by traditional IT security planning or operations. Similarly, organizations that already make extensive use of OT systems (in manufacturing, logistics, and other operations), which are then integrated with their organizational IT architectures, are exposing those OT assets, systems, and processes to threats that can enter via vulnerabilities on the IT side of the interface. Protocols such as SNMP and ICMP (on the IT side) and the increasing use of CIP (on the OT side) make such cross-exposure to vulnerabilities more likely.

    Explain the different types of supply chain attacks from an information security operations perspective. Supply chains are created as one organization provides goods or services to another; the supply chain is extended when that receiving organization then adds value to what it has received (by adding goods or services) and then passes that on to another organization. Supply chain attacks are often focused on or directed at IT systems, such as with fraudulent invoicing, attempted data exfiltration, or ransomware attacks, and these are often conducted for the attackers' financial gain or advantage. OT systems are also attacked in much the same way, but these attacks are often conducted to disrupt, disable, or destroy production capacity and systems. Both types of systems show similar tactics—technical intelligence gathering, intrusion, and establishing a covert presence to use to conduct the attack with—but often with very different techniques. Attacks on IT supply chains have demonstrated the ability to corrupt software updates supplied by an otherwise trusted vendor to its customers, as in the SUNBURST and HAFNIUM attacks in 2019, 2020, and 2021. Both IT and OT systems should be subjected to threat and risk analysis and be protected with appropriate physical, logical, and administrative security controls, which are then monitored and assessed frequently if not continuously.

    Describe malformed data attacks and countermeasures. Malformed data attacks occur when attackers input data to systems or applications with the intent of causing the system to behave abnormally. This can involve a hard crash of the system, corruption of data being processed by the system or app, or allowing the attacker to take control of the system or application by causing it to mistakenly treat input data as commands or executable code (known as arbitrary code execution). Typically, these attacks involve substituting command strings such as SQL queries for input data (such as name fields), or buffer overflows, which are attempts to exceed the length of an input data field. Fraudulent malformed data attacks involve the use of correctly formatted inputs, generally via e-business, web app, or API interfaces, which exploit a weakness in data consistency checking across multiple business processes; a ghost invoice can have all of its fields within correct data formats and limits, but for a vendor or payee that is not authorized in other parts of the organization's systems. Countermeasures should include rigorous input validation tests (both for being within logical range and for overall logical consistency and correctness) and process designs that enforce separation of authorizations or other checks and balances, along with extensive fuzz testing. Ethical penetration testing is also effective at finding these vulnerabilities.
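The two countermeasure layers just described, input validation and refusing to treat input as commands, can be sketched in a few lines of Python. This is an illustrative sketch only; the part-number format, table, and data are hypothetical.

```python
import re
import sqlite3

# Layer 1: input validation. Accept only the expected format; this
# pattern is hypothetical, and real part numbers would differ.
PART_NO = re.compile(r"[A-Z]{2}-\d{4}")

def lookup_part(conn, part_no):
    if not PART_NO.fullmatch(part_no):
        raise ValueError(f"rejected malformed input: {part_no!r}")
    # Layer 2: parameterized query. The input is bound as data, never
    # spliced into the SQL text, so it cannot become a command.
    row = conn.execute(
        "SELECT name FROM parts WHERE part_no = ?", (part_no,)
    ).fetchone()
    return row[0] if row else None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parts (part_no TEXT, name TEXT)")
conn.execute("INSERT INTO parts VALUES ('AB-1234', 'widget')")

print(lookup_part(conn, "AB-1234"))  # prints: widget
try:
    lookup_part(conn, "AB-1234'; DROP TABLE parts; --")
except ValueError as err:
    print(err)  # the injection attempt never reaches the database
```

Note that either layer alone would stop this particular injection string; defense in depth uses both, since validation rules and query-building code are maintained by different people at different times.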

    Describe the IT and OT concerns with DNS-related attacks and security issues. Use of the Domain Name System (DNS) is almost unavoidable for an organization (or individual) wanting to use the Internet and web-hosted applications, services, or data. As a result, its infrastructure (the DNS root and top-level domain name servers) and protocols expose the organization's systems to two sets of risks. The first is that attacks on the DNS system can cause DNS responses to users to be untrustworthy or deceptive, or to misdirect users to bogus sites as part of man (or machine) in the middle (MITM) or other attacks. The second is that DNS protocols and their data can be used as part of attacks on an organization's systems, either by attempts to corrupt its internal caches of DNS data or by other means. DNS Security Extensions (DNSSEC) may help with the first problem, but they have to be applied consistently across the large community of systems that make up the DNS and Internet backbone infrastructures. Individual organizations can implement a number of measures, such as deep packet inspection, blocked/allowed list filtering, and UEBA, to help mitigate the second. Attacks against an organization's name or its URLs and URIs within the DNS system may require the organization to conduct specific threat intelligence efforts to detect.
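Blocked/allowed list filtering, one of the organizational measures just mentioned, amounts to a policy check applied before a queried name is resolved. A minimal sketch follows; the domain names and list contents are invented for illustration.

```python
# Hypothetical blocked and allowed lists, as an organization's
# security team might maintain them.
BLOCKED = {"evil.example", "phish.example"}
ALLOWED = {"partner.example"}

def dns_policy(name):
    """Return 'allow' or 'block' for a queried DNS name.

    A name matches a list entry if it equals the entry or is a
    subdomain of it (e.g., 'www.evil.example' matches 'evil.example').
    Explicit allows take precedence over blocks at each suffix level.
    """
    labels = name.lower().rstrip(".").split(".")
    # Walk suffixes from most specific to least: a.b.c -> b.c -> c
    for i in range(len(labels)):
        suffix = ".".join(labels[i:])
        if suffix in ALLOWED:
            return "allow"
        if suffix in BLOCKED:
            return "block"
    return "allow"

print(dns_policy("www.evil.example"))  # prints: block
print(dns_policy("partner.example"))   # prints: allow
```

In production this check would sit in a filtering resolver or DNS firewall rather than application code, and the lists would be fed by threat intelligence rather than hand-maintained sets, but the matching logic is the same.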

    Describe some of the challenges with the use of artificial intelligence, machine learning, and analytics for operational information security. All of these technologies are used in many ways to analyze and assess the different forms of security and incident data. Ideally, they help users make better informed decisions, whether those involve choosing between alternative actions or changing the sensitivity settings on a fraud detection or access control process. Many managers are reluctant to trust fully automated AI- or ML-based systems and analytics with making real-time security control decisions (even though their anti-malware systems have been doing this for years). They hesitate to trust because they don't understand how these systems work; trust is further eroded when the system (and the technicians or analysts using it) cannot explain why the advice it gave or the actions it took were correct.
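One way to address that explainability gap is to favor analytics whose reasoning can be displayed alongside the verdict. The following minimal illustration contrasts with an opaque model by producing a human-readable explanation; the login counts and the three-sigma threshold are invented for the sketch.

```python
import statistics

def score_logins(history, today):
    """Flag today's login count if it is far from the historical mean,
    and say why -- the kind of explanation that builds analyst trust."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    z = (today - mean) / stdev
    verdict = "anomalous" if abs(z) > 3 else "normal"
    reason = (f"today's count {today} is {z:+.1f} standard deviations "
              f"from the historical mean of {mean:.1f}")
    return verdict, reason

history = [40, 42, 38, 41, 39, 43, 40, 41, 42, 39]
verdict, reason = score_logins(history, 95)
print(verdict, "-", reason)
```

A deep-learning classifier might flag the same event with higher accuracy, but it cannot produce the second return value; the statement above about eroded trust is largely about that missing explanation.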

    Describe ways to increase the effectiveness of information security education, awareness, and training by bringing it more into an operational context or setting. Organizations are transforming their security education, training, and awareness programs by focusing on what workers and team members at every level need to know about security requirements, and how to fulfill those requirements, at each step in each task of the activities they perform, be those at the operational, tactical, or strategic levels. In doing so, they shift learning and training from regularly scheduled or self-paced but infrequent events to a just-in-time basis for learning, proficiency training, and awareness. Microtraining, with learning events taking perhaps a minute or so, is often employed to help employees correctly apply security concepts and requirements or relearn task-specific skills; it can also simply heighten awareness of a potential threat or risk and the necessary or advisable action for the employee to take.

Review Questions

  1. Which of the following statements about the use of workflows and playbooks for information security is most correct?
    1. Workflows and playbooks can be part of major applications platforms and may support security functions in those platforms, but provide no useful capability for network or systems security.
    2. Workflows and playbooks can help security teams define and manage routine tasks and activities, but they cannot cope well with the ad hoc, dynamic circumstances of incident response and recovery.
    3. Workflows and playbooks can support incident response by building in decision points, suggestions on conditions to investigate, and reminders or prompts for notifying or reporting to managers and users.
    4. Workflow management systems, and their playbooks, make sense only for organizations with established, well-practiced procedures.
  2. Which of the following statements about security orchestration and automation is most correct?
    1. Orchestration brings together all sources of security and event information; automation provides for workflows and playbooks to define, manage, and perform security tasks with direct interfaces to various security systems, data sources, or agents.
    2. Automation brings together all sources of security and event information; orchestration provides for workflows and playbooks to define, manage, and perform security tasks with direct interfaces to various security systems, data sources, or agents.
    3. These terms both refer to the integration of information and control flows between security devices, data sources or agents, and security operations and management consoles; vendors use them interchangeably.
    4. Automation refers to the use of workflows to control the integration, analysis, and use of security information to perform security activities; orchestration refers to arranging these workflows into higher-level plans and processes, by the use of playbooks.
  3. Various security devices, technologies, and systems seem to have evolved from each other, with each step on that pathway adding new, more powerful capabilities to what was already available. Choose the option that places these systems or technologies in the correct sequence, from most capable to least capable.
    1. SOAR, SIEM, SDN, SDS
    2. SIEM, SDS, SDN, SOAR
    3. SIEM, SDN, SOAR, SDS
    4. SIEM, SDN, SDS, SOAR
  4. The terms continuous assessment and continuous compliance are in frequent use. What does continuous actually mean in these contexts, with respect to how often these test, assessment, audit, and analysis activities are performed?
    1. These are conducted multiple times per business day, but only on a very small basis, such as with a single simulated transaction.
    2. Some may be regularly scheduled for multiple times per day, others less frequently.
    3. The full sequence of activities defined for a major end-to-end assessment or audit is executed task by task across the hours, days, or weeks, until all have been completed by the end of the audit period.
    4. Some may be performed multiple times per day, others less frequently, either periodically or on a scheduled basis, to balance the needs of assessment, compliance monitoring, and business operations.
  5. Which statement about continuous security assessment and continuous compliance is most correct, when comparing it against more traditional integrated or end-to-end assessment or audit events?
    1. Performing assessment and compliance monitoring continuously should improve the speed at which problems are identified, while providing ongoing assurance of effectiveness and compliance satisfaction.
    2. Planning and conducting single, integrated assessments or audits is probably more cost-effective, and easier to justify to regulators and compliance authorities, even if continuous assessment might provide better visibility into security controls and their performance.
    3. Both approaches provide about the same overall security and compliance results, but one may be better suited to the organization's management culture and processes.
    4. Performing these tasks continuously will probably cost more in the long run, while providing only a narrow, incremental view as to whether security controls are working properly.
  6. The COVID-19 pandemic caused many organizations to substantially increase the percentage of their employees that worked remotely (from home or other locations). What physical information security issues should these organizations have addressed? Choose one or more selections as appropriate.
    1. Ergonomics and other physiological safety and security aspects of each employee's remote work locations and arrangements.
    2. Physical protection of network infrastructure and endpoint devices, at each employee's remote location, from loss or compromise.
    3. More thorough enforcement of appropriate use policies and restrictions for endpoints and network access used by these employees.
    4. Suitable backup power and connectivity at each employee's remote work location.
  7. Your client manages a chain of fast-food restaurants that use a variety of digitally controlled cooking systems in the on-site preparation of their products. Until now, these have only been connected to the local (in-restaurant) control system, which produces printed reports on their utilization, temperatures, energy use, and so on. The client wants to upgrade these to controllers that can be remotely monitored by apps run by the client's corporate operations managers and business analysts. The client believes this can be done at minimal risk. As your client's security advisor, which of the following would be your best response?
    1. Concur; there is minimal additional risk, since the new interface is a one-way flow of reports from the controllers to headquarters.
    2. Suggest that the controllers' vendor(s) be asked to provide a security assessment, showing what threats they considered and have taken steps to mitigate; review this and resolve any issues before committing to their proposed upgrade.
    3. Warn the client that without doing extensive threat analysis and redoing the organization's risk assessment, you cannot support the idea of going ahead with this proposal.
    4. Suggest that the client ask the vendor(s) for referrals to other customers who've made this type of upgrade and who are willing to talk about their experiences, including any security issues, with you.
  8. Which of the following IT system security features are often included in many OT systems?
    1. Centralized control of usernames, passwords, and credentials, especially for root or admin identities
    2. Secure management of remote push of software, firmware, and control parameter updates
    3. Host intrusion prevention and detection
    4. All of the above
    5. None of the above
  9. Dejah is a student at the local community college. Using the on-campus library's student-accessible Wi-Fi connections, she uses a port mapping tool on her laptop to identify a number of systems on the campus network, including some of its firewalls. Curious, she browses to some of those IP addresses and is presented with device and server login screens; in one case, an address presents her with what looks to be a command-line interface login prompt. Whether intentional or not, which type of information security attack technique is she demonstrating?
    1. Privilege escalation
    2. Lateral movement
    3. Cross-site scripting
    4. Falsified credentials
  10. One of the new members on your security team has heard about new ideas in digital identity creation, management, authentication, and use, but is a bit confused by it all. Which statement best explains to them, for example, what the important difference or differences are between an identity and an entity?
    1. An entity is a thing, such as a software task or process thread, or a device such as a phone or removable storage device; an identity refers only to a human being.
    2. Each individual person, device, or instance of a piece of software or a virtual machine has a unique identity; it may be represented as different entities, such as with different accounts on different systems.
    3. Different systems and organizations use these terms interchangeably to mean the same thing. No matter what it's called, it has to be authenticated and authorized before it should be allowed to access any systems resources.
    4. Each individual person, device, or instance of a piece of software or a virtual machine is a unique entity; it may be represented as different identities, such as with different accounts on different systems.
  11. You work with a small startup company that is building dynamic mashups that combine the search results of a user's web queries, map data, and results from analytics run in real time in order to create unique software to meet the user's needs. This software may be a game, an applet or widget, or some other small software appliance, for use with the user's IoT devices. The mashups can also include code from web-hosted code libraries and repositories, most of which are open source and unrestricted for reuse. Which of the following best capture the supply chain risks, unique to this business process, that the organization should consider? (Select all that apply.)
    1. Inbound supply chain risks
    2. Outbound supply chain risks
    3. Development risks
    4. Data integrity risks
  12. A new employee is familiarizing themselves with the company's parts catalog system. They discover that they can enter an SQL query such as a SELECT statement into a part number field, and sometimes the application responds by displaying selected data followed by a database command prompt. Which statement best identifies what kind of attack technique this might be?
    1. Cross-site scripting
    2. Buffer overflow
    3. SQL injection
    4. Arbitrary code execution
  13. Which of the following might be the most effective means of detecting malformed data attacks?
    1. User and entity behavior analytics, applied to ongoing activities
    2. Ethical penetration testing to detect vulnerabilities with data typing, validation, and reasonableness checks in software and procedure designs
    3. Periodic audit of processed transactions, accounts, and related records
    4. Trending analysis of errors encountered during user entry and use of forms, workflows, or other data input processes
  14. Arbitrary code execution is a form of:
    1. Firewall configuration errors
    2. Security kernel being disabled by superuser
    3. Buffer overflows
    4. Phishing
  15. You work as the one-person information security team at a small business. One of the senior managers has just watched some cybercrime webinars and as a result has asked you to look into implementing a full set of DNSSEC measures. Which statement best summarizes your response?
    1. You'll develop a plan and proposal to implement these in the organization's web servers and internal network management systems.
    2. Since most of what the company uses is all cloud-hosted applications platforms, you'll check with the CSP to see what they provide and what it might take to enable or use DNSSEC on your company's cloud activities.
    3. Since DNSSEC is for the Internet organizations that run the network and DNS infrastructures, there's really nothing in it that applies to our company.
    4. DNSSEC is not really an end user organization's set of remedies to apply, but there are other things that the company can and should be doing.
  16. Which statement best describes the two forms of analytics commonly used?
    1. Machine learning and Bayesian analysis
    2. Predictive and descriptive
    3. Glass box and hidden methods
    4. Pattern matching and behavioral analysis
  17. Machine learning, analytics, and other artificial intelligence techniques are often not trusted very much by business and organizational managers and leaders. Which of the following might do the most to gain their trust and confidence so that these tools can be put to more automated, real-time use as part of information security?
    1. Ensure that managers and leaders understand the mathematics and logical processes used by these tools.
    2. Conduct sufficient parallel assessment testing to demonstrate that these tools have “passed” their training and work as effectively as humans do.
    3. Use these technologies in ways that can display or summarize their logic and reasoning, allowing users to understand it (at a summary level), question it, or accept it.
    4. Always use them in a dual control process requiring trained human approval before action is taken.
  18. The customer satisfaction manager at your organization shares with you their concern that security measures are introducing too much friction into customer-facing processes. Which statement best describes what this means?
    1. As the sensitivity of security and counterfraud measures is increased, users (such as customers) must respond correctly to more security challenges, more frequently, or more user activities require a human manager's approval to proceed. These all add time to fulfilling customer service needs.
    2. Security measures have made customer-facing applications and web pages difficult to understand and harder to use; internal users, too, find them hard to follow and use correctly.
    3. Security measures are requiring management analysts to generate additional reports and analytics, which must then be audited, and this is reducing the cost effectiveness of customer-oriented business processes.
    4. Customers are complaining that they don't like the new multifactor authentication and security challenge processes, and dealing with these complaints is taking up too much staff time.
  19. Most organizations have to demonstrate that their users (at all levels) have completed specific security education, training, and awareness activities to satisfy compliance audits and keep their systems and information secure. Which is the best approach to achieve this?
    1. Use a formal classroom setting and instructional methods to deliver lessons that identify security requirements, the compliance requirements that relate to them, organizational security policies, employee responsibilities, and sanctions for violations of security policies.
    2. Combine initial classroom training on security requirements, policies, and employee responsibilities with shorter, self-paced refresher lessons that users must periodically complete.
    3. Combine initial on-the-job orientation education, with task-specific training to identify security policy requirements and link them to employee actions.
    4. Combine microtraining and other just-in-time learning methods and security services delivery, along with a mix of initial education and training and other methods.
  20. Which statement best describes the relationship between continuous assessment and vulnerability management?
    1. Vulnerability management should drive continuous assessment, setting the priority or urgency for assessment activities based on the risks associated with that vulnerability.
    2. Vulnerability management informs the planning and conduct of continuous assessment, the results of which are used as updates to vulnerability management.
    3. There is no direct link between these two sets of processes; rather, they come together via continuous monitoring.
    4. Vulnerability management helps inform and prioritize continuous assessment planning, but it is the results of continuous compliance that closes the loop back to vulnerability management.