Chapter 8
IN THIS CHAPTER
Developing assessment and test strategies
Performing vulnerability assessments, penetration tests, and more
Collecting security process data
Understanding test outputs
Conducting internal, external, and third-party audits
In this chapter, you learn about the various tools and techniques that security professionals use to continually assess and validate an organization’s security environment. This domain represents 12 percent of the CISSP certification exam.
Modern security threats are rapidly and constantly evolving. Likewise, an organization’s systems, applications, networks, services, and users are frequently changing. Thus, it is critical that organizations develop an effective strategy to regularly test, evaluate, and adapt their business and technology environment to reduce the probability and impact of successful attacks, as well as achieve compliance with applicable laws, regulations, and contractual obligations.
Organizations need to implement a proactive assessment and test strategy for both existing and new information systems and assets. The strategy should be an integral part of the risk management process to help the organization identify new and changing risks that are important enough to warrant analysis, decisions, and action.
Security personnel must identify all applicable laws, regulations, and other legal obligations such as contracts to understand what assessments, testing, and auditing are required. Further, security personnel should examine their organization’s risk management framework and control framework to see what assessments, control testing, and audits are suggested or required. The combination of these would then become a part of the organization’s overall strategy for assuring that all its security-related tools, systems, and processes are operating properly.
There are three main perspectives that come into play when planning for an organization’s assessments, testing, and auditing:
Third parties: This is all about audits of critical business activities that have been outsourced to external service providers, or third parties. Here, the systems and personnel being examined belong to an external service provider. Depending upon requirements in applicable laws, regulations, and contracts, these assessments of third parties may be performed by internal personnel, or in some cases external personnel may be required.
Many third-party service providers will commission external audits whose audit reports can be distributed to their customers. This can help service providers avoid separate audits by each of their customers. Examples of such audits include SOC 1 and SOC 2 examinations performed under the SSAE 18 attestation standard. Service providers also commission security consulting firms to conduct penetration tests on systems and applications, which helps them to reduce the number of customers who would want to do this themselves.
Security control testing employs various tools and techniques, including vulnerability assessments, penetration (or pen) testing, synthetic transactions, interface testing, and more. You learn about these and other tools and techniques in the following sections.
A vulnerability assessment is performed to identify, evaluate, quantify, and prioritize security weaknesses in an application or system. Additionally, a vulnerability assessment provides remediation steps to mitigate specific vulnerabilities that are identified in the environment.
There are three general types of vulnerability assessments:
Generally, automated network-based scanning tools are used to identify vulnerabilities in applications, systems, and network devices in a network. Sometimes, system-based scanning tools are used to examine configuration settings to identify exploitable vulnerabilities. Often, network- and system-based tools are used together to build a more complete picture of vulnerabilities in an environment.
A port scan uses a tool that communicates over the network with one or more target systems on various Transmission Control Protocol/Internet Protocol (TCP/IP) ports. A port scan can discover open ports that should probably be disabled (because they serve no useful or necessary purpose on a particular system).
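The idea can be sketched in a few lines of Python. This is a minimal TCP "connect" scan, purely for illustration; the host, port list, and timeout are assumptions, and production scanners such as Nmap use faster and stealthier techniques. Only ever scan systems you are authorized to test.

```python
# A minimal TCP "connect" port-scan sketch (illustrative only; real
# scanners such as Nmap use faster, stealthier techniques).
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

For example, `scan_ports("127.0.0.1", range(1, 1024))` would report which well-known ports are listening on the local system.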
Network-based vulnerability scanning tools send network messages to systems in a network to identify any utilities, programs, or tools that may be configured to communicate over the network. These tools attempt to identify the version of any utilities, programs, and tools; often, it is enough to know the versions of the programs that are running, because scanning tools often contain a database of known vulnerabilities associated with program versions. Scanning tools may also send specially crafted messages to running programs to see if those programs contain any exploitable vulnerabilities.
Tools are also used to identify vulnerabilities in software applications. Generally, these tools are divided into two types: dynamic application security testing (DAST) and static application security testing (SAST). DAST tools execute an application and then use techniques such as fuzzing to attempt to identify exploitable vulnerabilities that could permit an attacker to successfully compromise the application, allowing the attacker to alter or steal data or take control of the system. SAST tools examine an application’s source code and look for exploitable vulnerabilities. Neither DAST nor SAST can find all vulnerabilities, but when they are used together by skilled personnel, many exploitable vulnerabilities can be found.
Examples of network-based vulnerability scanning tools include Nessus, Rapid7, and Qualys. Examples of system-based vulnerability scanning tools include Microsoft Baseline Security Analyzer (MBSA) and Flexera (formerly Secunia) PSI. Examples of application scanning tools include IBM AppScan, HP WebInspect, HP Fortify, Acunetix, and Burp Suite.
Vulnerability scanning tools (both those used to examine systems and network devices, as well as those that examine applications) generally perform two types of scans: unauthenticated scans and authenticated scans. In an authenticated scan, the scanning tool will be configured with login credentials and will attempt to log in to the device, system, or application to identify vulnerabilities not discoverable otherwise. In an unauthenticated scan, the scanning tool will not attempt to log in; hence, it can only discover vulnerabilities that would be exploitable by someone who does not possess valid login credentials.
Generally, all the types of scanning tools discussed in this section create some sort of a report that contains summary and detailed information about the scan that was performed and vulnerabilities that were identified. Many of these tools produce a good amount of detail, including steps used to identify each vulnerability, the severity of each vulnerability, and steps that can be taken to remediate each vulnerability.
Some vulnerability scanning tools employ a proprietary methodology for vulnerability identification, but most scanning tools include a Common Vulnerability Scoring System (CVSS) score for each identified vulnerability. Application security is discussed in more detail in Chapter 10.
Vulnerability assessments are a key part of risk management (discussed in Chapter 3).
Penetration testing (pen testing for short) is the most rigorous form of vulnerability assessment. The level of effort required to perform a penetration test is far higher than for a port scan or vulnerability scan. Typically, an organization will employ a penetration test on a target system or environment when it wants to simulate an actual attack by an adversary.
A network penetration test of systems and network devices generally begins with a port scan and/or a vulnerability scan. This gives the pen tester an inventory of the attack surface of the network and the systems and devices connected to the network. The pen test will continue with extensive use of manual techniques used to identify and/or exploit vulnerabilities. In other words, the pen tester uses both automated as well as manual techniques to identify and confirm vulnerabilities.
Occasionally, a pen tester will exploit vulnerabilities during a penetration test. Pen testers generally tread carefully here because they must be acutely aware of the target environment. For instance, if a pen tester is testing a live production environment, exploiting vulnerabilities could result in malfunctions or outages in the target environment. In some cases, data corruption or data loss could also result.
When performing a penetration test, the pen tester will often take screen shots showing the exploited system or device. Often, a pen tester does this because system/device owners sometimes don’t believe that their environments contain exploitable vulnerabilities. By including screen shots in the final report, the pen tester is “proving” that vulnerabilities exist and are exploitable.
Pen testers often include details for reproducing exploits in their reports. This is helpful for system or network engineers who often want to reproduce the exploit, so that they can “see for themselves” that the vulnerability does, in fact, exist. It’s also helpful when engineers or developers make changes to mitigate the vulnerabilities; they can use the same techniques to see whether their fixes closed the vulnerabilities.
In addition to scanning networks, some other techniques are generally included in the topic of network penetration testing, including the following:
Packet sniffing: A packet sniffer is a tool that captures all TCP/IP packets on a network, not just those being sent to the system or device doing the sniffing. An Ethernet network is a shared-media network (see Chapter 6), which means that any or all devices on the LAN can (theoretically) view all packets. However, switched-media LANs are more prevalent today, and sniffers on such LANs generally pick up only packets intended for the device running the sniffer.
A network adapter that operates in promiscuous mode accepts all packets, not just the packets destined for the system, and passes them up to the operating system.
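Once captured, each raw frame must be decoded. The sketch below, a minimal IPv4 header parser, shows the kind of decoding a sniffer performs on captured bytes; the field layout follows RFC 791. Capture itself (via a raw socket or a library such as libpcap or Scapy) requires administrative privileges and is omitted here.

```python
# Decode the fixed 20-byte portion of an IPv4 header, as a sniffer would
# after capturing a raw frame. (Packet capture itself, via a raw socket or
# a library such as libpcap/Scapy, needs administrative privileges.)
import socket
import struct

def parse_ipv4_header(data):
    """Parse the first 20 bytes of an IPv4 header into a dict."""
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", data[:20])
    return {
        "version": version_ihl >> 4,
        "header_len": (version_ihl & 0x0F) * 4,   # IHL is in 32-bit words
        "ttl": ttl,
        "protocol": proto,                        # 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }
```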
An application penetration test is used to identify vulnerabilities in a software application. Although the principles of an application penetration test are the same as a network penetration test, the tools and skills are somewhat different. Someone performing an application penetration test generally will have an extensive background in software development. Indeed, the best application pen testers are often former software developers or software engineers.
Penetration tests are also performed on the controls protecting physical premises, to see whether it is possible for an intruder to bypass security controls such as locked doors and keycard-controlled entrances. Sometimes pen testers will employ various social engineering techniques to gain unauthorized access to work centers and sensitive areas within work centers such as computer rooms and file storage rooms. Often, they plant evidence, such as a business card or other object to prove they were successful.
In addition to breaking into facilities, another popular technique used by physical pen testers is dumpster diving. Dumpster diving is low-tech penetration testing at its best (or worst), and is exactly what it sounds like. Dumpster diving can sometimes be an extraordinarily fruitful way to obtain information about an organization. Organizations in highly competitive environments also need to be concerned about where their trash and recycled paper goes.
Social engineering is any testing technique that employs some means for tricking individuals into performing some action or providing some information that provides the pen tester with the ability to break into an application, system, or network. Social engineering involves such low-tech tactics as an attacker pretending to be a support technician, then calling an employee and asking for their password. You’d think most people would be smart enough not to fall for this, but people are people (and Soylent Green is people)! Some of the ruses used in social engineering tests include the following:
Reviewing your various security logs on a regular basis (daily, ideally) is a critical step in security control testing. Unfortunately, this important task often ranks only slightly higher than “updating documentation” on many administrators’ “to-do” list. Log reviews often happen only after an incident has already occurred. But that’s not the time to discover that your logging is incomplete or insufficient.
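Automating part of the review is what makes a daily cadence realistic. The sketch below counts failed login attempts per source address; the log format and threshold are hypothetical, so adapt the pattern to whatever your systems actually emit.

```python
# A minimal log-review sketch: flag source IPs with repeated failed logins.
# The log line format here is hypothetical; adapt the regex to your logs.
import re
from collections import Counter

FAILED_LOGIN = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")

def suspicious_sources(log_lines, threshold=3):
    """Return a dict of IPs with at least `threshold` failed login attempts."""
    counts = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            counts[match.group(1)] += 1
    return {ip: n for ip, n in counts.items() if n >= threshold}
```

A script like this, run daily against the previous day's logs, turns an unread log file into a short list of items worth investigating.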
Logging requirements (including any regulatory or legal mandates) need to be clearly defined in an organization’s security policy, including:
Synthetic transactions are real-time actions or events that automatically execute on monitored objects. For example, a tool may be used to regularly perform a series of scripted steps on an e-commerce website to measure performance, identify impending performance issues, and simulate the user experience. Thus, synthetic transactions can help an organization proactively test, monitor, and ensure availability (refer to the C-I-A triad in Chapter 3) for critical systems and monitor service-level agreement (SLA) guarantees.
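A synthetic-transaction monitor can be sketched as a runner that executes scripted steps and records pass/fail and latency for each one. The step names and callables below are hypothetical stand-ins for real HTTP or browser-automation actions.

```python
# A minimal synthetic-transaction runner: execute scripted steps against a
# monitored application and record pass/fail plus latency for each step.
# Step callables are stand-ins for real HTTP or browser automation actions.
import time

def run_synthetic_transaction(steps):
    """`steps` maps step names to zero-argument callables that raise on
    failure. Returns {step_name: (succeeded, elapsed_seconds)}."""
    results = {}
    for name, action in steps.items():
        start = time.monotonic()
        try:
            action()
            succeeded = True
        except Exception:
            succeeded = False
        results[name] = (succeeded, time.monotonic() - start)
    return results
```

A scheduler would run such a script every few minutes and alert when a step fails or its latency breaches an SLA threshold.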
Application performance monitoring tools traditionally have produced such metrics as system uptime, correct processing, and transaction latency. While uptime certainly is an important aspect of availability, it is only one component. Increasingly, reachability (which is a more user- or application-centric metric) is becoming the preferred metric for organizations that focus on customer experience. After all, it doesn’t do your customers much good if your web servers are up 99.999 percent of the time, but Internet connections from their region of the world are slow, DNS doesn’t resolve quickly, or web pages take 5 or 6 seconds to load in an online world that measures responsiveness in milliseconds! Hence, other key metrics for applications are correct processing (perhaps expressed as a percentage, which should be pretty close to 100 percent!) and transaction latency (the length of time it takes for specific types of transactions to complete). These metrics help operations personnel spot application problems.
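These metrics are simple to compute once the raw measurements exist. The sketch below derives an availability percentage and a nearest-rank latency percentile; the sample figures used with it are hypothetical.

```python
# Roll raw measurements up into the metrics described above: an availability
# (uptime) percentage and a nearest-rank transaction-latency percentile.
import math

def availability_pct(up_seconds, total_seconds):
    """Uptime expressed as a percentage of the measurement window."""
    return 100.0 * up_seconds / total_seconds

def percentile(values, pct):
    """Nearest-rank percentile, e.g. pct=95 for the 95th percentile."""
    ordered = sorted(values)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]
```

For instance, about 315 seconds of downtime over a full year works out to roughly the 99.999 percent ("five nines") availability mentioned above.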
Code review and testing (sometimes known as peer review) involves systematically examining application source code to identify bugs, mistakes, inefficiencies, and security vulnerabilities in software programs. Version control systems, such as Mercurial and Git, enable software developers to manage source code in a collaborative development environment. A code review can be accomplished either manually, by carefully examining code changes visually, or by using automated code-reviewing software (such as IBM AppScan Source, HP Fortify, and CA Veracode). Different types of code review and testing techniques include
The opposite of use case testing (in which normal or expected behavior in a system or application is defined and tested), abuse/misuse case testing is the process of performing unintended and malicious actions in a system or application in order to produce abnormal or unexpected behavior, and thereby identify potential vulnerabilities.
After misuse case testing identifies a potential vulnerability, a use case can be developed to define new requirements for eliminating or mitigating similar vulnerabilities in other programs and applications.
A common technique used in misuse case testing is known as fuzzing. Fuzzing involves the use of automated tools that can produce dozens (or hundreds, or even more) of combinations of input strings to be fed to a program’s data input fields in order to elicit unexpected behavior. Fuzzing is used, for example, in an attempt to successfully attack a program using script injection. Script injection is a technique where a program is tricked into executing commands in various languages, mainly JavaScript and SQL. Tools such as HP WebInspect, IBM AppScan, Acunetix, and Burp Suite have built-in fuzzing and script injection tools that are pretty good at identifying script injection vulnerabilities in software applications.
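The mechanics of fuzzing can be illustrated in a few lines. The parser below is deliberately fragile, and the fuzzer is far cruder than the commercial and coverage-guided tools just mentioned, but it shows how randomly generated, malformed input elicits unexpected behavior (here, unhandled exceptions).

```python
# A minimal fuzzing sketch: throw malformed and boundary-case inputs at a
# parser and record which ones crash it. The parser is deliberately fragile
# to illustrate the technique; real fuzzers generate far more inputs, often
# guided by code-coverage feedback.
import random
import string

def fragile_parse(record):
    """A contrived parser with hidden bugs: it assumes 'key=value' input."""
    key, value = record.split("=")        # crashes unless exactly one '='
    return {key: int(value)}              # crashes on non-numeric values

def fuzz(target, rounds=200, seed=1):
    """Feed `rounds` random strings to `target`; return the inputs that crashed it."""
    rng = random.Random(seed)
    crashes = []
    # Alphabet biased toward characters common in injection attacks
    alphabet = string.ascii_letters + string.digits + "=;'\"<>&%"
    for _ in range(rounds):
        candidate = "".join(rng.choice(alphabet)
                            for _ in range(rng.randint(0, 12)))
        try:
            target(candidate)
        except Exception:
            crashes.append(candidate)
    return crashes
```

Each crashing input is a lead for the tester: an unhandled condition that might be escalated into an exploitable vulnerability.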
Test (or code) coverage analysis measures the percentage of source code that is tested by a given test (or validation) suite. Basic coverage criteria typically include
For example, a security engineer might use a dynamic application security testing (DAST) tool, such as AppScan or WebInspect, to test a travel booking program to determine whether the program has any exploitable security defects. Tools such as these are powerful, and they use a variety of methods to “fuzz” input fields in attempts to discover flaws. But the other thing these tools need to do is fill out forms in every conceivable combination, so that all the program’s code will be executed. In this example of a travel booking tool, these combinations would involve every way in which flights, hotels, or cars could be searched, queried, examined, and finally booked. In a complex program, this can be really daunting. Highly systematic analysis is needed to make sure that every possible combination of conditions is tested so that all of a program’s code is exercised.
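Enumerating those combinations is itself a systematic exercise. This sketch uses hypothetical booking-form fields to show how quickly the case count multiplies, which is exactly why coverage analysis in complex programs is hard.

```python
# Systematically enumerate input combinations for coverage testing, using
# hypothetical travel-booking form fields. Case counts grow multiplicatively
# with each additional field.
from itertools import product

trip_type = ["one-way", "round-trip", "multi-city"]
cabin = ["economy", "premium", "business", "first"]
add_ons = ["none", "hotel", "car", "hotel+car"]

combinations = list(product(trip_type, cabin, add_ons))
print(len(combinations))   # 3 * 4 * 4 = 48 cases from just three fields
```

Add a few more fields (dates, passenger counts, payment types) and the total quickly reaches thousands of cases, so testers usually fall back on pairwise or risk-based selection rather than true exhaustion.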
Interface testing focuses on the interface between different systems and components. It ensures that functions (such as data transfer and control between systems or components) perform correctly and as expected. Interface testing also verifies that any execution errors are properly handled and do not expose any potential security vulnerabilities. Examples of interfaces tested include
Assessment of security management processes and systems helps an organization determine the efficacy of its key processes and controls. Periodic testing of key activities is an important part of management and regulatory oversight, to confirm the proper functioning of key processes, as well as identification of improvement areas.
Several factors must be considered when determining who will perform this testing, including:
These factors will also determine required testing methods, including the tools used, testing criteria, sampling, and reporting. For example, in a U.S. public company, an organization is required to self-evaluate its information security controls in specific ways and with specific auditing standards, under the auspices of the Sarbanes–Oxley (SOX) Act of 2002, also known as the Public Company Accounting Reform and Investor Protection Act.
Management must regularly review user and system accounts and related business processes and records to ensure that privileges are provisioned and de-provisioned appropriately and with proper approvals. The types of reviews include
Account management processes are discussed in more detail in Chapter 9.
Management provides resources and strategic direction for all aspects of an organization, including its information security program. As a part of its overall governance, management will need to review key aspects of the security program. There is no single way that this is done; instead, management will review the security program in the style, and with the rigor, that it reviews other key activities in the organization. In larger organizations, this review will likely be quite formal, with executive-level reports created periodically for senior management, including key activities, events, and metrics (think eye candy here). In smaller organizations, this review will probably be a lot less formal. In the smallest organizations, as well as organizations at lower security maturity levels, there may not be any management review at all. Management review often includes these activities:
The internationally recognized standard, ISO/IEC 27001, “Information technology — Security techniques — Information security management systems — Requirements,” requires that an organization’s management determine what activities and elements in the information security program need to be monitored, the methods to be used, and the individuals or teams that will review them.
Key performance and risk indicators are meaningful measurements of key activities in an information security program that can be used to help management at every level better understand how well the security program and its components are performing.
This is easier said than done; here are a few reasons why:
Organizations will typically develop metrics and key risk indicators (KRIs) around their key security-related activities to ensure that security processes are operating as expected. Metrics help identify improvement areas by alerting management to unexpected trends.
Some of the focus areas for security metrics include the following:
Key risk indicators are so-called because they are harbingers of information risk in an organization. Although the development of operational metrics is not all that difficult, security managers often struggle with the problem of developing key risk indicators that make sense to executive management. For example, the vulnerability management process involves the use of one or more vulnerability scanning tools and subsequent remediation efforts. Here, some good operational metrics include numbers of scans performed, numbers of vulnerabilities identified, and the time required to remediate identified vulnerabilities. These metrics, however, will make little sense to executive management, because they lack business context. However, one or more good key risk indicators can be derived from data in the vulnerability management process. For instance, “percentage of servers supporting manufacturing whose critical security defects are not remediated within ten days” is a great key risk indicator. This metric directly helps management understand how well the vulnerability management process is performing in a specific business context. This is also a good leading indicator of the risk of a potential breach (which exploits an unpatched, vulnerable server) that could impact business operations (manufacturing, in this case).
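Deriving such a KRI from raw vulnerability data is straightforward. The sketch below computes the example indicator from hypothetical records; the record layout and the ten-day SLA are assumptions for illustration, not a standard format.

```python
# Derive the example key risk indicator, "percentage of manufacturing
# servers whose critical vulnerabilities are not remediated within ten
# days", from hypothetical vulnerability-management records.
def kri_unremediated_pct(servers, sla_days=10):
    """`servers` maps a server name to a list of its critical findings, each
    a dict with 'days_open' (age in days) and 'remediated' (bool)."""
    if not servers:
        return 0.0
    breaching = sum(
        1 for findings in servers.values()
        if any(f["days_open"] > sla_days and not f["remediated"]
               for f in findings)
    )
    return 100.0 * breaching / len(servers)
```

Reported as a trend over time, a figure like this gives executives a business-context view of the same data that engineers see as a defect list.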
Organizations need to routinely review and test system and data backups and recovery procedures to ensure they are accurate, complete, and readable. Organizations also need to regularly test their ability to actually recover data from backup media, to ensure that they can do so in the event of a hardware malfunction or disaster.
On the surface, this seems easy enough. But, as they say, the devil’s in the details. There are several gotchas and considerations including the following:
Data recovery versus disaster recovery: There are two main reasons for backing up data:
For data recovery, you want your backup media (in whatever form) logically and physically near your production systems, so that the logistics of data recovery are simple. However, disaster recovery requires backup media to be far away from the primary processing site so that it is not involved in the same natural disaster. These two are at odds with one another; organizations sometimes solve this by creating two sets of backup media: One stays in the primary processing center, while the other is stored at a secure, offsite storage facility.
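Verifying that a restore actually worked can be partially automated by comparing cryptographic digests of the restored files against the originals. This is a minimal sketch with a hypothetical directory layout; real backup verification also needs to cover permissions, timestamps, and application-level (e.g., database) consistency.

```python
# A minimal backup-verification sketch: confirm that restored files are
# byte-for-byte identical to the originals by comparing SHA-256 digests.
import hashlib
from pathlib import Path

def sha256_of(path):
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def verify_restore(original_dir, restored_dir):
    """Return relative paths whose restored copy is missing or differs."""
    original_dir, restored_dir = Path(original_dir), Path(restored_dir)
    problems = []
    for src in original_dir.rglob("*"):
        if src.is_file():
            rel = src.relative_to(original_dir)
            dst = restored_dir / rel
            if not dst.is_file() or sha256_of(src) != sha256_of(dst):
                problems.append(str(rel))
    return problems
```

Run against a periodic test restore, an empty result list is evidence (not just hope) that the backups are readable and complete.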
Organizations need to measure the participation in and effectiveness of security training and awareness programs. This will ensure that individuals at all levels in the organization understand how to respond to new and evolving threats and vulnerabilities. Security awareness training is discussed in Chapter 3.
Organizations need to periodically review and test their disaster recovery (DR) and business continuity (BC) plans, to determine whether recovery plans are up-to-date and will result in the successful continuation of critical business processes in the event of a disaster. Disaster recovery and business continuity plan development and testing are discussed in Chapters 3 and 9.
Various systems and tools are capable of producing volumes of log and testing data. Without proper analysis and interpretation, these reports are useless or may be used out of context. Security professionals must be able to analyze log and test data, and report this information in meaningful ways, so that senior management can understand organizational risks and make informed security decisions.
Often this requires that test output and reports be developed for different audiences with information in a form that is useful to them. For example, the output of a vulnerability scan report, with its lists of IP addresses, DNS names, and vulnerabilities with their respective Common Vulnerabilities and Exposures (CVE) identifiers and CVSS scores, would be useful to system engineers and network engineers, who would use such reports as lists of individual defects to be fixed. But give that report to a senior executive, and they’ll have little idea what it’s about or what it means in business terms. For senior executives, vulnerability scan data would be rolled up into meaningful business metrics and key risk indicators to inform senior management of any appreciable changes in risk levels.
The key here for information security professionals is knowing the meaning of data and transforming it for various purposes and different audiences. Security professionals who do this well are more easily able to obtain funding for additional tools and staff. This is because they’re able to articulate the need for resources in business terms.
Auditing is the process of examining systems and/or business processes to ensure that they’ve been properly designed, are being properly used, and are considered effective. Audits are frequently performed by an independent third party or an independent group within an organization. This helps to ensure that the audit results are accurate and are not biased because of organizational politics or other circumstances.
Audits are frequently performed to ensure an organization is in compliance with business or security policies and other requirements that the business may be subject to. These policies and requirements can include government laws and regulations, legal contracts, and industry or trade group standards and best practices.
The major factors in play for internal and external audits include
There are three main contexts for audits of information systems and related processes: