Chapter 10

System and Network Assessments

Abstract

This chapter provides an overview of the key elements of technical security testing and assessment, with emphasis on specific techniques, their benefits and limitations, and recommendations for their use in testing and evaluating system and network configurations.

Keywords

network assessment
system assessment
penetration testing
vulnerability scanning
The object of any testing event or activity is to evaluate the item under test against a set of external criteria, verifying and validating that the item meets those criteria. The needs for running today's complex and disparate systems safely and securely are many and varied, and the requirements for testing and evaluating these systems find uses in various areas, including:
Risk analysis: Risk evaluations often require that identified risks be rated according to the impact and potential harm they could cause to the organization. These evaluations often cover the controls and the methodology of their implementation, which can be determined by analyzing the effectiveness of the installed controls through the use of automated toolsets and scanners.
Assessment: There are many ongoing needs throughout the federal government to test the systems that process agency information for functionality and operations. These assessments include testing after repairs to the system have been completed, testing when some major event or incident has called the security of the system into question, evaluating the system after an external analysis indicates some anomalous component, and examining the system when warranted by a reported condition or requested by senior leadership.
Authorization: Under FISMA, each system that is projected to be on, or is already on, a federal backbone or network is required to be reviewed and analyzed for risks, evaluated and tested to ensure the security controls are working correctly, and then assessed to ensure the system is functioning at an acceptable level of risk and can operate relatively securely. These authorization efforts are the basis for the Risk Management Framework criteria for federal Automated Information Systems (AIS) as defined in OMB Circular A-130, and they are closely adhered to by federal agencies and authorizing officials. The independent testing needed for authorization of both Major Applications and General Support Systems, such as networks and data centers, provides many opportunities for testing efforts with the various methods and techniques required by the type of system being evaluated.
Security architecture validation: Within each network are the various components, pieces of hardware, appliances, and software applications which comprise the network-based and system-based security controls. Each of these items is designed to provide some level of security for the system or network component it is protecting. However, there are often areas of interface and interconnections between or among components wherein protection is also needed and required. The designs and constructs for these areas are found in the security architecture documentation and drawings. Security architecture is an architectural subset under the enterprise architecture methodology which has developed over the past 15–20 years. Security architecture documents will include the reference models for the technical and business processes, the conceptual and actual drawings of the security processes for the network or system under review, and the various defined information types used within the system or network.
Policy development support: One of the starting points for any assessment is to verify and validate the overarching corporate or organizational structures for security as they are implemented: the security policies. Each organization needs a policy document which covers the security, privacy, and liability needs of the organization with respect to the legal and privacy requirements of the people and the information the organization uses and retains. A multitude of privacy and legal requirements, regulations, and industry standards provide guidance for the use, retention, transmittal, and storage of these types of data and actions. Assessors need to review the organization's policy documents to ensure compliance with these various statutory and regulatory needs. As part of the review, assessors should also review the policy development process to verify the organization's efforts to stay current with its regulatory environment and security needs.
Develop a cohesive, well-thought-out operational security testing program: Each organization carries out many testing and evaluation events throughout the year, and this program needs to be reviewed and validated. This validation ensures the program is providing the right level of information to the decision makers in the organization, so they make their risk-based decisions fully informed rather than with only partial data and incomplete testing results. Since these decisions are critical to the organization's business objectives or missions, it is very important to verify the completeness of the testing, the validity of the tests conducted, and the full scope of the evaluation procedures conducted by the organization.

The benefits of conducting the assessment and test program in a comprehensive and structured way include the following:

Provides consistency and structure
Minimizes testing risks
Expedites transition of new staff
Addresses resource constraints
Reuses resources
Decreases time required
Reduces cost

Because information security assessment requires resources such as time, staff, hardware, and software, resource availability is often a limiting factor in the type and frequency of security assessments. Evaluating the types of security tests and examinations the organization will execute, developing an appropriate methodology, identifying the resources required, and structuring the assessment process to support expected requirements can mitigate the resource challenge. This gives the organization the ability to reuse pre-established resources such as trained staff and standardized testing platforms; decreases time required to conduct the assessment and the need to purchase testing equipment and software; and reduces overall assessment costs.2

Each benefit has its own value, and the accumulated gains add up to full and complete coverage of all areas. Together they assure the senior leadership that the program's results do indeed give the decision makers the information on identified risks, and the treatments for those risks, that they need to make risk-based decisions about operating and running these systems safely and securely.

Security testing program within federal agencies: Under FISMA requirements, each federal agency is required to evaluate its information systems for security on an annual and on a triennial basis to ensure their viability to keep information secure and to operate at an acceptable level of risk.
The annual requirement covers the need to make sure the system is operating as intended and to check the current level of security for adequacy in light of the current threats and vulnerabilities of operating the system. This process has become standardized, with OMB providing the listing of the families of security controls to be reviewed and tested each year, alongside the high-volatility controls and any recently installed controls and control fixes the organization has implemented.
The triennial requirement is for the organization to completely reassess and retest all security components and controls for each system to verify and validate the full scope of control implementation with respect to the security requirements of confidentiality, integrity, and availability for the system.
Testing purposes: The actual testing provides multiple important and positive results when conducted in conjunction with the other parts of the security program and activities. The results often document the purposes behind the testing and include the following areas:
Exploitable flaws: Weaknesses and flaws within the system are often found by testing the system and its security controls. This process is critical to the organization since it is the main way these flaws can be determined, remediated, and repaired to keep the system secure, and to keep the people utilizing the system safe and on track as they conduct the business it supports.
Understanding, calibrating, and documenting the operational security posture: Testing and evaluation provide a means to identify, adjust, and fully document the system, its operating environment, the interfaces to and from the system, and its operating status at the point in time of the test event. As a result of the assessment, the system and its documentation are verified, corrected, and provided to the recipient of the test results.
Improving the security posture: The major component for testing is to produce an improved and viable security posture for the system under test. This security result is found in the recommendations of the assessor as documented in the Security Assessment Report (SAR) defined later in this book. The improvement of the security is the goal of any testing event and the testing provides identified areas for improvement.
Routine and integral part of the system and network operations: In today's world, with constantly changing operating environments, threat sources constantly changing their methods and techniques of attack, and ever-changing vulnerability flaws and weaknesses in applications and operating systems, ongoing assessments are necessary to give system owners and executives information on the current risks and security of their systems. The following areas and focus points are critical to using the assessment methods mentioned herein for operational utilization:
Most important systems first: The means by which your business makes money (if it is a for-profit corporation) or your organization delivers its service (if it is a not-for-profit organization) are vital to completing the business objectives and/or mission of your company or organization. The most critical systems supporting those objectives therefore need to be available and active at all times the organization is conducting its business, and the assessment of these critical systems and their risks and security is vital to the organization and its normal operations.
- Since all systems within the organization are considered vital to the organization and its business processes, evaluating the most critical systems first provides a baseline that sets the core security and privacy assurance for the organization.
- Always test and evaluate the critical, most important systems first to ensure the business continuity/COOP efforts are focused correctly, and then move on to the essential systems. The organization's business impact analysis (BIA) is a good place to start, since that is where the organization itself has identified the systems most vital to the mission and/or business processes of the unit, component, or agency.
Use caution when testing: During the course of testing controls and networks, there are instances where the automated tools used can interrupt the processing or operations of the system being reviewed. Be careful and understand each tool and its capabilities so as not to cause an unintended denial of service (DoS) or other interruption of activities on the system.
Security policy accurately reflects the organization's needs: Reviewing the organizational policies and standards is one of the first and most important steps in conducting any assessment. I have often heard that "security controls are implemented to support security policies" and this is very true. Each policy, as defined by the management and security portions of the agency, provides the basis and foundation for every control implemented in the system. Therefore, these policies set the core requirements for security and assurance which each user and customer of the system expects to be met by the system in its use and implementation. The basic policy sets out the needs, requirements, and guidance for users of the system.
- Maximize use of common controls: The agency’s use of common controls provides the assessor the basic viewpoint of security within the organization. If there are many Common Control Providers which follow the basic requirements for data centers, physical and environmental security, personnel, facilities maintenance and management, incident response, and other common areas, then the senior management of the organization is approaching security in a holistic and standardized manner which provides common ground for each department to base their risk and security approaches on for continued assurance and trustworthiness.
- Share assessment results: The continued sharing of assessment results within an organization provides the senior leadership with an updated and current view of the risks the organization is currently facing during the accomplishment of its mission or business processes. This is important to maintain the organizational risk tolerance in light of the changing threats to the world and constant vulnerabilities being identified for each type of system and operating system.
- Develop organization-wide procedures: When an organization provides the various departments with common security procedures for standardized activities, it creates a "mind-set" for the use and implementation of security that results in industry "best practices" being applied as implementation guidance. These procedures provide the individual security professional and the agencies with the methods and means for conducting the security operations of the unit in accordance with both the agency requirements and the external compliance requirements all agencies must meet in today's world.
- Provide organization-wide tools, templates, and techniques: By developing and implementing standard tools, templates, and security techniques, the organization is providing the best practice approach to security and assurance. With templates for forms, plans, and reports, the unit personnel can identify the critical information components and metrics needed to keep focus on during the course of their daily security activities. By defining standard tools for use, the unit is giving the means for continued identification and classification of the data elements needed to provide the senior leadership with the continuous monitoring points for ongoing risk evaluations and decision making.
Security testing integrated into the risk management process: The FISMA-mandated and NIST-designed Risk Management Framework approach to security assurance and evaluation for USG systems demands that security component and system testing be conducted on an ongoing basis throughout the life cycle of the system under evaluation. Testing and evaluation under the Risk Management Framework provide the critical independent input the authorizing official uses in risk-based decision making, to ensure the operation and maintenance of the system under evaluation is actually safe and secure in the normal course of activities.
Ensure system administrators and network managers are trained and capable: One of the most important areas of continued security for each system is making sure the elevated-privilege account holders for the system are properly trained in the performance of their daily activities. Insider threat remains among the highest-impact areas of potential issues and security concerns, and by properly training these elevated-privilege users, the organization reduces the likelihood of misuse and abuse.
Keep systems up-to-date with patches: Systems today have many areas of potential flaws and weaknesses in the operating systems and applications they deploy and use. Over the past 40 years, the software industry has developed a methodology wherein the end user is the tester of the software, rather than the vendor developing its own testing program. All major software vendors utilize this construct, so we need to be constantly vigilant in keeping software up-to-date with its patches.
Capabilities and limitations of vulnerability testing: Vulnerability testing on systems and networks is a common best practice for organizations to ensure their systems are not vulnerable to some exposure or exploit in the software and systems they employ. Remember, however, that the vulnerability scanning tools currently used throughout the industry do not account for external mitigation of system weaknesses. For example, a security flaw commonly identified on a system is a self-signed SSL certificate, one not issued by a provider external to the organization. If the organization utilizes externally issued SSL certificates and the encryption processes are controlled at the entrance to the network, this flaw is not actually visible or exploitable from outside the network; the weakness is therefore minor in nature since it is handled externally to the identified server.

800-115 introduction

NIST produced SP 800-115 in September 2008 to give guidance to federal agencies in the conduct of testing events for their systems and networks. The intent of this document is as follows:

The purpose of this document is to provide guidelines for organizations on planning and conducting technical information security testing and assessments, analyzing findings, and developing mitigation strategies. It provides practical recommendations for designing, implementing, and maintaining technical information security testing and assessment processes and procedures, which can be used for several purposes—such as finding vulnerabilities in a system or network and verifying compliance with a policy or other requirements. This guide is not intended to present a comprehensive information security testing or assessment program, but rather an overview of the key elements of technical security testing and assessment with emphasis on specific techniques, their benefits and limitations, and recommendations for their use.3

The big picture, the main purpose for assessing, evaluating, and testing systems, is clearly defined as follows:

This document is a guide to the basic technical aspects of conducting information security assessments. It presents technical testing and examination methods and techniques that an organization might use as part of an assessment, and offers insights to assessors on their execution and the potential impact they may have on systems and networks. For an assessment to be successful and have a positive impact on the security posture of a system (and ultimately the entire organization), elements beyond the execution of testing and examination must support the technical process. Suggestions for these activities—including a robust planning process, root cause analysis, and tailored reporting—are also presented in this guide.

The processes and technical guidance presented in this document enable organizations to:

Develop information security assessment policy, methodology, and individual roles and responsibilities related to the technical aspects of assessment
Accurately plan for a technical information security assessment by providing guidance on determining which systems to assess and the approach for assessment, addressing logistical considerations, developing an assessment plan, and ensuring legal and policy considerations are addressed
Safely and effectively execute a technical information security assessment using the presented methods and techniques, and respond to any incidents that may occur during the assessment
Appropriately handle technical data (collection, storage, transmission, and destruction) throughout the assessment process
Conduct analysis and reporting to translate technical findings into risk mitigation actions that will improve the organization’s security posture.

The information presented in this publication is intended to be used for a variety of assessment purposes. For example, some assessments focus on verifying that a particular security control (or controls) meets requirements, while others are intended to identify, validate, and assess a system’s exploitable security weaknesses. Assessments are also performed to increase an organization’s ability to maintain a proactive computer network defense. Assessments are not meant to take the place of implementing security controls and maintaining system security.4

Now, there are many reasons to conduct technical testing and evaluation activities within an organization or agency. Within the scope of technical testing the criteria for assessments include:
Internet change: The incredible changes which have occurred across the ubiquitous internet in the past several years have created a vast array of new capabilities, technologies, attack methods, and means for exploitation that were never even imagined by the original designers. The full scope of the protocols, services, and activities available today on the internet is staggering. You can accomplish virtually any task or perform any endeavor entirely on the internet, rapidly and completely.
Intruder attacks: Today the methods and techniques of attack against anyone using the internet are extremely varied and complex. Often many organizations and individuals have no knowledge that they and their data have been compromised and exfiltrated from their systems. The incredible proliferation of malware across the internet has produced a virtual “Wild West” of attackers, “botnets,” cybercrime organizations, and “monetization” of many data types which have never been subjected to these uses in the past. “Hacker” tools and tactics have dramatically increased in their use and the standard internet user is the constant victim of these efforts to gain the money and the resultant return from these attacks.
Powerful systems today: The advances in technology for both hardware and software in modern PCs and laptops have provided a dramatic expansion of options, available resources, and capabilities for these machines. Additionally, with these technological advances, the network computing components (routers, switches, virtualization, cloud, etc.) have added to the capabilities of organizations and agencies to provide services never thought of before. Many of these systems are well in advance of the computing machines of just 6 or 8 years ago and give today's attack-minded hacker many new methods and techniques.
Complex system administration: With the technological advances mentioned above, the administration of these machines and systems has dramatically advanced as well. Multiple servers, with many inputs and outputs to systems both inside a network and outside the organizational boundaries, routinely accept log-ins from multiple users literally all over the world. The control and review of these very active systems has become a full-time job for many system administrators.
Reasons for conducting technical and nontechnical testing on today’s systems are as follows:
Highly cost-effective in preventing incidents and uncovering unknown vulnerabilities: Testing can provide a detailed review of the systems and their vulnerability to both inside and outside attack techniques. This process leads to preventive measures being taken by the organization which both is cost-effective in result and often uncovers previously unknown (to the organization) issues and potential vulnerabilities in the systems under test.
Testing – most conclusive determinant: Testing provides security artifacts and results documentation which can support evaluation efforts, audit findings, and control recommendations with definitive, objective documentation across the security areas of confidentiality, integrity, availability, authentication, and reliability. The objective, scientific basis for the results provides the independent reporting the organizational decision makers need to make their risk-based decisions on the viability and risk management practices for the system and the agency.
Methods for achieving the mission/business goals: Testing and evaluations support the operations of systems in the areas of verifying configuration changes, providing realistic results in an operational environment, and ensuring the security and safety of the system is being maintained continuously as it is providing the operating results expected and desired by the organization. By performing ongoing testing, the organization is deriving continuous compliance with external requirements and standards.

Assessment techniques

SP 800-115 defines three basic techniques for technical assessments. They are as follows:
Review Techniques. These are examination techniques used to evaluate systems, applications, networks, policies, and procedures to discover vulnerabilities, and are generally conducted manually. They include documentation, log, rule-set, and system configuration review; network sniffing; and file integrity checking.
Target Identification and Analysis Techniques. These testing techniques can identify systems, ports, services, and potential vulnerabilities, and may be performed manually but are generally performed using automated tools. They include network discovery, network port and service identification, vulnerability scanning, wireless scanning, and application security examination.
Target Vulnerability Validation Techniques. These testing techniques corroborate the existence of vulnerabilities, and may be performed manually or by using automatic tools, depending on the specific technique used and the skill of the test team. Target vulnerability validation techniques include password cracking, penetration testing, social engineering, and application security testing.5
SP 800-115 goes on and adds the nontechnical means for assessments as follows:

Additionally, there are many non-technical techniques that may be used in addition to or instead of the technical techniques.

One example is physical security testing, which confirms the existence of physical security vulnerabilities by attempting to circumvent locks, badge readers, and other physical security controls, typically to gain unauthorized access to specific hosts.
Another example of a non-technical technique is manual asset identification. An organization may choose to identify assets to be assessed through asset inventories, physical walkthroughs of facilities, and other non-technical means, instead of relying on technical techniques for asset identification.6
SP 800-115 goes on to explain:

Examinations primarily involve the review of documents such as policies, procedures, security plans, security requirements, standard operating procedures, architecture diagrams, engineering documentation, asset inventories, system configurations, rule-sets, and system logs. They are conducted to determine whether a system is properly documented, and to gain insight on aspects of security that are only available through documentation. This documentation identifies the intended design, installation, configuration, operation, and maintenance of the systems and network, and its review and cross-referencing ensures conformance and consistency. For example, an environment’s security requirements should drive documentation such as system security plans and standard operating procedures—so assessors should ensure that all plans, procedures, architectures, and configurations are compliant with stated security requirements and applicable policies. Another example is reviewing a firewall’s rule-set to ensure its compliance with the organization’s security policies regarding Internet usage, such as the use of instant messaging, peer-to-peer (P2P) file sharing, and other prohibited activities.

Examinations typically have no impact on the actual systems or networks in the target environment aside from accessing necessary documentation, logs, or rule-sets. (One passive testing technique that can potentially impact networks is network sniffing, which involves connecting a sniffer to a hub, tap, or span port on the network. In some cases, the connection process requires reconfiguring a network device, which could disrupt operations.) However, if system configuration files or logs are to be retrieved from a given system such as a router or firewall, only system administrators should retrieve them, to ensure that the files are not inadvertently modified or deleted.

Testing involves hands-on work with systems and networks to identify security vulnerabilities, and can be executed across an entire enterprise or on selected systems. The use of scanning and penetration techniques can provide valuable information on potential vulnerabilities and predict the likelihood that an adversary or intruder will be able to exploit them. Testing also allows organizations to measure levels of compliance in areas such as patch management, password policy, and configuration management.

Although testing can provide a more accurate picture of an organization’s security posture than what is gained through examinations, it is more intrusive and can impact systems or networks in the target environment. The level of potential impact depends on the specific types of testing techniques used, which can interact with the target systems and networks in various ways—such as sending normal network packets to determine open and closed ports, or sending specially crafted packets to test for vulnerabilities. Any time that a test or tester directly interacts with a system or network, the potential exists for unexpected system halts and other denial of service conditions. Organizations should determine their acceptable levels of intrusiveness when deciding which techniques to use. Excluding tests known to create denial of service conditions and other disruptions can help reduce these negative impacts.

Testing does not provide a comprehensive evaluation of the security posture of an organization, and often has a narrow scope because of resource limitations—particularly in the area of time. Malicious attackers, on the other hand, can take whatever time they need to exploit and penetrate a system or network. Also, while organizations tend to avoid using testing techniques that impact systems or networks, attackers are not bound by this constraint and use whatever techniques they feel necessary. As a result, testing is less likely than examinations to identify weaknesses related to security policy and configuration. In many cases, combining testing and examination techniques can provide a more accurate view of security.7

Network testing purpose and scope

Networking evaluations and testing areas of focus include the following components, equipment, and devices:
Firewalls
Routers and switches
Network-perimeter security systems (Intrusion Detection System (IDS))
Application servers
Other servers such as for Domain Name System (DNS) or directory servers or file servers (Common Internet File System (CIFS)/Server Message Block (SMB), Network File System (NFS), File Transfer Protocol (FTP), etc.)
These various components are often standardized by the organization with common configurations, formal change management request actions, and continued monitoring and maintenance, all similar to computing equipment residing on the network such as file and application servers and workstations and laptops. Within this normal network view, there are common access controls utilized through the Access Control List (ACL) implementation.

ACL Reviews

A rule-set is a collection of rules or signatures that network traffic or system activity is compared against to determine what action to take—for example, forwarding or rejecting a packet, creating an alert, or allowing a system event. Review of these rule-sets is done to ensure comprehensiveness and identify gaps and weaknesses on security devices and throughout layered defenses such as network vulnerabilities, policy violations, and unintended or vulnerable communication paths. A review can also uncover inefficiencies that negatively impact a rule-set’s performance.

Rule-sets to review include network- and host-based firewall and IDS/IPS rule-sets, and router access control lists. The following list provides examples of the types of checks most commonly performed in rule-set reviews:

1. For router access control lists
Each rule is still required (for example, rules that were added for temporary purposes are removed as soon as they are no longer needed)
Only traffic that is authorized per policy is permitted, and all other traffic is denied by default
2. For firewall rule-sets
Each rule is still required
Rules enforce least privilege access, such as specifying only required Internet Protocol (IP) addresses and ports
More specific rules are triggered before general rules
There are no unnecessary open ports that could be closed to tighten the perimeter security
The rule-set does not allow traffic to bypass other security defenses
For host-based firewall rule-sets, the rules do not indicate the presence of backdoors, spyware activity, or prohibited applications such as peer-to-peer file sharing programs
3. For IDS/IPS rule-sets
Unnecessary signatures have been disabled or removed to eliminate false positives and improve performance
Necessary signatures are enabled and have been fine-tuned and properly maintained.8
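
To make these checks concrete, the following is a minimal sketch in Python of two of them, least-privilege permits and a final default-deny rule, applied to a hypothetical simplified rule format. Real devices (iptables, Cisco IOS ACLs, host firewalls) each have their own syntax and would need a proper parser.

# Sketch: flag two common rule-set weaknesses in a simplified,
# hypothetical firewall rule format.

RULES = [
    {"action": "permit", "src": "10.0.0.0/8", "dst_port": 443},
    {"action": "permit", "src": "any", "dst_port": "any"},   # overly broad
    {"action": "deny", "src": "any", "dst_port": "any"},     # default deny
]

def review(rules):
    findings = []
    for i, rule in enumerate(rules, start=1):
        if (rule["action"] == "permit"
                and rule["src"] == "any"
                and rule["dst_port"] == "any"):
            findings.append(f"Rule {i} permits all traffic; violates least privilege")
    if not rules or rules[-1] != {"action": "deny", "src": "any", "dst_port": "any"}:
        findings.append("Rule-set does not end with a default-deny rule")
    return findings

for finding in review(RULES):
    print(finding)

A real review would also check rule ordering (specific rules before general ones) and cross-reference each permit rule against the written security policy.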

System-Defined Reviews

There are many types of configurations for each type of computer and system. With these varied types comes a strong need to have each system, machine, or device under standard configuration control and management. This process places the system under test in a relatively controlled environment with respect to its configuration files, software loads, and other areas. The machines and devices which should be under this type of control include:
Computer system (mainframe, minicomputer)
Network system (LAN)
Network domain
Host (computer system)
Network nodes, routers, switches, and firewalls
Network and/or computer application on each computer system

System configuration review is the process of identifying weaknesses in security configuration controls, such as systems not being hardened or configured according to security policies. For example, this type of review will reveal unnecessary services and applications, improper user account and password settings, and improper logging and backup settings. Examples of security configuration files that may be reviewed are Windows security policy settings and Unix security configuration files such as those in /etc.

Assessors using manual review techniques rely on security configuration guides or checklists to verify that system settings are configured to minimize security risks. To perform a manual system configuration review, assessors access various security settings on the device being evaluated and compare them with recommended settings from the checklist. Settings that do not meet minimum security standards are flagged and reported.

Automated tools are often executed directly on the device being assessed, but can also be executed on a system with network access to the device being assessed. While automated system configuration reviews are faster than manual methods, there may still be settings that must be checked manually. Both manual and automated methods require root or administrator privileges to view selected security settings.

Generally it is preferable to use automated checks instead of manual checks whenever feasible. Automated checks can be done very quickly and provide consistent, repeatable results. Having a person manually checking hundreds or thousands of settings is tedious and error-prone.9
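
As a sketch of the automated approach, the short Python fragment below compares collected settings against a baseline checklist. The setting names and required values are illustrative only, standing in for a published benchmark such as a DISA STIG or CIS baseline.

# Sketch: automated configuration review against a hypothetical checklist.
BASELINE = {
    "PasswordMinLength": lambda v: isinstance(v, int) and v >= 12,
    "AuditLoggingEnabled": lambda v: v is True,
    "TelnetServiceEnabled": lambda v: v is False,
}

# Settings as they might be gathered from the device being assessed.
collected = {
    "PasswordMinLength": 8,
    "AuditLoggingEnabled": True,
    "TelnetServiceEnabled": True,
}

for setting, check in BASELINE.items():
    value = collected.get(setting)
    status = "PASS" if value is not None and check(value) else "FAIL"
    print(f"{status}: {setting} = {value}")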

Testing roles and responsibilities

Obtain approval from the CIO/upper management.
Alert security officers, management, and users.
Avoid confusion and unnecessary expense.
Alert local law enforcement officials, if necessary.

Security testing techniques

1. Network scanning
2. Vulnerability scanning
3. Password cracking
4. Log review
5. Integrity checkers
6. Virus detection
7. War dialing
8. War driving
9. Penetration testing
1. Network scanning:
a. Network scanning (sniffing):

Network sniffing is a passive technique that monitors network communication, decodes protocols, and examines headers and payloads to flag information of interest. Besides being used as a review technique, network sniffing can also be used as a target identification and analysis technique. Reasons for using network sniffing include the following:

Capturing and replaying network traffic
Performing passive network discovery (e.g., identifying active devices on the network)
Identifying operating systems, applications, services, and protocols, including unsecured (e.g., telnet) and unauthorized (e.g., peer-to-peer file sharing) protocols
Identifying unauthorized and inappropriate activities, such as the unencrypted transmission of sensitive information
Collecting information, such as unencrypted usernames and passwords.

Network sniffing has little impact on systems and networks, with the most noticeable impact being on bandwidth or computing power utilization. The sniffer—the tool used to conduct network sniffing—requires a means to connect to the network, such as a hub, tap, or switch with port spanning. Port spanning is the process of copying the traffic transmitted on all other ports to the port where the sniffer is installed. Organizations can deploy network sniffers in a number of locations within an environment. These commonly include the following:

At the perimeter, to assess traffic entering and exiting the network
Behind firewalls, to assess that rule-sets are accurately filtering traffic
Behind IDSs/IPSs, to determine if signatures are triggering and being responded to appropriately
In front of a critical system or application to assess activity
On a specific network segment, to validate encrypted protocols.

One limitation to network sniffing is the use of encryption. Many attackers take advantage of encryption to hide their activities—while assessors can see that communication is taking place, they are unable to view the contents. Another limitation is that a network sniffer is only able to sniff the traffic of the local segment where it is installed. This requires the assessor to move it from segment to segment, install multiple sniffers throughout the network, and/or use port spanning. Assessors may also find it challenging to locate an open physical network port for scanning on each segment. In addition, network sniffing is a fairly labor-intensive activity that requires a high degree of human involvement to interpret network traffic.10
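
As a sketch of passive discovery by sniffing, the fragment below records the source address of each observed IP packet. It assumes the third-party scapy package, a capture-capable (root/administrator) account, and, of course, authorization to monitor the segment.

# Sketch: passive host discovery with a network sniffer (requires scapy
# and capture privileges; run only on networks you are authorized to monitor).
from scapy.all import IP, sniff

seen_hosts = set()

def note_host(pkt):
    # Record the source address of each observed IP packet.
    if IP in pkt and pkt[IP].src not in seen_hosts:
        seen_hosts.add(pkt[IP].src)
        print("active host observed:", pkt[IP].src)

sniff(count=200, prn=note_host, store=False)  # watch 200 packets, then stop
print(len(seen_hosts), "hosts observed passively")

Note that, exactly as described above, this only sees hosts that happen to transmit on the local segment during the capture window.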

b. Port scanning:

Network discovery uses a number of methods to discover active and responding hosts on a network, identify weaknesses, and learn how the network operates. Both passive (examination) and active (testing) techniques exist for discovering devices on a network. Passive techniques use a network sniffer to monitor network traffic and record the IP addresses of the active hosts, and can report which ports are in use and which operating systems have been discovered on the network. Passive discovery can also identify the relationships between hosts—including which hosts communicate with each other, how frequently their communication occurs, and the type of traffic that is taking place—and is usually performed from a host on the internal network where it can monitor host communications. This is done without sending out a single probing packet. Passive discovery takes more time to gather information than does active discovery, and hosts that do not send or receive traffic during the monitoring period might not be reported.

Network port and service identification involves using a port scanner to identify network ports and services operating on active hosts—such as FTP and HTTP—and the application that is running each identified service, such as Microsoft Internet Information Server (IIS) or Apache for the HTTP service. Organizations should conduct network port and service identification to identify hosts if this has not already been done by other means (e.g., network discovery), and flag potentially vulnerable services. This information can be used to determine targets for penetration testing.

All basic scanners can identify active hosts and open ports, but some scanners are also able to provide additional information on the scanned hosts. Information gathered during an open port scan can assist in identifying the target operating system through a process called OS fingerprinting. For example, if a host has TCP ports 135, 139, and 445 open, it is probably a Windows host, or possibly a UNIX host running Samba. Other items—such as the TCP packet sequence number generation and responses to packets—also provide a clue to identifying the OS. But OS fingerprinting is not foolproof. For example, firewalls block certain ports and types of traffic, and system administrators can configure their systems to respond in nonstandard ways to camouflage the true OS.
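
The sketch below shows the simplest active form of this, a TCP connect scan of a few common ports using only the Python standard library; the target address is illustrative. Full scanners such as NMAP use raw SYN packets and hundreds of probes, and even this simple scan will appear in logs and may trigger IDS alerts, so use it only with authorization.

# Sketch: minimal TCP connect scan of a few well-known ports.
import socket

TARGET = "192.0.2.10"   # illustrative (documentation) address
PORTS = [21, 22, 25, 80, 135, 139, 443, 445]

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 when the TCP handshake completes (port open)
        if s.connect_ex((TARGET, port)) == 0:
            print(f"{TARGET}:{port} open")

Per the heuristic above, seeing ports 135, 139, and 445 open would suggest a Windows host (or possibly a UNIX host running Samba).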

c. Network services discovery:

Active discovery techniques send various types of network packets, such as Internet Control Message Protocol (ICMP) pings, to solicit responses from network hosts, generally through the use of an automated tool. One activity, known as OS fingerprinting, enables the assessor to determine the system’s OS by sending it a mix of normal, abnormal, and illegal network traffic. Another activity involves sending packets to common port numbers to generate responses that indicate the ports are active. The tool analyzes the responses from these activities, and compares them with known traits of packets from specific operating systems and network services—enabling it to identify hosts, the operating systems they run, their ports, and the state of those ports. This information can be used for purposes that include gathering information on targets for penetration testing, generating topology maps, determining firewall and IDS configurations, and discovering vulnerabilities in systems and network configurations.

Network discovery tools have many ways to acquire information through scanning. Enterprise firewalls and intrusion detection systems can identify many instances of scans, particularly those that use the most suspicious packets (e.g., SYN/FIN scan, NULL scan). Assessors who plan on performing discovery through firewalls and intrusion detection systems should consider which types of scans are most likely to provide results without drawing the attention of security administrators, and how scans can be conducted in a more stealthy manner (such as more slowly or from a variety of source IP addresses) to improve their chances of success. Assessors should also be cautious when selecting types of scans to use against older systems, particularly those known to have weak security, because some scans can cause system failures. Typically, the closer the scan is to normal activity, the less likely it is to cause operational problems.

Network discovery may also detect unauthorized or rogue devices operating on a network. For example, an organization that uses only a few operating systems could quickly identify rogue devices that utilize different ones. Once a wired rogue device is identified, it can be located by using existing network maps and information already collected on the device’s network activity to identify the switch to which it is connected. It may be necessary to generate additional network activity with the rogue device—such as pings—to find the correct switch. The next step is to identify the switch port on the switch associated with the rogue device, and to physically trace the cable connecting that switch port to the rogue device.

A number of tools exist for use in network discovery, and it should be noted that many active discovery tools can be used for passive network sniffing and port scanning as well. Most offer a graphical user interface (GUI), and some also offer a command-line interface. Command-line interfaces may take longer to learn than GUIs because of the number of commands and switches that specify what tests the tool should perform and which an assessor must learn to use the tool effectively. Also, developers have written a number of modules for open source tools that allow assessors to easily parse tool output. For example, combining a tool’s Extensible Markup Language (XML) output capabilities, a little scripting, and a database creates a more powerful tool that can monitor the network for unauthorized services and machines. Learning what the many commands do and how to combine them is best achieved with the help of an experienced security engineer. Most experienced IT professionals, including system administrators and other network engineers, should be able to interpret results, but working with the discovery tools themselves is more efficiently handled by an engineer.

Some of the advantages of active discovery, as compared to passive discovery, are that an assessment can be conducted from a different network and usually requires little time to gather information. In passive discovery, ensuring that all hosts are captured requires traffic to hit all points, which can be time-consuming—especially in larger enterprise networks.

A disadvantage to active discovery is that it tends to generate network noise, which sometimes results in network latency. Since active discovery sends out queries to receive responses, this additional network activity could slow down traffic or cause packets to be dropped in poorly configured networks if performed at high volume. Active discovery can also trigger IDS alerts, since unlike passive discovery it reveals its origination point. The ability to successfully discover all network systems can be affected by environments with protected network segments and perimeter security devices and techniques. For example, an environment using network address translation (NAT)—which allows organizations to have internal, non-publicly routed IP addresses that are translated to a different set of public IP addresses for external traffic—may not be accurately discovered from points external to the network or from protected segments. Personal and host-based firewalls on target devices may also block discovery traffic. Misinformation may be received as a result of trying to instigate activity from devices. Active discovery presents information from which conclusions must be drawn about settings on the target network.

For both passive and active discovery, the information received is seldom completely accurate. To illustrate, only hosts that are on and connected during active discovery will be identified—if systems or a segment of the network are offline during the assessment, there is potential for a large gap in discovering devices. Although passive discovery will only find devices that transmit or receive communications during the discovery period, products such as network management software can provide continuous discovery capabilities and automatically generate alerts when a new device is present on the network. Continuous discovery can scan IP address ranges for new addresses or monitor new IP address requests. Also, many discovery tools can be scheduled to run regularly, such as once every set amount of days at a particular time. This provides more accurate results than running these tools sporadically.

Some of the tools for conducting these discovery activities include Network Mapper (NMAP) and GFI LanGuard products. Protocols which are observed during scanning and sniffing activities would include:

- Internet Protocol (IP)
- Internet Control Message Protocol (ICMP)
- User Datagram Protocol (UDP)
- Transmission Control Protocol (TCP)
d. Fingerprinting: The process of identifying the type of operating system, its current patch level and revision, along with various additional management data about the system under review, is known in this context as “fingerprinting the server.”
Knowing the version and type of a running web server allows testers to determine known vulnerabilities and the appropriate exploits to use during testing. There are several different vendors and versions of servers on the market today. Knowing the type of server that you are testing significantly helps in the testing process, and will also change the course of the test. This information can be derived by sending the server specific commands and analyzing the output, as each version of server software may respond differently to these commands. By fingerprinting the target’s server and enumerating as much information as possible, an attacker may develop an accurate attack scenario, which will effectively exploit an identified vulnerability in the software type/version being utilized by the target host. This is one of the initial steps involved in penetration testing and the outside attack tactics and techniques hackers use in attempting to gain unauthorized access to systems and servers.
As the OWASP Testing Guide defines: “By knowing how each type of web server responds to specific commands and keeping this information in a web server fingerprint database, a penetration tester can send these commands to the web server, analyze the response, and compare it to the database of known signatures. Please note that it usually takes several different commands to accurately identify the web server, as different versions may react similarly to the same command. Rarely, however, different versions react the same to all HTTP commands. So, by sending several different commands, you increase the accuracy of your guess” [1].
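The simplest instance of this is reading the HTTP Server response header, sketched below with only the Python standard library; the host name is illustrative. As the OWASP guide notes, an administrator can alter or strip this banner, so serious fingerprinting compares responses to many commands against a signature database rather than trusting a single header.

# Sketch: rudimentary web server fingerprinting via the Server header.
import http.client

HOST = "www.example.com"   # illustrative target; test only with authorization
conn = http.client.HTTPConnection(HOST, 80, timeout=5)
conn.request("HEAD", "/")
response = conn.getresponse()
print("Server header:", response.getheader("Server", "<not disclosed>"))
conn.close()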
e. Banner grabbing:
“Some scanners can help identify the application running on a particular port through a process called service identification. Many scanners use a services file that lists common port numbers and typical associated services—for example, a scanner that identifies that TCP port 80 is open on a host may report that a web server is listening at that port—but additional steps are needed before this can be confirmed. Some scanners can initiate communications with an observed port and analyze its communications to determine what service is there, often by comparing the observed activity to a repository of information on common services and service implementations. These techniques may also be used to identify the service application and application version, such as which Web server software is in use—this process is known as version scanning. A well-known form of version scanning, called banner grabbing, involves capturing banner information transmitted by the remote port when a connection is initiated. This information can include the application type, application version, and even OS type and version. Version scanning is not foolproof, because a security-conscious administrator can alter the transmitted banners or other characteristics in hopes of concealing the service’s true nature. However, version scanning is far more accurate than simply relying on a scanner’s services file.”11 An example of an NMAP output is shown as follows:
[Image: example of NMAP output]
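
Many services volunteer their banner as soon as a TCP connection is made, so a minimal banner grab needs nothing more than the standard library, as in this sketch (the target and port are illustrative):

# Sketch: generic banner grabbing over a raw TCP connection.
import socket

TARGET, PORT = "192.0.2.10", 21   # e.g., an FTP service greeting

with socket.create_connection((TARGET, PORT), timeout=3) as s:
    banner = s.recv(1024).decode(errors="replace").strip()
    print(f"{TARGET}:{PORT} -> {banner}")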

Common reasons that scanning of networks and machines is conducted include:

- Check for unauthorized hosts: Identifying machines, rogue devices, and hosts is commonly a result of scanning. This process helps create a network topology of the active devices and servers on the network, whether they were previously identified or not.
- Identify vulnerable services: Each device which is scanned provides output data of all the services, ports, and protocols that device is currently running at the time of the scan. These outputs identify the open areas of access to the device which could be exposed to outside attack or exploitation.
- Identify deviations from the allowed services defined in the organization’s security policy: The core foundation for any security activity within an organization is the enforcement of the organizational security policy, and scanning provides the technical support and evidence that this policy is correctly adhered to and in place. The scanning gives the tester proof that the security components of the policy are in effect and active for the agency or organization.
- Prepare for penetration testing: Penetration testing, as defined below, needs to be focused on the hosts, machines, and devices of the network and the system; therefore, the scanning gives the tester the right machine names, IP addresses, and services which are active. Many times, the penetration testing will identify the deficiencies in the systems by using the scan data to drill into the security of the machines and devices.
- Configure IDS: Scanning provides many areas for the security personnel to identify and isolate responses needed for tuning the network IDS (NIDS) devices to properly respond to deficiencies and behavioral patterns.

Potential recommendations for testers to identify corrective actions to results from this type of evaluation include:

Investigate and disconnect unauthorized hosts.
Disable or remove unnecessary and vulnerable services.
Modify hosts to restrict access to vulnerable services to a limited number of required hosts.
Modify enterprise firewalls to restrict outside access to known vulnerable services.
2. Vulnerability scanning:

Like network port and service identification, vulnerability scanning identifies hosts and host attributes (e.g., operating systems, applications, open ports), but it also attempts to identify vulnerabilities rather than relying on human interpretation of the scanning results. Many vulnerability scanners are equipped to accept results from network discovery and network port and service identification, which reduces the amount of work needed for vulnerability scanning. Also, some scanners can perform their own network discovery and network port and service identification. Vulnerability scanning can help identify outdated software versions, missing patches, and misconfigurations, and validate compliance with or deviations from an organization’s security policy. This is done by identifying the operating systems and major software applications running on the hosts and matching them with information on known vulnerabilities stored in the scanners’ vulnerability databases.12

Vulnerability scanning is often used to conduct the following test and evaluation activities:

a. Identify active hosts on network.
b. Define the active and vulnerable services (ports) on hosts.
c. Identify applications.
d. Identify the running operating systems.
e. Pinpoint the vulnerabilities associated with discovered OS and applications.
f. Locate misconfigured settings on servers, workstations, and network devices.
g. Track inventory and categorize assets.
h. Verify vulnerabilities against inventory.
i. Classify and rank risks.
j. Identify patches, fixes, and workarounds.
k. Rescan to validate remediation (application of patches, fixes, and workarounds).
l. Test compliance with host application usage/security policies.
m. Establish a baseline for penetration testing.

There are many books and papers available which identify the values, techniques, and tactics for scanning, so I will not go into detail on those uses of vulnerability scanning here.
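
That said, the core matching step the passage above describes is easy to illustrate: detected software and versions are looked up in a database of known vulnerabilities. The sketch below uses a toy two-entry database (the Apache entry reflects the real CVE-2021-41773; the OpenSSH entry is invented for illustration), whereas real scanners draw on large, continuously updated CVE-keyed feeds.

# Sketch: matching a detected software inventory against a (toy)
# known-vulnerability database.
VULN_DB = {
    ("Apache httpd", "2.4.49"): ["CVE-2021-41773 (path traversal)"],
    ("OpenSSH", "7.2"): ["illustrative finding, not a real CVE"],
}

inventory = [                      # as identified during service identification
    ("Apache httpd", "2.4.49"),
    ("OpenSSH", "9.3"),
]

for software, version in inventory:
    findings = VULN_DB.get((software, version), [])
    if findings:
        print(f"{software} {version}: " + ", ".join(findings))
    else:
        print(f"{software} {version}: no match in database")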

a. Potential recommendations for testers to identify corrective actions to results from this type of evaluation include:
- Upgrade or patch vulnerable systems.
- Deploy mitigating measures.
- Improve configuration management program and procedures.
- Assign a staff member to:
(i) Monitor vulnerability alerts/mailing lists.
(ii) Examine applicability to environment.
(iii) Initiate appropriate system changes.
- Modify the organization’s security policies and architecture.
3. Password cracking:

When a user enters a password, a hash of the entered password is generated and compared with a stored hash of the user’s actual password. If the hashes match, the user is authenticated. Password cracking is the process of recovering passwords from password hashes stored in a computer system or transmitted over networks. It is usually performed during assessments to identify accounts with weak passwords. Password cracking is performed on hashes that are either intercepted by a network sniffer while being transmitted across a network, or retrieved from the target system, which generally requires administrative-level access on, or physical access to, the target system. Once these hashes are obtained, an automated password cracker rapidly generates additional hashes until a match is found or the assessor halts the cracking attempt.13

a. Identifies weak passwords:

Password crackers can be run during an assessment to ensure policy compliance by verifying acceptable password composition. For example, if the organization has a password expiration policy, then password crackers can be run at intervals that coincide with the intended password lifetime. Password cracking that is performed offline produces little or no impact on the system or network, and the benefits of this operation include validating the organization’s password policy and verifying policy compliance.14

b. Stored and transmitted in encrypted form: Passwords are primarily stored in hashed (one-way encrypted) form, often in a password hash file on the server (the SAM file on Windows machines, the “/etc/passwd” file, or “/etc/shadow” on modern systems, on UNIX machines). These files are usually the first target of malicious attackers, who retrieve the hashes and crack the administrative passwords off the machine so they can then re-enter the machine, supply the correct password on the first log-in attempt, and bypass the common security control of locking out an account after three incorrect password entries.
c. Dictionary attack/hybrid attack/brute force:

One method for generating hashes is a dictionary attack, which uses all words in a dictionary or text file. There are numerous dictionaries available on the Internet that encompass major and minor languages, names, popular television shows, etc. Another cracking method is known as a hybrid attack, which builds on the dictionary method by adding numeric and symbolic characters to dictionary words. Depending on the password cracker being used, this type of attack can try a number of variations, such as using common substitutions of characters and numbers for letters (e.g., p@ssword and h4ckme). Some will also try adding characters and numbers to the beginning and end of dictionary words (e.g., password99, password$%).

Yet another password-cracking method is called the brute force method. This generates all possible passwords up to a certain length and their associated hashes. Since there are so many possibilities, it can take months to crack a password. Although brute force can take a long time, it usually takes far less time than most password policies specify for password changing. Consequently, passwords found during brute force attacks are still too weak. Theoretically, all passwords can be cracked by a brute force attack, given enough time and processing power, although it could take many years and require serious computing power. Assessors and attackers often have multiple machines over which they can spread the task of cracking passwords, which greatly shortens the time involved.13
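
To make the dictionary and hybrid methods concrete, the Python sketch below runs a tiny dictionary attack with hybrid variations against an unsalted SHA-256 hash. The three-word dictionary, the character substitution, and the suffix list are stand-ins for the large wordlists and mangling rules a real cracker such as John the Ripper or Hashcat applies, and real password stores use a variety of other hash formats.

import hashlib

def sha256_hex(text):
    return hashlib.sha256(text.encode()).hexdigest()

# Pretend this hash was retrieved from a target system or sniffed in transit.
target_hash = sha256_hex("password99")

dictionary = ["secret", "letmein", "password"]  # stand-in wordlist
suffixes = ["", "1", "99", "$%"]                # hybrid-attack additions

for word in dictionary:
    # Try the word itself plus a few common mangling-rule variants.
    for variant in (word, word.capitalize(), word.replace("a", "@")):
        for suffix in suffixes:
            candidate = variant + suffix
            if sha256_hex(candidate) == target_hash:
                print("cracked:", candidate)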

d. Theoretically all passwords are “crackable”:

Password cracking can also be performed with rainbow tables, which are lookup tables with pre-computed password hashes. For example, a rainbow table can be created that contains every possible password for a given character set up to a certain character length. Assessors may then search the table for the password hashes that they are trying to crack. Rainbow tables require large amounts of storage space and can take a long time to generate, but their primary shortcoming is that they may be ineffective against password hashing that uses salting. Salting is the inclusion of a random piece of information in the password hashing process that decreases the likelihood of identical passwords returning the same hash. Rainbow tables will not produce correct results without taking salting into account—but this dramatically increases the amount of storage space that the tables require. Many operating systems use salted password hashing mechanisms to reduce the effectiveness of rainbow tables and other forms of password cracking.15
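
The Python sketch below shows why salting defeats precomputed tables: each password receives a random salt, so identical passwords produce different stored values and a rainbow table would have to be rebuilt for every salt. It uses the standard library's PBKDF2 implementation; the iteration count is an illustrative choice, not a recommendation.

import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative work factor

def hash_password(password):
    salt = os.urandom(16)  # random per-password salt; it is stored, not secret
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, stored_digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)  # constant-time compare

salt1, digest1 = hash_password("p@ssword")
salt2, digest2 = hash_password("p@ssword")
print(digest1 == digest2)                           # False: same password, different hashes
print(verify_password("p@ssword", salt1, digest1))  # True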

e. “LanMan” password hashes:

The “LanMan” hash is a compromised password hashing function that was the primary hash that Microsoft LAN Manager and Microsoft Windows versions prior to Windows NT used to store user passwords. Support for the legacy LAN Manager protocol continued in later versions of Windows for backward compatibility, but was recommended by Microsoft to be turned off by administrators; as of Windows Vista, the protocol is disabled by default, but continues to be used by some non-Microsoft Common Internet File System (CIFS) implementations.

The LM hash is not a true one-way function, as the password can be determined from the hash because of several weaknesses in its design:

a. Passwords are limited to a maximum of only 14 characters
b. Passwords longer than 7 characters are divided into two pieces and each piece is hashed separately
c. All lower case letters in the password are changed to upper case before the password is hashed
d. The LM hash also does not use cryptographic salt, a standard technique to prevent pre-computed dictionary attacks
e. Implementation issue: since LanMan hashes change only when a user changes their password, they can be used to carry out a pass-the-hash attack.

While LAN Manager is considered obsolete and current Windows operating systems use the stronger NTLMv2 or Kerberos authentication methods, Windows systems before Windows Vista/Windows Server 2008 enabled the LAN Manager hash by default for backward compatibility with legacy LAN Manager and Windows Me or earlier clients, or legacy NetBIOS-enabled applications.16

For many years, LanMan hashes have been identified as a weak password implementation technique, yet they persist throughout the server community because administrators rarely change the server implementation and view the hashes as relatively obscure and difficult to retrieve.
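
The design weaknesses listed above are easy to see in code. The Python sketch below computes an LM hash, assuming the third-party pycryptodome package for DES; note the uppercasing, the 14-character cap, the two independently hashed halves, and the absence of any salt.

from Crypto.Cipher import DES  # pip install pycryptodome

def _des_key(half):
    # Expand 7 key bytes into the 8-byte form DES expects (parity bits unset).
    bits = int.from_bytes(half, "big")
    return bytes(((bits >> (7 * (7 - i))) & 0x7F) << 1 for i in range(8))

def lm_hash(password):
    # Weaknesses: uppercased, truncated to 14 ASCII characters, null-padded...
    data = password.upper().encode("ascii")[:14].ljust(14, b"\0")
    # ...then split into two 7-byte halves that are hashed separately, unsalted.
    halves = (data[:7], data[7:])
    return b"".join(
        DES.new(_des_key(h), DES.MODE_ECB).encrypt(b"KGS!@#$%") for h in halves
    )

# A password of seven characters or fewer leaves the second half all nulls,
# producing a fixed, well-known constant that instantly betrays its length.
print(lm_hash("passwd").hex())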

4. Log reviews:

Log review determines if security controls are logging the proper information, and if the organization is adhering to its log management policies. As a source of historical information, audit logs can be used to help validate that the system is operating in accordance with established policies. For example, if the logging policy states that all authentication attempts to critical servers must be logged, the log review will determine if this information is being collected and shows the appropriate level of detail. Log review may also reveal problems such as misconfigured services and security controls, unauthorized accesses, and attempted intrusions. For example, if an intrusion detection system (IDS) sensor is placed behind a firewall, its logs can be used to examine communications that the firewall allows into the network. If the sensor registers activities that should be blocked, it indicates that the firewall is not configured securely.

Examples of log information that may be useful when conducting technical security assessments include:

Authentication server or system logs may include successful and failed authentication attempts.
System logs may include system and service startup and shutdown information, installation of unauthorized software, file accesses, security policy changes, account changes (e.g., account creation and deletion, account privilege assignment), and privilege use.
Intrusion detection and prevention system logs may include malicious activity and inappropriate use.
Firewall and router logs may include outbound connections that indicate compromised internal devices (e.g., rootkits, bots, Trojan horses, spyware).
Firewall logs may include unauthorized connection attempts and inappropriate use.
Application logs may include unauthorized connection attempts, account changes, use of privileges, and application or database usage information.
Antivirus logs may include update failures and other indications of outdated signatures and software.
Security logs, in particular patch management and some IDS and intrusion prevention system (IPS) products, may record information on known vulnerable services and applications.

Manually reviewing logs can be extremely time-consuming and cumbersome. Automated audit tools are available that can significantly reduce review time and generate predefined and customized reports that summarize log contents and track them to a set of specific activities. Assessors can also use these automated tools to facilitate log analysis by converting logs in different formats to a single, standard format for analysis. In addition, if assessors are reviewing a specific action—such as the number of failed logon attempts in an organization—they can use these tools to filter logs based on the activity being checked.17
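
As a small illustration of that kind of filtering, the Python sketch below counts failed logon attempts in a syslog-style authentication log. The regular expression matches the message format OpenSSH commonly writes; the log path is an assumption and varies by platform and configuration.

import re
from collections import Counter

# Matches OpenSSH-style messages such as:
#   Failed password for invalid user admin from 203.0.113.7 port 52414 ssh2
FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def failed_logons(log_path):
    counts = Counter()
    with open(log_path, errors="replace") as log:
        for line in log:
            match = FAILED.search(line)
            if match:
                counts[match.groups()] += 1  # key is (user, source address)
    return counts

for (user, source), total in failed_logons("/var/log/auth.log").most_common(10):
    print(f"{total:5d} failed attempts for {user!r} from {source}")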

Log management and analysis should be conducted frequently on major servers, firewalls, IDS devices, and other applications. Logs that should be considered for use and review in any log management system include:

a. Firewall logs
b. IDS logs
c. Server logs
d. Other logs that are collecting audit data – especially network devices
e. Snort – logs from the free, open-source IDS sensors and their data components
5. File integrity checkers:

File integrity checkers provide a way to identify that system files have been changed by computing and storing a checksum for every guarded file, and establishing a file checksum database. Stored checksums are later recomputed to compare their current value with the stored value, which identifies file modifications. A file integrity checker capability is usually included with any commercial host-based IDS, and is also available as a standalone utility.

Although an integrity checker does not require a high degree of human interaction, it must be used carefully to ensure its effectiveness. File integrity checking is most effective when system files are compared with a reference database created using a system known to be secure—this helps ensure that the reference database was not built with compromised files. The reference database should be stored offline to prevent attackers from compromising the system and covering their tracks by modifying the database. In addition, because patches and other updates change files, the checksum database should be kept up-to-date. For file integrity checking, strong cryptographic checksums such as Secure Hash Algorithm 1 (SHA-1) should be used to ensure the integrity of data stored in the checksum database. Federal agencies are required by Federal Information Processing Standard (FIPS) PUB 140-2, Security Requirements for Cryptographic Modules, to use SHA (e.g., SHA-1, SHA-256).18

File integrity checkers usually have the following features, which provide the tester with a specialized method of evaluating file or directory structures (a minimal sketch follows the list):

a. Compute and store a checksum for each guarded file
b. Recompute checksums regularly for comparison
c. Build an initial reference database on a known-secure system
d. Adjust for false-positive alarms
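
A minimal sketch of that compute/store/recompute cycle follows, using SHA-256 from Python's standard library. The database file name and monitored paths are illustrative; as noted above, the reference database should be built on a known-secure system and kept offline.

import hashlib
import json
import os

def checksum(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_reference(paths, db_path="reference.json"):
    # Run on a known-secure system, then store the result offline.
    with open(db_path, "w") as db:
        json.dump({path: checksum(path) for path in paths}, db, indent=2)

def verify(db_path="reference.json"):
    with open(db_path) as db:
        reference = json.load(db)
    for path, stored in reference.items():
        current = checksum(path) if os.path.exists(path) else None
        if current != stored:
            print("MISSING:" if current is None else "MODIFIED:", path)

build_reference(["/etc/hosts", "/etc/ssh/sshd_config"])
verify()
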
6. Antivirus protection/virus detectors: Antivirus (AV) software was originally developed to detect and remove computer viruses. However, with the proliferation of other kinds of malware, AV software started to provide protection from other computer threats. In particular, modern AV software can protect from: backdoors, rootkits, Trojan horses, worms, malicious Layered Service Providers (LSPs), dialers, fraudtools, malicious Browser Helper Objects (BHOs), browser hijackers, ransomware, keyloggers, adware, and spyware. Some virus detector products also include protection from other computer threats, such as infected and malicious URLs, spam, scam and phishing attacks, online identity (privacy), online banking attacks, social engineering techniques, advanced persistent threat (APT), botnets, and even distributed denial of service (DDoS) attacks.

It is primarily used to detect and isolate the following types of threats:

a. Viruses, Trojans, and worms.
b. Other malicious code.
c. Virus-like activity, which more sophisticated programs monitor in an attempt to identify new or mutated viruses.

Two primary types are as follows:

a. Network infrastructure–based AV software
b. End-user machine–based AV software

There are several methods which an AV engine can use to identify malware:

a. Signature-based detection: The most common method (a toy example follows this list). To identify viruses and other malware, the AV engine compares the contents of a file to its database of known malware signatures. Traditionally, AV software relied heavily on signatures to identify malware.
b. Heuristic-based detection: Generally used together with signature-based detection. It detects malware based on characteristics typically found in known malware code.
c. Behavioral-based detection: Similar to heuristic-based detection and also used in IDSs. The main difference is that, instead of characteristics hardcoded in the malware code itself, it is based on the behavioral fingerprint of the malware at run time. Clearly, this technique can detect (known or unknown) malware only after it has started performing its malicious actions.
d. Sandbox detection: A particular behavioral-based detection technique that, instead of observing the behavioral fingerprint at run time, executes programs in a virtual environment and logs the actions each program performs. Depending on the actions logged, the AV engine can determine whether the program is malicious. If not, the program is then executed in the real environment. Although this technique has proven quite effective, its weight and slowness mean it is rarely used in end-user AV solutions.
e. Data mining techniques are one of the latest approaches applied in malware detection. Data mining and machine learning algorithms are used to try to classify the behavior of a file (as either malicious or benign) given a series of file features that are extracted from the file itself.
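
A toy version of signature-based detection (item a above) follows in Python: it checks a file against a set of known-bad hashes and a list of byte patterns. The hash shown is the widely published SHA-256 of the harmless EICAR antivirus test file; real products maintain databases of millions of signatures alongside the heuristic and behavioral engines described above.

import hashlib

# Widely published SHA-256 of the harmless EICAR antivirus test file.
KNOWN_BAD_HASHES = {
    "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f",
}
# Byte patterns to search for inside files (here, a fragment of EICAR).
BYTE_SIGNATURES = [b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE"]

def scan_file(path):
    with open(path, "rb") as f:
        data = f.read()
    if hashlib.sha256(data).hexdigest() in KNOWN_BAD_HASHES:
        return "matched known-bad hash"
    for signature in BYTE_SIGNATURES:
        if signature in data:
            return f"matched byte signature {signature!r}"
    return None

verdict = scan_file("suspect.bin")  # placeholder file name
print(verdict or "no signature match (which is not proof the file is clean)")
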
7. War dialing:

Several available software packages allow network administrators—and attackers—to dial large blocks of telephone numbers to search for available modems. This process is called war dialing. A computer with four modems can dial 10,000 numbers in a matter of days. War dialers provide reports on numbers with modems, and some dialers have the capacity to attempt limited automatic attacks when a modem is discovered. Organizations should conduct war dialing at least once per year to identify unauthorized modems on the organization’s phone system. (It should be considered, however, that many unauthorized modems may be turned off after hours and might go undetected.) War dialing may also be used to detect fax equipment. Testing should include all numbers that belong to an organization, except those that could be impacted by receiving a large number of calls (e.g., 24-hour operation centers and emergency numbers). Most types of war dialing software allow testers to exempt specific numbers from the calling list.

Skills needed to conduct remote access testing include TCP/IP and networking knowledge; knowledge of remote access technologies and protocols; knowledge of authentication and access control methods; general knowledge of telecommunications systems and modem/PBX operations; and the ability to use scanning and security testing tools such as war dialers.19

Some of the criteria for war dialing include:

a. Going after unauthorized modems
b. Dialing large blocks of phone numbers in search of available modems
c. Including all numbers that belong to an organization
8. Wireless LAN testing:

Wireless technologies, in their simplest sense, enable one or more devices to communicate without the need for physical connections such as network or peripheral cables. They range from simple technologies like wireless keyboards and mice to complex cell phone networks and enterprise wireless local area networks (WLAN). As the number and availability of wireless-enabled devices continues to increase, it is important for organizations to actively test and secure their enterprise wireless environments. Wireless scans can help organizations determine corrective actions to mitigate risks posed by wireless-enabled technologies.

Wireless scanning should be conducted using a mobile device with wireless analyzer software installed and configured—such as a laptop, handheld device, or specialty device. The scanning software or tool should allow the operator to configure the device for specific scans, and to scan in both passive and active modes. The scanning software should also be configurable by the operator to identify deviations from the organization’s wireless security configuration requirements.

The wireless scanning tool should be capable of scanning all Institute of Electrical and Electronics Engineers (IEEE) 802.11a/b/g/n channels, whether domestic or international. In some cases, the device should also be fitted with an external antenna to provide an additional level of radio frequency (RF) capturing capability. Support for other wireless technologies, such as Bluetooth, will help evaluate the presence of additional wireless threats and vulnerabilities. Note that devices using nonstandard technology or frequencies outside of the scanning tool’s RF range will not be detected or properly recognized by the scanning tool. A tool such as an RF spectrum analyzer will assist organizations in identifying transmissions that occur within the frequency range of the spectrum analyzer. Spectrum analyzers generally analyze a large frequency range (e.g., 3 to 18 GHz)—and although these devices do not analyze traffic, they enable an assessor to determine wireless activity within a specific frequency range and tailor additional testing and examination accordingly.

Passive scanning should be conducted regularly to supplement wireless security measures already in place, such as wireless intrusion detection and prevention systems (WIDPS). Wireless scanning tools used to conduct completely passive scans transmit no data, nor do the tools in any way affect the operation of deployed wireless devices. By not transmitting data, a passive scanning tool remains undetected by malicious users and other devices. This reduces the likelihood of individuals avoiding detection by disconnecting or disabling unauthorized wireless devices.

Passive scanning tools capture wireless traffic being transmitted within the range of the tool’s antenna. Most tools provide several key attributes regarding discovered wireless devices, including service set identifier (SSID), device type, channel, media access control (MAC) address, signal strength, and number of packets being transmitted. This information can be used to evaluate the security of the wireless environment, and to identify potential rogue devices and unauthorized ad hoc networks discovered within range of the scanning device. The wireless scanning tool should also be able to assess the captured packets to determine if any operational anomalies or threats exist.

Wireless scanning tools scan each IEEE 802.11a/b/g/n channel/frequency separately, often for only several hundred milliseconds at a time. The passive scanning tool may not receive all transmissions on a specific channel. For example, the tool may have been scanning channel 1 at the precise moment when a wireless device transmitted a packet on channel 5. This makes it important to set the dwell time of the tool to be long enough to capture packets, yet short enough to efficiently scan each channel. Dwell time configurations will depend on the device or tool used to conduct the wireless scans. In addition, security personnel conducting the scans should slowly move through the area being scanned to reduce the number of devices that go undetected.

Rogue devices can be identified in several ways through passive scanning:

The MAC address of a discovered wireless device indicates the vendor of the device’s wireless interface. If an organization only deploys wireless interfaces from vendors A and B, the presence of interfaces from any other vendor indicates potential rogue devices.
If an organization has accurate records of its deployed wireless devices, assessors can compare the MAC addresses of discovered devices with the MAC addresses of authorized devices. Most scanning tools allow assessors to enter a list of authorized devices. Because MAC addresses can be spoofed, assessors should not assume that the MAC addresses of discovered devices are accurate—but checking MAC addresses can identify rogue devices that do not use spoofing.
Rogue devices may use SSIDs that are not authorized by the organization.
Some rogue devices may use SSIDs that are authorized by the organization but do not adhere to its wireless security configuration requirements.

The signal strength of potential rogue devices should be reviewed to determine whether the devices are located within the confines of the facility or in the area being scanned. Devices operating outside an organization’s confines might still pose significant risks because the organization’s devices might inadvertently associate to them.
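
The MAC- and SSID-based checks described above amount to comparing scanner output against the organization's inventory. A minimal Python sketch follows; the device field names assume a scanner export (for example, from Kismet) has already been parsed into dictionaries, and the inventory values are placeholders.

AUTHORIZED_MACS = {"00:1a:2b:3c:4d:5e"}  # from the asset inventory
AUTHORIZED_OUIS = {"00:1a:2b"}           # vendor prefixes the organization deploys
AUTHORIZED_SSIDS = {"CORP-WLAN"}

def classify(device):
    mac = device["mac"].lower()
    ssid = device["ssid"]
    if mac in AUTHORIZED_MACS and ssid in AUTHORIZED_SSIDS:
        # MAC addresses can be spoofed, so this is evidence, not proof.
        return "authorized"
    if mac[:8] not in AUTHORIZED_OUIS:
        return "potential rogue: unknown vendor OUI"
    if ssid not in AUTHORIZED_SSIDS:
        return "potential rogue: unauthorized SSID"
    return "review: authorized vendor and SSID, but MAC not in inventory"

for dev in [{"mac": "00:1A:2B:3C:4D:5E", "ssid": "CORP-WLAN"},
            {"mac": "66:77:88:99:AA:BB", "ssid": "FreeWiFi"}]:
    print(dev["mac"], "->", classify(dev))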

Organizations can move beyond passive wireless scanning to conduct active scanning. This builds on the information collected during passive scans, and attempts to attach to discovered devices and conduct penetration or vulnerability-related testing. For example, organizations can conduct active wireless scanning on their authorized wireless devices to ensure that they meet wireless security configuration requirements—including authentication mechanisms, data encryption, and administration access if this information is not already available through other means.

Organizations should be cautious in conducting active scans to make sure they do not inadvertently scan devices owned or operated by neighboring organizations that are within range. It is important to evaluate the physical location of devices before actively scanning them. Organizations should also be cautious in performing active scans of rogue devices that appear to be operating within the organization’s facility. Such devices could belong to a visitor to the organization who inadvertently has wireless access enabled, or to a neighboring organization with a device that is close to, but not within, the organization’s facility. Generally, organizations should focus on identifying and locating potential rogue devices rather than performing active scans of such devices.

Organizations may use active scanning when conducting penetration testing on their own wireless devices. Tools are available that employ scripted attacks and functions, attempt to circumvent implemented security measures, and evaluate the security level of devices. For example, tools used to conduct wireless penetration testing attempt to connect to access points (AP) through various methods to circumvent security configurations. If the tool can gain access to the AP, it can obtain information and identify the wired networks and wireless devices to which the AP is connected.

Security personnel who operate the wireless scanning tool should attempt to locate suspicious devices. RF signals propagate in a manner relative to the environment, which makes it important for the operator to understand how wireless technology supports this process. Mapping capabilities are useful here, but the main factors needed to support this capability are a knowledgeable operator and an appropriate wireless antenna.

If rogue devices are discovered and physically located during the wireless scan, security personnel should ensure that specific policies and processes are followed on how the rogue device is handled—such as shutting it down, reconfiguring it to comply with the organization’s policies, or removing the device completely. If the device is to be removed, security personnel should evaluate the activity of the rogue device before it is confiscated. This can be done through monitoring transmissions and attempting to access the device.

If discovered wireless devices cannot be located during the scan, security personnel should attempt to use a WIDPS to support the location of discovered devices. This requires the WIDPS to locate a specific MAC address that was discovered during the scan. Properly deployed WIDPSs should have the ability to assist security personnel in locating these devices, and usually involves the use of multiple WIDPS sensors to increase location identification granularity. Because the WIDPS will only be able to locate a device within several feet, a wireless scanning tool may still be needed to pinpoint the location of the device.

For organizations that want to confirm compliance with their Bluetooth security requirements, passive scanning for Bluetooth-enabled wireless devices should be conducted to evaluate potential presence and activity. Because Bluetooth has a very short range (on average 9 meters [30 feet], with some devices having ranges of as little as 1 meter [3 feet]), scanning for devices can be difficult and time-consuming. Assessors should take range limitations into consideration when scoping this type of scanning. Organizations may want to perform scanning only in areas of their facilities that are accessible by the public—to see if attackers could gain access to devices via Bluetooth—or to perform scanning in a sampling of physical locations rather than throughout the entire facility. Because many Bluetooth-enabled devices (such as cell phones and personal digital assistants [PDA]) are mobile, conducting passive scanning several times over a period of time may be necessary. Organizations should also scan any Bluetooth infrastructure, such as access points, that they deploy. If rogue access points are discovered, the organization should handle them in accordance with established policies and processes.

A number of tools are available for actively testing the security and operation of Bluetooth devices. These tools attempt to connect to discovered devices and perform attacks to surreptitiously gain access and connectivity to Bluetooth-enabled devices. Assessors should be extremely cautious of performing active scanning because of the likelihood of inadvertently scanning personal Bluetooth devices, which are found in many environments. As a general rule, assessors should use active scanning only when they are certain that the devices being scanned belong to the organization. Active scanning can be used to evaluate the security mode in which a Bluetooth device is operating, and the strength of Bluetooth personal identification numbers (PIN). Active scanning can also be used to verify that these devices are set to the lowest possible operational power setting to minimize their range. As with IEEE 802.11a/b/g rogue devices, rogue Bluetooth devices should be dealt with in accordance with policies and guidance.20

Uses for wireless scanning include identifying the following areas for further testing and evaluation:

a. 802.11
b. Serious flaws in the 802.11 implementation of Wired Equivalent Privacy (WEP)
c. Default configuration
d. Websites that publish the locations of discovered wireless networks
e. Insertion attacks
f. Interception and monitoring of wireless traffic
g. Denial of service (DoS) attacks
h. Client-to-client attacks
9. Penetration testing: Penetration testing is security testing in which assessors mimic real-world attacks to identify methods for circumventing the security features of an application, system, or network. It often involves launching real attacks on real systems and data that use tools and techniques commonly used by attackers. Most penetration tests involve looking for combinations of vulnerabilities on one or more systems that can be used to gain more access than could be achieved through a single vulnerability. Penetration testing can also be useful for determining:
a. How well the system tolerates real world–style attack patterns
b. The likely level of sophistication an attacker needs to successfully compromise the system
c. Additional countermeasures that could mitigate threats against the system
d. Defenders’ ability to detect attacks and respond appropriately

Penetration testing can be invaluable, but it is labor-intensive and requires great expertise to minimize the risk to targeted systems. Systems may be damaged or otherwise rendered inoperable during the course of penetration testing, even though the organization benefits in knowing how a system could be rendered inoperable by an intruder. Although experienced penetration testers can mitigate this risk, it can never be fully eliminated. Penetration testing should be performed only after careful consideration, notification, and planning.

Penetration testing often includes nontechnical methods of attack. For example, a penetration tester could breach physical security controls and procedures to connect to a network, steal equipment, capture sensitive information (possibly by installing keylogging devices), or disrupt communications. Caution should be exercised when performing physical security testing – security guards should be made aware of how to verify the validity of tester activity, such as via a point of contact or documentation. Another nontechnical means of attack is the use of social engineering, such as posing as a help desk agent and calling to request a user’s passwords, or calling the help desk posing as a user and asking for a password to be reset.

The objectives of a penetration test are to simulate an attack using tools and techniques that may be restricted by law. This practice therefore requires the following areas to be considered and delineated in order to properly conduct this type of testing:

a. Formal permission
b. IP addresses/ranges to be tested
c. Any restricted hosts
d. List of acceptable testing techniques
e. When and how long?
f. IP addresses of the machines launching test
g. POCs for the testing team, targeted systems, and the networks
h. Measures to prevent law enforcement being called with false alarms
i. Handling of info collected by testing team
1. Overt or covert testing: There are several ways to conduct these types of tests. Testing can be conducted either overtly (also known as blue team or white-hat testing) or covertly (also known as red team or black-hat testing).

Overt security testing, also known as white hat testing, involves performing external and/or internal testing with the knowledge and consent of the organization’s IT staff, enabling comprehensive evaluation of the network or system security posture. Because the IT staff is fully aware of and involved in the testing, it may be able to provide guidance to limit the testing’s impact. Testing may also provide a training opportunity, with staff observing the activities and methods used by assessors to evaluate and potentially circumvent implemented security measures. This gives context to the security requirements implemented or maintained by the IT staff, and also may help teach IT staff how to conduct testing.

Covert security testing, also known as black hat testing, takes an adversarial approach by performing testing without the knowledge of the organization’s IT staff but with the full knowledge and permission of upper management. Some organizations designate a trusted third party to ensure that the target organization does not initiate response measures associated with the attack without first verifying that an attack is indeed underway (e.g., that the activity being detected does not originate from a test). In such situations, the trusted third party provides an agent for the assessors, the management, the IT staff, and the security staff that mediates activities and facilitates communications. This type of test is useful for testing technical security controls, IT staff response to perceived security incidents, and staff knowledge and implementation of the organization’s security policy. Covert testing may be conducted with or without warning.

The purpose of covert testing is to examine the damage or impact an adversary can cause—it does not focus on identifying vulnerabilities. This type of testing does not test every security control, identify each vulnerability, or assess all systems within an organization. Covert testing examines the organization from an adversarial perspective, focusing on how an attacker might gain network access. If an organization’s goal is to mirror a specific adversary, this type of testing requires special considerations—such as acquiring and modeling threat data. The resulting scenarios provide an overall strategic view of the potential methods of exploit, risk, and impact of an intrusion. Covert testing usually has defined boundaries, such as stopping testing when a certain level of access is achieved or a certain type of damage is achievable as a next step in testing. Having such boundaries prevents damage while still showing that the damage could occur.

Besides failing to identify many vulnerabilities, covert testing is often time-consuming and costly due to its stealth requirements. To operate in a stealth environment, a test team will have to slow its scans and other actions to stay “under the radar” of the target organization’s security staff. When testing is performed in-house, training must also be considered in terms of time and budget. In addition, an organization may have staff trained to perform regular activities such as scanning and vulnerability assessments, but not specialized techniques such as penetration or application security testing. Overt testing is less expensive, carries less risk than covert testing, and is more frequently used—but covert testing provides a better indication of the everyday security of the target organization because system administrators will not have heightened awareness.21

Penetration test scenarios should focus on locating and targeting exploitable defects in the design and implementation of an application, system, or network. Tests should reproduce both the most likely and the most damaging attack patterns – including worst-case scenarios such as malicious actions by administrators. Since a penetration test scenario can be designed to simulate an inside attack, an outside attack, or both, external and internal security testing methods are considered. If both internal and external testing are to be performed, the external testing usually occurs first.
Outsider scenarios simulate the outsider attacker who has little or no specific knowledge of the target and who works entirely from assumptions. To simulate an external attack, testers are provided with no real information about the target environment other than targeted IP addresses or address ranges, and perform open source research by collecting information on the targets from public web pages, newsgroups, and similar sites. Port scanners and vulnerability scanners are then used to identify target hosts. If given a list of authorized IP addresses to use as targets, assessors should verify that all public addresses (i.e., not private, unroutable addresses) are under the organization’s purview before testing begins. Websites that provide domain name registration information (e.g., WHOIS) can be used to determine owners of address spaces. Since the testers’ traffic usually goes through a firewall, the amount of information obtained from scanning is far less than if the test were undertaken from an insider perspective. After identifying hosts on the network that can be reached from outside, testers attempt to compromise one of the hosts. If successful, this access may then be used to compromise other hosts that are not generally accessible from outside the network. Penetration testing is an iterative process that leverages minimal access to gain greater access.
Insider scenarios simulate the actions of a malicious insider. An internal penetration test is similar to an external test, except that the testers are on the internal network (i.e., behind the firewall) and have been granted some level of access to the network or specific network systems. Using this access, the penetration testers try to gain a greater level of access to the network and its systems through privilege escalation. Testers are provided with network information that someone with their level of access would normally have – generally as a standard employee, although depending on the goals of the test it could instead be information that a system or network administrator might possess.
Penetration testing is important for determining the vulnerability of an organization’s network and the level of damage that can occur if the network is compromised. It is important to be aware that depending on an organization’s policies, testers may be prohibited from using particular tools or techniques or may be limited to using them only during certain times of the day or days of the week. Penetration testing also poses a high risk to the organization’s networks and systems because it uses real exploits and attacks against production systems and data. Because of its high cost and potential impact, penetration testing of an organization’s network and systems on an annual basis may be sufficient. Also, penetration testing can be designed to stop when the tester reaches a point when an additional action will cause damage. The results of penetration testing should be taken seriously, and any vulnerabilities discovered should be mitigated. Results, when available, should be presented to the organization’s managers. Organizations should consider conducting less labor-intensive testing activities on a regular basis to ensure that they are maintaining their required security posture. A well-designed program of regularly scheduled network and vulnerability scanning, interspersed with periodic penetration testing, can help prevent many types of attacks and reduce the potential impact of successful ones.22

Four phases of penetration testing

[Figure: the four phases of penetration testing – planning, discovery, attack, and reporting – with discovery and attack forming a feedback loop]
1. Planning: “In the planning phase, rules are identified, management approval is finalized and documented, and testing goals are set. The planning phase sets the groundwork for a successful penetration test. No actual testing occurs in this phase.”15 In planning a penetration test, always include a legal review of the test event with the corporate counsel staff of the organization being tested, as there are strong legal issues which need to be identified and documented prior to conducting the testing. These issues include conducting testing on government systems from outside, which under the CFAA and CSA is typically considered illegal, and the possibility that the testing may include breaching a system holding sensitive information. These various criteria are included elsewhere in this handbook, but I wish to re-emphasize them here as a caution.
2. Discovery:

The discovery phase of penetration testing includes two parts. The first part is the start of actual testing, and covers information gathering and scanning. Network port and service identification is conducted to identify potential targets. In addition to port and service identification, other techniques are used to gather information on the targeted network:

a) Host name and IP address information can be gathered through many methods, including DNS interrogation, InterNIC (WHOIS) queries, and network sniffing (generally only during internal tests)
b) Employee names and contact information can be obtained by searching the organization’s Web servers or directory servers
c) System information, such as names and shares, can be found through methods such as NetBIOS enumeration (generally only during internal tests) and the Network Information System (NIS) (generally only during internal tests)
d) Application and service information, such as version numbers, can be recorded through banner grabbing.

In some cases, techniques such as dumpster diving and physical walk-throughs of facilities may be used to collect additional information on the targeted network, and may also uncover additional information to be used during the penetration tests, such as passwords written on paper.

The second part of the discovery phase is vulnerability analysis, which involves comparing the services, applications, and operating systems of scanned hosts against vulnerability databases (a process that is automatic for vulnerability scanners) and the testers’ own knowledge of vulnerabilities. Human testers can use their own databases – or public databases such as the National Vulnerability Database (NVD) – to identify vulnerabilities manually. Manual processes can identify new or obscure vulnerabilities that automated scanners may miss, but are much slower than an automated scanner.

Some of the various discovery techniques used during penetration testing are identified as follows:

a. DNS queries
b. InterNIC (whois) queries
c. Target organization’s website information
d. Social engineering techniques including:
- Dumpster diving: Gathering info on a target by digging through what they have thrown out
e. Packet sniffing/capture
f. NetBIOS enumeration
g. Network Information System (NIS)
h. Banner grabbing (a short sketch follows this list)
i. Vulnerability analysis:
- Services
- Applications
- Operating systems
- Manual
- Automated scanners
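
Banner grabbing (item h above) is simple enough to sketch directly in Python: connect to a port, optionally send a protocol-appropriate probe, and read whatever the service announces. The address below is a documentation placeholder; probe only hosts covered by the test's rules of engagement.

import socket

def grab_banner(host, port, probe=b"", timeout=3.0):
    # Connect, optionally send a probe, and read the service's reply.
    with socket.create_connection((host, port), timeout=timeout) as conn:
        if probe:
            conn.sendall(probe)
        return conn.recv(1024).decode(errors="replace").strip()

# SSH servers announce a version string unprompted:
print(grab_banner("192.0.2.10", 22))
# HTTP servers reveal theirs in response to a request:
print(grab_banner("192.0.2.10", 80, probe=b"HEAD / HTTP/1.0\r\n\r\n"))
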
3. Attack: Executing an attack is at the heart of any penetration test. The figure below represents the individual steps of the attack phase – the process of verifying previously identified potential vulnerabilities by attempting to exploit them. The four steps to any attack are as follows:
a. Gaining access
b. Escalating privilege
c. System browsing
d. Installing additional test software

If an attack is successful, the vulnerability is verified and safeguards are identified to mitigate the associated security exposure. In many cases, exploits that are executed do not grant the maximum level of potential access to an attacker. They may instead result in the testers learning more about the targeted network and its potential vulnerabilities, or induce a change in the state of the targeted network’s security. Some exploits enable testers to escalate their privileges on the system or network to gain access to additional resources. If this occurs, additional analysis and testing are required to determine the true level of risk for the network, such as identifying the types of information that can be gleaned, changed, or removed from the system. In the event an attack on a specific vulnerability proves impossible, the tester should attempt to exploit another discovered vulnerability. If testers are able to exploit a vulnerability, they can install more tools on the target system or network to facilitate the testing process. These tools are used to gain access to additional systems or resources on the network, and obtain access to information about the network or organization. Testing and analysis on multiple systems should be conducted during a penetration test to determine the level of access an adversary could gain. This process is represented in the feedback loop in the figure above between the attack and the discovery phase of a penetration test.

[Figure: steps of the attack phase – gaining access, escalating privileges, system browsing, and installing additional test software – with a feedback loop back to the discovery phase]

While vulnerability scanners check only for the possible existence of a vulnerability, the attack phase of a penetration test exploits the vulnerability to confirm its existence.

Most vulnerabilities exploited by penetration testing fall into the following categories:

a. Misconfigurations: Misconfigured security settings, particularly insecure default settings, are usually easily exploitable.
b. Kernel flaws: Kernel code is the core of an OS, and enforces the overall security model for the system – so any security flaw in the kernel puts the entire system in danger.
c. Buffer overflows: A buffer overflow occurs when programs do not adequately check input for appropriate length. When this occurs, arbitrary code can be introduced into the system and executed with the privileges – often at the administrative level – of the running program.
d. Insufficient input validation: Many applications fail to fully validate the input they receive from users. An example is a web application that embeds a value from a user in a database query. If the user enters SQL commands instead of or in addition to the requested value, and the web application does not filter the SQL commands, the query may be run with malicious changes that the user requested – causing what is known as a SQL injection attack (demonstrated in the sketch after this list).
e. Symbolic links: A symbolic link (symlink) is a file that points to another file. Operating systems include programs that can change the permissions granted to a file. If these programs run with privileged permissions, a user could strategically create symlinks to trick these programs into modifying or listing critical system files.
f. File descriptor attacks: File descriptors are numbers used by the system to keep track of files in lieu of filenames. Specific types of file descriptors have implied uses. When a privileged program assigns an inappropriate file descriptor, it exposes that file to compromise.
g. Race conditions: Race conditions can occur during the time a program or process has entered into a privileged mode. A user can time an attack to take advantage of elevated privileges while the program or process is still in the privileged mode.
h. Incorrect file and directory permissions: File and directory permissions control the access assigned to users and processes. Poor permissions could allow many types of attacks, including the reading or writing of password files or additions to the list of trusted remote hosts.
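
The SQL injection described in item d is easy to demonstrate. The self-contained Python sketch below builds an in-memory SQLite table, shows how a crafted input subverts a query assembled by string formatting, and how a parameterized query neutralizes the same input. The table contents and input string are illustrative.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "x' OR '1'='1"  # attacker-supplied value

# Vulnerable: the input is pasted directly into the SQL text.
rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()
print("vulnerable query returned:", rows)  # every row comes back

# Safer: a parameterized query treats the input as data, not as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned:", rows)  # no rows match
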
4. Reporting: The reporting phase occurs simultaneously with the other three phases of the penetration test. In the planning phase, the assessment plan—or ROE—is developed. In the discovery and attack phases, written logs are usually kept and periodic reports are made to system administrators and/or management. At the conclusion of the test, a report is generally developed to describe identified vulnerabilities, present a risk rating, and give guidance on how to mitigate the discovered weaknesses. Section 8 discusses post-testing activities such as reporting in more detail.23

Post-test actions to be taken

As a result of the penetration testing, several areas for action by the tester/assessor include:
Identifying the issues that need to be addressed quickly
Delineating how the test achieved its results – the most important step in the testing process
Common causes and methods for addressing them:
Lack of (or poorly enforced) organizational security
Misconfiguration
Software (un)reliability
Failure to apply patches


General schedule for testing categories

Category 1: This category covers testing to verify the systems or activities that provide security or other critical functions for the organization or agency:
Firewalls, routers, and perimeter defense systems such as intrusion detection systems
Public access systems such as web and email servers
DNS and directory servers, and other internal systems that would likely be intruder targets
Category 2: This category tests all other systems besides the critical ones:
Assessment testing:
- Vulnerability scanning: Vulnerability scanning is designed to allow a cybersecurity analyst to create a prioritized list of vulnerabilities for a customer who is likely already aware that they are not where they need to be in terms of information assurance and computer security. The customer already understands that they have open vulnerabilities (perhaps on new computer systems, networks, etc.) and simply need assistance identifying and prioritizing them.
Also note that during initial vulnerability scans and assessments, the more potential vulnerabilities identified, the better.
- Log review: Log reviews are an important security activity for the purposes of isolating anomalous events, identifying troubles in the system or network, and providing evidence during troubleshooting and incident response actions. NIST SP 800-92, Guide to Computer Security Log Management, has many techniques and identified tactics for conducting log reviews.
- Penetration testing: Penetration testing is a process designed to simulate a cyber attacker who has a specific goal. Penetration testing, therefore, is often focused on a particular piece of software or network service.
These tests are conducted by a cybersecurity analyst for customers who are already compliant with the regulations for cybersecurity and information assurance, but are concerned about vulnerabilities relating to a particular system or part of their network. A typical goal could be to access a new network service like a customer-facing database.
The standard output for a penetration test is a report detailing how the cybersecurity analyst breached specific cybersecurity defenses during the simulated attack, and suggestions on how to remediate this vulnerability.
- Configuration checklist review: Federal government security actions today typically include configuring machines and services in accordance with standard configurations, which are defined by checklists for each type of machine along with the settings for hardening the system under review. The first level of testing, compliance testing, involves running through the checklists against the actual machine, reviewing these various settings, and ensuring they are actually installed correctly. Often the scanning tools mentioned above have these various settings already loaded, and the report outputs identify any setting which does not meet the criteria.
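
The first level of checklist testing described above can be sketched in a few lines of Python. The checklist entries below are hypothetical hardening requirements for an sshd_config-style "keyword value" file; a production tool would cover hundreds of settings across many configuration formats, as the scanning products mentioned earlier do.

# Hypothetical required settings from a hardening checklist.
CHECKLIST = {
    "PermitRootLogin": "no",
    "PasswordAuthentication": "no",
    "X11Forwarding": "no",
}

def check_config(path):
    actual = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                keyword, _, value = line.partition(" ")
                actual[keyword] = value.strip()
    for keyword, required in CHECKLIST.items():
        found = actual.get(keyword, "<unset>")
        status = "PASS" if found == required else "FAIL"
        print(f"{status}  {keyword}: required {required!r}, found {found!r}")

check_config("/etc/ssh/sshd_config")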