Chapter 11

Security Component Fundamentals for Assessment

Abstract

Specific security fundamentals for management controls, operational controls, and various technical controls are defined and discussed, with emphasis on the areas that are important for testing and evaluation.

Keywords

management controls
operational controls
technical controls
theory
fundamentals
The key to the management, oversight, and governance of the security components and program in the organization is an understanding of the risks involved and how each is treated and tolerated by the organization. As the assessor for a US governmental system, it is important to grasp and work with the fundamental requirements for these systems. With the SP 800-53 structured approach to security controls, the assessor can review each management, technical, and operational area of security directly. NIST SP 800-53, rev. 4, is divided into 18 control families comprising three classes of security controls:
1. Management controls: Focus on the management of the computer security system and the management of risk for a system. They are techniques and concerns that are normally addressed by management, through policy and documentation.
2. Operational controls: Address security issues related to mechanisms primarily implemented and executed by people (as opposed to systems). Often, they require technical or specialized expertise and rely on management activities as well as technical controls.
3. Technical controls: Security controls that are configured within the system. They can provide automated protection against unauthorized access or misuse, facilitate detection of security violations, and support security requirements for applications and data.
Each family of controls starts with the base “−1” control which defines the policies necessary for the family of controls. All 18 families of controls within the SP 800-53 are defined in this manner. These are commonly known as the “XX-1 Policy and Procedures” controls. An information security policy is an aggregate of directives, rules, and practices that prescribes how an organization manages, protects, and distributes information. Information security policy is an essential component of information security governance – without the policy, governance has no substance and rules to enforce.
Information security policy should be based on a combination of appropriate legislation, such as FISMA; applicable standards, such as NIST Federal Information Processing Standards (FIPS) and guidance; and internal agency requirements. Therefore, the assessor will identify the relevant governmental documents for each policy and then check the system documentation for reference to those documents. Agency information security policy should address the fundamentals of agency information security governance structure, including:
1. Information security roles and responsibilities
2. Statement of security control baseline and rules for exceeding the baseline
3. Rules of behavior that agency users are expected to follow and minimum repercussions for noncompliance
We will discuss each of these families of controls in this chapter, starting with the management controls.

Management areas of consideration

There are many areas the assessor needs to consider when evaluating and testing the various management controls installed on the systems under test, as shown below in the listing of the families of controls. The starting point for most of these areas is the oversight and governance requirements, so the first area of management controls to review is the security program and its operations section.
The management areas covered by SP 800-53 controls are varied and wide in their scope.
The basic ideas behind the controls are to provide direct information security program elements to assist managers in establishing, implementing, and running an information security program. Typically, the organization looks to the program for overall responsibility to ensure the selection and implementation of appropriate security controls and to demonstrate the effectiveness of those controls in satisfying stated security requirements.
Key elements to review for any security management program are as follows:
Senior management commitment and support: Commitment and support from senior management are the cornerstone for the successful establishment and continuance of an information security management program.
Policies and procedures: As a structured framework, policies and procedures start with a general organization policy providing a concise top management declaration of direction.
Organization: Responsibilities for the protection of individual assets and for carrying out specific security processes should be clearly defined. The information security policy should provide general guidance on the allocation of security roles and responsibilities in the organization.
Security awareness and education: All employees of an organization and, where relevant, third-party users should receive appropriate training and regular updates on the importance of security in organizational policies and procedures.
Monitoring and compliance: To assess the effectiveness of an organization’s security program(s) on a continuous basis, IS auditors must understand the organization’s monitoring activities for the security programs and controls it has established.
Incident handling and response: A computer security incident is an adverse event that threatens some aspect of computer security.
While standards such as NIST, ISO, and the Information Security Forum (ISF) divide their materials into chapters, these do not translate into a security architecture landscape very well. Therefore, the Open Security Architecture Forum proposes an architecture that identifies topics of poor coverage, determines priorities for new patterns, and helps the community coordinate their risk management (RM) activities. Open Security Architecture (OSA) is a not-for-profit organization, supported by volunteers for the benefit of the security community.

Management controls

The management controls are defined in SP 800-53 as the overarching controls needed for oversight, compliance, and acquisition of security components, equipment, and processes for security within a federal system. The basic structure of controls is to define the security action to be taken, supplemental guidance for use and installation of the control, any enhancements to each control, references, and then the parameters or variables that the organization can use to install and implement the control.

Program Management (PM)

Information Security Program Plan

The information security program plan can be represented in a single document or compilation of documents at the discretion of the organization. The plan documents the organization-wide PM controls and organization-defined common controls. The security plans for individual information systems and the organization-wide information security program plan together provide complete coverage for all security controls employed within the organization. Common controls are documented in an appendix to the organization’s information security program plan unless the controls are included in a separate security plan for an information system (e.g., security controls employed as part of an intrusion detection system (IDS) providing organization-wide boundary protection inherited by one or more organizational information systems). The organization-wide information security program plan will indicate which separate security plans contain descriptions of common controls.

Critical Infrastructure Plan

The organization addresses information security issues in the development, documentation, and updating of a critical infrastructure and key resources (CIKR) protection plan. Critical infrastructure assets are essential for the functioning of a society and economy. Most commonly associated with the term are facilities for:
1. Electricity generation, transmission, and distribution
2. Gas production, transport, and distribution
3. Oil and oil products production, transport, and distribution
4. Telecommunication
5. Water supply (drinking water, waste water/sewage, stemming of surface water (e.g., dikes and sluices))
6. Agriculture, food production and distribution
7. Heating (e.g., natural gas, fuel oil, district heating)
8. Public health (hospitals, ambulances)
9. Transportation systems (fuel supply, railway network, airports, harbors, inland shipping)
10. Financial services (banking, clearing)
11. Security services (police, military)
The main document of the US government for the critical infrastructure is HSPD-7, Critical Infrastructure Identification, Prioritization, and Protection, which references the CIKR of the United States.

Essential Services that Underpin American Society

It is the policy of the United States to enhance the protection of our nation’s CIKR against terrorist acts that could:
1. Cause catastrophic health effects or mass casualties comparable to those from the use of a weapon of mass destruction
2. Impair federal departments and agencies’ abilities to perform essential missions, or to ensure the public’s health and safety
3. Undermine state and local government capacities to maintain order and to deliver minimum essential public services
4. Damage the private sector’s capability to ensure the orderly functioning of the economy and delivery of essential services
5. Have a negative effect on the economy through the cascading disruption of other CIKR
6. Undermine the public’s morale and confidence in our national economic and political institutions

Industrial Control Systems Characteristics

Pervasive throughout critical infrastructure
Need for real-time response
Extremely high availability, predictability, and reliability
An industrial control system (ICS) is an information system used to control industrial processes such as manufacturing, product handling, production, and distribution. ICSs include supervisory control and data acquisition (SCADA) systems, distributed control systems (DCS), and programmable logic controllers (PLC). ICS are typically found in the electric, water, oil and gas, chemical, pharmaceutical, pulp and paper, food and beverage, and discrete manufacturing (automotive, aerospace, and durable goods) industries as well as in air and rail transportation control systems.
Security PM is designed to meet, and often struggles with, several frequently conflicting requirements:
Minimizing risk to the safety of the public
Preventing serious damage to environment
Preventing serious production stoppages or slowdowns
Protecting critical infrastructure from cyber attacks and human error
Safeguarding against compromise of proprietary information
So the assessor must review the program documents, reports, and reviews to verify that the documented requirements are actually being met and that the A&A controls review and implementation process reflects security being maintained during operational activities.

Information security resources

The assessor will determine if the organization:
1. Ensures that all capital planning and investment requests include the resources needed to implement the information security program and documents all exceptions to this requirement
2. Employs a business case/Exhibit 300/Exhibit 53 to record the resources required
3. Ensures that information security resources are available for expenditure as planned
Organizations may designate and empower an Investment Review Board (IRB; or similar group) to manage and provide oversight for the information security-related aspects of the capital planning and investment control process. This ties into the Capital Planning and Investment Control (SP 800-65) criterion that an Exhibit 300 must be submitted for all major investments.
Major information technology (IT) investments also must be reported on the agency’s Exhibit 53. Exhibit 300s and the Exhibit 53, together with the agency’s Enterprise Architecture (EA) program, define how to manage the IT Capital Planning and Control Process.
All IT investments must clearly demonstrate the investment is needed to help meet the agency’s strategic goals and mission. They should also support the President’s Management Agenda (PMA). The capital asset plans and business cases (Exhibit 300) and “Agency IT Investment Portfolio” (Exhibit 53) demonstrate the agency management of IT investments and how these governance processes are used when planning and implementing IT investments within the agency.
Investments in the development of new or the continued operation of existing information systems, both general support systems and major applications, proposed for funding in the President’s budget must:
1. Be tied to the agency’s information architecture. Proposals should demonstrate that the security controls for components, applications, and systems are consistent with and an integral part of the IT architecture of the agency.
2. Be well planned, by:
a. Demonstrating that the costs of security controls are understood and are explicitly incorporated in the life-cycle planning of the overall system in a manner consistent with OMB guidance for capital programming
b. Incorporating a security plan that discusses risk management.
3. Manage risks, by:
a. Demonstrating specific methods used to ensure that risks and the potential for loss are understood and continually assessed, that steps are taken to maintain risk at an acceptable level, and that procedures are in place to ensure that controls are implemented effectively and remain effective over time
b. Demonstrating specific methods used to ensure that the security controls are commensurate with the risk and magnitude of harm that may result from the loss, misuse, or unauthorized access to or modification of the system itself or the information it manages
c. Identifying additional security controls that are necessary to minimize risks to and potential loss from those systems that promote or permit public access, other externally accessible systems, and those systems that are interconnected with systems over which program officials have little or no control
4. Protect privacy and confidentiality, by:
a. Deploying effective security controls and authentication tools consistent with the protection of privacy, such as public key–based digital signatures, for those systems that promote or permit public access
b. Ensuring that the handling of personal information is consistent with relevant government-wide and agency policies, such as privacy statements on the agency’s websites
5. Account for departures from NIST guidance. For non-national security applications, to ensure the use of risk-based cost-effective security controls, describe each occasion when employing standards and guidance that are more stringent than those promulgated by NIST.
To promote greater attention to security as a fundamental management priority, OMB continues to take steps to integrate security into the capital planning and budget process. To further assist in this integration, the Plans of Action and Milestones (POA&M; OMB Memorandum M-02-01) and annual security reports and executive summaries must be cross-referenced to the budget materials sent to OMB in the fall, including Exhibits 300 and 53.

Measures of performance (SP 800-55)

NIST SP 800-55 is a guide to assist in the development, selection, and implementation of measures to be used at the information system and program levels. These measures indicate the effectiveness of security controls applied to information systems and supporting information security programs. Such measures are used to facilitate decision making, improve performance, and increase accountability through the collection, analysis, and reporting of relevant performance-related data – providing a way to tie the implementation, efficiency, and effectiveness of information system and program security controls to an agency’s success in achieving its mission. The performance measures development process described in SP 800-55 will assist agency information security practitioners in establishing a relationship between information system and program security activities under their purview and the agency mission, helping to demonstrate the value of information security to their organization.
Additionally, performance measurements are required to ensure the IT system is in compliance with existing laws, rules, and regulations, such as FISMA.
Factors that must be considered during the development and implementation of an IT measurement program are as follows (a brief illustrative sketch follows the list):
Measures must yield quantifiable information: percentages, averages, and numbers.
Data that supports the measures needs to be readily obtainable.
Only repeatable information security processes should be considered for measurement.
Measures must be useful for tracking performance and directing resources.
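As a simple illustration of these factors, the following minimal sketch computes one such quantifiable measure, the percentage of systems with a current vulnerability scan, from data an organization could realistically pull from an existing scan repository. The system names, dates, and 30-day window are hypothetical assumptions, not values prescribed by SP 800-55.

```python
# Illustrative sketch: one quantifiable security measure (a percentage),
# computed from hypothetical scan-inventory data.
from datetime import date, timedelta

# Hypothetical inventory: system name -> date of last vulnerability scan
last_scan = {
    "hr-portal": date(2024, 5, 1),
    "mail-gateway": date(2024, 3, 12),
    "file-server": date(2024, 5, 20),
}

def percent_scanned_within(days: int, as_of: date) -> float:
    """Percentage of systems scanned within the last `days` days."""
    cutoff = as_of - timedelta(days=days)
    current = sum(1 for scanned in last_scan.values() if scanned >= cutoff)
    return 100.0 * current / len(last_scan)

print(f"{percent_scanned_within(30, date(2024, 5, 25)):.1f}% scanned in the last 30 days")
```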

Measures of performance

Metric types
Metrics development and implementation approach
Metrics development process

Metric Types

SP 800-55 distinguishes three types of metrics (implementation, effectiveness/efficiency, and impact), each answering one of the following questions:
“Am I implementing the tasks for which I am responsible?”
“How efficiently or effectively am I accomplishing those tasks?”
“What impact are those tasks having on the mission?”

Metrics Development Process

The place of information security metrics within a larger organizational context demonstrates that information security metrics can be used to progressively measure implementation, efficiency, effectiveness, and the business impact of information security activities within organizations or for specific systems.
The information security metrics development process consists of two major activities:
1. Identifying and defining the current information security program
2. Developing and selecting specific metrics to measure implementation, efficiency, effectiveness, and the impact of the security controls
The process steps do not need to be sequential. Rather, the process provides a framework for thinking about metrics and aids in identifying metrics to be developed for each system. The type of metric depends on where the system is within its life cycle and on the maturity of the information system security program. This framework facilitates tailoring metrics to a specific organization and to the different stakeholder groups present within each organization.
Phases 5, 6, and 7 involve developing metrics that measure process implementation, effectiveness and efficiency, and mission impact, respectively. The specific aspect of information security that metrics will focus on at any given point will depend on information security program maturity. Implementation evidence, required to prove higher levels of effectiveness, will change from establishing existence of policy and procedures to quantifying implementation of these policies and procedures, then to quantifying results of implementation of policies and procedures, and ultimately to identifying the impact of implementation on the organization’s mission.
Based on existing policies and procedures, the universe of possible metrics can be prohibitively large; therefore, agencies should prioritize metrics to ensure that the final set selected for initial implementation has the following attributes:
1. Facilitates improvement of high-priority security control implementation. High priority may be defined by the latest Government Accountability Office (GAO) or Inspector General (IG) reports, results of a risk assessment, or an internal organizational goal.
2. Uses data that can realistically be obtained from existing processes and data repositories.
3. Measures processes that already exist and are relatively stable. Measuring nonexistent or unstable processes will not provide meaningful information about security performance and will therefore not be useful for targeting specific aspects of performance. On the other hand, attempting such measurement may not be entirely useless, because such a metric will certainly produce poor results and will therefore identify an area that needs improvement.
Metrics can be derived from existing data sources, including security certification and accreditation, security assessments, POA&M, incident statistics, and agency-initiated or independent reviews. Agencies may decide to use a weighting scale to differentiate the importance of selected metrics and to ensure that the results accurately reflect existing security program priorities. This process would involve assigning values to each metric based on the importance of a metric in the context of the overall security program. Metrics weighting should be based on the overall risk mitigation goals, is likely to reflect higher criticality of department-level initiatives versus smaller-scale initiatives, and is a useful tool that facilitates integration of information security into the departmental capital planning process.
A phased approach may be required to identify short-, mid-, and long-term metrics in which the implementation time frame depends on a combination of system-level effectiveness, metric priority, data availability, and process stability. Once applicable metrics that contain the qualities described above are identified, they will need to be documented with supporting detail, including frequency of data collection, data source, formula for calculation, implementation evidence for measured activity, and a guide for metric data interpretation. Other information about each metric can be defined based on an organization’s processing and business requirements.
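A minimal sketch of this documentation and weighting idea follows. The metric names, results, weights, and data sources are hypothetical; a real program would define them to reflect its own risk mitigation goals.

```python
# Illustrative sketch: documented metrics with supporting detail and a
# weighted aggregate score reflecting program priorities.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    frequency: str     # how often data are collected
    data_source: str   # where the supporting data come from
    result: float      # latest measured value on a 0-100 scale
    weight: float      # relative importance within the program

metrics = [
    Metric("POA&M items closed on schedule", "quarterly", "POA&M tracker", 72.0, 0.5),
    Metric("Users with current awareness training", "annual", "LMS records", 91.0, 0.3),
    Metric("Systems with approved baselines", "monthly", "CM database", 64.0, 0.2),
]

# Weighted program score: sum of (result x weight) over the total weight
total_weight = sum(m.weight for m in metrics)
score = sum(m.result * m.weight for m in metrics) / total_weight
print(f"Weighted security program score: {score:.1f}")
```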

Metrics Program Implementation

Prepare for data collection.
Collect data and analyze results.
Identify corrective actions.
Develop business case and obtain resources.
Apply corrective actions.

Federal enterprise architecture

As part of the management criteria for controls and the system under review, the federal requirement defined in the Clinger-Cohen Act of 1996 requires all systems to be included in the EA for the agency. This process is identified and delineated in the Federal Enterprise Architecture (FEA) process as adopted by the federal CIO Council.
The FEA practice adopted three core principles to guide its strategic direction. They are as follows:
1. Business-driven: The FEA is most useful when it is closely aligned with government strategic plans and executive-level direction. Agency mission statements, presidential management directives, and agency business owners give direction to each agency’s EA and to the FEA.
2. Proactive and collaborative across the federal government: Adoption of the FEA is achieved through active participation by the EA community in its development and use. The FEA community is responsible for the development, evolution, and adoption of the FEA.
3. Architecture improves the effectiveness and efficiency of government information resources: Architecture development is an integral part of the capital investment process. No IT investment should be made without a business-approved architecture.
The FEA consists of a set of interrelated “reference models” designed to facilitate cross-agency analysis and the identification of duplicative investments, gaps, and opportunities for collaboration within and across agencies. Collectively, the reference models comprise a framework for describing important elements of the FEA in a common and consistent way.
Through the use of this common framework and vocabulary, IT portfolios can be better managed and leveraged across the federal government. This chapter introduces the purposes and structures of the five FEA reference models:
1. Performance Reference Model (PRM)
2. Business Reference Model (BRM)
3. Service Component Reference Model (SRM)
4. Technical Reference Model (TRM)
5. Data Reference Model (DRM)
Information protection needs are technology-independent capabilities required to counter threats to the organization through the compromise of information (i.e., loss of confidentiality, integrity, or availability).
Information protection needs are derived from the mission/business needs defined by the organization, the mission/business processes selected to meet the stated needs, and the organizational RM strategy. Information protection needs determine the required security controls for the organization and the associated information systems supporting the mission/business processes. Inherent in defining an organization’s information protection needs is an understanding of the level of adverse impact that could result if a compromise of information occurs.
The security categorization process is used to make such potential impact determinations; it is related to and feeds the development of the security categorization requirements for each system as found in FIPS-199, guided by SP 800-60. These reference the process defined in the first step of the Risk Management Framework (RMF), as described in the previous chapters.

System and services acquisition (SA)

From OMB Budget Circular A-11, the Exhibit 300 is the capture mechanism for all of the analyses and activities required for full internal review (e.g., IRB, CIO). More importantly, Exhibit 300 is the document that OMB uses to assess investments and ultimately make funding decisions, and therefore should be leveraged by agencies to clearly demonstrate the need for life cycle and annual funding requests. Following selection into the agency’s IT portfolio, the agency aggregates Exhibit 300s into the Exhibit 53. The Exhibit 53 provides an overview of the agency’s entire IT portfolio by listing every IT investment with its life-cycle and budget-year cost information.
Exhibit 300s are companions to an agency’s Exhibit 53. Exhibit 300s and the Exhibit 53, together with the agency’s EA program, define how to manage the IT Capital Planning and Control Process. Exhibit 53A is a tool for reporting the funding of the portfolio of all IT investments within a department while Exhibit 300A is a tool for detailed justifications of major “IT investments.” Exhibit 300B is for the management of the execution of those investments through their project life cycle and into their useful life in production.
By integrating the disciplines of architecture, investment management, and project implementation, these programs provide the foundation for sound IT management practices, end-to-end governance of IT capital assets, and the alignment of IT investments with an agency’s strategic goals. As architecture-driven IT investments are funded in the “invest” (development/acquisition) phase, they move forward into the implementation phase where system development life-cycle processes are followed and actual versus planned outputs, schedule, and operational performance expenditures are tracked utilizing performance-based management processes.

Security services life cycle

SP 800-14, Generally Accepted Principles and Practices for Securing Information Technology Systems, provides a foundation on which organizations can establish and review IT security programs. The eight Generally Accepted System Security Principles in SP 800-14 are designed to provide the public or private sector audience with an organization-level perspective when creating new systems, practices, or policies.

General Considerations for Security Services

Strategic/mission
Budgetary/funding
Technical/architectural
Organizational
Personnel
Policy/process
To facilitate identification and review of these considerations, security program managers may use a set of questions when considering security products for their programs.
1. Identify the user community.
2. Define the relationship between the security product and the organization’s mission.
3. Identify data sensitivity.
4. Identify an organization’s security requirements.
5. Review security plan.
6. Review policies and procedures.
7. Identify operational issues such as daily operation, maintenance, and training.
This then leads to the assessor reviewing the acquisition criteria for the various security components, services, and equipment, along with the documentation, contract requirements, and the varied support design reports and analyses, to ensure that considerations are defined for selecting information security products and services from the following viewpoints:
Organizational
Product
Vendor
Security checklists for IT products
Organizational conflict of interest

Information security and external parties

The security of the organization’s information and information processing facilities that are accessed, processed, communicated to, or managed by external parties should be maintained, and should not be reduced by the introduction of external-party products or services. Any access to the organization’s information processing facilities and processing and communication of information by external parties should be controlled. Where there is a business need for working with external parties that may require access to the organization’s information and information processing facilities, or in obtaining or providing a product and service from or to an external party, a risk assessment should be carried out to determine security implications and control requirements. Controls should be agreed to and defined in an agreement with the external party.
These external party arrangements can include:
Service providers, such as internet service providers (ISPs), network providers, telephone services, and maintenance and support services
Managed security services
Customers
Outsourcing facilities and/or operations, for example, IT systems, data collection services, and call center operations
Management and business consultants, and auditors
Developers and suppliers, for example, of software products and IT systems
Cleaning, catering, and other outsourced support services
Temporary personnel, student placement, and other casual short-term appointments

CA – security assessment and authorization

This is the control family for the RMF and its implementation. So an assessor will review and identify all the components of the RMF, the identities of the key roles and the people assigned those roles, the process functions, and the key organizational documents which the agency has produced to support these RMF processes as identified in the previous chapters of this book and in SP 800-37, rev. 1.

PL – planning family and family plans

The assessor must ensure the organization plans and coordinates security-related activities affecting the information system before conducting such activities in order to reduce the impact on organizational operations (i.e., mission, functions, image, and reputation), organizational assets, and individuals. Security-related activities include, for example, security assessments, audits, system hardware and software maintenance, and contingency plan testing/exercises. Organizational advance planning and coordination includes both emergency and nonemergency (i.e., planned or nonurgent unplanned) situations.
This process is documented in the System Security Plan (SSP) which will include the organizational rules of behavior for each user of the system under review and the system hardware and software inventory.

System Security Plan

The security plan contains sufficient information (including specification of parameters for assignment and selection statements in security controls either explicitly or by reference) to enable an implementation that is unambiguously compliant with the intent of the plan and a subsequent determination of risk to organizational operations and assets, individuals, other organizations, and the nation if the plan is implemented as intended.

Rules of Behavior

This control establishes and makes readily available to all information system users the rules that describe their responsibilities and expected behavior with regard to information and information system usage, and requires a signed acknowledgment from users indicating that they have read, understand, and agree to abide by the rules of behavior before they are authorized access to information and the information system.

Information Security Architecture

The organization:
1. Develops an information security architecture for the information system that:
a. Describes the overall philosophy, requirements, and approach to be taken with regard to protecting the confidentiality, integrity, and availability of organizational information
b. Describes how the information security architecture is integrated into and supports the EA
c. Describes any information security assumptions about, and dependencies on, external services
2. Reviews and updates the information security architecture periodically to reflect updates in the EA

RA – risk assessment family

Risk Management

RM is the process of balancing the risk associated with organizational or business activities with an adequate level of control that will enable the business to meet its mission and/or objectives.
RM is the identification, assessment, and prioritization of risk followed by coordinated and economical application of resources to minimize, monitor, and control the probability and/or impact of adverse events or to maximize the realization of opportunities.
Holistically, RM covers all concepts and processes affiliated with managing risk, including the systematic application of management policies, procedures, and practices; the tasks of communicating, consulting, and establishing the context; and identifying, analyzing, evaluating, treating, monitoring, and reviewing risk.
As an assessor, one area to always focus on during the review of the policy and procedural documentation, as well as during the key personnel interviews, is the area of responsibility versus accountability. These are typically defined as follows:
Responsibility: Belongs to those who must ensure that the activities are completed successfully
Accountability: Applies to those who either own the required resources or have the authority to approve the execution and/or accept the outcome of an activity within specific RM processes
The risk factors formula is usually a good place to start the review of risks and how they are viewed and treated within the organization. The formula is relatively straightforward for the organization to use and can be a key element of the organizational risk posture as the assessor reviews and interviews the various management staff during the assessment. The formula is as follows (a worked example appears after the list below): risk = T × V × $ × C, where C = likelihood × impact. Here:
1. T = threats to the organization
2. V = vulnerabilities within the organization
3. $ = assets being protected
4. C = consequences of risk
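Here is the worked example promised above, a minimal sketch using purely hypothetical ratings; the 0-1 scales and dollar figure are illustrative assumptions, not values prescribed by NIST guidance.

```python
# Illustrative sketch of the risk factors formula:
# risk = T x V x $ x C, where C = likelihood x impact
threat = 0.7           # T: assessed threat level to the organization
vulnerability = 0.5    # V: assessed vulnerability level
asset_value = 250_000  # $: value of the assets being protected (dollars)

likelihood = 0.4       # probability the adverse event occurs
impact = 0.8           # severity of the event if it does occur
consequences = likelihood * impact  # C

risk = threat * vulnerability * asset_value * consequences
print(f"Risk exposure estimate: ${risk:,.0f}")  # -> $28,000
```

The point for the assessor is not the specific number but whether the organization consistently feeds defensible threat, vulnerability, asset, and consequence values into whatever formula it has adopted.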
The risk assessment family of controls provides areas of focus for the organization and the assessor to review and update their security posture on an ongoing basis throughout the life cycle of the system under review.

Security Categorization

A clearly defined authorization boundary is a prerequisite for an effective security categorization. Security categorization describes the potential adverse impacts to organizational operations, organizational assets, and individuals should the information and information system be compromised through a loss of confidentiality, integrity, or availability.
The organization conducts the security categorization process as an organization-wide activity with the involvement of the chief information officer, senior information security officer, information system owner, mission owners, and information owners/stewards. The organization also considers potential adverse impacts to other organizations and, in accordance with the USA PATRIOT Act of 2001 and Homeland Security Presidential Directives, potential national-level adverse impacts in categorizing the information system. The security categorization process facilitates the creation of an inventory of information assets, and, in conjunction with configuration management (CM)-8, a mapping to the information system components where the information is processed, stored, and transmitted.
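The FIPS-199 categorization itself follows a high-water mark rule: the overall impact level of a system is the highest of the confidentiality, integrity, and availability impact levels. A minimal sketch, with hypothetical impact values, follows.

```python
# Illustrative sketch of the FIPS-199 high-water mark for a system:
# the overall category is the maximum of the C, I, and A impact levels.
LEVELS = {"low": 1, "moderate": 2, "high": 3}

def high_water_mark(confidentiality: str, integrity: str, availability: str) -> str:
    """Return the overall impact level per the FIPS-199 high-water mark."""
    return max(confidentiality, integrity, availability, key=LEVELS.__getitem__)

# Hypothetical categorization: SC = {(C, moderate), (I, high), (A, low)}
print(high_water_mark("moderate", "high", "low"))  # -> "high"
```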

Risk and Vulnerability Assessments

A clearly defined authorization boundary is a prerequisite for an effective risk assessment. Risk assessments take into account vulnerabilities, threat sources, and security controls planned or in place to determine the level of residual risk posed to the organization. They also take into account risk posed to organizational operations, organizational assets, or individuals from external parties (e.g., service providers, contractors operating information systems on behalf of the organization, individuals accessing organizational information systems, outsourcing entities).
In accordance with OMB policy and related e-authentication initiatives, authentication of public users accessing federal information systems may also be required to protect nonpublic or privacy-related information. As such, organizational assessments of risk also address public access to federal information systems. The General Services Administration provides tools supporting that portion of the risk assessment dealing with public access to federal information systems.
Risk assessments (either formal or informal) can be conducted by organizations at various steps in the RMF including information system categorization, security control selection, security control implementation, security control assessment, information system authorization, and security control monitoring.
RA-3 is a noteworthy security control in that the control must be partially implemented prior to the implementation of other controls in order to complete the first two steps in the RMF. Risk assessments can play an important role in the security control selection process during the application of tailoring guidance for security control baselines and when considering supplementing the tailored baselines with additional security controls or control enhancements.

RA-5 Vulnerability Scanning

The security categorization of the information system guides the frequency and comprehensiveness of the vulnerability scans. Vulnerability analysis for custom software and applications may require additional, more specialized techniques and approaches (e.g., web-based application scanners, source code reviews, source code analyzers).
Vulnerability scanning includes scanning for specific functions, ports, protocols, and services that should not be accessible to users or devices and for improperly configured or incorrectly operating information flow mechanisms.
The organization considers using tools that express vulnerabilities in the Common Vulnerabilities and Exposures (CVE) naming convention and that use the Open Vulnerability Assessment Language (OVAL) to test for the presence of vulnerabilities.
The Common Weakness Enumeration (CWE) and the National Vulnerability Database (NVD) are also excellent sources for vulnerability information. In addition, security control assessments such as red team exercises are another source of potential vulnerabilities for which to scan.
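As a minimal sketch of one such check, the following compares hypothetical scan output against an approved service baseline and flags anything that should not be accessible. A real assessment would draw both lists from a scanner (for example, one producing CVE/OVAL-based output) and from the system’s documented configuration.

```python
# Illustrative sketch: flag scanned services/ports not on the approved list.
# Approved (port, service) pairs for this hypothetical system
approved = {(22, "ssh"), (443, "https")}

# Hypothetical results from a network scan of the same system
scan_results = [(22, "ssh"), (443, "https"), (23, "telnet"), (3389, "rdp")]

unexpected = [entry for entry in scan_results if entry not in approved]
for port, service in unexpected:
    print(f"Finding: unapproved service '{service}' listening on port {port}")
```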
The assessor then evaluates the organizational risk tolerance process, usually based on the guidance from SP 800-39 and implemented through the RMF process defined in SP 800-37, rev. 1, for the overall treatment of risk within the organization as found through the implementation of the security controls on the system under review. The assessor then reviews what is important to the agency and its operations by determining the critical success factors from the management perspective.

Critical success factors to information security management

Managers and employees within an organization often tend to consider information security a secondary priority compared with their own efficiency or effectiveness concerns, because those concerns have a direct and material impact on the outcome of their work.
For this reason, strong commitment and support from senior management for security training are needed, over and above senior management’s aforementioned role concerning the information security policy.
Management must demonstrate a commitment to security by clearly approving and supporting formal security awareness and training (AT). This may require special management-level training, since security is not necessarily a part of management expertise. The security training for different functions within the organization needs to be customized to address specific security needs. Different functions have different levels of risk. Application developers need technical security training, whereas management requires training that will show the linkage between information security management and the needs of the organization.
A second vital point is that a professional risk-based approach must be used systematically to identify sensitive and critical information resources and to ensure that there is a clear understanding of threats and risks. Thereafter, appropriate risk assessment activities should be undertaken to mitigate unacceptable risks and ensure that residual risks are at an acceptable level.

Operational areas of consideration

There are many areas the assessor needs to consider when evaluating and testing the various operational controls installed on the systems under test, as shown below in the listing of the families of controls. The starting point for most of these areas is the user. The user of the system is often, as I teach in my classes, both the first line of defense and the first line of offense with respect to security on the system. So the first area of operational controls to review is the security awareness, training, and education section.

Operational security controls key concepts

Awareness and training (AT)
Configuration management (CM)
Contingency planning (CP)
Incident response (IR)
Maintenance (MA)
Media protection (MP)
Physical and environmental protection (PE)
Personnel security (PS)
System and information integrity (SI)

Awareness and Training

With the three areas of awareness, training, and education typically defined in an organizational context by the personnel or human resources department, it is important to focus on the areas of security training being provided to the organization. Concentrate the assessment efforts on the four groups of students for the training. These groups are as follows:
1. End users
2. System administrators – elevated privilege users
3. Security personnel
4. Executives – senior management
Each of these groups has unique security training requirements and we need to ensure these are being addressed by the organization in its training and awareness program. Keep in mind that in several industrial verticals, these training requirements are mandated by either statutory or regulatory requirements, such as the DOD 8570 workforce regulatory guidance and the end-user training requirement found in the Computer Security Act of 1987.
NIST has developed two Special Publications on training of users and support personnel: SP 800-50, Building an Information Technology Security Awareness and Training Program, published in October 2003; and SP 800-16, A Role-Based Model for Federal Information Technology/Cyber Security Training – third and final draft version from March 2014. Each of these publications provides detailed and explicit information on training and security awareness educational efforts for users, system administrators, security personnel, and executive-level managers.

A successful IT security program consists of: 1) developing IT security policy that reflects business needs tempered by known risks; 2) informing users of their IT security responsibilities, as documented in agency security policy and procedures; and 3) establishing processes for monitoring and reviewing the program.

Security awareness and training should be focused on the organization’s entire user population. Management should set the example for proper IT security behavior within an organization. An awareness program should begin with an effort that can be deployed and implemented in various ways and is aimed at all levels of the organization including senior and executive managers. The effectiveness of this effort will usually determine the effectiveness of the awareness and training program. This is also true for a successful IT security program.

An awareness and training program is crucial in that it is the vehicle for disseminating information that users, including managers, need in order to do their jobs. In the case of an IT security program, it is the vehicle to be used to communicate security requirements across the enterprise.

An effective IT security awareness and training program explains proper rules of behavior for the use of agency IT systems and information. The program communicates IT security policies and procedures that need to be followed. This must precede and lay the basis for any sanctions imposed due to noncompliance. Users first should be informed of the expectations. Accountability must be derived from a fully informed, well-trained, and aware workforce.

Learning is a continuum; it starts with awareness, builds to training, and evolves into education. The basic construct for this continuum is shown as follows and is found in SP 800-16 and SP 800-50:

Awareness

Security awareness efforts are designed to change behavior or reinforce good security practices. Awareness is defined in NIST Special Publication 800-16 as follows: “Awareness is not training. The purpose of awareness presentations is simply to focus attention on security. Awareness presentations are intended to allow individuals to recognize IT security concerns and respond accordingly. In awareness activities, the learner is the recipient of information, whereas the learner in a training environment has a more active role. Awareness relies on reaching broad audiences with attractive packaging techniques. Training is more formal, having a goal of building knowledge and skills to facilitate the job performance.”
An example of a topic for an awareness session (or awareness material to be distributed) is virus protection. The subject can simply and briefly be addressed by describing what a virus is, what can happen if a virus infects a user’s system, what the user should do to protect the system, and what the user should do if a virus is discovered.

Training

Training is defined in NIST Special Publication 800-16 as follows: “The ‘Training’ level of the learning continuum strives to produce relevant and needed security skills and competencies by practitioners of functional specialties other than IT security (e.g., management, systems design and development, acquisition, auditing).” The most significant difference between training and awareness is that training seeks to teach skills, which allow a person to perform a specific function, while awareness seeks to focus an individual’s attention on an issue or set of issues. The skills acquired during training are built upon the awareness foundation, in particular, upon the security basics and literacy material. A training curriculum need not necessarily lead to a formal degree from an institution of higher learning; however, a training course may contain much of the same material found in a course that a college or university includes in a certificate or degree program.
An example of training is an IT security course for system administrators, which should address in detail the management controls, operational controls, and technical controls that should be implemented. Management controls include policy, IT security PM, RM, and life-cycle security. Operational controls include personnel and user issues, CP, incident handling, AT, computer support and operations, and physical and environmental security issues. Technical controls include identification and authentication (IA), logical Access Controls (ACs), audit trails, and cryptography.

Education

Education is defined in NIST Special Publication 800-16 as follows: “The ‘Education’ level integrates all of the security skills and competencies of the various functional specialties into a common body of knowledge, adds a multidisciplinary study of concepts, issues, and principles (technological and social), and strives to produce IT security specialists and professionals capable of vision and pro-active response.”
An example of education is a degree program at a college or university. Some people take a course or several courses to develop or enhance their skills in a particular discipline. This is training as opposed to education. Many colleges and universities offer certificate programs, wherein a student may take two, six, or eight classes, for example, in a related discipline, and is awarded a certificate on completion. Often, these certificate programs are conducted as a joint effort between schools and software or hardware vendors. These programs are more characteristic of training than education. Those responsible for security training need to assess both types of programs and decide which one better addresses identified needs.

Configuration Management

One of the major areas of focus for any assessor is system changes and CM. There have been many occurrences I have reviewed wherein the development team and the operations team supporting systems instituted upgrades and changes to a system which altered or removed security components with no security review or sign-off on the validity or viability of the change. I have personally seen a case where the end users requested a change to a processing system to speed up processing time, and the development staff accomplished this by removing the required encryption on the transactional data; the change was approved and installed. The security staff had no idea this change was installed until they scanned the system and found multiple errors in the FIPS-140 and Secure Sockets Layer (SSL) areas where the encryption processing had been removed.
Security CM involves the systems, the hardware and software inventories, the changes to systems, and their interchange with the users on a daily basis. Each change has a security component and all reviews and evaluations of system changes require security checks, configuration reviews, and component evaluations to ensure all the currently installed security controls are maintained and not altered by the proposed change. If a control is modified by the change, detailed engineering and operational examination is needed to make the system safe and secure if the change is approved and installed. All of this activity, usually under the control of the Configuration Control section of the organization, should be defined and documented throughout the system life cycle of the system under review.
NIST SP 800-128, Guide for Security-Focused Configuration Management of Information Systems, published in August 2011, provides organizations and assessors with many areas of focus and guidance for security CM actions and activities. It starts out by saying: “An information system is composed of many components that can be interconnected in a multitude of arrangements to meet a variety of business, mission, and information security needs. How these information system components are networked, configured, and managed is critical in providing adequate information security and supporting an organization’s risk management process.
An information system is typically in a constant state of change in response to new, enhanced, corrected, or updated hardware and software capabilities, patches for correcting software flaws and other errors to existing components, new security threats, changing business functions, etc. Implementing information system changes almost always results in some adjustment to the system configuration. To ensure that the required adjustments to the system configuration do not adversely affect the security of the information system or the organization from operation of the information system, a well-defined configuration management process that integrates information security is needed.
Organizations apply configuration management (CM) for establishing baselines and for tracking, controlling, and managing many aspects of business development and operation (e.g., products, services, manufacturing, business processes, and information technology). Organizations with a robust and effective CM process need to consider information security implications with respect to the development and operation of information systems including hardware, software, applications, and documentation. Effective CM of information systems requires the integration of the management of secure configurations into the organizational CM process or processes. For this reason, this document assumes that information security is an integral part of an organization’s overall CM process; however, the focus of this document is on implementation of the information system security aspects of CM, and as such the term security-focused configuration management (SecCM) is used to emphasize the concentration on information security. Though both IT business application functions and security-focused practices are expected to be integrated as a single process, SecCM in this context is defined as the management and control of configurations for information systems to enable security and facilitate the management of information security risk.”

Configuration management has been applied to a broad range of products and systems in subject areas such as automobiles, pharmaceuticals, and information systems. Some basic terms associated with the configuration management discipline are briefly explained below.

Configuration Management (CM) comprises a collection of activities focused on establishing and maintaining the integrity of products and systems, through control of the processes for initializing, changing, and monitoring the configurations of those products and systems.
A Configuration Item (CI) is an identifiable part of a system (e.g., hardware, software, firmware, documentation, or a combination thereof) that is a discrete target of configuration control processes.
A Baseline Configuration is a set of specifications for a system, or CI within a system, that has been formally reviewed and agreed on at a given point in time, and which can be changed only through change control procedures. The baseline configuration is used as a basis for future builds, releases, and/or changes.
A Configuration Management Plan (CM Plan) is a comprehensive description of the roles, responsibilities, policies, and procedures that apply when managing the configuration of products and systems. The basic parts of a CM Plan include:
Configuration Control Board (CCB) – Establishment of and charter for a group of qualified people with responsibility for the process of controlling and approving changes throughout the development and operational lifecycle of products and systems; may also be referred to as a change control board;
Configuration Item Identification – methodology for selecting and naming configuration items that need to be placed under CM;
Configuration Change Control – process for managing updates to the baseline configurations for the configuration items; and
Configuration Monitoring – process for assessing or testing the level of compliance with the established baseline configuration and mechanisms for reporting on the configuration status of items placed under CM.

The configuration of an information system is a representation of the system’s components, how each component is configured, and how the components are connected or arranged to implement the information system. The possible conditions in which an information system or system component can be arranged affect the security posture of the information system. The activities involved in managing the configuration of an information system include development of a configuration management plan, establishment of a configuration control board, development of a methodology for configuration item identification, establishment of the baseline configuration, development of a configuration change control process, and development of a process for configuration monitoring and reporting.


The Phases of Security-Focused Configuration Management

Here are the four defined steps for security CM as found in SP 800-128:
A. Planning
As a part of planning, the scope or applicability of SecCM processes are identified. Planning includes developing policy and procedures to incorporate SecCM into existing information technology and security programs, and then disseminating the policy throughout the organization. Policy addresses areas such as the implementation of SecCM plans, integration into existing security program plans, Configuration Control Boards (CCBs), configuration change control processes, tools and technology, the use of common secure configurations (A common secure configuration is a recognized, standardized, and established benchmark (e.g., National Checklist Program, DISA STIGs, etc.) that stipulates specific secure configuration settings for a given IT platform.) and baseline configurations, monitoring, and metrics for compliance with established SecCM policy and procedures. It is typically more cost-effective to develop and implement a SecCM plan, policies, procedures, and associated SecCM tools at the organizational level.
B. Identifying & Implementing Configurations
After the planning and preparation activities are completed, a secure baseline configuration for the information system is developed, reviewed, approved, and implemented. The approved baseline configuration for an information system and associated components represents the most secure state consistent with operational requirements and constraints. For a typical information system, the secure baseline may address configuration settings, software loads, patch levels, how the information system is physically or logically arranged, how various security controls are implemented, and documentation. Where possible, automation is used to enable interoperability of tools and uniformity of baseline configurations across the information system.
C. Controlling Configuration Changes
In this phase of SecCM, the emphasis is put on the management of change to maintain the secure, approved baseline of the information system. Through the use of SecCM practices, organizations ensure that changes are formally identified, proposed, reviewed, analyzed for security impact, tested, and approved prior to implementation. As part of the configuration change control effort, organizations can employ a variety of access restrictions for change including access controls, process automation, abstract layers, change windows, and verification and audit activities to limit unauthorized and/or undocumented changes to the information system.
D. Monitoring
Monitoring activities are used as the mechanism within SecCM to validate that the information system is adhering to organizational policies, procedures, and the approved secure baseline configuration. Planning and implementing secure configurations and then controlling configuration change is usually not sufficient to ensure that an information system which was once secure will remain secure. Monitoring identifies undiscovered/undocumented system components, misconfigurations, vulnerabilities, and unauthorized changes, all of which, if not addressed, can expose organizations to increased risk. Using automated tools helps organizations to efficiently identify when the information system is not consistent with the approved baseline configuration and when remediation actions are necessary. In addition, the use of automated tools often facilitates situational awareness and the documentation of deviations from the baseline configuration.7
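To make the baseline and monitoring phases concrete, the following minimal sketch represents an approved secure baseline as structured data, fingerprints it so that unapproved edits to the baseline record itself are detectable, and reports any drift in an observed configuration. The setting names and values are hypothetical; a real program would typically work from standardized checklist content (e.g., SCAP-expressed baselines) rather than hand-built records.

import hashlib
import json

# Hypothetical approved baseline for one server: settings and patch level.
baseline = {
    "password_min_length": 12,
    "ssh_root_login": False,
    "audit_logging": True,
    "patch_level": "2015-06",
}

# Fingerprint the approved baseline so any unapproved edit to the
# baseline record itself (not just to the system) is detectable later.
canonical = json.dumps(baseline, sort_keys=True).encode("utf-8")
print("baseline fingerprint:", hashlib.sha256(canonical).hexdigest())

# Monitoring: compare an observed configuration scan to the baseline.
observed = {
    "password_min_length": 8,     # drifted from the approved value
    "ssh_root_login": False,
    "audit_logging": True,
    "patch_level": "2015-06",
}

for setting, expected in baseline.items():
    actual = observed.get(setting, "<missing>")
    if actual != expected:
        print(f"DEVIATION: {setting}: expected {expected!r}, found {actual!r}")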
Each area of CM is addressed and covered by security controls identified in SP 800-53 CM family of controls. These areas for assessor focus include:
1. CM Policy and Procedures – CM-1
2. CM Plan – CM-1 and CM-9
3. Configuration Control Board – CM-3
4. Component Inventory – CM-8
5. Configuration Items – CM-3
6. Secure Configurations – CM-6 and CM-7
7. Minimum Security Baseline Configuration – CM-2
8. Configuration Change Control – CM-3 and CM-5
9. Security Impact Analysis – CM-4
10. Configuration Monitoring – all CM controls
Additional guidance for inventory identification and management is also provided in the NIST Interagency Report NISTIR 7693, Specification for Asset Identification.

Contingency Planning

Information systems are vital elements in most mission/business processes. Because information system resources are so essential to an organization’s success, it is critical that identified services provided by these systems are able to operate effectively without excessive interruption. CP supports this requirement by establishing thorough plans, procedures, and technical measures that can enable a system to be recovered as quickly and effectively as possible following a service disruption. Contingency planning is unique to each system, providing preventive measures, recovery strategies, and technical considerations appropriate to the system’s information confidentiality, integrity, and availability requirements and the system impact level.
Evaluating a recovery and preparedness process for a system, an organization, or an application can involve many areas of technology, operations, and personnel throughout an organization. There are many focal points of concern that require the analysis and attention of the assessor. As the major area for the controls related to the security objective of availability, CP has become a focal point for assessors to determine the commitment of the organization’s senior management to the security of their operational systems and applications.
Under Federal Continuity Directive (FCD)-1 and FCD-2, all federal information systems require a contingency plan for recovery and restoration efforts. Additional guidance is provided by NIST in SP 800-34, with templates available on the csrc.nist.gov website.
Information system CP represents a broad scope of activities designed to sustain and recover critical system services following an emergency event. Information system CP fits into a much broader security and emergency management effort that includes organizational and business process continuity, disaster recovery planning, and incident management. Ultimately, an organization would use a suite of plans to properly prepare response, recovery, and continuity activities for disruptions affecting the organization’s information systems, mission/business processes, personnel, and the facility. Because there is an inherent relationship between an information system and the mission/business process it supports, there must be coordination between each plan during development and updates to ensure that recovery strategies and supporting resources neither negate each other nor duplicate efforts.

Continuity and contingency planning are critical components of emergency management and organizational resilience but are often confused in their use. Continuity planning normally applies to the mission/business itself; it concerns the ability to continue critical functions and processes during and after an emergency event. Contingency planning normally applies to information systems, and provides the steps needed to recover the operation of all or part of designated information systems at an existing or new location in an emergency. A cyber incident response plan normally focuses on the detection of, response to, and recovery from a computer security incident or event.8

Details for each type of plan and its development, use, and maintenance are found in SP 800-34.
The primary focus of each plan is listed as follows:
Business Continuity Plan (BCP)
- Purpose: Provides procedures for sustaining mission/business operations while recovering from a significant disruption.
- Scope: Addresses mission/business processes at a lower or expanded level from Continuity of Operations (COOP) mission essential functions (MEFs).
- Plan relationship: Mission/business process focused plan that may be activated in coordination with a COOP plan to sustain non-MEFs.

COOP Plan
- Purpose: Provides procedures and guidance to sustain an organization’s mission essential functions (MEFs) at an alternate site for up to 30 days; mandated by federal directives.
- Scope: Addresses MEFs at a facility; information systems are addressed based only on their support of the mission essential functions.
- Plan relationship: MEF-focused plan that may also activate several business unit-level BCPs, Information System Contingency Plans (ISCPs), or Disaster Recovery Plans (DRPs), as appropriate.

Crisis Communications Plan
- Purpose: Provides procedures for disseminating internal and external communications; means to provide critical status information and control rumors.
- Scope: Addresses communications with personnel and the public; not information system focused.
- Plan relationship: Incident-based plan often activated with a COOP or BCP, but may be used alone during a public exposure event.

Critical Infrastructure Protection (CIP) Plan
- Purpose: Provides policies and procedures for protection of national critical infrastructure components, as defined in the National Infrastructure Protection Plan.
- Scope: Addresses critical infrastructure components that are supported or operated by an agency or organization.
- Plan relationship: Risk management plan that supports COOP plans for organizations with critical infrastructure and key resource assets.

Cyber Incident Response Plan
- Purpose: Provides procedures for mitigating and correcting a cyber attack, such as a virus, worm, or Trojan horse.
- Scope: Addresses mitigation and isolation of affected systems, cleanup, and minimizing loss of information.
- Plan relationship: Information system-focused plan that may activate an ISCP or DRP, depending on the extent of the attack.

DRP
- Purpose: Provides procedures for relocating information systems operations to an alternate location.
- Scope: Activated after major system disruptions with long-term effects.
- Plan relationship: Information system-focused plan that activates one or more ISCPs for recovery of individual systems.

ISCP
- Purpose: Provides procedures and capabilities for recovering an information system.
- Scope: Addresses single information system recovery at the current or, if appropriate, alternate location.
- Plan relationship: Information system-focused plan that may be activated independently from other plans or as part of a larger recovery effort coordinated with a DRP, COOP, and/or BCP.

Occupant Emergency Plan (OEP)
- Purpose: Provides coordinated procedures for minimizing loss of life or injury and minimizing property damage in response to a physical threat.
- Scope: Focuses on personnel and property particular to the specific facility; not mission/business process or information system-based.
- Plan relationship: Incident-based plan that is initiated immediately after an event, preceding a COOP or DRP activation.


Seven Steps to Contingency Planning as Defined in SP 800-34

SP 800-34, rev. 1, provides instructions, recommendations, and considerations for federal information system CP. CP refers to interim measures to recover information system services after a disruption. Interim measures may include relocation of information systems and operations to an alternate site, recovery of information system functions using alternate equipment, or performance of information system functions using manual methods. This guide addresses specific CP recommendations for three platform types and provides strategies and techniques common to all systems:
Client/server systems
Telecommunications systems
Mainframe systems
This guide defines the following seven-step CP process that an organization may apply to develop and maintain a viable CP program for their information systems. These seven progressive steps are designed to be integrated into each stage of the system development life cycle:
1. Develop the CP policy statement. A formal policy provides the authority and guidance necessary to develop an effective contingency plan.
2. Conduct the business impact analysis (BIA). The BIA helps identify and prioritize information systems and components critical to supporting the organization’s mission/business processes.
3. Identify preventive controls. Measures taken to reduce the effects of system disruptions can increase system availability and reduce contingency life-cycle costs.
4. Create contingency strategies. Thorough recovery strategies ensure that the system may be recovered quickly and effectively following a disruption.
5. Develop an information system contingency plan. The contingency plan should contain detailed guidance and procedures for restoring a damaged system unique to the system’s security impact level and recovery requirements.
6. Ensure plan testing, training, and exercises. Testing validates recovery capabilities, whereas training prepares recovery personnel for plan activation and exercising the plan identifies planning gaps; combined, the activities improve plan effectiveness and overall organization preparedness.
7. Ensure plan maintenance. The plan should be a living document that is updated regularly to remain current with system enhancements and organizational changes.
The assessor should be looking for multiple areas of focus which the organization has applied in its CP activities. SP 800-34 provides the agencies and organizations the guidance to conduct these events and the assessor gathers the evidence to ensure these events have been conducted in accordance with these guidelines.
Key points to review and assess include:
1. The CP policy statement:
a. Policy should define the organization’s overall contingency objectives and establish the organizational framework and responsibilities for system CP.
b. To be successful, senior management, most likely the CIO, must support a contingency program and be included in the process to develop the program policy.
c. The policy must reflect the FIPS-199 impact levels and the contingency controls that each impact level establishes. Key policy elements are as follows:
- Roles and responsibilities
- Scope as it applies to common platform types and organization functions (i.e., telecommunications, legal, media relations) subject to CP
- Resource requirements
- Training requirements
- Exercise and testing schedules
- Plan maintenance schedule
- Minimum frequency of backups and storage of backup media
2. The ISCPs must be written in coordination with other plans associated with each target system as part of organization-wide resilience strategy. Such plans include the following:
a. Information system security plans (SSPs)
b. Facility-level plans, such as the OEP and DRP
c. MEF support such as the COOP plan
d. Organization-level plans, such as CIP plans

BIA Requirements

The purpose of the BIA is to correlate the system with the critical mission/business processes and services provided and, based on that information, characterize the consequences of a disruption. The ISCP Coordinator can use the BIA results to determine contingency planning requirements and priorities. Results from the BIA should be appropriately incorporated into the analysis and strategy development efforts for the organization’s COOP, BCPs, and DRP.

Three steps are typically involved in accomplishing the BIA:

1. Determine mission/business processes and recovery criticality. Mission/Business processes supported by the system are identified and the impact of a system disruption to those processes is determined along with outage impacts and estimated downtime. The downtime should reflect the maximum time that an organization can tolerate while still maintaining the mission.
2. Identify resource requirements. Realistic recovery efforts require a thorough evaluation of the resources required to resume mission/business processes and related interdependencies as quickly as possible. Examples of resources that should be identified include facilities, personnel, equipment, software, data files, system components, and vital records.
3. Identify recovery priorities for system resources. Based upon the results from the previous activities, system resources can be linked more clearly to critical mission/business processes and functions. Priority levels can be established for sequencing recovery activities and resources.
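As a minimal illustration of how the three BIA steps fit together, the hypothetical sketch below assigns each mission/business process an impact rating and a tolerable outage time, then derives a recovery priority for each supporting system resource. All names and values are invented for illustration; a real BIA is driven by interviews and organizational data, not code.

# Hypothetical BIA inputs: processes with impact ratings (3 = highest)
# and maximum tolerable outage in hours, plus the resources each needs.
processes = [
    {"name": "claims intake", "impact": 3, "tolerable_outage_hrs": 4,
     "resources": ["db-server", "web-portal"]},
    {"name": "monthly reporting", "impact": 1, "tolerable_outage_hrs": 72,
     "resources": ["report-server", "db-server"]},
]

# Step 3: each resource inherits the highest impact and the shortest
# tolerable outage among the processes that depend on it.
priorities = {}
for proc in processes:
    for res in proc["resources"]:
        cur = priorities.setdefault(res, {"impact": 0, "outage_hrs": float("inf")})
        cur["impact"] = max(cur["impact"], proc["impact"])
        cur["outage_hrs"] = min(cur["outage_hrs"], proc["tolerable_outage_hrs"])

# Recover the most critical, least outage-tolerant resources first.
for res, p in sorted(priorities.items(),
                     key=lambda kv: (-kv[1]["impact"], kv[1]["outage_hrs"])):
    print(f"{res}: impact {p['impact']}, recover within {p['outage_hrs']} hrs")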

The sample BIA process and data collection activities outlined in this section, illustrated below using a representative information system with multiple components (servers), are designed to help the ISCP Coordinator streamline and focus contingency plan development activities to achieve a more effective plan.9

[Figure: sample BIA process for a representative information system with multiple components (servers)]

Numbers that Matter – Critical Recovery Numbers

The assessor always needs to keep in mind the numbers that matter to the business objectives and mission when reviewing the CP and COOP documentation, evidence, and testing results. So, what are these numbers?
Maximum tolerable downtime (MTD)
Recovery time objective (RTO)
Recovery point objective (RPO)

The ISCP Coordinator should next analyze the supported mission/business processes and, with the process owners, leadership, and business managers, determine the acceptable downtime if a given process or specific system data were disrupted or otherwise unavailable. Downtime can be identified in several ways.

Maximum Tolerable Downtime (MTD). The MTD represents the total amount of time the system owner/authorizing official is willing to accept for a mission/business process outage or disruption and includes all impact considerations. Determining MTD is important because failure to do so could leave contingency planners with imprecise direction on (1) selection of an appropriate recovery method, and (2) the depth of detail that will be required when developing recovery procedures, including their scope and content.
Recovery Time Objective (RTO). RTO defines the maximum amount of time that a system resource can remain unavailable before there is an unacceptable impact on other system resources, supported mission/business processes, and the MTD. Determining the information system resource RTO is important for selecting appropriate technologies that are best suited for meeting the MTD. When it is not feasible to immediately meet the RTO and the MTD is inflexible, a Plan of Action and Milestones (POA&M) should be initiated to document the situation and plan for its mitigation.
Recovery Point Objective (RPO). The RPO represents the point in time, prior to a disruption or system outage, to which mission/business process data can be recovered (given the most recent backup copy of the data) after an outage. Unlike RTO, RPO is not considered as part of MTD. Rather, it is a factor of how much data loss the mission/business process can tolerate during the recovery process.

Because the RTO must ensure that the MTD is not exceeded, the RTO must normally be shorter than the MTD. For example, a system outage may prevent a particular process from being completed, and because it takes time to reprocess the data, that additional processing time must be added to the RTO to stay within the time limit established by the MTD.10
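A small, hypothetical consistency check makes this arithmetic concrete: the RTO, plus any reprocessing time needed to recreate data lost within the RPO window, must fit inside the MTD. The fixed reprocessing rate used here is an invented assumption for illustration, not a value from SP 800-34.

def check_recovery_numbers(mtd_hrs, rto_hrs, rpo_hrs, reprocess_hrs_per_rpo_hr=0.5):
    """Flag recovery objectives that cannot satisfy the MTD.

    Assumes (hypothetically) that each hour of data lost within the RPO
    window costs a fixed amount of reprocessing time after restoration.
    """
    reprocessing = rpo_hrs * reprocess_hrs_per_rpo_hr
    total = rto_hrs + reprocessing
    return total <= mtd_hrs, total

ok, total = check_recovery_numbers(mtd_hrs=24, rto_hrs=12, rpo_hrs=8)
print(f"RTO + reprocessing = {total} hrs; within MTD: {ok}")  # 16.0 hrs; True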

Example:

COOP Versus ISCP – The Basic Facts

Recovery times

COOP functions must be sustained within 12 hours and for up to 30 days from an alternate site; ISCP RTOs are determined by the system-based BIA.
Information systems that support COOP functions must have an RTO that meets COOP requirements.
Information systems that do not support COOP functions do not require alternate sites as part of the ISCP recovery strategy, but may have an alternate site security control requirement.

Recovery Strategies

Low availability impact (FIPS-199)
- Target priority and recovery: Low priority; any outage with little impact, damage, or disruption to the organization.
- Backup: Tape backup.
- Strategy: Relocate or cold site.

Moderate availability impact (FIPS-199)
- Target priority and recovery: Important or moderate priority; any system that, if disrupted, would cause a moderate problem to the organization and possibly other networks or systems.
- Backup: Optical backup, WAN/VLAN replication.
- Strategy: Cold or warm site.

High availability impact (FIPS-199)
- Target priority and recovery: Mission-critical or high priority; the damage or disruption to these systems would cause the most impact on the organization, mission, and other networks and systems.
- Backup: Mirrored systems and disc replication.
- Strategy: Hot site.

Cold site – Cost: Low; Hardware equipment: None; Telecommunications: None; Setup time: Long; Location: Fixed.
Warm site – Cost: Medium; Hardware equipment: Partial; Telecommunications: Partial/full; Setup time: Medium; Location: Fixed.
Hot site – Cost: Medium/high; Hardware equipment: Full; Telecommunications: Full; Setup time: Short; Location: Fixed.

As an assessor, the job here is to evaluate and assess whether the numbers identified above do three things:
1. Provide the appropriate level of recovery in relation to the security categorization of the system under review
2. Provide the level of recovery expected and documented in the BIA for the system under review
3. Provide the level of recovery expected by the end-user organization and its financial commitment
The various recovery processes and procedures need to be verified and validated, which is typically done through testing and exercises conducted in accordance with SP 800-84.

SP 800-84, Guide to Test, Training, and Exercise Programs for IT Plans and Capabilities

Organizations have IT plans in place, such as contingency and computer security IR plans, so that they can respond to and manage adverse situations involving IT. These plans should be maintained in a state of readiness, which should include having personnel trained to fulfill their roles and responsibilities within a plan, having plans exercised to validate their content, and having systems and system components tested to ensure their operability in an operational environment specified in a plan. These three types of events can be carried out efficiently and effectively through the development and implementation of a test, training, and exercise (TT&E) program. Organizations should consider having such a program in place because tests, training, and exercises are so closely related. For example, exercises and tests offer different ways of identifying deficiencies in IT plans, procedures, and training.11
Test
Tests are evaluation tools that use quantifiable metrics to validate the operability of an IT system or system component in an operational environment specified in an IT plan. For example, an organization could test if call tree cascades can be executed within prescribed time limits; another test would be removing power from a system or system component. A test is conducted in as close to an operational environment as possible; if feasible, an actual test of the components or systems used to conduct daily operations for the organization should be used. The scope of testing can range from individual system components or systems to comprehensive tests of all systems and components that support an IT plan. Tests often focus on recovery and backup operations; however, testing varies depending on the goal of the test and its relation to a specific IT plan.
Training
Training, in this recovery context, refers only to informing personnel of their roles and responsibilities within a particular IT plan and teaching them skills related to those roles and responsibilities, thereby preparing them for participation in exercises, tests, and actual emergency situations related to the IT plan. Training personnel on their roles and responsibilities before an exercise or test event is typically split between a presentation on their roles and responsibilities and activities that allow personnel to demonstrate their understanding of the subject matter.
Exercises
An exercise is a simulation of an emergency designed to validate the viability of one or more aspects of an IT plan. In an exercise, personnel with roles and responsibilities in a particular IT plan meet to validate the content of a plan through discussion of their roles and their responses to emergency situations, execution of responses in a simulated operational environment, or other means of validating responses that do not involve using the actual operational environment. Exercises are scenario-driven, such as a power failure in one of the organization’s data centers or a fire causing certain systems to be damaged, with additional situations often being presented during the course of an exercise. There are several types of exercises, and this publication focuses on the following two types that are widely used in TT&E programs by single organizations:
Tabletop: Tabletop exercises are discussion-based exercises where personnel meet in a classroom setting or in breakout groups to discuss their roles during an emergency and their responses to a particular emergency situation. A facilitator presents a scenario and asks the exercise participants questions related to the scenario, which initiates a discussion among the participants of roles, responsibilities, coordination, and decision making. A tabletop exercise is discussion-based only and does not involve deploying equipment or other resources.
Functional: Functional exercises allow personnel to validate their operational readiness for emergencies by performing their duties in a simulated operational environment. They are designed to exercise the roles and responsibilities of specific team members, procedures, and assets involved in one or more functional aspects of a plan (e.g., communications, emergency notifications, IT equipment setup). Functional exercises vary in complexity and scope, from validating specific aspects of a plan to full-scale exercises that address all plan elements. They allow staff to execute their roles and responsibilities as they would in an actual emergency situation, but in a simulated manner.

Contingency Plan Testing

Contingency plan testing always requires special attention from assessors, as testing is often the only way to fully verify the alternate operations and support capabilities that the organization has put in place but activates only when required. The following table reflects the areas of CP controls to evaluate, and the evidence and proof of accomplishment to obtain, for testing of the various parts of the system or organization’s contingency plans and COOP preparations:
CP-3 – CP training: A seminar and/or briefing used to familiarize personnel with the overall CP purpose, phases, activities, and roles and responsibilities.
CP-3 – Instruction: Instruction of contingency personnel on their roles and responsibilities within the CP; includes refresher training and, for high-impact systems, simulated events.
CP-4 – CP testing/exercise: Test and/or exercise the CP to determine its effectiveness and the organization’s readiness; includes both planned and unplanned maintenance activities.
CP-4 – Tabletop exercise: Discussion-based simulation of an emergency situation in an informal, stress-free environment; designed to elicit constructive scenario-based discussions for an examination of the existing CP and individual state of preparedness.
CP-4 – Functional exercise: Simulation of a disruption with a system recovery component such as backup tape restoration or server recovery.
CP-4 – Full-scale functional exercise: Simulation prompting a full recovery and reconstitution of the information system to a known state; ensures that staff are familiar with the alternate facility.
CP-4, CP-7 – Alternate processing site recovery: Test and/or exercise the CP at the alternate processing site to familiarize contingency personnel with the facility and available resources and to evaluate the site’s capabilities to support contingency operations; includes a full recovery and return to normal operations to a known secure state. For a high-impact system, the alternate facility should be fully configured as defined in the CP.
CP-9 – System backup: Test backup information to verify media reliability and information integrity. For a high-impact system, use sample backup information to validate the recovery process and ensure that backup copies are maintained at the alternate storage facility.
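For the CP-9 backup test in particular, assessors often look for evidence that restored data matches what was backed up. One common technique is comparing cryptographic checksums of source files and their restored copies, as in the minimal sketch below; the paths are hypothetical, and this illustrates the idea rather than any prescribed procedure.

import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large backups do not exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(source_dir: str, restored_dir: str) -> list:
    """Return files whose restored copy is missing or differs."""
    failures = []
    for src in Path(source_dir).rglob("*"):
        if not src.is_file():
            continue
        restored = Path(restored_dir) / src.relative_to(source_dir)
        if not restored.is_file() or sha256_of(src) != sha256_of(restored):
            failures.append(str(src))
    return failures

# Example with hypothetical paths:
# print(verify_restore("/data/production", "/mnt/restore_test"))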
Now, each of these areas of focus for assessment of the CP controls should be tied into and reflected in the system contingency plan and its design efforts.

Incident Response

The current state of the security of systems across the enterprise often requires organizations to develop and conduct IR activities due to breaches, malware infections, “phishing” events, and outright external attacks. The state of the cybercrime and hacking communities has developed dramatically over the past few years and now includes “hack-in-a-box” offerings and fully developed malicious software development efforts including formal version controls, automated delivery channels, testing against known antivirus signatures, and malware as a service (MaaS) cloud-based delivery mechanisms. The goals of any IR effort are as follows:
Detect incidents quickly.
Diagnose incidents accurately.
Manage incidents properly.
Contain and minimize damage.
Restore affected services.
Determine root causes.
Implement improvements to prevent recurrence.
Document and report.
The purpose of IR is to manage and respond to unexpected disruptive events with the objective of controlling impacts within acceptable levels. These events can be technical, such as attacks mounted on the network via viruses, denial of service, or system intrusion, or they can be the result of mistakes, accidents, or system or process failure. Disruptions can also be caused by a variety of physical events such as theft of proprietary information, social engineering, lost or stolen backup tapes or laptops, environmental conditions such as floods, fires, or earthquakes, and so forth. Any type of incident that can significantly affect the organization’s ability to operate or that may cause damage must be considered by the information security manager and will normally be a part of incident management and response capabilities.
The US government has long recognized the need and requirements for computer IR and, as a result, has developed many documented resources and organizations for IR, including the United States Computer Emergency Readiness Team (US-CERT), various DOD CERT organizations, joint ventures between various governmental agencies, incident handling guides, procedures and techniques, and NIST SP 800-61.

SP 800-61 – Computer Security Incident Handling Guide

As the introduction at the beginning of SP 800-61 says: “Computer security incident response has become an important component of information technology (IT) programs. Cybersecurity-related attacks have become not only more numerous and diverse but also more damaging and disruptive. New types of security-related incidents emerge frequently. Preventive activities based on the results of risk assessments can lower the number of incidents, but not all incidents can be prevented. An incident response capability is therefore necessary for rapidly detecting incidents, minimizing loss and destruction, mitigating the weaknesses that were exploited, and restoring IT services. To that end, this publication provides guidelines for incident handling, particularly for analyzing incident-related data and determining the appropriate response to each incident. The guidelines can be followed independently of particular hardware platforms, operating systems, protocols, or applications.
Because performing incident response effectively is a complex undertaking, establishing a successful incident response capability requires substantial planning and resources. Continually monitoring for attacks is essential. Establishing clear procedures for prioritizing the handling of incidents is critical, as is implementing effective methods of collecting, analyzing, and reporting data. It is also vital to build relationships and establish suitable means of communication with other internal groups (e.g., human resources, legal) and with external groups (e.g., other incident response teams, law enforcement).”12

Incident Handling

The incident response process has several phases. The initial phase involves establishing and training an incident response team, and acquiring the necessary tools and resources. During preparation, the organization also attempts to limit the number of incidents that will occur by selecting and implementing a set of controls based on the results of risk assessments. However, residual risk will inevitably persist after controls are implemented. Detection of security breaches is thus necessary to alert the organization whenever incidents occur. In keeping with the severity of the incident, the organization can mitigate the impact of the incident by containing it and ultimately recovering from it. During this phase, activity often cycles back to detection and analysis—for example, to see if additional hosts are infected by malware while eradicating a malware incident. After the incident is adequately handled, the organization issues a report that details the cause and cost of the incident and the steps the organization should take to prevent future incidents.13

Preparation
IR methodologies typically emphasize preparation – not only establishing an IR capability so that the organization is ready to respond to incidents but also preventing incidents by ensuring that systems, networks, and applications are sufficiently secure. Although the IR team is not typically responsible for incident prevention, it is fundamental to the success of IR programs.
As an assessor of IR capacity and incident handling activities, it is important to understand that the process itself is often chaotic and can appear haphazard while the response is active. One of the critical areas to focus on during the review is the documented and defined training for the responders, as well as the organizational policies and procedures for IR. Each of these areas helps determine the success or failure of the response team, their interactions with the rest of the organization, and ultimately the minimization of the impact of the incident on the organization, its people, and its mission.
Detection and analysis

For many organizations, the most challenging part of the incident response process is accurately detecting and assessing possible incidents—determining whether an incident has occurred and, if so, the type, extent, and magnitude of the problem. What makes this so challenging is a combination of three factors:

Incidents may be detected through many different means, with varying levels of detail and fidelity. Automated detection capabilities include network-based and host-based IDPSs, antivirus software, and log analyzers. Incidents may also be detected through manual means, such as problems reported by users. Some incidents have overt signs that can be easily detected, whereas others are almost impossible to detect.
The volume of potential signs of incidents is typically high—for example, it is not uncommon for an organization to receive thousands or even millions of intrusion detection sensor alerts per day.
Deep, specialized technical knowledge and extensive experience are necessary for proper and efficient analysis of incident-related data.

Signs of an incident fall into one of two categories: precursors and indicators. A precursor is a sign that an incident may occur in the future. An indicator is a sign that an incident may have occurred or may be occurring now.

Incident detection and analysis would be easy if every precursor or indicator were guaranteed to be accurate; unfortunately, this is not the case. For example, user-provided indicators such as a complaint of a server being unavailable are often incorrect. Intrusion detection systems may produce false positives—incorrect indicators. These examples demonstrate what makes incident detection and analysis so difficult: each indicator ideally should be evaluated to determine if it is legitimate. Making matters worse, the total number of indicators may be thousands or millions a day. Finding the real security incidents that occurred out of all the indicators can be a daunting task.

Even if an indicator is accurate, it does not necessarily mean that an incident has occurred. Some indicators, such as a server crash or modification of critical files, could happen for several reasons other than a security incident, including human error. Given the occurrence of indicators, however, it is reasonable to suspect that an incident might be occurring and to act accordingly. Determining whether a particular event is actually an incident is sometimes a matter of judgment. It may be necessary to collaborate with other technical and information security personnel to make a decision. In many instances, a situation should be handled the same way regardless of whether it is security related. For example, if an organization is losing Internet connectivity every 12 hours and no one knows the cause, the staff would want to resolve the problem just as quickly and would use the same resources to diagnose the problem, regardless of its cause.14

Containment, eradication, and recovery
Containment is important before an incident overwhelms resources or increases damage. Most incidents require containment, so that is an important consideration early in the course of handling each incident. Containment provides time for developing a tailored remediation strategy. An essential part of containment is decision making (e.g., shut down a system, disconnect it from a network, or disable certain functions). Such decisions are much easier to make if there are predetermined strategies and procedures for containing the incident. Organizations should define acceptable risks in dealing with incidents and develop strategies accordingly.
Containment strategies vary based on the type of incident. For example, the strategy for containing an email-borne malware infection is quite different from that for a network-based DDoS attack. Organizations should create separate containment strategies for each major incident type, with criteria documented clearly to facilitate decision making.15
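One way to document per-incident-type strategies so responders can act quickly is a simple lookup keyed by attack vector, as in the hypothetical sketch below; the incident types and actions listed are invented examples, not recommendations from SP 800-61.

# Hypothetical containment playbook keyed by incident type.
CONTAINMENT_PLAYBOOK = {
    "email_malware": [
        "block sender/attachment hash at the mail gateway",
        "isolate infected hosts from the internal network",
    ],
    "ddos": [
        "engage upstream provider rate limiting",
        "shift traffic to a scrubbing service",
    ],
    "unauthorized_access": [
        "disable compromised accounts",
        "preserve volatile evidence before shutdown",
    ],
}

def containment_steps(incident_type: str) -> list:
    """Return documented steps, or escalate when no strategy exists."""
    return CONTAINMENT_PLAYBOOK.get(
        incident_type, ["no documented strategy: escalate to IR lead"])

for step in containment_steps("email_malware"):
    print(step)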

After an incident has been contained, eradication may be necessary to eliminate components of the incident, such as deleting malware and disabling breached user accounts, as well as identifying and mitigating all vulnerabilities that were exploited. During eradication, it is important to identify all affected hosts within the organization so that they can be remediated. For some incidents, eradication is either not necessary or is performed during recovery.

In recovery, administrators restore systems to normal operation, confirm that the systems are functioning normally, and (if applicable) remediate vulnerabilities to prevent similar incidents. Recovery may involve such actions as restoring systems from clean backups, rebuilding systems from scratch, replacing compromised files with clean versions, installing patches, changing passwords, and tightening network perimeter security (e.g., firewall rulesets, boundary router access control lists). Higher levels of system logging or network monitoring are often part of the recovery process. Once a resource is successfully attacked, it is often attacked again, or other resources within the organization are attacked in a similar manner.

Eradication and recovery should be done in a phased approach so that remediation steps are prioritized. For large-scale incidents, recovery may take months; the intent of the early phases should be to increase the overall security with relatively quick (days to weeks) high value changes to prevent future incidents. The later phases should focus on longer-term changes (e.g., infrastructure changes) and ongoing work to keep the enterprise as secure as possible.16

Postincident activity

One of the most important parts of incident response is also the most often omitted: learning and improving. Each incident response team should evolve to reflect new threats, improved technology, and lessons learned. Holding a “lessons learned” meeting with all involved parties after a major incident, and optionally periodically after lesser incidents as resources permit, can be extremely helpful in improving security measures and the incident handling process itself. Multiple incidents can be covered in a single lessons learned meeting. This meeting provides a chance to achieve closure with respect to an incident by reviewing what occurred, what was done to intervene, and how well intervention worked.

Small incidents need limited post-incident analysis, with the exception of incidents performed through new attack methods that are of widespread concern and interest. After serious attacks have occurred, it is usually worthwhile to hold post-mortem meetings that cross team and organizational boundaries to provide a mechanism for information sharing. The primary consideration in holding such meetings is ensuring that the right people are involved. Not only is it important to invite people who have been involved in the incident that is being analyzed, but also it is wise to consider who should be invited for the purpose of facilitating future cooperation.17

As an IR assessor and evaluator, you will be looking for the required training and exercise documentation for each responder on the team. The policies for IR, handling, notification, and board review all need to be identified, reviewed, and assessed. The supporting procedures for handling and response efforts all need review and correlation to the policies, the security controls for IR from SP 800-53, and the actual IR plan for each system as it is reviewed and assessed.

Federal Agency Incident Categories

To clearly communicate incidents and events (any observable occurrence in a network or system) throughout the Federal Government and supported organizations, it is necessary for the government incident response teams to adopt a common set of terms and relationships between those terms. All elements of the Federal Government should use a common taxonomy.

Below is a high-level set of concepts and descriptions to enable improved communications among and between agencies. The taxonomy below does not replace the discipline (technical, operational, intelligence) that needs to occur to defend federal agency computers/networks, but provides a common platform to execute the US-CERT mission. US-CERT and the federal civilian agencies are to utilize the following incident and event categories and reporting timeframe criteria as the federal agency reporting taxonomy.

Federal Agency Incident Categories

CAT 0 – Exercise/Network Defense Testing: Used during state, federal, national, and international exercises and approved activity testing of internal/external network defenses or responses. Reporting timeframe: not applicable; this category is for each agency’s internal use during exercises.

CAT 1 – Unauthorized Access*: An individual gains logical or physical access without permission to a federal agency network, system, application, data, or other resource. Reporting timeframe: within one (1) hour of discovery/detection.

CAT 2 – Denial of Service (DoS)*: An attack that successfully prevents or impairs the normal authorized functionality of networks, systems, or applications by exhausting resources; includes being the victim of or participating in the DoS. Reporting timeframe: within two (2) hours of discovery/detection if the successful attack is still ongoing and the agency is unable to successfully mitigate the activity.

CAT 3 – Malicious Code*: Successful installation of malicious software (e.g., virus, worm, Trojan horse, or other code-based malicious entity) that infects an operating system or application. Agencies are NOT required to report malicious logic that has been successfully quarantined by antivirus (AV) software. Reporting timeframe: daily; within one (1) hour of discovery/detection if widespread across the agency.

CAT 4 – Improper Usage*: A person violates acceptable computing use policies. Reporting timeframe: weekly.

CAT 5 – Scans/Probes/Attempted Access: Any activity that seeks to access or identify a federal agency computer, open ports, protocols, services, or any combination thereof for later exploit; this activity does not directly result in a compromise or denial of service. Reporting timeframe: monthly; if the system is classified, within one (1) hour of discovery.

CAT 6 – Investigation: Unconfirmed incidents that are potentially malicious or anomalous activity deemed by the reporting entity to warrant further review. Reporting timeframe: not applicable; this category is for each agency’s use to categorize a potential incident that is currently being investigated.

*Any incident that involves compromised PII must be reported to US-CERT within 1 hour of detection regardless of the incident category reporting timeframe.18
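Because each category carries a reporting clock, the timeframes in the table above can be encoded so that an assessor can check whether a given incident report was timely. The sketch below is a hypothetical encoding: the one-hour PII override reflects the note above, and “monthly” is approximated as 30 days.

from datetime import datetime, timedelta

# Maximum reporting windows per the US-CERT category table
# (None = no fixed window; the category is for internal agency use).
REPORTING_WINDOWS = {
    "CAT 0": None,
    "CAT 1": timedelta(hours=1),
    "CAT 2": timedelta(hours=2),
    "CAT 3": timedelta(days=1),    # within 1 hour if widespread
    "CAT 4": timedelta(weeks=1),
    "CAT 5": timedelta(days=30),   # within 1 hour if system is classified
    "CAT 6": None,
}

def report_deadline(category: str, detected: datetime, pii_involved: bool = False):
    """Return the latest permissible report time, or None if not applicable."""
    if pii_involved and REPORTING_WINDOWS.get(category) is not None:
        return detected + timedelta(hours=1)  # PII override per the note above
    window = REPORTING_WINDOWS.get(category)
    return detected + window if window else None

detected = datetime(2015, 6, 1, 9, 0)
print(report_deadline("CAT 2", detected))                     # 2015-06-01 11:00:00
print(report_deadline("CAT 4", detected, pii_involved=True))  # 2015-06-01 10:00:00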

Now, as of October 1, 2014, US-CERT posted a new taxonomy and methodology for reporting incidents. US-CERT provided the following information and table for the new requirements, which are required to be used after September 1, 2015:

Please use the table below to identify the impact of the incident. Incidents may affect multiple types of data; therefore, D/As may select multiple options when identifying the information impact. The security categorization of federal information and information systems must be determined in accordance with Federal Information Processing Standards (FIPS) Publication 199. Specific thresholds for loss of service availability (i.e., all, subset, loss of efficiency) must be defined by the reporting organization.19

Functional Impact:
- HIGH – Organization has lost the ability to provide all critical services to all system users.
- MEDIUM – Organization has lost the ability to provide a critical service to a subset of system users.
- LOW – Organization has experienced a loss of efficiency, but can still provide all critical services to all users with minimal effect on performance.
- NONE – Organization has experienced no loss in ability to provide all services to all users.

Information Impact:
- CLASSIFIED – The confidentiality of classified information [5] was compromised.
- PROPRIETARY [6] – The confidentiality of unclassified proprietary information, such as protected critical infrastructure information (PCII), intellectual property, or trade secrets, was compromised.
- PRIVACY – The confidentiality of personally identifiable information [7] (PII) or personal health information (PHI) was compromised.
- INTEGRITY – The necessary integrity of information was modified without authorization.
- NONE – No information was exfiltrated, modified, deleted, or otherwise compromised.

Recoverability:
- REGULAR – Time to recovery is predictable with existing resources.
- SUPPLEMENTED – Time to recovery is predictable with additional resources.
- EXTENDED – Time to recovery is unpredictable; additional resources and outside help are needed.
- NOT RECOVERABLE – Recovery from the incident is not possible (e.g., sensitive data exfiltrated and posted publicly).
- NOT APPLICABLE – Incident does not require recovery.

To minimize damage from security incidents and to recover and to learn from such incidents, a formal IR capability should be established. The organization and management of an IR capability should be coordinated or centralized with the establishment of key roles and responsibilities. In establishing this process, employees and contractors are made aware of procedures for reporting the different types of incidents that might have an impact on the security of organizational assets. Incidents occur because vulnerabilities are not addressed properly. Ideally, an organizational computer security incident response team (CSIRT) or CERT should be formulated with clear lines of reporting, and responsibilities for standby support should be established. An assessor should ensure that the CSIRT is actively involved with users to assist them in the mitigation of risks arising from security failures and also to prevent security incidents.
The assessor needs to check for, evaluate, and assess the following areas of IR:
A. Organizations must create, provision, and operate a formal incident response capability. Federal law requires Federal agencies to report incidents to the United States Computer Emergency Readiness Team (US-CERT) office within the Department of Homeland Security (DHS).

The Federal Information Security Management Act (FISMA) requires Federal agencies to establish incident response capabilities. Each Federal civilian agency must designate a primary and secondary point of contact (POC) with US-CERT and report all incidents consistent with the agency’s incident response policy. Each agency is responsible for determining how to fulfill these requirements. Establishing an incident response capability should include the following actions:

1. Creating an incident response policy and plan
2. Developing procedures for performing incident handling and reporting
3. Setting guidelines for communicating with outside parties regarding incidents
4. Selecting a team structure and staffing model
5. Establishing relationships and lines of communication between the incident response team and other groups, both internal (e.g., legal department) and external (e.g., law enforcement agencies)
6. Determining what services the incident response team should provide
7. Staffing and training the incident response team.
B. Organizations should reduce the frequency of incidents by effectively securing networks, systems, and applications.
Preventing problems is often less costly and more effective than reacting to them after they occur. Thus, incident prevention is an important complement to an incident response capability. If security controls are insufficient, high volumes of incidents may occur. This could overwhelm the resources and capacity for response, which would result in delayed or incomplete recovery and possibly more extensive damage and longer periods of service and data unavailability. Incident handling can be performed more effectively if organizations complement their incident response capability with adequate resources to actively maintain the security of networks, systems, and applications. This includes training IT staff on complying with the organization’s security standards and making users aware of policies and procedures regarding appropriate use of networks, systems, and applications.
C. Organizations should document their guidelines for interactions with other organizations regarding incidents.
During incident handling, the organization will need to communicate with outside parties, such as other incident response teams, law enforcement, the media, vendors, and victim organizations. Because these communications often need to occur quickly, organizations should predetermine communication guidelines so that only the appropriate information is shared with the right parties.
D. Organizations should be generally prepared to handle any incident but should focus on being prepared to handle incidents that use common attack vectors.

Incidents can occur in countless ways, so it is infeasible to develop step-by-step instructions for handling every incident. This publication defines several types of incidents, based on common attack vectors; these categories are not intended to provide definitive classification for incidents, but rather to be used as a basis for defining more specific handling procedures. Different types of incidents merit different response strategies. The attack vectors are:

External/Removable Media: An attack executed from removable media (e.g., flash drive, CD) or a peripheral device.
Attrition: An attack that employs brute force methods to compromise, degrade, or destroy systems, networks, or services.
Web: An attack executed from a website or web-based application.
Email: An attack executed via an email message or attachment.
Improper Usage: Any incident resulting from violation of an organization’s acceptable usage policies by an authorized user, excluding the above categories.
Loss or Theft of Equipment: The loss or theft of a computing device or media used by the organization, such as a laptop or smartphone.
Other: An attack that does not fit into any of the other categories.
E. Organizations should emphasize the importance of incident detection and analysis throughout the organization.
In an organization, millions of possible signs of incidents may occur each day, recorded mainly by logging and computer security software. Automation is needed to perform an initial analysis of the data and select events of interest for human review. Event correlation software can be of great value in automating the analysis process. However, the effectiveness of the process depends on the quality of the data that goes into it. Organizations should establish logging standards and procedures to ensure that adequate information is collected by logs and security software and that the data is reviewed regularly.
F. Organizations should create written guidelines for prioritizing incidents.
Prioritizing the handling of individual incidents is a critical decision point in the incident response process. Effective information sharing can help an organization identify situations that are of greater severity and demand immediate attention. Incidents should be prioritized based on the relevant factors, such as the functional impact of the incident (e.g., current and likely future negative impact to business functions), the information impact of the incident (e.g., effect on the confidentiality, integrity, and availability of the organization’s information), and the recoverability from the incident (e.g., the time and types of resources that must be spent on recovering from the incident); a scoring sketch combining these factors follows this list.
G. Organizations should use the lessons learned process to gain value from incidents.
After a major incident has been handled, the organization should hold a “lessons learned” meeting to review the effectiveness of the incident handling process and identify necessary improvements to existing security controls and practices. Lessons learned meetings can also be held periodically for lesser incidents as time and resources permit. The information accumulated from all lessons learned meetings should be used to identify and correct systemic weaknesses and deficiencies in policies and procedures. Follow-up reports generated for each resolved incident can be important not only for evidentiary purposes but also for reference in handling future incidents and in training new team members.20
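Picking up the prioritization factors from item F above, the following minimal sketch combines hypothetical ordinal ratings for functional impact, information impact, and recoverability into a single priority score. The rating values and equal weighting are invented assumptions for illustration, not guidance from SP 800-61; real written guidelines would define ratings and weights to suit the organization.

# Hypothetical ordinal ratings drawn from the three prioritization factors.
FUNCTIONAL = {"none": 0, "low": 1, "medium": 2, "high": 3}
INFORMATION = {"none": 0, "privacy": 2, "proprietary": 2,
               "integrity": 2, "classified": 3}
RECOVERABILITY = {"regular": 0, "supplemented": 1,
                  "extended": 2, "not recoverable": 3}

def incident_priority(functional: str, information: str, recoverability: str) -> int:
    """Combine the three factor ratings; higher scores get handled first."""
    return (FUNCTIONAL[functional]
            + INFORMATION[information]
            + RECOVERABILITY[recoverability])

print(incident_priority("high", "privacy", "extended"))  # 7
print(incident_priority("low", "none", "regular"))       # 1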

System Maintenance

As an assessor of federal information systems, what do you need to know about operations and maintenance (O&M) of information systems?
In the maintenance area for systems, focus on the policies and procedures for the maintenance activities of the assigned personnel first. Then look at the maintenance records, logs, and reports of the maintenance staff. Check these records against the requests and help desk tickets to ensure the maintenance is requested legitimately, performed appropriately, and completed successfully.
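One simple evidence check an assessor can script is reconciling maintenance log entries against help desk tickets; log entries with no matching ticket suggest unrequested, and possibly unauthorized, maintenance. The record fields and identifiers in the sketch below are hypothetical.

# Hypothetical records: maintenance log entries and help desk tickets.
maintenance_log = [
    {"entry": "M-101", "ticket": "HD-5501", "action": "patched web server"},
    {"entry": "M-102", "ticket": None,      "action": "replaced disk"},
    {"entry": "M-103", "ticket": "HD-5533", "action": "firmware update"},
]
open_tickets = {"HD-5501", "HD-5533", "HD-5590"}

# Flag maintenance performed without a legitimate, matching request.
for entry in maintenance_log:
    if entry["ticket"] not in open_tickets:
        print(f"REVIEW: {entry['entry']} ({entry['action']}) has no matching ticket")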
Areas for review include the following parts of the Maintenance and Support program of the agency:
Nonlocal maintenance = remote access/maintenance:
FIPS-201-1 Common Identification – Personal Identity Verification (PIV; IA)
SP 800-63 e-authentication (IA)
FIPS-197 Advanced Encryption Standard (systems and communications protection (SC))
FIPS-140-2 Cryptography Standard
SP 800-88 Media Sanitization (MP)
Planning for failure of equipment:
Mean time between failures (MTBF)
Mean time to repair (MTTR)
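These two figures combine into the familiar steady-state availability estimate, A = MTBF / (MTBF + MTTR), which an assessor can use to sanity-check whether maintenance planning supports the availability the system owner claims. The component values in the worked sketch below are illustrative.

def availability(mtbf_hrs: float, mttr_hrs: float) -> float:
    """Steady-state availability: the fraction of time the component is up."""
    return mtbf_hrs / (mtbf_hrs + mttr_hrs)

# Example: a component that fails every 2,000 hours on average and takes
# 4 hours to repair is up about 99.8% of the time.
print(f"{availability(2000, 4):.4%}")  # 99.8004%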

Encryption Standards for Use and Review in Federal Systems

FIPS-140-1 was developed by a government and industry working group composed of both operators and vendors. The working group identified requirements for four security levels for cryptographic modules to provide for a wide spectrum of data sensitivity (e.g., low-value administrative data, million dollar funds transfers, and life-protecting data) and a diversity of application environments (e.g., a guarded facility, an office, and a completely unprotected location). Four security levels are specified for each of 11 requirement areas. Each security level offers an increase in security over the preceding level. These four increasing levels of security allow cost-effective solutions that are appropriate for different degrees of data sensitivity and different application environments. FIPS-140-2 incorporates changes in applicable standards and technology since the development of FIPS-140-1 as well as changes that are based on comments received from the vendor, laboratory, and user communities. The basic level guidance from the FIPS is provided as follows:
1. Security Level 1: Security Level 1 provides the lowest level of security. Basic security requirements are specified for a cryptographic module (e.g., at least one approved algorithm or approved security function shall be used). No specific physical security mechanisms are required in a Security Level 1 cryptographic module beyond the basic requirement for production-grade components. An example of a Security Level 1 cryptographic module is a personal computer (PC) encryption board.
Security Level 1 allows the software and firmware components of a cryptographic module to be executed on a general-purpose computing system using an unevaluated operating system. Such implementations may be appropriate for some low-level security applications when other controls, such as physical security, network security, and administrative procedures, are limited or nonexistent. The implementation of cryptographic software may be more cost-effective than corresponding hardware-based mechanisms, enabling organizations to select from alternative cryptographic solutions to meet lower-level security requirements.
2. Security Level 2: Security Level 2 enhances the physical security mechanisms of a Security Level 1 cryptographic module by adding the requirement for tamper-evidence, which includes the use of tamper-evident coatings or seals or for pick-resistant locks on removable covers or doors of the module. Tamper-evident coatings or seals are placed on a cryptographic module so that the coating or seal must be broken to attain physical access to the plaintext cryptographic keys and critical security parameters (CSPs) within the module. Tamper-evident seals or pick-resistant locks are placed on covers or doors to protect against unauthorized physical access.

Security Level 2 requires, at a minimum, role-based authentication in which a cryptographic module authenticates the authorization of an operator to assume a specific role and perform a corresponding set of services.

Security Level 2 allows the software and firmware components of a cryptographic module to be executed on a general-purpose computing system using an operating system that:

a. Meets the functional requirements specified in the Common Criteria (CC) Protection Profiles (PPs)
b. Is evaluated at the CC evaluation assurance level EAL2 (or higher)

An equivalent evaluated trusted operating system may be used. A trusted operating system provides a level of trust so that cryptographic modules executing on general-purpose computing platforms are comparable to cryptographic modules implemented using dedicated hardware systems.

3. Security Level 3: In addition to the tamper-evident physical security mechanisms required at Security Level 2, Security Level 3 attempts to prevent the intruder from gaining access to CSPs held within the cryptographic module. Physical security mechanisms required at Security Level 3 are intended to have a high probability of detecting and responding to attempts at physical access, use, or modification of the cryptographic module. The physical security mechanisms may include the use of strong enclosures and tamper detection/response circuitry that zeroizes all plaintext CSPs when the removable covers/doors of the cryptographic module are opened.

Security Level 3 requires identity-based authentication mechanisms, enhancing the security provided by the role-based authentication mechanisms specified for Security Level 2. A cryptographic module authenticates the identity of an operator and verifies that the identified operator is authorized to assume a specific role and perform a corresponding set of services.

Security Level 3 requires the entry or output of plaintext CSPs (including the entry or output of plaintext CSPs using split knowledge procedures) be performed using ports that are physically separated from other ports, or interfaces that are logically separated using a trusted path from other interfaces. Plaintext CSPs may be entered into or output from the cryptographic module in encrypted form (in which case they may travel through enclosing or intervening systems).

Security Level 3 allows the software and firmware components of a cryptographic module to be executed on a general-purpose computing system using an operating system that:

a. Meets the functional requirements specified in the PPs listed in Annex B with the additional functional requirement of a trusted path (FTP_TRP.1)
b. Is evaluated at the CC evaluation assurance level EAL3 (or higher) with the additional assurance requirement of an informal Target of Evaluation (TOE) Security Policy Model (ADV_SPM.1)

An equivalent evaluated trusted operating system may be used. The implementation of a trusted path protects plaintext CSPs and the software and firmware components of the cryptographic module from other untrusted software or firmware that may be executing on the system.

4. Security Level 4: Security Level 4 provides the highest level of security defined in this standard. At this security level, the physical security mechanisms provide a complete envelope of protection around the cryptographic module with the intent of detecting and responding to all unauthorized attempts at physical access. Penetration of the cryptographic module enclosure from any direction has a very high probability of being detected, resulting in the immediate zeroization of all plaintext CSPs. Security Level 4 cryptographic modules are useful for operation in physically unprotected environments.

Security Level 4 also protects a cryptographic module against a security compromise due to environmental conditions or fluctuations outside of the module’s normal operating ranges for voltage and temperature. Intentional excursions beyond the normal operating ranges may be used by an attacker to thwart a cryptographic module’s defenses. A cryptographic module is required either to include special environmental protection features designed to detect fluctuations and zeroize CSPs or to undergo rigorous environmental failure testing to provide a reasonable assurance that the module will not be affected by fluctuations outside of the normal operating range in a manner that can compromise the security of the module.

Security Level 4 allows the software and firmware components of a cryptographic module to be executed on a general-purpose computing system using an operating system that:

a. Meets the functional requirements specified for Security Level 3
b. Is evaluated at the CC evaluation assurance level EAL4 (or higher)

An equivalent evaluated trusted operating system may be used.

5. Advanced Encryption Standard: FIPS-197, Advanced Encryption Standard (AES), specifies the Rijndael algorithm, a symmetric block cipher that can process data blocks of 128 bits, using cipher keys with lengths of 128, 192, and 256 bits.
The algorithm may be used with the three different key lengths indicated above, and therefore these different “flavors” may be referred to as “AES-128,” “AES-192,” and “AES-256.”
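To make these flavors concrete, the following minimal sketch encrypts and decrypts a short record with AES-256 in GCM mode (one approved AES mode), assuming the third-party pyca/cryptography package is installed; the package choice and the sample data are assumptions, not anything mandated by FIPS-197:

import os
# Third-party dependency (assumed): pip install cryptography
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # a 256-bit key selects "AES-256"
nonce = os.urandom(12)                      # 96-bit nonce, unique per message
aesgcm = AESGCM(key)

ciphertext = aesgcm.encrypt(nonce, b"funds transfer record", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"funds transfer record"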

Media Protection

SP 800-88 – sanitization
SP 800-111 – storage encryption

Media Sanitization

The information security concern regarding information disposal and media sanitization resides not in the media but in the recorded information. The issue of media disposal and sanitization is driven by the information placed intentionally or unintentionally on the media.
Information systems capture, process, and store information using a wide variety of media. This information is located not only on the intended storage media but also on devices used to create, process, or transmit this information. These media may require special disposition in order to mitigate the risk of unauthorized disclosure of information and to ensure its confidentiality. Efficient and effective management of information that is created, processed, and stored by an IT system throughout its life, from inception to disposition, is a primary concern of an information system owner and the custodian of the data.
With the use of increasingly sophisticated encryption, an attacker wishing to gain access to an organization’s sensitive information is forced to look outside the system itself for that information. One avenue of attack is the recovery of supposedly deleted data from media. These residual data may allow unauthorized individuals to reconstruct data and thereby gain access to sensitive information. Sanitization can be used to thwart this attack by ensuring that deleted data cannot be easily recovered.
When storage media are transferred, become obsolete, or are no longer usable or required by an information system, it is important to ensure that residual magnetic, optical, electrical, or other representation of data that has been deleted is not easily recoverable. Sanitization refers to the general process of removing data from storage media, such that there is reasonable assurance that the data may not be easily retrieved and reconstructed.
Information disposition and sanitization decisions occur throughout the system life cycle. Critical factors affecting information disposition and media sanitization are decided at the start of a system’s development. The initial system requirements should include hardware and software specifications as well as interconnections and data flow documents that will assist the system owner in identifying the types of media used in the system.

Types of Media

1. Hard copy: Hard copy media are physical representations of information. Paper printouts, printer and facsimile ribbons, drums, and platens are all examples of hard copy media. These types of media are often the most uncontrolled. Information tossed into recycle bins and trash containers exposes a significant vulnerability to “dumpster divers” and overcurious employees, risking accidental disclosure.
2. Electronic (or soft copy): Electronic media are the bits and bytes contained in hard drives, random access memory (RAM), read-only memory (ROM), disks, memory devices, phones, mobile computing devices, networking equipment, and many other types.
There are different types of sanitization for each type of media. Media sanitization is divided into four categories in NIST SP 800-88: disposal, clearing, purging, and destroying.
1. Disposal is the act of discarding media with no other sanitization considerations. This is most often done with the paper recycling of media containing nonconfidential information, but may also include other media.
2. Clearing information is a level of media sanitization that would protect the confidentiality of information against a robust keyboard attack. Simple deletion of items would not suffice for clearing. Clearing must not allow information to be retrieved by data, disk, or file recovery utilities. It must be resistant to keystroke recovery attempts executed from standard input devices and from data scavenging tools. For example, overwriting is an acceptable method for clearing media; a minimal overwrite sketch follows this list.
3. Purging information is a media sanitization process that protects the confidentiality of information against a laboratory attack. For some media, clearing media would not suffice for purging. However, for Advanced Technology Attachment (ATA) disk drives manufactured after 2001 (over 15 GB) the terms clearing and purging have converged.
4. Destruction of media is the ultimate form of sanitization. After media are destroyed, they cannot be reused as originally intended. Physical destruction can be accomplished using a variety of methods, including disintegration, incineration, pulverizing, shredding, and melting.
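As a concrete illustration of the clearing category, the following minimal Python sketch performs a single-pass zero overwrite of a file before deletion. It is a conceptual sketch only (the file name is hypothetical); real sanitization must follow SP 800-88 guidance, since wear leveling, journaling, and bad-sector remapping can leave data that a file-level overwrite never reaches:

import os

def clear_file(path: str) -> None:
    # Overwrite every byte of the file with zeros, then delete it.
    length = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * length)
        f.flush()
        os.fsync(f.fileno())   # push the overwrite to the storage medium
    os.remove(path)

# clear_file("draft_contract.txt")   # hypothetical file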

Sanitization and Disposition Decision Flow

(Figure: sanitization and disposition decision flow)
Organizations make sanitization decisions that are commensurate with the security categorization of the confidentiality of information contained on their media. The decision process is based on the confidentiality of the information, not the type of media. Once organizations decide what type of sanitization is best for their individual case, the media type will influence the technique used to achieve this sanitization goal.

Storage Encryption Technologies

SP 800-111 provides a high-level overview of the most commonly used options for encrypting stored information: full disk encryption (FDE), volume and virtual disk encryption, and file/folder encryption. It briefly defines each option and explains at a high level how it works.
1. FDE, also known as whole disk encryption, is the process of encrypting all the data on the hard drive used to boot a computer, including the computer’s OS, and permitting access to the data only after successful authentication to the FDE product. Most FDE products are software-based.
FDE software works by redirecting a computer’s master boot record (MBR), which is a reserved sector on bootable media that determines which software (e.g., OS, utility) will be executed when the computer boots from the media. Before FDE software is installed onto a computer, the MBR usually points to the computer’s primary OS. When FDE software is being used, the computer’s MBR is redirected to a special preboot environment (PBE) that controls access to the computer.
FDE software is most commonly used on desktop and laptop computers. The requirement for preboot authentication means that users have to be able to authenticate using the most fundamental components of a device, such as a standard keyboard – because the OS is not loaded, OS-level drivers are unavailable. For example, a personal digital assistant (PDA) or smart phone could not display a keyboard on the screen for entering a password because that is an OS-level capability.
2. Virtual disk encryption is the process of encrypting a file called a container, which can hold many files and folders, and permitting access to the data within the container only after proper authentication is provided, at which point the container is typically mounted as a virtual disk. Virtual disk encryption is used on all types of end user device storage. The container is a single file that resides within a logical volume. Examples of volumes are boot, system, and data volumes on a PC, and a Universal Serial Bus (USB) flash drive formatted with a single file system.
The characteristics of the four options, as summarized in SP 800-111, compare as follows:
Full disk encryption:
Typical platforms supported: desktop and laptop computers
Data protected by encryption: all data on the media (data files, system files, residual data, and metadata)
Mitigates threats involving loss or theft of devices: yes
Mitigates OS and application layer threats (such as malware and insider threats): no
Potential impact in case of solution failure: loss of all data and device functionality
Portability of encrypted information: not portable
Volume encryption:
Typical platforms supported: desktop and laptop computers; volume-based removable media (e.g., USB flash drives)
Data protected by encryption: all data in the volume (data files, system files, residual data, and metadata)
Mitigates threats involving loss or theft of devices: yes
Mitigates OS and application layer threats: sometimes, if the data volume is being protected; if it is not, there is no mitigation of these threats
Potential impact in case of solution failure: loss of all data in the volume; can cause loss of device functionality, depending on which volume is being protected
Portability of encrypted information: not portable
Virtual disk encryption:
Typical platforms supported: all types of end user devices
Data protected by encryption: all data in the container (data files, residual data, and metadata, but not system files)
Mitigates threats involving loss or theft of devices: yes
Mitigates OS and application layer threats: sometimes
Potential impact in case of solution failure: loss of all data in the container
Portability of encrypted information: portable
File/folder encryption:
Typical platforms supported: all types of end user devices
Data protected by encryption: individual files/folders (data files only)
Mitigates threats involving loss or theft of devices: yes
Mitigates OS and application layer threats: sometimes
Potential impact in case of solution failure: loss of all protected files/folders
Portability of encrypted information: often portable

3. Volume encryption is the process of encrypting an entire logical volume and permitting access to the data on the volume only after proper authentication is provided. It is most often performed on hard drive data volumes and volume-based removable media, such as USB flash drives and external hard drives.
The key difference between volume and virtual disk encryption is that containers are portable and volumes are not – a container can be copied from one medium to another, with encryption intact. This allows containers to be burned to CDs and DVDs and to be used on other media that are not volume-based. Virtual disk encryption also makes it trivial to back up sensitive data; the container is simply copied to the backup server or media. Another advantage of virtual disk encryption over volume encryption is that virtual disk encryption can be used in situations where volume-based removable media needs to have both protected and unprotected storage; the volume can be left unprotected and a container placed onto the volume for the sensitive information.
4. File encryption is the process of encrypting individual files on a storage medium and permitting access to the encrypted data only after proper authentication is provided. Folder encryption is very similar to file encryption, only it addresses individual folders instead of files. Some OSs offer built-in file and/or folder encryption capabilities and many third-party programs are also available. Although folder encryption and virtual disk encryption sound similar – both a folder and a container are intended to contain and protect multiple files – there is a difference. A container is a single opaque file, meaning that no one can see what files or folders are inside the container until the container is decrypted. File/folder encryption is transparent, meaning that anyone with access to the file system can view the names and possibly other metadata for the encrypted files and folders, including files and folders within encrypted folders, if they are not protected through OS AC features. File/folder encryption is used on all types of storage for end user devices.
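As an illustration of file encryption, the following minimal sketch encrypts a single file with Fernet (symmetric, authenticated encryption) from the assumed third-party pyca/cryptography package; the file names are hypothetical. Note how the encrypted output is a separate file whose name remains visible to anyone with file system access, which is exactly the transparency property described above:

from cryptography.fernet import Fernet   # assumed dependency

key = Fernet.generate_key()   # must be stored separately and protected
fernet = Fernet(key)

with open("payroll.csv", "rb") as src:        # hypothetical plaintext file
    token = fernet.encrypt(src.read())

with open("payroll.csv.enc", "wb") as dst:    # encrypted copy
    dst.write(token)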

Physical security

Physical security reviews are usually conducted by the assessor via “security walk-throughs,” which are inspections of the facilities and their various components. These walk-throughs are just that: walking through the facility looking at the various equipment, configurations, electrical panels, HVAC systems, generators, fire suppression systems, and physical access controls on doors and rooms. The primary areas for physical ACs to review and inspect include:
Badges
Memory cards
Guards
Keys
True-floor-to-true-ceiling wall construction, especially in data centers and controlled access rooms
Fences
Locks
The primary areas to review during inspections for fire safety and suppression systems include:
Building operation
Building occupancy
Fire detection equipment such as the various kinds of sensors
Fire extinguishment, including fire extinguishers and delivery mechanisms for rooms
Reviewing the physical security of the facilities also includes the supporting utilities and their delivery. This includes:
Air-conditioning system
Electric power distribution
Heating plants
Water
Sewage
Alternative power and its delivery to the facility
Some of the more critical areas to focus on include the positive flow of both air and water, such that the flow is out of the room rather than into it, and the point of delivery of the utilities to the facility: is it secure from tampering and inadvertent accidents?

Personnel security

The PS component is often overlooked and not reviewed in detail by assessors. This area has critical issues in today’s world with insider threats, lack of reviews for new or transferring employees, as well as dealing with the US government’s requirements for PIV credentials necessary for all users on government systems. Some of the documents and regulations which cover this area include:
SP 800-73, Interfaces for Personal Identity Verification
SP 800-76, Biometric Data Specification for Personal Identity Verification
SP 800-78, Cryptographic Algorithms and Key Sizes for Personal Identity Verification
5 CFR 731.106, Designation of Public Trust Positions and Investigative Requirements
ICD 704, Personnel Security Standards Sensitive Compartmented Information (SCI)
Proper information security practices should be in place to ensure that employees, contractors, and third-party users understand their responsibilities and are suitable for the roles they are considered for, and to reduce the risk of theft, fraud, or misuse of facilities. Specifically:
Security responsibilities should be addressed prior to employment in adequate job descriptions and in terms and conditions of employment.
All candidates for employment, contractors, and third-party users should be adequately screened, especially for sensitive jobs.
Employees, contractors, and third-party users of information processing facilities should sign an agreement on their security roles and responsibilities.
Security roles and responsibilities of employees, contractors, and third-party users should be defined and documented in accordance with the organization’s information security policy.
The basic staffing process is shown below. The assessor should ensure that the processes, procedures, and organizational policies provide the necessary guidance to the HR staff to accomplish these steps in a professional and secure manner throughout the recruitment, hiring, and employee life cycle, for each and every employee and contractor involved in the governmental support efforts for their agency.

Staffing

(Figure: the basic staffing process)
Areas of personnel coverage which the assessor should review include user administration topics such as:
User account management
Audit and management reviews
Detecting unauthorized/illegal activities
Temporary assignments and in-house transfers
Termination
Friendly termination
Unfriendly termination
Throughout the personnel process which is under review, the assessor should check on all of the user activities.
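One simple automated check in this area is cross-referencing active system accounts against the HR separation list to catch accounts that survived termination. A minimal Python sketch, assuming both inputs are hypothetical CSV exports the assessor requests from the system owner and HR:

import csv

def load_usernames(path: str, column: str = "username") -> set:
    # Read one column of a CSV export into a normalized set of names.
    with open(path, newline="") as f:
        return {row[column].strip().lower() for row in csv.DictReader(f)}

active = load_usernames("active_accounts.csv")    # assumed system export
separated = load_usernames("hr_separations.csv")  # assumed HR export

for user in sorted(active & separated):
    print(f"Finding: account '{user}' is still active after separation")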

System integrity

Integrity reviews often require an assessor to test a system capability, either by employing automated tools or through manual scripting efforts. These activities require the assessor to have knowledge and skills in scripting languages and manual test development. Using an automated tool requires the assessor to have experience with the tool and with the results expected from it. One of the principal automated techniques is vulnerability scanning, which can often review the system and determine whether there are configuration errors or misaligned areas of code within the application or the operating system. This includes patches for systems as well as code issues.
Patching and flaw remediation are the areas of system integrity and system maintenance on which the assessor should focus first when testing and evaluating system integrity; a minimal scan sketch follows the list below. Other areas are found in the following Special Publications:
800-40 – Patching (RA family of controls)
800-45 – Email
800-83 – Malware
800-92 – Logs (audit and accounting (AU) family of controls)
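As a minimal example of the scanning approach mentioned above, the following Python sketch drives nmap's service-detection and vulnerability-script scan against a single host. It assumes nmap (with its bundled "vuln" NSE script category) is installed, uses a documentation-range address in place of the real target, and, as with all such testing, requires written authorization to run:

import subprocess

target = "192.0.2.10"   # illustrative address; substitute the authorized host
result = subprocess.run(
    ["nmap", "-sV", "--script", "vuln", target],
    capture_output=True, text=True, check=True,
)
print(result.stdout)   # review output and confirm findings manually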

Malware Incident Prevention and Handling

SP 800-83, Guide to Malware Incident Prevention and Handling, provides recommendations for improving an organization’s malware incident prevention measures. It also gives extensive recommendations for enhancing an organization’s existing IR capability so that it is better prepared to handle malware incidents, particularly widespread ones. The recommendations address several major forms of malware, including viruses, worms, Trojan horses, malicious mobile code, blended attacks, spyware tracking cookies, and attacker tools such as backdoors and rootkits. The recommendations encompass various transmission mechanisms, including network services (e.g., email, web browsing, file sharing) and removable media.
The basic structure of SP 800-83 addresses focal points of interest for the assessor, such as:
Malware categories
Malware incident prevention:
Policy
Awareness
Vulnerability mitigation
Threat mitigation
Malware IR

Malware Categories

Viruses:
Compiled viruses
Interpreted viruses
Virus obfuscation techniques
Worms
Trojan horses
Malicious mobile code
Blended attacks
Tracking cookies
Attacker tools:
Backdoors
Keystroke loggers
Rootkits
Web browser plug-ins
Email generators
Attacker toolkits
Non-malware threats:
Phishing
Virus hoaxes
The assessor should always examine this area of security closely, as attackers often use it as a major avenue for exploiting systems. I have often found that organizations only partially address flaws and remediation efforts, leaving large exposures available for attacks to succeed against.

Email Security – Spam

As the Special Publication, SP 800-45, states in the Executive Summary: “Electronic mail (email) is perhaps the most popularly used system for exchanging business information over the Internet (or any other computer network). At the most basic level, the email process can be divided into two principal components: (1) mail servers, which are hosts that deliver, forward, and store email; and (2) mail clients, which interface with users and allow users to read, compose, send, and store email. This document addresses the security issues of mail servers and mail clients, including Web-based access to mail.
Mail servers and user workstations running mail clients are frequently targeted by attackers. Because the computing and networking technologies that underlie email are ubiquitous and well-understood by many, attackers are able to develop attack methods to exploit security weaknesses. Mail servers are also targeted because they (and public Web servers) must communicate to some degree with untrusted third parties. Additionally, mail clients have been targeted as an effective means of inserting malware into machines and of propagating this code to other machines. As a result, mail servers, mail clients, and the network infrastructure that supports them must be protected.”21
Understanding the email system within the organization requires understanding of the various potential attack and exposure areas such as:
Because exchanging email with the outside world is a requirement for most organizations, email traffic is allowed through the organization’s network perimeter defenses. At a basic level, viruses and other types of malware may be distributed throughout an organization via email. Increasingly, however, attackers are getting more sophisticated and using email to deliver targeted zero-day attacks in an attempt to compromise users’ workstations within the organization’s internal network.
Given email’s nature as human-to-human communication, it can be used as a social engineering vehicle. Email can allow an attacker to exploit an organization’s users to gather information or to get the users to perform actions that further an attack.
Flaws in the mail server application may be used as the means of compromising the underlying server and hence the attached network. Examples of this unauthorized access include gaining access to files or folders that were not meant to be publicly accessible, and being able to execute commands and/or install software on the mail server.
Denial of service (DoS) attacks may be directed to the mail server or its support network infrastructure, denying or hindering valid users from using the mail server.
Sensitive information on the mail server may be read by unauthorized individuals or changed in an unauthorized manner.
Sensitive information transmitted unencrypted between mail server and client may be intercepted. All popular email communication standards default to sending usernames, passwords, and email messages unencrypted.
Information within email messages may be altered at some point between the sender and recipient.
Malicious entities may gain unauthorized access to resources elsewhere in the organization’s network via a successful attack on the mail server. For example, once the mail server is compromised, an attacker could retrieve users’ passwords, which may grant the attacker access to other hosts on the organization’s network.
Malicious entities may attack external organizations from a successful attack on a mail server host.
Misconfiguration may allow malicious entities to use the organization’s mail server to send email-based advertisements (i.e., spam).
Users may send inappropriate, proprietary, or other sensitive information via email. This could expose the organization to legal action.22
The areas for the assessor to review should, therefore, include the following configuration items:
Ensuring that spam cannot be sent from the mail servers they control (a minimal relay-probe sketch follows this list)
Implementing spam filtering for inbound messages
Blocking messages from known spam-sending servers
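For the first item, a minimal open-relay probe can be sketched with the standard library’s smtplib: connect to the mail server and see whether it accepts mail from one external address to another. The host and addresses below are hypothetical, and the probe should be run only against servers the assessor is authorized to test:

import smtplib

server = smtplib.SMTP("mail.example.org", 25, timeout=10)  # assumed host
try:
    server.ehlo()
    server.mail("probe@outside-a.example")                 # external sender
    code, reply = server.rcpt("probe@outside-b.example")   # external recipient
    if code == 250:
        print("External-to-external recipient accepted: likely open relay")
    else:
        print(f"Relay attempt refused ({code}): {reply!r}")
finally:
    server.quit()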
The assessor should examine the mail servers, clients, and the organization’s security architecture against the following focal points to ensure proper security for email systems:
1. Email message signing and encryption standards
2. Planning and management of mail servers
3. Securing the operating system underlying a mail server
4. Mail server application security
5. Email content filtering
6. Email-specific considerations in the deployment and configuration of network protection mechanisms, such as firewalls, routers, switches, and intrusion detection and intrusion prevention systems
7. Securing mail clients
8. Administering the mail server in a secure manner, including backups, security testing, and log reviews
Each area should be tested for configuration, compliance, and actual security actions when dealing with this very sensitive organizational support area of email.

Technical areas of consideration

The common way to validate and verify the technical controls is to employ automated testing tools and techniques, as found in the NIST SP 800-115 testing guide. There are many tools which can be utilized to evaluate the various technical components, equipment, and configurations used by the IT staff, network support staff, and the agency. The four basic areas of technical controls are as follows:
1. AC
2. AU
3. IA
4. SC
We will examine each area and the parts of each with technical focus from an assessment perspective as we review the technical components that make up these controls.

Access control

There are many NIST Special Publications for the various AC methodologies and implementations. Each one has a specific area of AC that it covers. Here are just some of the SPs available for review and reference as the controls are identified, implemented, and evaluated:
800-46 (Telework)
800-77 (Internet Protocol Security (IPSec))
800-113 (SSL)
800-114 (External Devices)
800-121 (Bluetooth)
800-48 (Legacy Wireless)
800-97 (802.11i Wireless)
800-124 (Cell Phones/PDA)
OMB M-06-16 (Remote Access)

Logical Access Controls

Logical ACs are the primary means of managing and protecting resources to reduce risks to a level acceptable to an organization. They are tools used for identification, authentication, authorization, and accountability: software components that enforce AC measures for systems, programs, processes, and information. Logical ACs can be embedded within operating systems, applications, add-on security packages, or database and telecommunication management systems. In applying management-designed policies and procedures for protecting information assets, the concept of AC relates to managing and controlling access to an organization’s information resources residing on host- and network-based computer systems. Assessors need to understand the relationship of logical ACs to management policies and procedures for information security. In doing so, assessors should be able to analyze and evaluate a logical AC’s effectiveness in accomplishing information security objectives.
Inadequate logical ACs increase an organization’s potential for losses resulting from exposures. These exposures can range from minor inconveniences up to a total shutdown of computer functions. Exposures that result from accidental or intentional exploitation of logical AC weaknesses include technical exposures and computer crime.
For assessors to effectively assess logical ACs within the system under review, they first need to gain a technical and organizational understanding of the organization’s IT environment. The purpose of this is to determine which areas from a risk standpoint warrant special attention in planning current and future work. This includes reviewing all security layers associated with the organization’s IT information system architecture.
These layers are as follows:
Network layer
Operating system platform layer
Database layer
Application layer

Paths of Logical Access

Access or points of entry to an organization’s information system infrastructure can be gained through several avenues. Each avenue is subject to appropriate levels of access security. For example, paths of logical access often relate to different levels occurring from either a back-end or a front-end interconnected network of systems, for internally or externally based users. Front-end systems are network-based systems connecting an organization to outside untrusted networks, such as corporate websites, where a customer can access the website externally to initiate transactions that connect to a proxy server application, which in turn connects, for example, to a back-end database system to update a customer database. Front-end systems can also be internally based, automating paperless business processes that tie into back-end systems in a similar manner.

General Points of Entry

General points of entry to either front-end or back-end systems relate to an organization’s networking or telecommunications infrastructure in controlling access into their information resources (e.g., applications, databases, facilities, networks). The approach followed is based on a client–server model where, for example, a large organization can literally have thousands of interconnected network servers. Connectivity in this environment needs to be controlled through a smaller set of primary domain controlling servers, which enable a user to obtain access to specific secondary points of entry (e.g., application servers, databases).
General modes of access into this infrastructure occur through the following:
Network connectivity
Remote access
Operator console
Online workstations or terminals

Logical Access Control Software

IT has made it possible for computer systems to store and contain large quantities of sensitive data, increase the capability of sharing resources from one system to another, and permit many users to access the system through internet/intranet technologies. All of these factors have made organizations’ information system resources more accessible and available anytime and anywhere.
To protect an organization’s information resources, AC software has become even more critical in assuring the confidentiality, integrity, and availability of information resources. The purpose of AC software is to prevent unauthorized access and modification to an organization’s sensitive data and use of system critical functions.
To achieve this level of control, it is necessary to apply ACs across all layers of an organization’s information system architecture. This includes networks, platforms or operating systems, databases, and application systems. Attributes across each layer commonly include some form of IA, access authorization checking for specific information resources, and logging and reporting of user activities.
The greatest degree of protection in applying AC software is at the network and platform/operating system levels. These layers provide the greatest degree of protection of information resources from internal and external users’ unauthorized access. These systems are also referred to as general support systems, and they make up the primary infrastructure on which applications and database systems will reside.
Operating system AC software interfaces with other system software AC programs, such as network layer devices (e.g., routers, firewalls), that manage and control external access to organizations’ networks. Additionally, operating system AC software interfaces with database and/or application system ACs to protect system libraries and user datasets.

Logical Access Control Software Functionality

1. General operating system AC functions include:
a. Apply user IA mechanisms.
b. Restrict log-on IDs to specific terminals/workstations and specific times.
c. Establish rules for access to specific information resources (e.g., system-level application resources and data).
d. Create individual accountability and auditability.
e. Create or change user profiles.
f. Log events.
g. Log user activities.
h. Provide reporting capabilities.
2. Database and/or application-level AC functions include:
a. Create or change data files and database profiles.
b. Verify user authorization at the application and transaction levels.
c. Verify user authorization within the application.
d. Verify user authorization at the field level for changes within a database.
e. Verify subsystem authorization for the user at the file level.
f. Log database/data communications access activities for monitoring access violations.
Assessing ACs and AC systems:
Start with obtaining a general understanding of the security risks facing information processing, through a review of relevant documentation, inquiry, observation, and risk assessment and evaluation techniques.
Document and evaluate controls over potential access paths into the system to assess their adequacy, efficiency, and effectiveness by reviewing appropriate hardware and software security features and identifying any deficiencies or redundancies.
Test controls over access paths to determine whether they are functioning and effective by applying appropriate testing techniques.
Evaluate the AC environment to determine if the control requirements are achieved by analyzing test results and other evidence.
Evaluate the security environment to assess its adequacy by reviewing written policies, observing practices and procedures, and comparing them with appropriate security standards or with practices and procedures used by other organizations.
Familiarization with the IT environment:
This is the first step of the evaluation and involves obtaining a clear understanding of the technical, managerial, and security environment of the information system processing facility. This typically includes interviews, physical walk-throughs, review of documents, and risk assessments, as mentioned above in the physical security control area.
Documenting the access paths:
The access path is the logical route an end user takes to access computerized information. This starts with a terminal/workstation and typically ends with the data being accessed. Along the way, numerous hardware and software components are encountered. The assessor should evaluate each component for proper implementation and proper physical and logical access security.
Interviewing systems personnel:
To control and maintain the various components of the access path, as well as the operating system and computer mainframe, technical experts often are required. These people can be a valuable source of information to the assessor when gaining an understanding of security. To determine who these people are, the assessor should interview the IS manager and review organizational charts and job descriptions. Key people include the security administrator, network control manager, and systems software manager.
Reviewing reports from AC software:
The reporting features of AC software provide the security administrator with the opportunity to monitor adherence to security policies. By reviewing a sample of security reports, the assessor can determine if enough information is provided to support an investigation and if the security administrator is performing an effective review of the report.
Reviewing Application Systems Operations Manual:
An Application Systems Manual should contain documentation on the programs that generally are used throughout a data processing installation to support the development, implementation, operations, and use of application systems. This manual should include information about which platform the application can run on, database management systems, compilers, interpreters, telecommunications monitors, and other applications that can run with the application.
Log-on IDs and passwords:
To test confidentiality, the assessor could attempt to guess the passwords of a sample of employees’ log-on IDs. This should be done discreetly to avoid upsetting employees. The assessor should tour end user and programmer work areas looking for passwords taped to the side of terminals or the inside of desk drawers, or located in card files. Another source of confidential information is the wastebasket; the assessor might consider going through office wastebaskets looking for confidential information and passwords. Users could be asked to give their password to the assessor; however, unless specifically authorized for a particular situation and supported by the security policy, no user should ever disclose his/her password.
Controls over production resources:
Computer ACs should extend beyond application data and transactions. There are numerous high-level utilities, macro or job control libraries, control libraries, and system software parameters for which AC should be particularly strong. Access to these libraries would provide the ability to bypass other ACs. The assessor should work with the system software analyst and operations manager to determine if access is on a need-to-know basis for all sensitive production resources. Working with the security administrator, the assessor should determine who can access these resources and what can be done with this access.
Logging and reporting of computer access violations:
To test the reporting of access violations, the assessor should attempt to access computer transactions or data for which access is not authorized. The attempts should be unsuccessful and identified on security reports. This test should be coordinated with the data owner and security administrator to avoid violation of security regulations.
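To verify that violations are in fact being recorded, the assessor can also tally failed log-on attempts straight from the system’s authentication log and compare the totals against the security reports. A minimal sketch, assuming a Linux host whose sshd failures land in /var/log/auth.log (both the path and the message format are assumptions about the system under test):

from collections import Counter

failures = Counter()
with open("/var/log/auth.log") as log:       # assumed log location
    for line in log:
        if "Failed password" in line:        # typical sshd failure text
            source = line.rsplit("from", 1)[-1].split()[0]
            failures[source] += 1

for ip, count in failures.most_common(10):
    print(f"{ip}: {count} failed log-on attempts")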
Follow up access violations:
To test the effectiveness and timeliness of the security administrator’s and data owner’s response to reported violation attempts, the assessor should select a sample of security reports and look for evidence of follow-up and investigation of access violations. If such evidence cannot be found, the assessor should conduct further interviews to determine why this situation exists.
Identification of methods for bypassing security and compensating controls:
This is a technical area of review. As a result, the assessor should work with the system software analyst, network manager, operations manager, and security administrator to determine ways to bypass security. This typically includes bypass label processing (BLP), special system maintenance log-on IDs, operating system exits, installation utilities, and I/O devices. Working with the security administrator, the assessor should determine who can access these resources and what can be done with this access. The assessor should determine if access is on a need-to-know/have basis or if compensating detective controls exist.
Review ACs and password administration:
Ensure password control is active for all accounts and users. Ensure password complexity and renewal requirements are enforced for all users and accounts. Ensure password criteria for elevated privilege accounts are more complex and longer than for standard user accounts as part of Separation of Duties review.
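A quick way to exercise such a review is to encode the documented password standard as a checkable rule. The minimum lengths and character classes below are illustrative assumptions; substitute the organization’s actual policy values:

import re

def meets_policy(password: str, privileged: bool = False) -> bool:
    # Assumed policy: longer minimum for elevated-privilege accounts,
    # plus at least one character from each of four classes.
    min_len = 15 if privileged else 12
    if len(password) < min_len:
        return False
    required = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    return all(re.search(pattern, password) for pattern in required)

print(meets_policy("Tr1cky-Passw0rd!"))            # True
print(meets_policy("short1!", privileged=True))    # False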
Restricting and monitoring access:
There should be restrictions on, and procedures for monitoring, access to computer features that bypass security. Generally, only system software programmers should have access to these features:
- BLP: BLP bypasses the computer reading of the file label. Since most AC rules are based on file names (labels), this can bypass access security.
- System exits: This system software feature permits the user to perform complex system maintenance, which may be tailored to a specific environment or company. System exits often exist outside of the computer security system and, thus, are not restricted or reported in their use.
- Special system log-on IDs: These log-on IDs often are provided with the computer by the vendor. The names can be determined easily because they are the same for all similar computer systems. Passwords should be changed immediately, on installation, to secure the systems.
Auditing remote access:
Remote use of information resources dramatically improves business productivity, but generates control issues and security concerns. In this regard, IS auditors should determine that all remote access capabilities used by an organization provide for effective security of the organization’s information resources. Remote access security controls should be documented and implemented for authorized users operating outside of the trusted network environment. In reviewing existing remote access architectures, IS auditors should assess remote access points (APs) of entry in addressing how many (known/unknown) exist and whether greater centralized control of remote APs is needed. IS auditors should also review APs for appropriate controls, such as in the use of virtual private networks (VPNs), authentication mechanisms, encryption, firewalls, and IDS.
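A first-pass inventory of remote APs can be as simple as probing a host for services commonly used for remote access and comparing what answers against the documented architecture. A minimal sketch (the ports chosen and the target address are illustrative, and probing requires authorization):

import socket

REMOTE_ACCESS_PORTS = {22: "SSH", 1723: "PPTP VPN", 3389: "RDP"}

def probe(host: str) -> None:
    # Attempt a TCP connection to each port and report its apparent state.
    for port, service in REMOTE_ACCESS_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(2)
            state = "open" if s.connect_ex((host, port)) == 0 else "closed/filtered"
            print(f"{host}:{port} ({service}) {state}")

probe("192.0.2.20")   # documentation address; substitute the authorized host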

Identification and authentication

IA is the process of proving one’s identity. It is the process by which the system obtains from a user his/her claimed identity and the credentials needed to authenticate this identity, and validates both pieces of information.