Chapter 3
IN THIS CHAPTER
Aligning security to the business
Understanding security governance principles and concepts
Recognizing legal, regulatory, compliance, and professional ethics issues
Documenting security policies, standards, procedures, and guidelines
Developing business continuity requirements
Implementing personnel security policies
Applying risk management concepts and threat modeling
Integrating security risk considerations
Establishing and monitoring security education, training, and awareness programs
The Security and Risk Management domain addresses many fundamental security concepts and principles, as well as compliance, ethics, governance, security policies and procedures, business continuity planning, risk management, and security education, training, and awareness. This domain represents 15 percent of the CISSP certification exam.
For the CISSP exam, you must fully understand and be able to apply security governance principles including:
In order for an information security program to be effective, it must be aligned with the organization’s mission, strategy, goals, and objectives; thus you must understand the differences and relationships between an organization’s mission statement, strategy, goals, and objectives. You also need to know how these elements can affect the organization’s information security policies and program. Proper alignment with the organization’s mission, strategy, goals, and objectives also helps to build business cases, secure budgets, and allocate resources for security program initiatives. With proper alignment, security projects and other activities are appropriately prioritized, and they fit better into organization policies, practices, and processes.
Corny heading, yes, but there’s a good chance you’re humming the Mission Impossible theme song now — mission accomplished!
An organization’s mission statement expresses its reason for existence. A good mission statement is an easily understood, general-purpose statement that says what the organization is, what it does, and why it exists.
An organization’s strategy describes how it accomplishes its mission and is frequently adapted to address new challenges and business realities.
A goal is something (or many somethings) that an organization hopes to accomplish. A goal should be consistent with the organization’s mission statement or philosophy, and it should help define a vision for the organization. It should also whip people into a wild frenzy, running around their offices, waving their arms in the air, and yelling “GOOOAAALLL!” (Well, maybe only if they’re World Cup fans.)
An objective is a milestone or a specific result that is expected and, as such, helps an organization attain its goals and achieve its mission.
Security personnel should be acutely aware of their organizations’ goals and objectives. Only then can security professionals ensure that security capabilities will work with and protect all the organization’s current, changing, and new products, services, and endeavors.
In this section, we discuss key processes in the realm of security governance.
Security management starts (or should start!) at the top with executive management and board-level oversight. This generally takes the form of security governance, which simply means that the organization’s governing body has set the direction and the organization has policies and processes in place to ensure that executive management is following that direction, is fully informed, and is in control of information security strategy, policy, and operations.
A governance committee is a group of executives and/or managers who regularly meet to review security incidents, projects, operational metrics, and other aspects of concern to them. The governance committee will occasionally issue mandates to security management about new business activities and shifts in priorities and strategic direction.
In practice, this is not much different from governance in IT or other departments. Governance is how executive management stays involved in the goings-on in IT, security, and other parts of the business.
Organizations, particularly in private industry, are continually reinventing themselves. More than ever before, it is important to be agile and competitive. As a result, organizations acquire other organizations, split themselves into two (or more) separate companies, and reorganize internally to change the alignment of teams, departments, divisions, and business units.
There are several security-related considerations that should be taken into account when an organization acquires another organization, or when two (or more) organizations merge:
If the security of one organization is vastly different from another, the organization should not be too hasty to connect the two organizations’ networks together.
Interestingly, when an organization divides itself into two (or more) separate organizations or sells off a division, the security implications can be even trickier than in a merger. Each new company probably will need to duplicate the security governance, management, controls, operations, and tools that the single organization had before the split. This doesn’t always mean that the two separate security functions need to be the same as the old one; it is important to fully understand the business mission of each new organization, and which security regulations and standards apply to it. Only then can information security align with each new organization.
The truism that information security is “everyone’s responsibility” too often plays out in practice as everyone being responsible but no one being accountable. To avoid this pitfall, specific roles and responsibilities for information security should be defined in an organization’s security policy, individual job or position descriptions, and third-party contracts. These roles and responsibilities should apply to employees, consultants, contractors, interns, and vendors. And they should apply to every level of staff, from C-level executives to line employees.
Senior-level management is often responsible for information security at several levels, including the role as an information owner, which we discuss in the following section. However, in this context, management has a responsibility to demonstrate a strong commitment to an organization’s information security program through the following actions:
An end-user (or user) includes just about everyone within an organization. Users aren’t specifically designated. They can be broadly defined as anyone who has authorized access to an organization’s internal information or information systems. Users include employees, contractors and other temporary help, consultants, vendors, customers, and anyone else with access. Some organizations call them employees, partners, associates, or what-have-you. Typical user responsibilities include
Organizations often adopt a control framework to aid in their legal and regulatory compliance efforts. Some examples of relevant security frameworks include
COBIT 5. Developed by ISACA (formerly known as the Information Systems Audit and Control Association) and the IT Governance Institute (ITGI), COBIT consists of several components, including:
The COBIT framework is popular in organizations that are subject to the Sarbanes-Oxley Act (SOX; discussed later in this chapter) or internal control over financial reporting (ICOFR) requirements.
Due care is the conduct that a reasonable person exercises in a given situation, which provides a standard for determining negligence. In the practice of information security, due care relates to the steps that individuals or organizations take to perform their duties and implement security best practices.
Another important aspect of due care is the principle of culpable negligence. If an organization fails to follow a standard of due care in the protection of its assets (or its personnel), the organization may be held culpably negligent. In such cases, jury awards may be adjusted accordingly, and the organization’s insurance company may be required to pay only a portion of any loss — the organization may get stuck paying the rest of the bill!
Due diligence is the prudent management and execution of due care. It’s most often used in legal and financial circles to describe the actions that an organization takes to research the viability and merits of an investment or merger/acquisition opportunity. In the context of information security, due diligence commonly refers to risk identification and risk management practices, not only in the day-to-day operations of an organization, but also in the case of technology procurement, as well as mergers and acquisitions.
The CIA triad (also referred to as ICA) forms the basis of information security (see Figure 3-1). The triad is composed of three fundamental information security concepts:
FIGURE 3-1: The C-I-A triad.
As with any triangular shape, all three sides depend on each other (think of a three-sided pyramid or a three-legged stool) to form a stable structure. If one piece falls apart, the whole thing falls apart.
Confidentiality limits access to information to subjects (users and machines) that require it. Privacy is a closely related concept that’s most often associated with personal data. Various U.S. and international laws exist to protect the privacy (confidentiality) of personal data.
Personal data most commonly refers to personally identifiable information (PII) or protected health information (PHI). PII includes names, addresses, Social Security numbers, contact information (in some cases), and financial or medical data. PHI consists of many of the same data elements as PII, but also includes an individual patient’s medical records and healthcare payment history. Personal data, in more comprehensive legal definitions (particularly in Europe), may also include race, marital status, sexual orientation or lifestyle, religious preference, political affiliations, and any number of other unique personal characteristics that may be collected or stored about an individual.
The objective of privacy is the confidentiality and proper handling of personal data.
Integrity safeguards the accuracy and completeness of information and processing methods. It ensures that
Availability ensures that authorized users have reliable and timely access to information, and associated systems and assets, when and where needed. Availability is easily one of the most overlooked aspects of information security. In addition to Denial of Service attacks, other threats to availability include single points of failure, inadequate capacity planning (for storage, bandwidth, and processing), equipment malfunctions, and business interruptions or disasters.
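To make the single-point-of-failure threat concrete, here is a minimal sketch showing how availability multiplies across serially dependent components and how redundancy recovers it. The uptime figures are illustrative assumptions, not from this chapter:

```python
# Illustrative sketch: how component availability combines.
# All uptime figures below are hypothetical examples.

def serial(*avail):
    """Availability of a chain in which ALL components must work;
    any single point of failure drags the whole system down."""
    result = 1.0
    for a in avail:
        result *= a
    return result

def parallel(*avail):
    """Availability of redundant components where ANY one suffices."""
    downtime = 1.0
    for a in avail:
        downtime *= (1.0 - a)
    return 1.0 - downtime

# A web app depending on a load balancer, a server, and a database,
# each individually at "three nines" (99.9%) availability:
chain = serial(0.999, 0.999, 0.999)      # ~0.997, worse than any single part

# Adding a redundant server dramatically improves that tier:
redundant_tier = parallel(0.999, 0.999)  # ~0.999999
improved = serial(0.999, redundant_tier, 0.999)
```

The point of the sketch: every additional serial dependency lowers overall availability, which is why eliminating single points of failure matters as much as defending against deliberate attacks.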
Compliance is composed of the set of activities undertaken by an organization in its attempts to abide by applicable laws, regulations, standards, and other legal obligations such as contract terms and conditions and service-level agreements (SLAs).
Because of the nature of compliance, and because there are many security- and privacy-related laws and standards, many organizations have adopted the fatally mistaken notion that being compliant with security regulations is the same thing as being secure. However, it is fair to say that being compliant with security regulations and standards is a step in the right direction on the journey to becoming secure. The nature of threats today makes it plain that even organizations that are fully compliant with applicable security laws, regulations, and standards may be woefully insecure.
A basic understanding of the major types and classifications of U.S. and international law, including key concepts and terms, is required for the CISSP exam.
Common law (also known as case law) originated in medieval England, and is derived from the decisions (or precedents) of judges. Common law is based on the doctrine of stare decisis (“let the decision stand”) and is often codified by statutes. Under the common law system of the United States, three major categories of laws are defined at the federal and state levels: criminal, civil (or tort), and administrative (or regulatory) laws.
Criminal law defines those crimes committed against society, even when the actual victim is a business or individual(s). Criminal laws are enacted to protect the general public. As such, in the eyes of the court, the victim is incidental to the greater cause.
Penalties under criminal law have two main purposes:
To be convicted under criminal law, a judge or jury must believe beyond a reasonable doubt that the defendant is guilty. Therefore, the burden of proof in a criminal case rests firmly with the prosecution.
Criminal law has two main classifications, depending on severity, such as type of crime/attack or total loss in dollars:
Civil (tort) law addresses wrongful acts committed against an individual or business, either willfully or negligently, resulting in damage, loss, injury, or death.
Unlike criminal penalties, civil penalties don’t include jail or prison terms. Instead, civil penalties provide financial restitution to the victim:
Judgments under civil law are typically easier to obtain than convictions under criminal law because the burden of proof is much less. To be found liable under civil law, a judge or jury must believe, based upon the preponderance of the evidence, that the defendant is liable. This simply means that the available evidence leads the judge or jury to a conclusion of liability.
The concepts of liability and due care are germane to civil law cases, but they’re also applicable under administrative law, which we discuss in the next section.
The standard criterion for assessing the legal requirement to implement recommended safeguards is to evaluate the cost of the safeguard against the estimated loss from the corresponding threat, if realized. If the cost is less than the estimated loss and the organization doesn’t implement the safeguard, then legal liability may exist. This is based on the principle of proximate causation, in which an action taken or not taken was part of a sequence of events that resulted in negative consequences.
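The cost-versus-loss test described above can be sketched in a few lines. The function name and dollar figures here are illustrative assumptions, not from the legal standard itself:

```python
# Hedged sketch of the safeguard cost-vs-loss liability test described above.
# Function name and figures are hypothetical examples.

def liability_may_exist(safeguard_cost, estimated_loss, implemented):
    """A safeguard that costs less than the loss it would prevent, yet
    was not implemented, may expose the organization to legal liability."""
    return safeguard_cost < estimated_loss and not implemented

# Example: a $50,000 safeguard against an estimated $400,000 loss.
print(liability_may_exist(50_000, 400_000, implemented=False))   # True
print(liability_may_exist(50_000, 400_000, implemented=True))    # False
# A safeguard costing more than the loss it prevents is not required:
print(liability_may_exist(500_000, 400_000, implemented=False))  # False
```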
Under the Federal Sentencing Guidelines, senior corporate officers may be personally liable if their organization fails to comply with applicable laws. Such individuals must follow the prudent man (or person) rule, which requires them to perform their duties:
Administrative (regulatory) laws define standards of performance and conduct for major industries (including banking, energy, and healthcare), organizations, and government agencies. These laws are typically enforced by various government agencies, and violations may result in financial penalties and/or imprisonment.
Given the global nature of the Internet, it’s often necessary for countries to cooperate in order to bring a computer criminal to justice. But because practically every country in the world has its own unique legal system, such cooperation is always difficult and often impossible. As a starting point, countries sometimes disagree on exactly what justice is. Other problems include
Besides common law systems (which we talk about in the section “Common law,” earlier in this chapter), other countries throughout the world use legal systems including:
Privacy and data protection laws are enacted to protect information collected and maintained on individuals from unauthorized disclosure or misuse. Privacy is one area in which the United States lags behind many others, particularly the European Union (EU), whose General Data Protection Regulation (GDPR) defines increasingly strict privacy rules and restricts the transfer of personal information to countries (including the United States) that don’t equally protect such information. The EU GDPR privacy rules include the following requirements about personal data and records:
Specific privacy and data protection laws are discussed later in this chapter.
CISSP candidates are expected to be familiar with the laws and regulations that are relevant to information security throughout the world and in various industries. This could include national laws, local laws, and any laws that pertain to the types of activities performed by organizations.
Computer crime consists of any criminal activity in which computer systems or networks are used as tools. Computer crime also includes crimes in which computer systems are targeted, or in which computers are the scene of the crime committed. That’s a pretty wide spectrum.
The real world, however, has difficulty dealing with computer crimes. Several reasons why computer crimes are hard to cope with include
Computer crimes are often difficult to prosecute for the reasons we just listed, and also because of the following issues:
Computer crimes are often classified under one of the following six major categories:
Industrial espionage: Businesses are increasingly the targets of industrial espionage. These attacks include competitive intelligence gathering, as well as theft of product specifications, plans, and schematics, and business information such as marketing and customer information. Businesses can be inviting targets for an attacker due to
The cost to businesses can be significant, including loss of trade secrets or proprietary information, loss of revenue, and loss of reputation when intrusions are made public.
“Fun” attacks: “Fun” attacks are perpetrated by thrill-seekers and script kiddies who are motivated by curiosity or excitement. Although these attackers may not intend to do any harm or use any of the information that they access, they’re still dangerous and their activities are still illegal.
These attacks can also be relatively easy to detect and prosecute. Because the perpetrators are often script kiddies (hackers who use scripts or programs written by other hackers because they don’t have programming skills themselves) or otherwise-inexperienced hackers, they may not know how to cover their tracks effectively.
Also, because no real harm is normally done nor intended against the system, it may be tempting (although ill-advised) for a business to prosecute the individual and put a positive public relations spin on the incident. You’ve seen the film at 11:00: “We quickly detected the attack, prevented any harm to our network, and prosecuted the responsible individual; our security is unbreakable!” Such action, however, will likely motivate others to launch a more serious and concerted grudge attack against the business.
Many computer criminals in this category only seek notoriety. Although it’s one thing to brag to a small circle of friends about defacing a public website, the wily hacker who appears on CNN reaches the next level of hacker celebrity-dom. These twisted individuals want to be caught to revel in their 15 minutes of fame.
Grudge attacks: Grudge attacks are targeted at individuals or businesses, and the attacker is motivated by a desire to take revenge against a person or organization. A disgruntled employee, for example, may steal trade secrets, delete valuable data, or plant a logic bomb in a critical system or application.
Fortunately, these attacks (at least in the case of a disgruntled employee) can be easier to prevent or prosecute than many other types of attacks because:
Important international computer crime and information security laws and standards that the CISSP candidate should be familiar with include
It is important to understand that cybersecurity and privacy laws change from time to time. The list of such laws in this book should not be considered complete or up to date. Instead, consider these a sampling of laws from the U.S. and elsewhere.
In 1984, the first U.S. federal computer crime law, the Counterfeit Access Device and Computer Fraud and Abuse Act of 1984, was passed. This act was narrowly defined and somewhat ambiguous. The law covered:
The U.S. Computer Fraud and Abuse Act of 1986 enhanced and strengthened the 1984 law, clarifying definitions of criminal fraud and abuse for federal computer crimes and removing obstacles to prosecution.
The Act established two new felony offenses for the unauthorized access of federal interest computers and a misdemeanor for unauthorized trafficking in computer passwords:
Felony 2: Altering, damaging, or destroying information in a federal interest computer or preventing authorized use of the computer or information, that causes an aggregate loss of $1,000 or more during a one-year period or potentially impairs medical treatment, shall be punishable as a felony [Subsection (a)(5)].
This provision was stricken in its entirety and replaced with a more general provision, which we discuss later in this section.
Several minor amendments to the U.S. Computer Fraud and Abuse Act were made in 1988, 1989, and 1990, and more significant amendments were made in 1994, 1996 (by the Economic Espionage Act of 1996), and 2001 (by the USA PATRIOT Act of 2001). The Act, in its present form, establishes eight specific computer crimes. In addition to the three that we discuss in the preceding list, these crimes include the following five provisions (we discuss subsection [a][5] in its current form in the following list):
In the section “USA PATRIOT Act of 2001,” later in this chapter, we discuss major amendments to the U.S. Computer Fraud and Abuse Act of 1986 (as amended) that Congress introduced in 2001.
The U.S. Computer Fraud and Abuse Act of 1986 is the major computer crime law currently in effect. The CISSP exam likely tests your knowledge of the Act in its original 1986 form, but you should also be prepared for revisions to the exam that may cover the more recent amendments to the Act.
The ECPA complements the U.S. Computer Fraud and Abuse Act of 1986 and prohibits eavesdropping, interception, or unauthorized monitoring of wire, oral, and electronic communications. However, the ECPA does provide specific statutory exceptions, allowing network providers to monitor their networks for legitimate business purposes if they notify the network users of the monitoring process.
The ECPA was amended extensively by the USA PATRIOT Act of 2001. These changes are discussed in the upcoming “USA PATRIOT Act of 2001” section.
The U.S. Electronic Communications Privacy Act (ECPA) provides the legal basis for network monitoring.
The U.S. Computer Security Act of 1987 requires federal agencies to take extra security measures to prevent unauthorized access to computers that hold sensitive information. In addition to identifying and developing security plans for sensitive systems, the Act requires those agencies to provide security-related awareness training for their employees. The Act also assigns formal government responsibility for computer security to the National Institute of Standards and Technology (NIST) for information security standards, in general, and to the National Security Agency (NSA) for cryptography in classified government/military systems and applications.
In November 1991, the United States Sentencing Commission published Chapter 8, “Federal Sentencing Guidelines for Organizations,” of the U.S. Federal Sentencing Guidelines. These guidelines establish written standards of conduct for organizations, provide relief in sentencing for organizations that have demonstrated due diligence, and place responsibility for due care on senior management officials with penalties for negligence, including fines of up to $290 million.
The U.S. Economic Espionage Act (EEA) of 1996 was enacted to curtail industrial espionage, particularly when such activity benefits a foreign entity. The EEA makes it a criminal offense to take, download, receive, or possess trade secret information that’s been obtained without the owner’s authorization. Penalties include fines of up to $10 million, up to 15 years in prison, and forfeiture of any property used to commit the crime. The EEA also enacted the 1996 amendments to the U.S. Computer Fraud and Abuse Act, which we talk about in the section “U.S. Computer Fraud and Abuse Act of 1986, 18 U.S.C. § 1030 (as amended),” earlier in this chapter.
The U.S. Child Pornography Prevention Act (CPPA) of 1996 was enacted to combat the use of computer technology to produce and distribute pornography involving children, including adults portraying children.
Following the terrorist attacks against the United States on September 11, 2001, the USA PATRIOT Act of 2001 (Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act) was enacted in October 2001 and renewed in March 2006. (Many provisions originally set to expire have since been made permanent under the renewed Act.) This Act takes great strides to strengthen and amend existing computer crime laws, including the U.S. Computer Fraud and Abuse Act and the U.S. Electronic Communications Privacy Act (ECPA), as well as to empower U.S. law enforcement agencies, if only temporarily. U.S. federal courts have subsequently declared some of the Act’s provisions unconstitutional. The sections of the Act that are relevant to the CISSP exam include
Section 214 — Pen Register and Trap and Trace Authority under FISA (Foreign Intelligence Surveillance Act): Clarifies law enforcement authority to trace communications on the Internet and other computer networks, and it authorizes the use of a pen/trap device nationwide, instead of limiting it to the jurisdiction of the court.
A pen/trap device refers to a pen register that shows outgoing numbers called from a phone and a trap and trace device that shows incoming numbers that called a phone. Pen registers and trap and trace devices are collectively referred to as pen/trap devices because most technologies allow the same device to perform both types of traces (incoming and outgoing numbers).
In the wake of several major corporate and accounting scandals, SOX was passed in 2002 to restore public trust in publicly held corporations and public accounting firms by establishing new standards and strengthening existing standards for these entities including auditing, governance, and financial disclosures.
SOX established the Public Company Accounting Oversight Board (PCAOB), which is a private-sector, nonprofit corporation responsible for overseeing auditors in the implementation of SOX. PCAOB’s Auditing Standard No. 2 recognizes the role of information technology as it relates to a company’s internal controls and financial reporting. The Standard identifies the responsibility of Chief Information Officers (CIOs) for the security of information systems that process and store financial data, and it has many implications for information technology security and governance.
This law consolidated 22 U.S. government agencies to form the Department of Homeland Security (DHS). The law also provided for the creation of a privacy official to enforce the Privacy Act of 1974.
FISMA extended the Computer Security Act of 1987 by requiring regular audits of both U.S. government information systems and organizations providing information services to the U.S. federal government.
The U.S. CAN-SPAM Act (Controlling the Assault of Non-Solicited Pornography and Marketing Act) establishes standards for sending commercial e-mail messages, charges the U.S. Federal Trade Commission (FTC) with enforcement of the provision, and provides penalties that include fines and imprisonment for violations of the Act.
This law updated earlier U.S. laws on identity theft.
In 1995, the European Parliament ratified this essential legislation that protects personal information for all European citizens. The directive states that personal data should not be processed at all, except when certain conditions are met.
A legitimate concern about the disposition of European citizens’ personal data when it leaves computer systems in Europe and enters computer systems in the U.S. led to the creation of the Safe Harbor program (discussed in the following section).
In an agreement between the European Union and the U.S. Department of Commerce in 1998, the U.S. Department of Commerce developed a certification program called Safe Harbor. This permits U.S.-based organizations to certify themselves as properly handling private data belonging to European citizens.
This law facilitates the sharing of intelligence information among various U.S. government agencies, while also providing protections for privacy and civil liberties.
The Convention on Cybercrime is an international treaty, currently signed by more than 40 countries (the U.S. ratified the treaty in 2006), requiring criminal laws to be established in signatory nations for computer hacking activities, child pornography, and intellectual property violations. The treaty also attempts to improve international cooperation with respect to monitoring, investigations, and prosecution.
The Computer Misuse Act 1990 (U.K.) defines three criminal offenses related to computer crime: unauthorized access (whether successful or unsuccessful), unauthorized modification, and hindering authorized access (Denial of Service).
Similar to U.S. “do not call” laws, this law makes it illegal to use equipment to make automated telephone calls that play recorded messages.
This law modernized computer crime statutes, defining offenses such as data theft, creation and spreading of malware, identity theft, pornography, child pornography, and cyber terrorism. This law also validated electronic contracts and electronic signatures.
The Cybercrime Act 2001 (Australia) establishes criminal penalties, including fines and imprisonment, for people who commit computer crimes (including unauthorized access, unauthorized modification, or Denial of Service) with intent to commit a serious offense.
Although not (yet) a legal mandate, the Payment Card Industry Data Security Standard (PCI DSS) is one example of an industry initiative for mandating and enforcing security standards. PCI DSS applies to any business worldwide that transmits, processes, or stores payment card (meaning credit card) transactions to conduct business with customers — whether that business handles thousands of credit card transactions a day or a single transaction a year. Compliance is mandated and enforced by the payment card brands (American Express, MasterCard, Visa, and so on) and each payment card brand manages its own compliance program.
PCI DSS requires organizations to submit an annual assessment and network scan, or to complete onsite PCI data security assessments and quarterly network scans. The actual requirements depend on the number of payment card transactions handled by an organization and other factors, such as previous data loss incidents.
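The tiered validation scheme above can be sketched as a simple decision function. The transaction thresholds and tier rules below are illustrative only; each payment card brand publishes and manages its own merchant-level definitions:

```python
# Illustrative sketch of tiered PCI DSS validation requirements.
# Thresholds and tier rules are hypothetical examples, not any
# card brand's actual merchant-level definitions.

def validation_requirements(annual_transactions, prior_breach=False):
    """Higher transaction volume (or a prior data loss incident)
    triggers the more rigorous onsite assessment."""
    if annual_transactions > 6_000_000 or prior_breach:
        return "Onsite annual assessment + quarterly network scans"
    return "Annual self-assessment + quarterly network scans"

print(validation_requirements(10_000_000))
print(validation_requirements(5_000, prior_breach=True))  # breach escalates the tier
print(validation_requirements(5_000))
```

The design point to notice is that compliance obligations scale with exposure: a merchant handling a handful of transactions still has obligations, but the validation burden grows with volume and incident history.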
PCI DSS version 3.2 consists of six core principles, supported by 12 accompanying requirements, and more than 200 specific procedures for compliance. These include
Penalties for non-compliance are levied by the payment card brands and include not being allowed to process credit card transactions, fines up to $25,000 per month for minor violations, and fines up to $500,000 for violations that result in actual lost or stolen financial data.
Given the difficulties in defining and prosecuting computer crimes, many prosecutors seek to convict computer criminals on more traditional criminal statutes, such as theft, fraud, extortion, and embezzlement. Intellectual property rights and privacy laws, in addition to specific computer crime laws, also exist to protect the general public and assist prosecutors.
Intellectual property is protected by U.S. law under one of four classifications:
Intellectual property rights worldwide are agreed upon, defined, and enforced by various organizations and treaties, including the World Intellectual Property Organization (WIPO), World Customs Organization (WCO), World Trade Organization (WTO), United Nations Commission on International Trade Law (UNCITRAL), European Union (EU), and Trade-Related Aspects of Intellectual Property Rights (TRIPs).
Licensing violations are among the most prevalent examples of intellectual property rights infringement. Other examples include plagiarism, software piracy, and corporate espionage.
Digital rights management (DRM) attempts to protect intellectual property rights by using access control technologies to prevent unauthorized copying or distribution of protected digital media.
A patent, as defined by the U.S. Patent and Trademark Office (PTO), is “the grant of a property right to the inventor.” A patent grant confers upon the owner (either an individual or a company) “the right to exclude others from making, using, offering for sale, selling, or importing the invention.” In order to qualify for a patent, an invention must be novel, useful, and not obvious. An invention must also be tangible — an idea cannot be patented. Examples of computer-related objects that may be protected by patents include computer hardware and firmware in physical devices.
A patent is granted by the U.S. PTO for an invention that has been sufficiently documented by the applicant and that has been verified as original by the PTO. A U.S. patent is generally valid for 20 years from the date of application and is effective only within the U.S., including territories and possessions. To receive patent protection in another country, an application must be filed with that country’s patent office. The owner of the patent may grant a license to others for use of the invention or its design, often for a fee.
U.S. patent (and trademark) laws and rules are covered in 35 U.S.C. and 37 C.F.R., respectively. The Patent Cooperation Treaty (PCT) provides some international protection for patents. More than 130 countries worldwide have adopted the PCT. Patent infringements are not prosecuted by the U.S. PTO. Instead, the holder of a patent must enforce their patent rights through the appropriate legal system.
A trademark, as defined by the U.S. PTO, is “any word, name, symbol, or device, or any combination, used, or intended to be used, in commerce to identify and distinguish the goods of one manufacturer or seller from goods manufactured or sold by others.” Computer-related objects that may be protected by trademarks include corporate brands and operating system logos. U.S. Public Law 105–330, the Trademark Law Treaty Implementation Act, provides some international protection for U.S. registered trademarks.
A copyright is a form of protection granted to the authors of “original works of authorship,” both published and unpublished. A copyright protects a tangible form of expression rather than the idea or subject matter itself. Under the original Copyright Act of 1909, publication was generally the key to obtaining a federal copyright. However, the Copyright Act of 1976 changed this requirement, and copyright protection now applies to any original work of authorship immediately, from the time that it’s created in a tangible form. Object code and documentation are examples of computer-related objects that may be protected by copyrights.
Copyrights can be registered through the Copyright Office of the Library of Congress, but a work doesn’t need to be registered to be protected by copyright. Copyright protection generally lasts for the lifetime of the author plus 70 years.
A trade secret is proprietary or business-related information that a company or individual uses and has exclusive rights to. To be considered a trade secret, the information must meet the following requirements:
Software source code and firmware code are examples of computer-related objects that an organization may protect as trade secrets.
International import and export controls exist between countries to protect both intellectual property rights and certain sensitive technologies (such as encryption).
Information security professionals need to be aware of relevant import/export controls for any countries in which their organization operates or to which their employees travel. For example, it is not uncommon for laptops to be searched, and possibly confiscated, at airports to enforce various import/export controls.
Related to import/export controls is the issue of trans-border data flow. As discussed earlier in this chapter, data privacy and breach disclosure laws vary greatly across different regions, countries, and U.S. states. Australia and European Union countries are two examples where data privacy regulations, in general, are far more stringent than in the U.S. Many countries restrict or completely forbid personal data of their citizens from leaving the country.
Issues of trans-border data flow, and data residency (where data is physically stored) are particularly germane for organizations operating in the public cloud. For these organizations, it is important to know — and have control over — where their data is stored. Issues of data residency and trans-border data flow should be addressed in any agreements or contracts with cloud service providers.
Privacy in the context of electronic information about citizens is not well understood by everyone. Simply put, privacy has two main components:
Several important pieces of privacy and data protection legislation include the Federal Privacy Act, the Health Insurance Portability and Accountability Act (HIPAA), the Health Information Technology for Economic and Clinical Health Act (HITECH), and the Gramm-Leach-Bliley Act (GLBA) in the United States, and the Data Protection Act (DPA) in the United Kingdom. Finally, the Payment Card Industry Data Security Standard (PCI DSS) is an example of an industry policing itself — without the need for government laws or regulations.
Several privacy-related laws that CISSP candidates should be familiar with include
The Federal Privacy Act of 1974 protects records and information maintained by U.S. government agencies about U.S. citizens and lawful permanent residents. Except under certain specific conditions, no agency may disclose any record about an individual “except pursuant to a written request by, or with the prior written consent of, the individual to whom the record pertains.” The Privacy Act also has provisions for access and amendment of an individual’s records by that individual, except in cases of “information compiled in reasonable anticipation of a civil action or proceeding.” The Privacy Act provides individual penalties for violations, including a misdemeanor charge and fines up to $5,000.
HIPAA was signed into law effective August 1996. The HIPAA legislation provided Congress three years from that date to pass comprehensive health privacy legislation. When Congress failed to pass legislation by the deadline, the Department of Health and Human Services (HHS) received the authority to develop the privacy and security regulations for HIPAA. In October 1999, HHS released proposed HIPAA privacy regulations entitled “Privacy Standards for Individually Identifiable Health Information,” which took effect in April 2003. HIPAA security standards were subsequently published in February 2003 and took effect in April 2003. Organizations that must comply with HIPAA regulations are referred to as covered entities and include
Civil penalties for HIPAA violations include fines of $100 per incident, up to $25,000 per provision, per calendar year. Criminal penalties include fines up to $250,000 and potential imprisonment of corporate officers for up to ten years. Additional state penalties may also apply.
In 2009, Congress passed additional HIPAA provisions as part of the American Recovery and Reinvestment Act of 2009, requiring covered entities to publicly disclose security breaches involving personal information. (See the section “Disclosure laws” later in this chapter for a discussion of disclosure laws.)
This law provides for protection of online information about children under the age of 13. The law defines rules for the collection of information from children and means for obtaining consent from parents. Organizations are also restricted from marketing to children under the age of 13.
Gramm-Leach-Bliley (known as GLBA) opened up competition among banks, insurance companies, and securities companies. GLBA also requires financial institutions to better protect their customers’ personally identifiable information (PII) with three rules:
Civil penalties for GLBA violations are up to $100,000 for each violation. Furthermore, officers and directors of financial institutions are personally liable for civil penalties of not more than $10,000 for each violation.
The HITECH Act, passed as part of the American Recovery and Reinvestment Act of 2009, broadens the scope of HIPAA compliance to include the business associates of HIPAA covered entities. These include third-party administrators, pharmacy benefit managers for health plans, claims processing/billing/transcription companies, and persons performing legal, accounting, and administrative work.
Another highly important provision of the HITECH Act promotes and, in many cases, funds the adoption of electronic health records (EHRs), in order to increase the effectiveness of individual medical treatment, improve efficiency in the U.S. healthcare system, and reduce the overall cost of healthcare. Anticipating that the widespread adoption of EHRs will increase privacy and security risks, the HITECH Act introduces new security and privacy-related requirements.
In the event of a breach of “unsecured protected health information,” the HITECH Act requires covered entities to notify the affected individuals and the Secretary of the U.S. Department of Health and Human Services (HHS). The regulation defines unsecured protected health information (PHI) as PHI that is not secured through the use of a technology or methodology to render it unusable, unreadable, or indecipherable to unauthorized individuals.
The notification requirements vary according to the amount of data breached:
Finally, the HITECH Act also requires the issuance of technical guidance on the technologies and methodologies “that render protected health information unusable, unreadable, or indecipherable to unauthorized individuals”. The guidance specifies data destruction and encryption as actions that render PHI unusable if it is lost or stolen. PHI that is encrypted and whose encryption keys are properly secured provides a “safe harbor” to covered entities and does not require them to issue data-breach notifications.
Passed by Parliament in 1998, the U.K. Data Protection Act (DPA) applies to any organization that handles sensitive personal data about living persons. Such data includes
The DPA applies to electronically stored information, but certain paper records used for commercial purposes may also be covered. The DPA consists of eight privacy and disclosure principles as follows:
DPA compliance is enforced by the Information Commissioner’s Office (ICO), an independent official body. Penalties generally include fines which may also be imposed against the officers of a company.
The European Union General Data Protection Regulation, known as GDPR, represents a significant revision of the 1995 privacy directive. Highlights of GDPR include the following:
In an effort to combat identity theft, many U.S. states have passed disclosure laws that compel organizations to publicly disclose security breaches that may result in the compromise of personal data.
Although these laws typically include statutory penalties, the damage to an organization’s reputation and the potential loss of business — caused by the public disclosure requirement of these laws — can be the most significant and damaging aspect to affected organizations. Thus, public disclosure laws shame organizations into implementing more effective information security policies and practices to lessen the risk of a data breach occurring in the first place.
By requiring organizations to notify individuals of a data breach, disclosure laws enable potential victims to take defensive or corrective action to help avoid or minimize the damage resulting from identity theft.
Passed in 2003, the California Security Breach Information Act (SB-1386) was the first U.S. state law to require organizations to notify all affected individuals “in the most expedient time possible and without unreasonable delay, consistent with the legitimate needs of law enforcement,” if their confidential or personal data is lost, stolen, or compromised, unless that data is encrypted.
The law is applicable to any organization that does business in the state of California — even a single customer or employee in California. An organization is subject to the law even if it doesn’t directly do business in California (for example, if it stores personal information about California residents for another company).
Other U.S. states have quickly followed suit, and 46 states, the District of Columbia, Puerto Rico, and the U.S. Virgin Islands now have public disclosure laws. However, these laws aren’t necessarily consistent from one state to another, nor are they without flaws and critics.
For example, until early 2008, Indiana’s Security Breach Disclosure and Identity Deception law (HEA 1101) did not require an organization to disclose a security breach “if access to the [lost or stolen] device is protected by a password [emphasis added] that has not been disclosed.” Indiana’s law has since been amended and is now one of the toughest state disclosure laws in effect, requiring public disclosure unless “all personal information … is protected by encryption.”
Finally, a provision in California’s and Indiana’s disclosure laws, as well as in most other states’ laws, allows an organization to avoid much of the cost of disclosure if the cost of providing such notice would exceed $250,000 or if more than 500,000 individuals would need to be notified. Instead, a substitute notice, consisting of e-mail notifications, conspicuous posting on the organization’s website, and notification of major statewide media, is permitted.
Ethics (or moral values) help to describe what you should do in a given situation based on a set of principles or values. Ethical behavior is important to maintaining credibility as an information security professional and is a requirement for maintaining your CISSP certification. An organization often defines its core values (along with its mission statement) to help ensure that its employees understand what is acceptable and expected as they work to achieve the organization’s mission, goals, and objectives.
Ethics are not easily discerned, and a fine line often hovers between ethical and unethical activity. Unethical activity doesn’t necessarily equate to illegal activity. And what may be acceptable in some organizations, cultures, or societies may be unacceptable or even illegal in others.
Ethical standards can be based on a common or national interest, individual rights, laws, tradition, culture, or religion. One helpful distinction between laws and ethics is that laws define what we must do and ethics define what we should do.
Common fallacies about the proper use of computers, the Internet, and information abound, contributing to this gray area:
The Hacker’s Fallacy: Computers provide a valuable means of learning that will, in turn, benefit society.
The problem here lies in the distinction between hackers and crackers. Although both may have a genuine desire to learn, crackers do it at the expense of others.
Almost every recognized group of professionals defines a code of conduct or standards of ethical behavior by which its members must abide. For the CISSP, it is the (ISC)2 Code of Ethics. The CISSP candidate must be familiar with the (ISC)2 Code of Ethics and Request for Comments (RFC) 1087 “Ethics and the Internet” for professional guidance on ethics (and information that you need to know for the exam).
As a requirement for (ISC)2 certification, all CISSP candidates must subscribe to and fully support all portions of the (ISC)2 Code of Ethics. Intentionally or knowingly violating any provision of the (ISC)2 Code of Ethics may subject you to a peer review panel and revocation of your hard-earned CISSP certification.
The (ISC)2 Code of Ethics consists of a preamble and four canons. The canons are listed in order of precedence, thus any conflicts should be resolved in the order presented below:
Preamble:
Canons:
Just about every organization has a code of ethics, or a statement of values, which it requires its employees or members to follow in their daily conduct. As a CISSP-certified information security professional, you are expected to be a leader in your organization, which means you exemplify your organization’s ethics (or values) and set a positive example for others to follow.
In addition to your organization’s code of ethics, two other computer security ethics standards you should be familiar with for the CISSP exam and adhere to are the Internet Activities Board’s (IAB) “Ethics and the Internet” (RFC 1087) and the Computer Ethics Institute’s (CEI) “Ten Commandments of Computer Ethics”.
Published by the Internet Activities Board (IAB; www.iab.org) in January 1989, RFC 1087 characterizes as unethical and unacceptable any activity that purposely
Other important tenets of RFC 1087 include
The Computer Ethics Institute (CEI; http://computerethicsinstitute.org) is a nonprofit research, education, and public policy organization originally founded in 1985 by the Brookings Institution, IBM, the Washington Consulting Group, and the Washington Theological Consortium. CEI members include computer science and information technology professionals, corporate representatives, professional industry associations, public policy groups, and academia.
CEI’s mission is “to provide a moral compass for cyberspace.” It accomplishes this mission through computer-ethics educational activities that include publications, national conferences, membership and certificate programs, a case study repository, the Ask an Ethicist online forum, consultation, and (most famously) its “Ten Commandments of Computer Ethics,” which has been published in 23 languages (presented here in English):
Policies, standards, procedures, and guidelines are all different from each other, but they also interact with each other in a variety of ways. It’s important to understand these differences and relationships, and also to recognize the different types of policies and their applications. To successfully develop and implement information security policies, standards, guidelines, and procedures, you must ensure that your efforts are consistent with the organization’s mission, goals, and objectives (discussed earlier in this chapter).
Policies, standards, procedures, and guidelines all work together as the blueprints for a successful information security program. They
Too often, technical security solutions are implemented without these important blueprints. The results are often expensive and ineffective controls that aren’t uniformly applied and don’t support an overall security strategy.
Governance is a term that collectively represents the system of policies, standards, guidelines, and procedures — together with management oversight — that help steer an organization’s day-to-day operations and decisions.
A security policy forms the basis of an organization’s information security program. RFC 2196, The Site Security Handbook, defines a security policy as “a formal statement of rules by which people who are given access to an organization’s technology and information assets must abide.”
The four main types of policies are:
Standards are specific, mandatory requirements that further define and support higher-level policies. For example, a standard may require the use of a specific technology, such as a minimum requirement for encryption of sensitive data using AES. A standard may go so far as to specify the exact brand, product, or protocol to be implemented. A device or system hardening standard would define specific security configuration settings for applicable systems.
Baselines are similar to and related to standards. A baseline can be useful for identifying a consistent basis for an organization’s security architecture, taking into account system-specific parameters, such as different operating systems. After consistent baselines are established, appropriate standards can be defined across the organization.
Procedures provide detailed instructions on how to implement specific policies and meet the criteria defined in standards. Procedures may include Standard Operating Procedures (SOPs), run books, and user guides. For example, a procedure may be a step-by-step guide for encrypting sensitive files by using a specific software encryption product.
Guidelines are similar to standards but they function as recommendations rather than as compulsory requirements. For example, a guideline may provide tips or recommendations for determining the sensitivity of a file and whether encryption is required.
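To make the distinction concrete, a standard’s required settings can be expressed as data and checked automatically. Here’s a minimal sketch in Python; all setting names and values are hypothetical, not drawn from any real hardening standard:

```python
# Toy compliance check: compare a system's actual settings against the
# required values defined in a (hypothetical) hardening standard.

REQUIRED_SETTINGS = {            # hypothetical hardening standard
    "password_min_length": 12,
    "disk_encryption": "AES-256",
    "ssh_root_login": "disabled",
}

def check_compliance(actual: dict) -> list:
    """Return a list of (setting, expected, found) deviations."""
    deviations = []
    for setting, expected in REQUIRED_SETTINGS.items():
        found = actual.get(setting, "<missing>")
        if found != expected:
            deviations.append((setting, expected, found))
    return deviations

# A system missing one setting and misconfigured on another:
system = {"password_min_length": 8, "disk_encryption": "AES-256"}
for setting, expected, found in check_compliance(system):
    print(f"{setting}: expected {expected}, found {found}")
```

A procedure would then document, step by step, how to bring a deviating system back in line with the standard.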
Business continuity and disaster recovery (discussed in detail in Chapter 9) work hand in hand to provide an organization with the means to continue and recover business operations when a disaster strikes. Business continuity and disaster recovery are two sides of the same coin. Each springs into action when a disaster strikes. But they do have different goals:
While the business continuity team is busy keeping business operations running via one of possibly several contingency plans, the disaster recovery team members are busy restoring the original facilities and equipment so that they can resume normal operations.
Here’s an analogy. Two boys kick a big anthill — a disaster for the ant colony. Some of the ants scramble to save the eggs and the food supply; that’s Ant City business continuity. Other ants work on rebuilding the anthill; that’s Ant City disaster recovery. Both teams work to ensure the anthill’s survival, but each team has its own role to play.
Business continuity and disaster recovery planning have these common elements:
The similarities end with this list. Business continuity planning concentrates on continuing business operations, whereas disaster recovery planning focuses on recovering the original business functions. Although both plans deal with the long-term survival of the business, they involve different activities. When a significant disaster occurs, both activities kick into gear at the same time, keeping vital business functions running (business continuity) and getting things back to normal as soon as possible (disaster recovery).
Business continuity (and disaster recovery) planning exist because bad things happen. Organizations that want to survive a disastrous event need to make formal and extensive plans — contingency plans to keep the business running and recovery plans to return operations to normal.
Keeping a business operating during a disaster can be like juggling with one arm tied behind your back (we first thought of plate-spinning and one-armed paper hangers, but most of our readers are probably too young to understand these). You’d better plan in advance how you’re going to do it, and practice! It could happen at night, you know (one-handed juggling in the dark is a lot harder).
Before business continuity planning can begin, everyone on the project team has to make and understand some basic definitions and assumptions. These critical items include
A business continuity planning project typically has four components: scope determination, the Business Impact Analysis (BIA), the Business Continuity Plan (BCP), and implementation. We discuss each of these components in the following sections.
The success and effectiveness of a business continuity planning project depends greatly on whether senior management and the project team properly define its scope. Business processes and technology can muddy the waters and make this task difficult. For instance, distributed systems’ dependence on desktop computers for vital business functions expands the scope beyond the core systems. Geographically dispersed companies — often the result of mergers — complicate matters as well.
Also, large companies are understandably more complex. The boundaries between where a function begins and ends are oftentimes fuzzy and sometimes poorly documented and not well understood.
Political pressures can influence the scope of the business continuity planning project as well. A department that thinks it’s vital, but which falls outside the business continuity planning project scope, may lobby to be included in the project. Everybody wants to be important (and some just want to appear to be important). You need senior management support of scope (what the project team really needs to include and what it doesn’t) to put a stop to the political games.
Scope creep (what happens when a project’s scope grows beyond the original intent) can become scope leap if you have a weak or inexperienced business continuity planning project team. For the success of the project, strong leaders must make rational decisions about the scope of the project. Remember, you can change the scope of the business continuity planning project in later iterations of the project.
The project team needs to find a balance between too narrow a scope, which makes the plan ineffective, and too wide a scope, which makes the plan too cumbersome.
A complete BCP consists of several components that handle not only the continuation of critical business functions, but also all the functions and resources that support those critical functions. The various elements of a BCP are described in the following sections.
Emergency response teams must be identified for every possible type of disaster. These response teams need playbooks (detailed written procedures and checklists) to keep critical business functions operating.
Written procedures are vital for two reasons. First, the people who perform critical functions after a disaster may not be familiar with them: They may not usually perform those functions. (During a disaster, the people who ordinarily perform the function may be unavailable.) Second, the team probably needs to use different procedures and processes for performing the critical functions during a disaster than they would under normal conditions. Also, the circumstances surrounding a disaster might have people feeling out-of-sorts; having a written procedure guides them into action (kind of like the “break glass” instructions on some fire alarms, in case you forget what to do).
When a disaster strikes, experts need to be called in to inspect the premises and determine the extent of the damage. Typically, you need experts who can assess building damage, as well as damage to any special equipment and machinery.
Depending on the nature of the disaster, you may have to perform damage assessment in stages. A first assessment may involve a quick walkthrough to look for obvious damage, followed by a more time-consuming and detailed assessment to look for problems that you don’t see right away.
Damage assessments determine whether an organization can still use buildings and equipment, whether they can use those items after some repairs, or whether they must abandon those items altogether.
In any kind of disaster, the safety of personnel is the highest priority, ahead of buildings, equipment, computers, backup tapes, and so on. Personnel safety is critical not only because of the intrinsic value of human life, but also because people — not physical assets — make the business run.
The BCP must have some provisions for notifying all affected personnel that a disaster has occurred. An organization needs to establish multiple methods for notifying key business-continuity personnel in case public communications infrastructures are interrupted.
Not all disasters are obvious: A fire or broken water main is a local event, not a regional one. And in an event such as a tornado or flood, employees who live even a few miles away may not know the condition of the business. Consequently, the organization needs a plan for communicating with employees, no matter what the situation.
Throughout a disaster and its recovery, management must be given regular status reports as well as updates on crucial tactical issues so that management can align resources to support critical business operations that function on a contingency basis. For instance, a manager of a corporate facilities department can loan equipment that critical departments need so that they can keep functioning.
Things go wrong with hardware and software, resulting in wrecked or unreachable data. When it’s gone, it’s gone! Thus IT departments everywhere make copies of their critical data on tapes, removable discs, or external storage systems, or in the cloud.
These backups must be performed regularly, usually once per day. For organizations with on-premises systems, backup media must also be stored off-site in case the facility housing the original systems is damaged. Having backup tapes in the data center may be convenient for doing a quick data restore, but those tapes are of little value if they’re destroyed along with their respective systems. For organizations with cloud-based systems, the problem is the same, but the technology differs a bit: It is imperative that data be backed up (or replicated) to a different geographic location so that it can be recovered, no matter what happens.
For systems with large amounts of data, that data must be well understood in order to determine what kinds of backups need to be performed (real-time replication, full, differential, and incremental) and how frequently. Consider these factors:
For example, consider whether you can restore application software from backup tapes more quickly than you can reinstall it from its release media (the original CD-ROMs or downloaded install files). Just make sure you can recover your configuration settings if you reinstall software from release media. Also, if a large part of the database is static, do you really need to back it all up every day?
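The differences among full, differential, and incremental backups come down to which reference point a file’s modification time is compared against. A minimal sketch of the selection logic; file names and timestamps are made up:

```python
# Sketch of which files each backup type copies. A full backup copies
# everything; a differential copies changes since the last FULL backup;
# an incremental copies changes since the last backup of ANY kind.

def select_files(files, last_full, last_backup, mode):
    """files: dict of {name: last_modified_time (epoch seconds)}."""
    if mode == "full":
        return sorted(files)  # everything
    if mode == "differential":
        return sorted(n for n, t in files.items() if t > last_full)
    if mode == "incremental":
        return sorted(n for n, t in files.items() if t > last_backup)
    raise ValueError(mode)

files = {"orders.db": 500, "static_archive.dat": 100, "logs.txt": 450}
# Both orders.db and logs.txt changed since the full backup at t=200:
print(select_files(files, last_full=200, last_backup=470, mode="differential"))
# Only orders.db changed since the most recent backup at t=470:
print(select_files(files, last_full=200, last_backup=470, mode="incremental"))
```

Differentials grow larger (and restore faster) as time passes since the last full backup; incrementals stay small but require replaying every backup since the last full one to restore.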
You must choose off-site storage of backup media and other materials (documentation, and so on) carefully. Factors to consider include survivability of the off-site storage facility, as well as the distance from the off-site facility to the data center, media transportation, and alternate processing sites. The facility needs to be close enough so that media retrieval doesn’t take too long (how long depends on the organization’s recovery needs), but not so close that the facility becomes involved in the same natural disaster as the business.
Cloud-based data replication and backup services are a viable alternative to off-site backup media storage. Today’s Internet speeds make it possible to back up critical data to a cloud-based storage provider — often faster than magnetic tapes can be returned from an off-site facility and data recovered from them.
The purpose of off-site media storage is to ensure that up-to-date data is available in the event that systems in the primary data center are damaged.
Your organization should consider software escrow agreements (wherein the software vendor sends a copy of its software code to a third-party escrow organization for safekeeping) with the software vendors whose applications support critical business functions. In the event that an insurmountable disaster (which could include bankruptcy) strikes the software vendor, your organization must consider all options for the continued maintenance of those critical applications, including in-house support.
The Corporate Communications, External Affairs, and (if applicable) Investor Relations departments should all have plans in place for communicating the facts about a disaster to the press, customers, and public. You need contingency plans for these functions if you want the organization to continue communicating to the outside world. Open communication during a disaster is vital so that customers, suppliers, and investors don’t panic (which they might do if they don’t know the true extent of the disaster).
The emergency communications plan needs to take into account the possibility that some corporate facilities or personnel may be unavailable. Thus, the data and procedures that make up the communications plan must themselves be kept safe so that they're available in any situation.
Data-processing facilities that support time-critical business functions must keep running in the event of a power failure. Although every situation is different, the principle remains the same: The business continuity planning team must determine for what period of time the data-processing facility must be able to continue operating without utility power. A power engineer can find out the length of typical (we don’t want to say routine) power outages in your area and crunch the numbers to arrive at the mean time of outages. By using that information, as well as an inventory of the data center’s equipment and environmental equipment, you can determine whether the organization needs an uninterruptible power supply (UPS) alone, or a UPS and an electric generator.
A business can use uninterruptible power supplies (UPSs) and emergency electric generators to provide electric power during prolonged power outages. A UPS is also good for a controlled shutdown, if the organization is better off having their systems powered off during a disaster. A business can also use a stand-alone power system (SPS), another term for an off-the-grid system that generates power with solar, wind, hydro, or employees madly pedaling stationary bicycles (we’re kidding about that last one).
In a really long power outage (more than a day or two), it is also essential to have a plan for the replenishment of generator fuel.
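The runtime analysis described above can be sketched as a quick back-of-the-envelope estimate. This is only an illustration; the outage history, load, and UPS capacity figures below are hypothetical, and a real sizing exercise should involve a power engineer:

```python
# Back-of-the-envelope UPS vs. UPS-plus-generator estimate.
# All figures below are hypothetical examples, not vendor specifications.

outage_durations_min = [5, 12, 45, 180, 2, 30]  # historical outage lengths (minutes)
mean_outage_min = sum(outage_durations_min) / len(outage_durations_min)

data_center_load_kw = 40     # total IT plus environmental load
ups_capacity_kwh = 20        # usable UPS battery capacity
ups_runtime_min = ups_capacity_kwh / data_center_load_kw * 60

print(f"Mean outage: {mean_outage_min:.0f} min; UPS runtime: {ups_runtime_min:.0f} min")

# If the UPS alone can't ride out a typical outage, a generator is warranted.
if ups_runtime_min < mean_outage_min:
    print("UPS alone is insufficient -- consider a UPS plus a generator")
else:
    print("UPS alone may suffice for typical outages")
```

With these made-up numbers, a 20 kWh UPS carrying a 40 kW load lasts only 30 minutes, shorter than the roughly 46-minute mean outage, so this hypothetical data center would also need a generator (and, per the point above, a fuel replenishment plan).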
The business continuity planning team needs to study every aspect of critical functions that must be made to continue in a disaster. Every resource that’s needed to sustain the critical operation must be identified and then considered against every possible disaster scenario to determine what special plans must be made. For instance, if a business operation relies upon a just-in-time shipment of materials for its operation and an earthquake has closed the region’s only highway (or airport or sea/lake port), then alternative means for acquiring those materials must be determined in advance. Or, perhaps an emergency ration of those materials needs to be stockpiled so that the business function can continue uninterrupted.
Many natural disasters disrupt public utilities, including water supplies or delivery. In the event that a disaster has interrupted water delivery, new problems arise. Your facility may not be allowed to operate without the means for fighting a fire, should one occur.
In many places, businesses could be ordered to close if they can’t prove that they can effectively fight a fire by other means, such as an FM-200 clean-agent fire suppression system. Then again, if water supplies have been interrupted, you have other issues to contend with, such as drinking water and water for restrooms. Without water, you’re hosed!
We discuss fire protection in more detail in Chapter 5.
Any critical business function must be able to continue operating after a disaster strikes. And to make sure you can sustain operations, you need to make available all relevant documentation for every critical piece of equipment, as well as every critical process and procedure that the organization performs in a given location.
Don’t be caught off guard by the growing trend of hardware and software products that don’t come with any printed documentation. Many vendors deliver their documentation only over the Internet, or they charge extra for a hard copy. But many types of disasters may disrupt Internet communications, thereby leaving an operation high and dry, with no instructions for how to use and manage its tools or applications.
At least one set of hard copy (or CD-ROM soft copy) documentation — including your BCP and Disaster Recovery Plan (DRP) — should be stored at the same off-site storage facility that stores the organization’s backup tapes. It would also be smart to issue electronic copies of BCP and DRP documentation to all relevant personnel on USB storage devices (with encryption).
If the preceding sounds like the ancient past to you, then your organization may be fully in the cloud today. In such a case, you may be more inclined to maintain multiple soft copies of all required documentation so that personnel can use it when needed.
Continuity and recovery documentation must exist in hard copy in the event that it’s unavailable via electronic means.
Data processing facilities are so vital to businesses today that a lot of emphasis is placed on them. Generally, this comes down to these variables: where and how the business will continue to sustain its data processing functions.
Because data centers are so expensive and time-consuming to build, better business sense dictates having an alternate processing site available. The types of sites are
A hot site provides the most rapid recovery capability, but it also costs the most because of the effort required to maintain its readiness.
Table 3-1 compares these options side by side.
TABLE 3-1 Data Processing Continuity Planning Site Comparison
Feature | Hot Site | Warm Site | Cold Site | Multiple Data Centers | Cloud Site
Cost | Highest | Medium | Low | No additional | Variable
Computer-equipped | Yes | Yes | No | Yes | Yes
Connectivity-equipped | Yes | Yes | No | Yes | Yes
Data-equipped | Yes | No | No | Yes | Variable
Staffed | Yes | No | No | Yes | No
Typical lead time to readiness | Minutes to hours | Hours to days | Days to weeks | Minutes to hours or longer | Minutes to hours
The Business Impact Analysis (BIA) describes the impact that a disaster is expected to have on business operations. This important early step in business continuity planning helps an organization figure out which business processes are more resilient and which are more fragile.
A disaster’s impact includes quantitative and qualitative effects. The quantitative impact is generally financial, such as loss of revenue or output of production. The qualitative impact is generally operational or intangible, such as reduced quality of goods and services or loss of customer confidence.
Any BIA worth its salt needs to perform the following tasks well:
You can get the scoop on these activities in the following sections.
Often, a BIA includes a Vulnerability Assessment that helps get a handle on obvious and not-so-obvious weaknesses in business critical systems. A Vulnerability Assessment has quantitative (financial) and qualitative (operational) sections, similar to a Risk Assessment, which is covered later in this chapter.
The purpose of a Vulnerability Assessment is to determine the impact — both quantitative and qualitative — of the loss of a critical business function.
Quantitative losses include
Qualitative losses include loss of
The Vulnerability Assessment identifies critical support areas, which are business functions that, if lost, would cause significant harm to the business by jeopardizing critical business processes or the lives and safety of personnel. The Vulnerability Assessment should carefully study critical support areas to identify the resources that those areas require to continue functioning.
Quantitative losses include an increase in operating expenses because of any higher costs associated with executing the contingency plan. In other words, planners need to remember to consider operating costs that may be higher during a disaster situation.
The business continuity planning team should inventory all high-level business functions (for example, customer support, order processing, returns, cash management, accounts receivable, payroll, and so on) and rank them in order of criticality. The team should also describe the impact of a disruption to each function on overall business operations.
The team members need to estimate the duration of a disaster event to effectively prepare the Criticality Assessment. Project team members need to consider the impact of a disruption based on the length of time that a disaster impairs specific critical business functions. You can see the vast difference in business impact of a disruption that lasts one minute, compared to one hour, one day, one week, or longer. Generally, the criticality of a business function depends on the degree of impact that its impairment has on the business.
Although you can consider a variety of angles when evaluating vulnerability and criticality, commonly you start with a high-level organization chart. (Hip people call this chart the org chart). In most companies, the major functions pretty much follow the structure of the organization.
Following an org chart helps the business continuity planning project team consider all the steps in a critical process. Walk through the org chart, stopping at each manager’s or director’s position and asking, “What does he do?”, “What does she do?”, and “Who files the TPS reports?” This mental stroll can help jog your memory, and help you better see all the parts of the organization’s big picture.
An extension of the Criticality Assessment (which we talk about in the section “Criticality Assessment,” earlier in this chapter) is a statement of Maximum Tolerable Downtime (MTD — also known as Maximum Tolerable Period of Disruption or MTPD) for each critical business function. Maximum Tolerable Downtime is the maximum period of time that a critical business function can be inoperative before the company incurs significant and long-lasting damage.
For example, imagine that your favorite online merchant — a bookseller, an auction house, or an online trading company — goes down for an hour, a day, or a week. At some point, you have to figure that a prolonged disruption sinks the ship, meaning the business can’t survive. Determining MTD involves figuring out at what point the organization suffers permanent, measurable loss as a result of a disaster. Online retailers know that even short outages may mean that some customers will switch brands and take their business elsewhere.
Make the MTD assessment a major factor in determining the criticality — and priority — of business functions. A function that can withstand only two hours of downtime obviously has a higher priority than another function that can withstand several days of downtime.
MTD is a measure of the longest period of time that a critical business function can be disrupted without suffering unacceptable consequences, perhaps threatening the actual survivability of the organization.
During the Criticality Assessment, you establish a statement of Maximum Tolerable Outage (MTO) for each critical business function. Maximum Tolerable Outage is the maximum period of time that a critical business function can operate in emergency or alternate processing mode. This matters because, in many cases, emergency or alternate processing mode performs at a lower level of throughput or quality, or at a higher cost. Although an organization’s survival can be assured through an interim period in alternate processing mode, the long-term business model may not be able to sustain the differences in throughput, quality, cost, or whatever other aspects of alternate processing mode differ from normal processing.
When you establish the Criticality Assessment, MTD, and MTO for each business process (which we talk about in the preceding sections), the planning team can establish recovery targets. These targets represent the period of time from the onset of a disaster until critical processes have resumed functioning.
Two primary recovery targets are usually established for each business process: a Recovery Time Objective (RTO) and Recovery Point Objective (RPO). We discuss these targets in the following sections.
A Recovery Time Objective (RTO) is the maximum period of time within which a business process must be restored after a disaster.
An organization without a BCP that suffers a serious disaster, such as an earthquake or hurricane, could experience a recovery time of one to two weeks or more. An organization could possibly need this length of time to select a new location for processing data, purchase new systems, load application software and data, and resume processing. An organization that can’t tolerate such a long outage needs to establish a shorter RTO and determine the level of investments required to meet that target.
A Recovery Point Objective (RPO) is the maximum amount of data loss, measured in time, that is tolerable if a disaster strikes.
A typical schedule for backing up data is once per day. If a disaster occurs before backups are done, the organization can lose an entire day’s worth of information. This is because system and data recovery are often performed using the last good set of backups. An organization that requires a shorter RPO needs to figure out a way to make copies of transaction data more frequently than once per day.
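The relationship between backup frequency and worst-case data loss described above can be illustrated with a short sketch (the backup schedules are hypothetical examples):

```python
# Worst-case data loss equals the interval between successive good backups:
# a disaster striking just before the next backup loses everything
# recorded since the last one completed.

def worst_case_data_loss_hours(backups_per_day):
    """Return the worst-case data loss window, in hours."""
    return 24 / backups_per_day

for backups_per_day in (1, 4, 24):
    hours = worst_case_data_loss_hours(backups_per_day)
    print(f"{backups_per_day:2d} backup(s)/day -> up to {hours:.1f} hours of data lost")

# An RPO of one hour therefore implies backing up (or replicating) at least hourly.
```

The point of the sketch is simply that the RPO an organization declares dictates a minimum backup or replication frequency, not the other way around.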
Here are some examples of how organizations might establish their RPOs:
If you establish the MTD for processes such as the ones in the preceding list as less than one business day, the organization needs to take some steps to save online data more than once per day.
Many organizations use off-site backup media storage, where backup tapes are transported off-site as frequently as every day, or where electronic vaulting to an off-site location is performed several times each day. Off-site storage is important because an event such as a fire can destroy the computers as well as any backup media stored nearby.
RPO and RTO targets are different measures of recovery for a system, but they work together. When the team establishes proposed targets, the team members need to understand how each target works.
At first glance, you might think that RPO should be a shorter time than RTO (or maybe the other way around). In fact, different businesses and applications present different business requirements that might make RPO less than RTO, equal to RTO, or greater than RTO. Here are some examples:
The Resource Requirements portion of the BIA is a listing of the resources that an organization needs in order to continue operating each critical business function. In an organization that has finite resources (which is pretty much every organization), the most critical functions get first pick, and the lower-priority functions get the leftovers.
Understanding what resources are required to support a business process helps the project team to figure out what the contingency plan for that process needs to contain, and how the process can be operated in Emergency mode and then recovered.
Examples of required resources include
After you define the scope of the business continuity planning project and develop the BIA, Criticality Assessment, MTDs, and MTOs, you know
The hard part of the business continuity planning project begins now: You need to develop the strategy for continuing each critical business function when disasters occur, which is known as the Continuity Strategy.
When you develop a Continuity Strategy, you must set politics aside and look at the excruciating details of critical business functions. You need lots of strong coffee, several pizzas, buckets of Rolaids, and cool heads.
For the important and time-consuming Continuity Strategy phase of the project, you need to follow these guidelines:
Some critical business functions may be too large and complex to examine in one big chunk. You can break down those complex functions into smaller components, perhaps like this:
Analyzing processes is like disassembling toy building block houses — you have to break them down to the level of their individual components. You really do need to understand each step in even the largest processes in order to be able to develop good continuity plans for them.
If a team that analyzes a large complex business function breaks it into groups, such as the groups in the preceding list, the team members need to get together frequently to ensure that their respective strategies for each group eventually become a cohesive whole. Eventually these groups need to come back together and integrate their separate materials into one complete, cohesive plan.
Now for the part that everyone loves: documentation. The details of the continuity plans for each critical function must be described in minute detail, step by step by step.
Why? The people who develop the strategy may very well not be the people who execute it. The people who develop the strategy may change roles in the company or change jobs altogether. Or the scope of an actual disaster may be wide enough that the critical personnel just aren’t available. Any skeptics should consider September 11 and the impact that this disaster had on a number of companies that lost practically everyone and everything.
Best practices for documenting BCPs exist. For this reason, you may want to have an expert around. For $300 an hour, a consultant can spend a couple of weeks developing templates. But watch out — your consultant might just download templates from a business continuity planning website, tweak them a little bit, and spend the rest of his or her time playing Candy Crush. To be sure you get a solid consultant, do the old-fashioned things: Check references, ask for work samples, and see whether the consultant has a decent LinkedIn page. (We’re kidding about that last one!)
It is an accomplishment indeed when the BCP documentation has been written, reviewed, edited, placed into three-ring binders, and distributed via thumb drives or online file storage accounts. However, the job isn’t yet done. The BCP needs senior management buy-in, the plan must be announced and socialized throughout the organization, and one or more persons must be dedicated to keeping the plan up-to-date. Oh yeah, and the plan needs to be tested!
After the entire plan has been documented and reviewed by all stakeholders, it’s time for senior management to examine it and approve it. Not only must senior management approve the plan, but senior management must also publicly approve it. By “public” we don’t mean the general public; instead, we mean that senior management should make it well known inside the business that they support the business continuity planning process.
Senior management’s approval is needed so that all affected and involved employees in the organization understand the importance of emergency planning.
Everyone in the organization needs to know about the plan and his or her role in it. You may need to establish training for potentially large numbers of people who need to be there when a disaster strikes.
All employees in the organization must know about the BCP.
Regularly testing the BCP ensures that all essential personnel required to implement the plan understand their roles and responsibilities, and helps to ensure that the plan is kept up to date as the organization changes. BCP testing methods are similar to DRP testing methods (discussed in Chapter 9), and include
See Chapter 9 for a full explanation of these testing methods.
No, the plan isn’t finished. It has just begun! Now the business continuity planning person (the project team members by this time have collected their commemorative denim shirts, mugs, and mouse pads, and have moved on to other projects) needs to periodically chase The Powers That Be to make sure that they know about all significant changes to the environment.
In fact, if the business continuity planning person has any influence left at this point in the process, he or she needs to start attending Change Control Board and IT Steering Committee meetings (or whatever the company calls them), jotting down any changes in the environment that may require updates to the BCP documents.
An organization needs clearly documented personnel security policies and procedures in order to facilitate the use and protection of information. There are numerous essential practices for protecting the business and its important information assets. These essential practices all have to do with how people — not technology — work together to support the business.
This is collectively known as administrative management and control.
Note: We tend to use the term essential practices versus best practices. The reason is simple: Best practices refers to the very best practices and technologies that can be brought to bear against a business problem, whereas essential practices means those activities and technologies that are considered essential to implement in an organization. Best practices are nearly impossible to achieve, and few organizations attempt it. However, essential practices are, well, essential, and definitely achievable in many organizations.
Even before posting a “Help Wanted” sign (Do people still do that?!) or an ad on a job search website, an employer should ensure that the position to be filled is clearly documented and contains a complete description of the job requirements, the qualifications, and the scope of responsibilities and authority.
The job (or position) description should be created as a collaborative effort between the hiring manager — who fully understands the functional requirements of the specific position to be filled — and the human resources manager — who fully understands the applicable employment laws and organizational requirements to be addressed.
Having a clearly documented job (or position) description can benefit an organization for many reasons:
Concise job descriptions that clearly identify an individual’s responsibility and authority, particularly on information security issues, can help:
An organization should conduct background checks and verify application information for all potential employees and contractors. This process can help to expose any undesirable or unqualified candidates. For example:
Most background checks require the written consent of the applicant and disclosure of certain private information (such as the applicant’s Social Security or other retirement system number). Private information obtained for the purposes of a background check, as well as the results of the background check, must be properly handled and safeguarded in accordance with applicable laws and the organization’s records retention and destruction policies.
Basic background checks and verification might include the following information:
Pre- and post-employment background checks can provide an employer with valuable information about an individual being considered for a job or position within an organization. Such checks can give an immediate indication of an individual’s integrity (for example, by verifying information in the employment application) and can help screen out unqualified and undesirable applicants.
Personnel who fill sensitive positions should undergo a more extensive pre-employment screening and background check, possibly including:
Periodic post-employment screenings (such as credit records and drug testing) may also be necessary, particularly for personnel with access to financial data, cash, or high-value assets, or for personnel being considered for promotions to more sensitive or responsible positions.
Many organizations that did not perform drug screenings in the past do so today. Instead of drug testing all employees, some take a measured approach, screening employees only when they are promoted to higher levels of responsibility, such as director or vice president.
Various employment agreements and policies should be signed when an individual joins an organization or is promoted to a more sensitive position within an organization. Employment agreements often include non-compete agreements, non-disclosure agreements, codes of conduct, and acceptable use policies. Typical employment policies might include Internet acceptable use, social media policy, remote access, mobile and personal device use (for example, “Bring Your Own Device,” or BYOD), and sexual harassment/fraternization.
Formal employment termination procedures should be implemented to help protect the organization from potential lawsuits, property theft and destruction, unauthorized access, or workplace violence. Procedures should be developed for various scenarios including resignations, termination, layoffs, accident or death, immediate departures versus prior notification, and hostile situations. Termination procedures may include
Organizations commonly outsource many IT functions, particularly data center hosting, call-center or contact-center support, and application development. Information security policies and procedures must address outsourcing security and the use of service providers, vendors, and consultants, when appropriate. Access control, document exchange and review, maintenance hooks, on-site assessment, process and policy review, and service level agreements (SLAs) are good examples of outsourcing security considerations.
Individual responsibilities for compliance with applicable policies and regulations within the organization should be understood by all personnel within an organization. Signed statements that attest to an individual’s understanding, acknowledgement, and/or agreement to comply may be appropriate for certain regulations and policies.
Beyond basic security fundamentals, the concepts of risk management are perhaps the most important and complex part of the security and risk management domain. Indeed, risk management is the process from which decisions are made to establish what security controls are necessary, implement security controls, acquire and use security tools, and hire security personnel.
Risk can never be completely eliminated. Given sufficient time, resources, motivation, and money, any system or environment, no matter how secure, can eventually be compromised. Some threats or events, such as natural disasters, are entirely beyond our control and often unpredictable. Therefore, the main goal of risk management is risk treatment: making intentional decisions about specific risks that organizations identify. Risk management consists of three main elements (each treated in the upcoming sections):
The business of information security is all about risk management. A risk consists of a threat and a vulnerability of an asset:
Remember: Risk = Asset Value × Threat Impact × Threat Probability.
The risk management triple consists of an asset, a threat, and vulnerability.
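The Remember formula above can be used to rank risks against one another. Here's a minimal sketch; every asset value, impact factor, and probability below is a made-up illustration, not real actuarial data:

```python
# Risk = Asset Value x Threat Impact x Threat Probability
# All values below are hypothetical illustration figures.

risks = [
    # (description, asset value ($), impact (0-1), annual probability (0-1))
    ("Ransomware on file server",   500_000, 0.8, 0.10),
    ("Laptop theft",                  3_000, 1.0, 0.25),
    ("Data center flood",         2_000_000, 0.6, 0.01),
]

scored = [(desc, value * impact * prob) for desc, value, impact, prob in risks]

# Rank highest risk first, so mitigation effort goes where it matters most.
for desc, score in sorted(scored, key=lambda r: r[1], reverse=True):
    print(f"{desc:28s} risk = ${score:,.0f}")
```

Note how the ranking can be counterintuitive: in this example, the rarely stolen but frequently infected file server outranks the far more valuable (but rarely flooded) data center.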
Two key elements of risk management are the risk assessment and risk treatment (discussed in the following sections).
A risk assessment begins with risk identification — detecting and defining specific elements of the three components of risk: assets, threats, and vulnerabilities.
The process of risk identification occurs during a risk assessment.
Identifying an organization’s assets and determining their value is a critical step in determining the appropriate level of security. The value of an asset to an organization can be both quantitative (related to its cost) and qualitative (its relative importance). An inaccurate or hastily conducted asset valuation process can have the following consequences:
A properly conducted asset valuation process has several benefits to an organization:
Three basic elements used to determine the value of an asset are
To perform threat analysis, you follow these four basic steps:
For example, a company that has a major distribution center located along the Gulf Coast of the United States may be concerned about hurricanes. Possible consequences include power and communications outages, wind damage, and flooding. Using climatology, the company can determine that an annual average of three hurricanes pass within 50 miles of its location between June and September, and that a specific probability exists of a hurricane actually affecting the company’s operations during this period. During the remainder of the year, the threat of hurricanes has a low probability.
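The hurricane example can be turned into a rough annual probability. This sketch assumes a hypothetical 5 percent chance that any single storm passing within 50 miles actually affects operations; the real per-storm figure would come from climatology data and the facility's construction:

```python
# Annual probability that at least one hurricane affects operations, given an
# average of 3 storms per season passing within 50 miles of the facility.
# The 5% per-storm impact probability is a hypothetical assumption.

storms_per_season = 3
p_impact_per_storm = 0.05

# P(at least one impact) = 1 - P(no storm causes an impact)
p_at_least_one = 1 - (1 - p_impact_per_storm) ** storms_per_season
print(f"Annual probability of hurricane impact: {p_at_least_one:.1%}")
```

An annual probability derived this way feeds directly into the Annualized Rate of Occurrence used in the ALE calculation discussed later in this chapter.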
The number and types of threats that an organization must consider can be overwhelming, but you can generally categorize them as
A vulnerability assessment provides a valuable baseline for identifying vulnerabilities in an asset as well as identifying one or more potential methods for mitigating those vulnerabilities. For example, an organization may consider a Denial of Service (DoS) threat, coupled with a vulnerability found in Microsoft’s implementation of Domain Name System (DNS). However, if an organization’s DNS servers have been properly patched or the organization uses a UNIX-based DNSSEC server, the specific vulnerability may already have been adequately addressed, and no additional safeguards may be necessary for that threat.
The next element in risk management is risk analysis — a methodical examination that brings together all the elements of risk management (identification, analysis, and control) and is critical to an organization for developing an effective risk management strategy.
Risk analysis involves the following four steps:
Identify the assets to be protected, including their relative value, sensitivity, or importance to the organization.
This component of risk identification is asset valuation.
Define specific threats, including threat frequency and impact data.
This component of risk identification is threat analysis.
Calculate Annualized Loss Expectancy (ALE).
The ALE calculation is a fundamental concept in risk analysis; we discuss this calculation later in this section.
Select appropriate safeguards.
This process is a component of both risk identification (vulnerability assessment) and risk control (which we discuss in the section “Risk control,” later in this chapter).
The Annualized Loss Expectancy (ALE) provides a standard, quantifiable measure of the impact that a realized threat has on an organization’s assets. Because it’s the estimated annual loss for a threat or event, expressed in dollars, ALE is particularly useful for determining the cost-benefit ratio of a safeguard or control. You determine ALE by using this formula:
SLE × ARO = ALE
Here’s an explanation of the elements in this formula:
Single Loss Expectancy (SLE): A measure of the loss incurred from a single realized threat or event, expressed in dollars. You calculate the SLE by using the formula Asset Value × Exposure Factor (EF).
Exposure Factor (EF): A measure of the negative effect or impact that a realized threat or event would have on a specific asset, expressed as a percentage.
Annualized Rate of Occurrence (ARO): The estimated frequency with which a threat or event is expected to occur in a single year.
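Putting the pieces of the formula together in a short sketch (the asset value, exposure factor, and occurrence rate are hypothetical illustration figures):

```python
# ALE = SLE x ARO, where SLE = Asset Value x Exposure Factor (EF).
# All dollar figures and rates below are hypothetical.

asset_value = 1_000_000   # e.g., an e-commerce platform
exposure_factor = 0.25    # a realized threat destroys 25% of the asset's value
aro = 0.5                 # threat expected to occur once every two years

sle = asset_value * exposure_factor   # loss per occurrence
ale = sle * aro                       # expected annual loss

print(f"SLE = ${sle:,.0f}; ALE = ${ale:,.0f}")
```

Here a $1,000,000 asset with a 25 percent exposure factor yields an SLE of $250,000; if the threat is expected once every two years (ARO of 0.5), the ALE is $125,000, which is the most you'd rationally spend per year on safeguards against that specific threat.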
The two major types of risk analysis are qualitative and quantitative, which we discuss in the following sections.
Qualitative risk analysis is more subjective than quantitative risk analysis; it can be purely qualitative, avoiding specific numbers altogether. The challenge of such an approach is developing realistic scenarios that describe actual threats and potential losses to organizational assets.
Qualitative risk analysis has some advantages when compared with quantitative risk analysis; these include
Disadvantages of qualitative risk analysis, compared with quantitative risk analysis, include
A distinct advantage of qualitative risk analysis is that a large set of identified risks can be charted and sorted by asset value, risk, or other means. This can help an organization identify and distinguish higher risks from lower risks, even though precise dollar amounts may not be known.
A qualitative risk analysis doesn’t attempt to assign numeric values to the components (the assets and threats) of the risk analysis.
A fully quantitative risk analysis requires all elements of the process, including asset value, impact, threat frequency, safeguard effectiveness, safeguard costs, uncertainty, and probability, to be measured and assigned numeric values.
A quantitative risk analysis attempts to assign more objective numeric values (costs) to the components (assets and threats) of the risk analysis.
Advantages of a quantitative risk analysis, compared with qualitative risk analysis, include the following:
Disadvantages of a quantitative risk analysis, compared with qualitative risk analysis, include the following:
Purely quantitative risk analysis is generally not possible or practical. Primarily, this is because it is difficult to determine a precise probability of occurrence for any given threat scenario. For this reason, many risk analyses are a blend of qualitative and quantitative risk analysis, known as a hybrid risk analysis.
A hybrid risk analysis combines elements of both a quantitative and qualitative risk analysis. The challenges of determining accurate probabilities of occurrence, as well as the true impact of an event, compel many risk managers to take a middle ground. In such cases, easily determined quantitative values (such as asset value) are used in conjunction with qualitative measures for probability of occurrence and risk level. Indeed, many so-called quantitative risk analyses are more accurately described as hybrid.
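A hybrid calculation of the kind described above might look like this sketch, where the likelihood bands and the weights assigned to them are assumptions chosen for illustration:

```python
# Qualitative likelihood bands mapped to assumed annual probabilities (illustrative)
LIKELIHOOD_WEIGHT = {"low": 0.1, "medium": 0.5, "high": 0.9}

def hybrid_risk_score(asset_value, likelihood_band):
    """Combine a quantitative asset value (dollars) with a qualitative
    likelihood rating to produce a rough, sortable risk score."""
    return asset_value * LIKELIHOOD_WEIGHT[likelihood_band]

# Hypothetical register entries: (asset, value in dollars, likelihood band)
risks = [("customer database", 500_000, "medium"), ("test server", 10_000, "high")]
ranked = sorted(risks, key=lambda r: hybrid_risk_score(r[1], r[2]), reverse=True)
```

The point of the sketch is that the easily quantified element (asset value) carries the numbers, while the hard-to-quantify element (probability) stays a coarse qualitative band.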
A properly conducted risk analysis provides the basis for the next step in the risk management process: deciding what to do about risks that have been identified. The decision-making process is known as risk treatment. The four general methods of risk treatment are
Risk mitigation: This involves the implementation of one or more policies, controls, or other measures to protect an asset. Mitigation generally reduces the probability of threat realization or the impact of threat realization to an acceptable level. This is the most common risk control remedy.
Risk transfer (or assignment): This involves transferring the potential loss associated with a risk to a third party, such as an insurance company or an outsourced service provider.
Risk acceptance: This involves formally accepting the potential loss associated with a risk, typically because the cost of treating the risk would exceed the potential loss.
Risk avoidance: This involves eliminating the risk altogether, for example by discontinuing the activity or process that gives rise to it.
As stated in the preceding section, mitigation is the most common method of risk treatment. Mitigation involves the implementation of one or more countermeasures. Several criteria for selecting countermeasures include cost-effectiveness, legal liability, operational impact, and technical factors.
The most common criterion for countermeasure selection is cost-effectiveness, which is determined through cost-benefit analysis. Cost-benefit analysis for a given countermeasure (or collection of countermeasures) can be computed as follows:
ALE before countermeasure – ALE after countermeasure – Cost of countermeasure = Value of countermeasure to the organization
For example, if the ALE associated with a specific threat (data loss) is $1,000,000; the ALE after a countermeasure (enterprise tape backup) has been implemented is $10,000 (recovery time); and the cost of the countermeasure (purchase, installation, training, and maintenance) is $140,000; then the value of the countermeasure to the organization is $850,000.
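The cost-benefit formula and the tape-backup example above translate directly into code:

```python
def countermeasure_value(ale_before, ale_after, annual_cost):
    """Value of a countermeasure = ALE before - ALE after - cost of countermeasure."""
    return ale_before - ale_after - annual_cost

# Figures from the tape-backup example: $1,000,000 ALE before,
# $10,000 ALE after, $140,000 annualized cost of the countermeasure
value = countermeasure_value(1_000_000, 10_000, 140_000)
print(f"Value to the organization: ${value:,}")  # prints "Value to the organization: $850,000"
```

A negative result would indicate that the countermeasure costs more than the loss it prevents, so it fails the cost-effectiveness test.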
When calculating the cost of the countermeasure, you should consider the total cost of ownership (TCO), including:
The total cost of a countermeasure is normally stated as an annualized amount.
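One simple way to annualize a countermeasure’s TCO is to spread its one-time costs over an expected useful life and add the recurring annual costs; the cost categories and figures below are assumptions for illustration:

```python
def annualized_tco(one_time_costs, annual_operating_costs, useful_life_years):
    """Spread one-time costs (e.g., purchase, installation, training) over the
    countermeasure's useful life, then add recurring annual costs."""
    return sum(one_time_costs) / useful_life_years + sum(annual_operating_costs)

# Illustrative figures: $90,000 purchase + $15,000 installation amortized
# over 3 years, plus $12,000/year maintenance
cost = annualized_tco([90_000, 15_000], [12_000], 3)
```

The annualized figure is what belongs in the cost-benefit formula above, since ALE is also an annual number.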
An organization that fails to implement a countermeasure against a threat is exposed to legal liability if the cost to implement a countermeasure is less than the loss resulting from a realized threat (see due care and due diligence, discussed earlier in this chapter). The legal liability we’re talking about here could encompass statutory liability (as a result of failing to obey the law) or civil liability (as a result of failing to comply with a legal contract). A cost-benefit analysis is a useful tool for determining legal liability.
The operational impact of a countermeasure must also be considered. If a countermeasure is too difficult to implement and operate, or interferes excessively with normal operations or production, it may be circumvented or ignored and thus not be effective. The end result may be a risk that is higher than the original risk prior to the so-called mitigation.
The countermeasure itself shouldn’t, in principle (but often does, in practice), introduce new vulnerabilities. For example, improper placement, configuration, or operation of a countermeasure can cause new vulnerabilities; lack of fail-safe capabilities, insufficient auditing and accounting features, or improper reset functions can cause asset damage or destruction; finally, covert channel access or other unsafe conditions are technical issues that can create new vulnerabilities. Every new component in an environment, including security solutions, adds to the potential attack surface.
After appropriate countermeasures have been selected, they need to be implemented in the organization and integrated with other systems and countermeasures, when appropriate. Organizations that implement countermeasures are making planned changes to their environment in specific ways. Examples of countermeasure implementation include
A control is defined as a safeguard that is used to ensure a desired outcome. A control can be implemented in technology (for example, a program that enforces password complexity by requiring users to employ complex passwords), in a procedure (for example, a security incident response process that requires an incident responder to inform upper management), or in a policy (for example, a policy that requires users to report security incidents to management). Organizations will typically have dozens, hundreds, or even thousands of controls. With so many controls, it often makes sense to categorize them in various ways, which can help security professionals better understand the types and categories of controls used in their organization. A few of these category groupings are discussed here.
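As a concrete illustration of a control implemented in technology, here is a minimal, hypothetical password-complexity check of the kind the text describes; the specific rules (12-character minimum, four character classes) are assumptions, not a standard:

```python
import re

def meets_complexity_policy(password, min_length=12):
    """Hypothetical technical control: enforce a password-complexity policy
    (minimum length plus lowercase, uppercase, digit, and special characters)."""
    return (len(password) >= min_length
            and re.search(r"[a-z]", password) is not None
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"\d", password) is not None
            and re.search(r"[^A-Za-z0-9]", password) is not None)

print(meets_complexity_policy("correct horse"))   # False (no uppercase or digit)
```

The same requirement could equally be implemented as a procedure or policy; technology simply makes it self-enforcing.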
The major types of controls are
Other types of controls include
Another way to think of controls is how they are enforced. These types are
Most organizations don’t attempt to create their control frameworks from scratch; instead, they adopt one of these well-known industry standard control frameworks:
Organizations typically start with one of these, then make individual additions, changes, or deletions to controls, until they arrive at the precise set of controls they deem sufficient.
An organization that implemented controls, but failed to periodically assess those controls, would be considered negligent. The periodic assessment of controls is a necessary part of a sound risk management system.
There are various approaches to security control assessments (SCA), including:
Organizations often take a blended approach to control assessment: some controls may be assessed internally, others externally, and some both internally and externally.
It would take an entire book (a long chapter, anyway) to detail the methods used to assess controls. Most of this subject matter lies outside the realm of most CISSPs, so we’ll just summarize here. If you are “fortunate” enough to work in a highly regulated environment, you may get exposure to these concepts, and more.
There are five basic techniques used to assess the effectiveness of a control:
Auditors often use more than one of the techniques above when testing control effectiveness. The method(s) used are sometimes determined by the auditor, but sometimes the law, regulation, or standard specifies the type of control testing required.
Some controls are manifested in many physical locations, or are present in many separate information systems. Sometimes, an auditor will elect to examine a subset of systems or locations instead of all of them. In large organizations, or in organizations where controls are implemented identically in all locations, it makes sense to examine a subset of the total number of instances (auditors call the entire collection of instances the population).
The available techniques include the following:
Some laws, regulations, and standards have their own rules about sampling and the techniques that are permitted.
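A simple random (statistical) sample of a control population can be drawn as shown below; the server names and sample size are invented for illustration:

```python
import random

def draw_sample(population, sample_size, seed=None):
    """Select a simple random sample of control instances from the population,
    without replacement."""
    rng = random.Random(seed)   # seeded so the audit sample is reproducible
    return rng.sample(population, min(sample_size, len(population)))

servers = [f"server-{i:03d}" for i in range(1, 201)]   # population of 200 instances
audit_sample = draw_sample(servers, 25, seed=2024)
```

Seeding the generator lets the auditor document exactly how the sample was selected, which matters when the sampling method itself is subject to review.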
Auditors will typically create formal reports that include several components, including:
Some laws, regulations, and standards specify elements required in audit reports, and sometimes even the format of a report.
Any safeguards or controls that are implemented need to be managed and, as you know, you can’t manage what you don’t measure! Monitoring and measurement not only help you manage safeguards and controls, they also help you verify and prove effectiveness (for auditing and other purposes).
Monitoring and measurement refer to active, intentional steps in controls and processes, so that management can understand how controls and processes are operating. Depending on the control or process, one or more of the following will be recorded for management reporting:
For some controls, management may direct personnel (or systems, for automatic controls) to create alerts or exceptions in specific circumstances. This will inform management of specific events where they may wish to take action of some kind. For example, a bank’s customer representative might be required to inform a branch manager if a customer asks for change for a ten-thousand dollar bill.
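An automated exception report of the kind described above amounts to a simple threshold filter; the threshold and transaction shape here are assumptions for illustration:

```python
ALERT_THRESHOLD = 10_000  # dollars; illustrative management-defined threshold

def flag_exceptions(transactions, threshold=ALERT_THRESHOLD):
    """Return the transactions that should generate a management alert."""
    return [t for t in transactions if t["amount"] >= threshold]

txns = [{"id": 1, "amount": 2_500}, {"id": 2, "amount": 12_000}]
alerts = flag_exceptions(txns)   # only transaction 2 is flagged for management
```

The human version of the same control (the customer representative informing the branch manager) differs only in who applies the threshold.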
Asset valuation is an important part of risk management, because managers and executives need to be aware of the tangible and intangible value of all assets involved in specific incidents of risk management.
Once in a while, an asset’s valuation can come from the accounting department’s balance sheet (for better organizations that have a good handle on asset inventory, value, and depreciation), but often that’s only a part of the story. For example, if an older server is involved in an incident and must be replaced, that replacement cost will be far higher than the asset’s depreciated value. Further, the time required to deploy and ready a replacement server, and the cost of downtime, also need to be considered.
There are sometimes other ways to assign values to assets. For example, an asset’s contribution to revenue may change one’s perspectives on an asset’s value. If an asset with a $10,000 replacement cost is key in helping the organization realize $5 million in revenue, is it still worth just $10,000?
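The reasoning above suggests a blended valuation that takes the larger of the asset’s full replacement-side costs and its revenue contribution. This is one possible heuristic, not a standard formula:

```python
def blended_asset_value(replacement_cost, deployment_cost, downtime_cost,
                        revenue_contribution):
    """One heuristic: value the asset at the greater of its full replacement
    cost (including deployment effort and downtime) and the revenue it supports."""
    return max(replacement_cost + deployment_cost + downtime_cost,
               revenue_contribution)

# The $10,000 server supporting $5M in revenue, from the example above
# (deployment and downtime figures are invented)
value = blended_asset_value(10_000, 2_000, 8_000, 5_000_000)
```

Under this heuristic, the server in the example is valued at $5 million rather than $10,000, which changes the cost-benefit math for any safeguard protecting it.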
Regular reporting is critical to ensure that risk management is always “top of mind” for management. Reports should be accurate and concise. Never attempt to hide or downplay an issue, incident, or other bad news. Any changes to the organization’s risk posture — whether due to a new acquisition, changing technology, new threats, or the failure of a safeguard, among others — should be promptly reported and explained.
Potentially, there is a lot of reporting going on in a risk management process, including:
You guessed it: Some laws, regulations, and standards may require these and other types of reports (and, in some cases, in specific formats).
Continuous (or continual) improvement is more than a state of mind or a philosophy. It is a way of thinking about security and risk management. Better organizations bake continual improvement into their business processes, as a way of intentionally seeking opportunities to do things better.
ISO/IEC 27001 (Information Security Management Systems [ISMS] requirements) specifically requires continual improvement in several ways:
If you ask an experienced security and risk professional about risk frameworks, chances are they will think you are talking about either risk assessment frameworks or risk management frameworks. These frameworks are distinct, but deal with the same general subject matter: identification of risk that can be treated in some way.
Risk assessment frameworks are methodologies used to identify and assess risk in an organization. These methodologies are, for the most part, mature and well established.
Some common risk assessment methods include
A risk framework is a set of linked processes and records that work together to identify and manage risk in an organization. The activities in a typical risk management framework are
There is no need to build a risk management framework from scratch. Instead, there are several excellent frameworks available that can be adapted for any size and type of organization. These frameworks include
Threat modeling is a type of risk analysis used to identify security defects in the design phase of an information system or business process. Threat modeling is most often applied to software applications, but it can be used for operating systems, devices, and business processes with equal effectiveness.
Threat modeling is typically attack-centric; it is most often used to identify the vulnerabilities in software applications that an attacker could exploit.
Threat modeling is most effective when performed at the design phase of an information system, application, or process. When threats and their mitigation are identified at the design phase, much effort is saved through the avoidance of design changes and fixes in an existing system.
While there are different approaches to threat modeling, the typical steps are
Threat identification is the first step that is performed in threat modeling. Threats are those actions that an attacker may be able to successfully perform if there are corresponding vulnerabilities present in the application or system.
For software applications, two mnemonics are commonly used as memory aids during threat modeling:
STRIDE: Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege
DREAD: Damage, Reproducibility, Exploitability, Affected users, Discoverability
While these mnemonics themselves don’t contain threats, they do assist the individual performing threat modeling by serving as reminders of the basic threat categories (STRIDE) and a way to rate them (DREAD).
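Using the conventional DREAD ratings (Damage, Reproducibility, Exploitability, Affected users, Discoverability), a threat’s risk score is commonly taken as the average of the five ratings; the 1-to-10 scale and the example scores below are assumptions:

```python
def dread_score(damage, reproducibility, exploitability, affected_users,
                discoverability):
    """Average the five DREAD ratings (each on a 1-10 scale) into one score."""
    ratings = [damage, reproducibility, exploitability, affected_users,
               discoverability]
    if not all(1 <= r <= 10 for r in ratings):
        raise ValueError("each DREAD rating must be between 1 and 10")
    return sum(ratings) / len(ratings)

score = dread_score(8, 10, 7, 9, 6)   # 8.0 for this hypothetical threat
```

Scoring each identified threat this way lets the modeler rank threats and address the highest-scoring ones first.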
After threats have been identified, threat modeling continues through the creation of diagrams that illustrate attacks on an application or system. An attack tree can be developed that outlines the steps required to attack a system. Figure 3-2 illustrates an attack tree of a mobile banking application.
FIGURE 3-2: Attack tree for a mobile banking application.
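An attack tree can also be represented as a nested data structure, with each root-to-leaf path enumerating one complete attack. The goals and steps below are invented for illustration, not taken from Figure 3-2:

```python
# Hypothetical attack tree: a goal maps to sub-goals (dict) or attack steps (list)
attack_tree = {
    "Compromise mobile banking app": {
        "Steal credentials": ["Phish the user", "Install a keylogger"],
        "Attack the app itself": ["Tamper with the APK", "Intercept API traffic"],
    }
}

def enumerate_paths(tree, path=()):
    """Yield every root-to-leaf attack path in the tree."""
    for goal, children in tree.items():
        if isinstance(children, dict):
            yield from enumerate_paths(children, path + (goal,))
        else:
            for step in children:
                yield path + (goal, step)

for p in enumerate_paths(attack_tree):
    print(" -> ".join(p))
```

Enumerating the paths this way makes it easy to confirm that every branch of the tree has a corresponding mitigation.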
When performing a threat analysis on a complex application or a system, it is likely that there will be many similar elements that represent duplications of technology. Reduction analysis is an optional step in threat modeling to avoid duplication of effort. It doesn’t make sense to spend a lot of time analyzing different components in an environment if they are all using the same technology and configuration.
Here are typical examples:
Just as in routine risk analysis, the next step in threat analysis is the enumeration of potential measures to mitigate the identified threat. Because the nature of threats varies widely, remediation may consist of one or more of the following for each risk:
Integrating security risk considerations into supply chain management and merger and acquisition strategy helps to minimize the introduction of new or unknown risks into the organization.
It is often said that security in an organization is only as strong as its weakest link. In the context of service providers, mergers, and acquisitions, the security of all organizations in a given ecosystem will be dragged down by shoddy practices in any one of them. Connecting organizations together before sufficient analysis can result in significant impairment of the security capabilities overall.
Instead, each organization’s individual policies, requirements, processes, and procedures should be assessed to identify the best solution for the newly formed organization going forward.
Any new hardware, software, or services being considered by an organization should be appropriately evaluated to determine both how it will impact the organization’s overall security and risk posture, and how it will affect other hardware, software, services, and processes already in place within the organization. For example, integration issues can have a negative impact on a system’s integrity and availability.
It’s important to consider the third parties that organizations use. Not only do organizations need to carefully examine their third-party risk programs, but a fresh look at the third parties themselves is also needed, to ensure that the risk level related to each third party has not changed to the detriment of the organization.
Any new third-party assessments or monitoring should be carefully considered. Contracts (including privacy, non-disclosure, and security requirements) and service-level agreements (SLAs, discussed later in this section) should be reviewed to ensure that all important security issues and regulatory requirements are still addressed adequately.
Minimum security requirements, standards and baselines should be documented to ensure they are fully understood and considered in acquisition strategy and practice. Blending security requirements from two previously separate organizations is almost never as easy as simply combining them together into one document. Instead, there may be many instances of overlap, underlap, gaps, and contradiction that must all be reconciled. A transition period may be required, so that there is ample time to adjust the security configurations, architectures, processes, and practices to meet the new set of requirements after the merger or acquisition.
Service-level agreements (SLAs) establish minimum performance standards for a system, application, network, service, or process. An organization establishes internal SLAs and operating level agreements (OLAs) to provide its end-users with a realistic expectation of the performance of its information systems, services, and processes. For example, a help desk SLA might prioritize incidents as 1, 2, 3, and 4, and establish SLA response times of ten minutes, 1 hour, 4 hours, and 24 hours, respectively. In third-party relationships, SLAs provide contractual performance requirements that an outsourcing partner or vendor must meet. For example, an SLA with an Internet service provider might establish a maximum acceptable downtime which, if exceeded within a given period, results in invoice credits or (if desired) cancellation of the service contract.
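The help-desk SLA example above maps directly to a small lookup table; the breach check is a sketch:

```python
from datetime import timedelta

# Response-time targets from the help-desk example: priorities 1 through 4
SLA_RESPONSE = {
    1: timedelta(minutes=10),
    2: timedelta(hours=1),
    3: timedelta(hours=4),
    4: timedelta(hours=24),
}

def sla_breached(priority, actual_response):
    """Return True if the actual response time exceeded the SLA target."""
    return actual_response > SLA_RESPONSE[priority]

print(sla_breached(2, timedelta(minutes=45)))   # False: within the 1-hour target
print(sla_breached(1, timedelta(minutes=30)))   # True: missed the 10-minute target
```

In a third-party contract, the same check would feed the invoice-credit or cancellation clauses described above.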
The CISSP candidate should be familiar with the tools and objectives of security awareness, training, and education programs. Adversaries are well aware that, as organizations’ technical defenses improve, the most effective way to attack an organization is through its staff. Hence, all personnel in an organization need to be aware of attack techniques so that they can be on the lookout for these attacks and not be fooled by them.
The three main components of an effective security awareness program are a general awareness program, formal training, and education.
A general security awareness program provides basic security information and ensures that everyone understands the importance of security. Awareness programs may include the following elements:
Formal training programs provide more in-depth information than an awareness program and may focus on specific security-related skills or tasks. Such training programs may include
An education program provides the deepest level of security training, focusing on underlying principles, methodologies, and concepts. In all but the largest organizations, this training is delivered by external agencies, as well as colleges, universities, and vocational schools.
An education program may include
As we say often in this book, you can’t manage what you don’t measure. Security awareness training is definitely included here. It is vital that security awareness training include a number of different measurements so that security managers and company leadership know whether the effort is worth it. Some examples include
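One commonly tracked awareness metric is the click rate on simulated phishing campaigns; this sketch assumes simple counts of delivered and clicked messages:

```python
def phishing_click_rate(clicked, delivered):
    """Percentage of simulated phishing emails whose links were clicked."""
    if delivered <= 0:
        raise ValueError("delivered must be a positive count")
    return 100.0 * clicked / delivered

rate = phishing_click_rate(18, 600)   # 3.0 percent for this hypothetical campaign
```

Tracked campaign over campaign, a falling click rate is one piece of evidence that the awareness effort is worth its cost.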
Congratulations! You’ve chosen a profession that is constantly and rapidly changing! As such, security education, training, and awareness programs must constantly be reviewed and updated to ensure they remain relevant, and to ensure your own knowledge of current security concepts, trends, and technologies remains current. We suggest that the content of security education and training programs be examined at least once per year, to ensure that it contains no mention of obsolete or retired technologies or systems, and that current topics are included.