Chief information officers and the information technology (IT) and information security (IS) professionals with whom they work can no longer confine their areas of knowledge to those explained in ones and zeroes. As corporate officers and directors are increasingly vested by law with oversight responsibility for their organization’s information security practices, and enforcement actions based on information security failures escalate, the scrutiny of IT and IS practices is certain to intensify. As key figures in organizational incident response teams, IT professionals will be expected to play a vital role in preventing, minimizing, and recovering losses from cyber attacks. Fulfilling that role effectively requires not only technical expertise, but some understanding of the surrounding legal issues. Indeed, with recent increases in government resources devoted to fighting cyber crime, and with the proliferation of new regulations aimed at setting minimum standards for information security safeguards, IT and IS managers must understand the computer crime and information security laws that they now regularly encounter.
Understanding what constitutes a computer crime enables IT managers to prioritize responses and build successful cases for prosecution and recovery of losses. Tracking developments in information security regulations and their corresponding effect on industry standards ensures that the core IT and IS functions do not become a source of corporate liability, and it provides IT professionals with the added benefit of a reasoned basis for persuading management to adopt best practices.
IT and IS professionals, along with the technology solutions they choose to deploy, form the primary line of defense against incursions into government and corporate computer networks. As the first responders to network incidents, particularly those emanating from outside the organization, these professionals are responsible for evaluating when network events rise above the normal background noise. In order to assess those events meaningfully, it is imperative that IT professionals have some understanding of the laws that govern misconduct on networks.
Knowledge of the elements of the various computer crimes defined by federal statutes is vital to IS professionals, not only because it assists them in defending their companies’ data, products, and communications from outside threats, but because it enables them to reduce their companies’ liability for actions taken by their own employees. Unwanted network activity takes on a variety of forms and occurs along a continuum that runs from mere bothersome nuisances to potentially terminable employment offenses to federal felonies.
Understanding the basic elements of computer crimes has several advantages:
• It informs the decision of whether to elevate notice of certain conduct to others within the organization. When the IT staff knows the key attributes that constitute criminal conduct, they are far less likely to sound alarms in response to non-actionable events.
• It enables IT professionals to position their companies to make sound criminal referrals (or to build solid civil cases). Computer crime laws are somewhat unique in that they impose a large degree of responsibility on the victim for taking steps to establish the commission of a cyber crime, including defining access permissions and documenting damage. Awareness of this responsibility enables IT professionals to design their network defense posture and to collect and document critical evidence when responding to incidents. In most cases, IT managers will take a lead role in drafting their companies’ information security policies, and recognition of the key computer crime elements can be incorporated into those policies.
• It will assist in preventing overly aggressive actions in response to incidents that might subject a system administrator to liability.
Computer crimes can generally be divided into three categories: the “hacking” laws, which cover intrusions into computer networks and subsequent fraud, theft, or damage; the “electronic communications” laws, which govern the interception, retrieval, and disclosure of e-mail and keystrokes; and other “substantive” laws, which address otherwise unlawful conduct either committed in cyberspace or assisted by computers.
The Computer Fraud and Abuse Act (CFAA), codified at 18 U.S.C. Section 1030, is the seminal law on computer crimes. Designed to protect the confidentiality, integrity, and availability of data and systems, the CFAA targets hackers and others who access or attempt to access computers without authorization and inflict some measure of damage. Such prohibited access includes not only direct hacking into a system, but also denial of service attacks, viruses, logic bombs, ping floods, and other threats to information security.
The CFAA defines seven prohibited acts: unauthorized access of information protected for national security reasons,1 unauthorized access of confidential information on the Internet,2 unauthorized access of nonpublic government computers,3 unauthorized access of a protected computer in furtherance of fraud,4 intentional acts causing damage to computers,5 trafficking in passwords affecting interstate commerce or government computers,6 and threats to cause damage to a protected computer for the purpose of extortion.7
Only “protected computers” as defined by Section 1030(e)(2) are covered by the CFAA. Two classes of protected computers are defined: those used exclusively by a financial institution or the United States government (or, if use is shared, if the conduct constituting the offense affects the use of a financial institution or the government), and those used in interstate or foreign commerce or communications.8 In 1996, amendments expanded the range of protected computers by including any computers used in interstate commerce, which includes virtually any computer connected to the Internet.9 The 2001 USA PATRIOT Act further expanded the definition of protected computers by including computers outside of the United States that affect U.S. interstate commerce.10 Practically speaking, then, nearly every conceivable computer crime will satisfy the CFAA’s jurisdictional threshold, and meeting the elements of the particular violations presents the only hurdle to establishing a CFAA violation.
Two key sets of concepts permeate the CFAA:
• Access without or in excess of authorization
• Damage or loss
With rare exception, these two elements must be met to establish a CFAA crime. Because these concepts are central to all violations, it is important to understand their meaning in the context of the statute.
For the purpose of the CFAA, the “access without authorization” prong actually can take two distinct forms. The first is a straight “unauthorized access,” which is defined in terms of a traditional trespass—an outsider without privileges or permission to a certain network breaks into that network. For traditional unauthorized access, the intent of the trespasser is irrelevant.
In addition to straight trespass, the CFAA also relies on the concept of gaining access to a computer system in “excess of authorization.” Recognizing when a user has exceeded his or her level of authorization can be a far more subtle determination than identifying a straight unauthorized access. “Excess of authorization” can be established by reference both to the purpose of the perpetrator’s access and to the extent of that access. By way of example, an authorized user on a company network may have rights subject to limitations on the scope of access—the user is not permitted to have system administrator privileges or to access certain shared drives that are dedicated to storing sensitive information. If that user, while authorized to be on the network, elevates his or her privileges to root access, or somehow gains access to the restricted shared drive, that user is transformed from an authorized user into one acting “in excess of authorization.” Similarly, the same user may also be given access to information on the network but only for a specific purpose—an IRS agent may access taxpayer files, but only for those taxpayers on whose cases the agent is working. If that agent begins browsing taxpayer files unrelated to her job function, the improper purpose for which she is accessing the information may transform the otherwise authorized use into an “excess of authorization.” Defining an act as purely unauthorized, as opposed to exceeding authorization, can be significant, as certain sections of the CFAA require proof that the perpetrator’s access was wholly unauthorized, while mere “excess of authorization” is sufficient for others.
NOTE Indeed, the First Circuit Court of Appeals recognized that an IRS employee’s browsing of taxpayer information out of idle curiosity, where such activity was forbidden by IRS employment policy, constituted access in excess of authorization. U.S. v. Czubinski, 106 F.3d 1069, 1078-79 (1st Cir. 1997). By contrast, a violation does not exist where a defendant can establish that the reason for the access was approved. See Edge v. Professional Claims Bureau, Inc., 64 F.Supp.2d 116, 119 (E.D.N.Y. 1999) (granting summary judgment to defendant who accessed a credit report for a permissible purpose).
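The distinction between the two forms of improper access can be pictured as a toy access-control check. This is purely illustrative: the account table, user names, and share names below are hypothetical, and real network permissions are of course far richer.

```python
# Toy illustration of the CFAA's two forms of improper access discussed above.
# The account table and all names in it are hypothetical.

ACCOUNTS = {
    "agent_a": {"shares": {"casework"}},  # authorized user with a limited scope
}

def classify_access(user: str, share: str) -> str:
    if user not in ACCOUNTS:
        # An outsider with no privileges at all: straight trespass.
        return "unauthorized access"
    if share not in ACCOUNTS[user]["shares"]:
        # An insider reaching beyond the scope granted: the subtler case.
        return "access in excess of authorization"
    return "authorized"

print(classify_access("intruder", "casework"))      # unauthorized access
print(classify_access("agent_a", "restricted_hr"))  # access in excess of authorization
print(classify_access("agent_a", "casework"))       # authorized
```

Note that the second branch captures only scope-based excess; purpose-based excess (the IRS browsing example) turns on intent and cannot be detected by an access-control list alone, which is one reason acceptable use policies matter.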
The second set of key concepts in the CFAA is “damage” or “loss.” The CFAA defines damage as “any impairment to the integrity or availability of data, a program, system, or information.”11 For certain provisions of the CFAA, damage is confined to the following subset of specific harms:
• Loss to one or more persons affecting one or more protected computers aggregating to at least $5,000
• Any modification or potential modification to the medical diagnosis, treatment, or care of one or more individuals
• Physical injury to any person
• A threat to public health or safety
• Damage affecting a computer system used by government for administration of justice, national defense, or national security
“Loss,” for purposes of the statute, includes “any reasonable cost to the victim, including incident response, damage assessment, restoration of data or systems, and lost revenue or costs incurred from interruption of service.”12 The USA PATRIOT Act, passed shortly after September 11, 2001 (and the subject of much debate between civil liberties advocates and supporters of law enforcement on other issues), also clarified the concept of loss by explicitly recognizing that a victim’s costs incurred in responding to and remedying damage caused by the crime are compensable. Accordingly, information security professionals should keep detailed records of time spent and hard expenses incurred from the moment an incident response commences. Because certain CFAA crimes have statutory monetary thresholds, and because many United States attorneys’ offices impose significantly higher monetary thresholds before they will consider taking a case, the victim will often be called upon to produce evidence of the costs incurred in connection with the attack. Finally, the revised definition of “loss” is significant because any party suffering such loss may bring a civil suit for violations of the CFAA, provided that loss exceeds $5,000.
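The record-keeping point can be sketched as a minimal cost ledger. The category names, rates, and threshold handling below are this example’s own illustration, not a statutory accounting method; the only figure taken from the statute is the $5,000 civil-suit threshold.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: a minimal ledger for documenting incident-response
# costs, since the CFAA's civil remedy and certain criminal provisions turn on
# whether aggregate "loss" exceeds $5,000. Categories and rates are invented.

CIVIL_SUIT_THRESHOLD = 5_000  # 18 U.S.C. 1030: loss must exceed $5,000

@dataclass
class IncidentLedger:
    entries: list = field(default_factory=list)

    def record_time(self, category: str, description: str, hours: float, rate: float):
        """Log time spent (e.g. incident response, damage assessment, restoration)."""
        self.entries.append((category, description, hours * rate))

    def record_expense(self, category: str, description: str, amount: float):
        """Log a hard expense (e.g. forensic consultants, lost revenue)."""
        self.entries.append((category, description, amount))

    def total_loss(self) -> float:
        return sum(amount for _, _, amount in self.entries)

    def meets_civil_threshold(self) -> bool:
        return self.total_loss() > CIVIL_SUIT_THRESHOLD

ledger = IncidentLedger()
ledger.record_time("incident response", "triage and containment", hours=20, rate=150.0)
ledger.record_time("restoration", "rebuild compromised server", hours=16, rate=150.0)
ledger.record_expense("lost revenue", "4-hour outage of order system", 1_200)
print(ledger.total_loss())             # 6600.0
print(ledger.meets_civil_threshold())  # True
```

Starting such a ledger the moment a response commences, rather than reconstructing costs later, is what turns remediation effort into usable evidence.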
Each section of the CFAA incorporates these concepts of unauthorized access plus damage in defining the specific conduct prohibited by that section. When evaluating whether unwanted network activity constitutes a crime, the threshold issue should be isolating the unauthorized access. Upon that determination, the next question an IT manager should ask is “What ‘plus’ factor exists?” Mere trespass (of a nongovernment computer) alone does not constitute a crime under federal law. Accordingly, there must be some additional activity that causes damage or loss in some form in order to constitute a crime. The nature of that “something more” varies by section of the CFAA, as is demonstrated by the following review of the most regularly charged 1030 offenses.
Section 1030(a)(2) has perhaps the broadest application of any section, as it protects the confidentiality of data, irrespective of whether any damage is caused to the integrity or availability of the data. 1030(a)(2) prohibits intentionally accessing a computer without or in excess of authorization and thereby obtaining information in a financial record or a credit report, from a federal agency, or from a “protected computer” if the conduct involved an interstate or foreign communication. In essence, 1030(a)(2) reaches both forms of unauthorized access, and the only requisite “plus factor” is obtaining information.13 This provision has been further broadened by courts holding that the mere viewing of information during a period of unauthorized access constitutes “obtaining” the information, even if it is not copied, downloaded, or otherwise converted.14 In recognition of its having the least egregious “plus factor,” violations of 1030(a)(2) are misdemeanors, not felonies (meaning they carry a maximum sentence of one year in prison), unless they are committed for commercial advantage or private financial gain, for criminal or tortious purposes, or if the value of the information exceeds $5,000.
Section 1030(a)(3) contains the only “mere trespass” crime recognized under the federal cyber-crime laws, but it is limited in application to government computers. Specifically, this section prohibits intentionally accessing any nonpublic computer of a U.S. government department or agency if the person is not authorized to access any computer of that department or agency. The victim computer can be one to which access is shared between government agencies and private contractors, provided the charged conduct affects the use by or for the government. Unlike Section 1030(a)(2), (a)(3) only criminalizes pure “outsider” unauthorized access, and not uses in excess of authorization. First-time 1030(a)(3) offenses are misdemeanors.
Section 1030(a)(4) criminalizes either form of unauthorized access in connection with a scheme to defraud. Specifically, this section prohibits “knowingly and with the intent to defraud, accessing a protected computer without or in excess of authorization, and by means of such conduct further[ing] the intended fraud and obtain[ing] anything of value.” Here, the “plus factors” are the existence of a fraudulent scheme in connection with the hack, as well as the acquisition of something of value. The CFAA specifically excludes the theft of small-scale computer time (less than $5,000 in one year) as the potential thing of value. Accordingly, “hacks for access” where the victim’s computer resources are the only thing taken (such as leveraging the wireless network of a neighboring company) do not constitute an (a)(4) violation, despite the presence of an unauthorized access coupled with an intent to defraud (unless a loss of over $5,000 can be demonstrated). 1030(a)(4) violations are felonies carrying a five-year maximum sentence and a $250,000 maximum fine for first-time offenses.
Section 1030(a)(5) covers the classic computer hacking violations—intentional release of worms and viruses, denial of service attacks, and computer intrusions that damage systems. The section is broken into three distinct parts. First, Section 1030(a)(5)(A)(i) prohibits knowingly causing the transmission of a “program, information, code, or command” and as a result of such conduct, intentionally causing “damage” without authorization to a protected computer. This subsection has a strict intent element—the wrongdoer must knowingly commit the act while intending to cause damage—but it is unique among CFAA crimes in that it applies to either insiders or outsiders as it does not require any level of unauthorized access. Section (a)(5)(A)(i) crimes are those where no level of access is necessarily required to commit the offense, as in a SYN flood attack, where an outsider manages to knock a system offline without ever gaining access.
NOTE In the case of United States v. Morris, 928 F.2d 504 (2nd Cir. 1991), a defendant who released a worm into national networks connecting university, governmental, and military computers around the country was found guilty of accessing federal interest computers without authorization under former Section 1030(a)(5)(A).
Section 1030(a)(5)(A)(ii) and (iii) govern traditional computer hacking by outsiders that causes damage to the victim system. Section (a)(5)(A)(ii) prohibits intentionally accessing a protected computer without authorization and recklessly causing damage; Section (a)(5)(A)(iii) criminalizes the same unlawful access coupled with causing any damage, negligently or otherwise. The severity of the penalties depends on whether the damage was caused recklessly (a felony) or negligently (a misdemeanor). Thus, unlike (a)(5)(A)(i), the latter two subsections do require an “unauthorized access” coupled with the causing of damage. Significantly, both (a)(5)(A)(ii) and (iii) require that the perpetrator be an “outsider,” as someone merely exceeding authorized access cannot commit either offense. For all three subsections of 1030(a)(5), the conduct must result in the previously identified subsets of “damage” set forth in 1030(a)(5)(B). Accordingly, bothersome and potentially nefarious conduct, such as repeated port-scanning, where no actual unauthorized access has occurred and no actual damage has resulted, does not reach the level of a 1030(a)(5) violation.15
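The offenses reviewed above can be condensed into a rough reference table. This is a sketch for triage conversations, not legal advice: the field names and shorthand phrasing are this example’s own paraphrase, not the statutory text.

```python
# Rough summary of the most regularly charged Section 1030 offenses, as
# reviewed above. Phrasing is informal shorthand, not the statutory language.

CFAA_ELEMENTS = {
    "1030(a)(2)": dict(
        access="without or in excess of authorization",
        plus_factor="obtaining information (mere viewing can suffice)",
        grade="misdemeanor unless for gain, tortious purpose, or info over $5,000",
    ),
    "1030(a)(3)": dict(
        access="outsider trespass only (insiders cannot commit it)",
        plus_factor="none; mere trespass into a nonpublic government computer",
        grade="misdemeanor for first-time offenses",
    ),
    "1030(a)(4)": dict(
        access="without or in excess of authorization",
        plus_factor="scheme to defraud plus obtaining anything of value",
        grade="felony (five-year maximum for a first offense)",
    ),
    "1030(a)(5)(A)(i)": dict(
        access="none required (insider or outsider)",
        plus_factor="knowing transmission intentionally causing damage",
        grade="felony",
    ),
    "1030(a)(5)(A)(ii)": dict(
        access="outsider unauthorized access",
        plus_factor="recklessly causing damage",
        grade="felony",
    ),
    "1030(a)(5)(A)(iii)": dict(
        access="outsider unauthorized access",
        plus_factor="causing any damage, even negligently",
        grade="misdemeanor",
    ),
}

def triage(subsection: str) -> str:
    """One-line summary an IT manager might consult before escalating."""
    e = CFAA_ELEMENTS[subsection]
    return (f"{subsection}: access {e['access']}; "
            f"plus factor: {e['plus_factor']}; {e['grade']}")

print(triage("1030(a)(4)"))
```

A table like this mirrors the two-question analysis the text describes: first isolate the form of access, then identify the “plus” factor before deciding whether to escalate.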
Criminal penalties under the CFAA vary depending on the prohibited act. The CFAA provides for both fines and imprisonment, and punishment may vary depending on whether the offender is an insider or outsider, and on whether the offender is a first-time CFAA violator or a recidivist. The recent USA PATRIOT Act further expanded activity covered under the CFAA by punishing an attempt to commit any of the seven prohibited acts as if the act were completed, and by including state court convictions for similar crimes in determining whether an offender is a first-time offender under the statute.16
Understanding the conduct forming the violations of the CFAA not only helps when referring incidents to law enforcement, but it also permits entities to build a case for potential recovery of losses in a civil case. The CFAA allows private actions so that parties suffering damage or loss can obtain compensatory damages and injunctive relief from the violator.17 Although civil claims are limited to the subset of specific damage set out in 18 U.S.C. Section 1030(a)(5), this does not pose a serious practical limitation on entities seeking redress, as it includes loss in excess of $5,000. Thus, civil cases may be pursued for any CFAA violation, and entities may seek to recover all economic loss suffered, including the cost of response and remediation.
NOTE In connection with the passage of the USA PATRIOT Act, Congress amended the CFAA’s civil provision to clarify that CFAA claims could not be pursued based on claims of negligent design or manufacture of computer hardware, software, or firmware. (See 18 U.S.C. Section 1030(g).)
Although the CFAA has broad application over nearly all computer hacking offenses, it is not the only set of relevant cyber-crime laws for such incidents. In fact, most states now have their own cyber-crime statutes.18 Although each of these provisions has its own unique attributes, a large number of them are modeled on the CFAA and incorporate its core access and damage concepts. These similarities, and the limited jurisdictional reach of state law enforcement (many state authorities are somewhat loath to investigate cyber crimes where both the victim and perpetrator reside outside of the state), reinforce that a working-level knowledge of the federal CFAA is of paramount importance to IT and IS professionals. Awareness of the cyber-crime laws of the company’s home state can be helpful, however, particularly in cases involving mere trespass into nongovernment computers (access without damage), which many states outlaw, and where the damage associated with an unauthorized access is too low for consideration by federal law enforcement.
Federal statutes protect electronic communications, including e-mail, instant messaging, and the keystrokes of network users (and sometime abusers) both from interception while they are being sent, and from access after they arrive at their destination. The Electronic Communications Privacy Act (ECPA) and its associated federal statutes prohibit the unauthorized interception or disclosure of such communications, but the level of protection for the communications differs depending upon whether the communications are in transit or are stored. Understanding how these laws work is also useful in understanding when your organization is the victim of a crime. More importantly, however, because the monitoring of electronic communications is an integral part of what IT and IS professionals are asked to do, they should have a firm grasp of when such monitoring is authorized.
The real-time acquisition of electronic communications in transit is governed by the wiretap provisions of the ECPA, codified at 18 U.S.C. Section 2511 and following. Specifically, Section 2511(a) prohibits intentionally intercepting (or “endeavoring to intercept”) any electronic communication, intentionally disclosing (or “endeavoring to disclose”) the contents of any electronic communication knowing or having reason to know that the information was obtained through an illegal wiretap, or using (or “endeavoring to use”) the information knowing it was obtained via an unlawful interception.19 Practically speaking, the wiretap provisions make unlawful the use of packet sniffers or other devices designed to record the keystrokes of persons sending electronic communications, unless a legally recognized exception applies to authorize the conduct.
Obviously, IT and IS professionals must be able to use electronic monitoring tools in maintaining and protecting their network environments. The wiretapping provisions of the ECPA recognize this reality and afford two primary exceptions (other than specific Title III wiretapping authorities for law enforcement) under which the interception of electronic communications is permitted: self-defense and consent. The self-defense or system provider exception states that a “provider of … electronic communication service” may intercept communications on its own machines “in the normal course of employment while engaged in any activity which is a necessary incident to … the protection of the rights or property of the provider of that service.”20
The courts have not had occasion to define the contours of when an activity is a “necessary incident” to protecting rights and property. What is certain, however, is that there must be some limitation on permissible monitoring, or the exception would swallow the general prohibition. Whereas a system administrator’s monitoring the keystrokes of a hacker who has gained access via a dormant account and attempted to elevate himself to root-level access surely falls squarely into the exception, periodic monitoring of the e-mail communications of all junior vice-presidents in a certain division of a company seems to stretch beyond the rationale for the exception.
NOTE In some cases, an entity may monitor a hacker’s activities for a period of time and then turn over the results of its own investigation to law enforcement. Once a criminal investigation related to the activity commences, it is unlawful for any person to disclose the communications obtained lawfully under the self-defense exception if done with the intent to impede or obstruct the criminal investigation, or if the communications were intercepted in connection with that criminal investigation.
The uncertainty of the self-defense exception’s reach suggests that reliance on the second exception, consent, provides a far sounder footing in most instances. The Wiretap Act recognizes that it shall not be unlawful for a person to intercept an electronic communication where the person “is a party to the communication or where one of the parties to the communication has given prior consent to such interception.”21 The clearest form of consent is when an actual party to the communication seeks to record it. Under federal law, only one party to the communication need consent: either the sender or the recipient of an e-mail or instant message may record or disclose it. (Some states, however, require that both parties to a communication consent before the contents may be recorded or disclosed.)
In most instances where a company calls upon its IT staff to monitor communications, however, the staff are not participants in the subject communications. The entity that owns the network is not automatically a party to an e-mail exchange between someone using its system and a third party outside the network. Accordingly, if that entity wishes to preserve the right to monitor such communications, it must ensure that it has previously obtained the consent to do so from all users of its network. The cleanest manner of ensuring consent to record all communications on an entity’s network is to use a click-through banner as part of the login process, requiring any user of the system to accept that use of the system constitutes consent to the monitoring of all use of that network.
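As a sketch, a click-through banner might be assembled along the following lines. The wording below is a generic example invented for illustration (it is not drawn from the text above or from any statute) and should be reviewed by counsel before deployment; `ORG_NAME` is a placeholder.

```python
# Illustrative only: composing the kind of click-through consent banner
# discussed above. The banner wording is a generic example, not vetted legal
# language; have counsel approve any text before it is deployed.

ORG_NAME = "Example Corp"  # placeholder

CONSENT_BANNER = (
    f"This system is the property of {ORG_NAME} and is for authorized use "
    "only. By using this system, you consent to the monitoring, recording, "
    "and disclosure of all activity on it, including e-mail and other "
    "electronic communications. Unauthorized use may be referred to law "
    "enforcement."
)

def render_login_prompt() -> str:
    """The banner must be displayed, and accepted, before credentials are taken."""
    return CONSENT_BANNER + "\n\nType ACCEPT to continue: "

def login_permitted(user_response: str) -> bool:
    """Only an affirmative acknowledgment establishes click-through consent."""
    return user_response.strip().upper() == "ACCEPT"
```

The design point is the ordering: consent is captured as a condition of login, so that every subsequent use of the network occurs after the user has agreed to monitoring.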
In the absence of such a banner, consent via organizational acceptable use policies and employee handbooks may suffice. When relying on consent obtained via policy or handbook, entities should be mindful of defining the consent broadly. Broad consents are increasingly necessary, due both to the proliferation of devices enabling the exchange of electronic communications (cell phones, RIM devices, remote access programs), and to recent court cases extending the application of the wiretap provisions to activities that may be routinely monitored by organizations without regard to wiretapping concerns, such as tracking URLs visited by network users.22
Like the CFAA, the wiretap provisions of the ECPA permit civil suits to be brought for violations of the Act. Any person whose wire, oral, or electronic communication is intercepted, disclosed, or intentionally used in violation of the Act may recover actual, statutory, and/or punitive damages from the person or entity engaging in the offense.23 Thus, criminal liability aside, it is critical that IT professionals are mindful about the types of interceptions they and their companies perform.
Stored electronic communications, such as e-mail residing on a mail server, are protected by the stored communications provisions of the ECPA, codified at 18 U.S.C. Section 2701 and following. Specifically, Section 2701(a)(1) and (2) prohibit intentionally accessing, without or in excess of authorization, the facilities of a provider of electronic communications (an entity that provides users the ability to send and receive e-mail, not merely an individual’s PC) and thereby obtaining, altering, or preventing authorized access to the electronic communications stored there.24 Thus, hacking into an e-mail server for the purpose of obtaining access to stored e-mail is prohibited by the stored communications provisions. This prohibition applies equally to hacking into the e-mail servers of providers to the public (such as ISPs), and private providers of restricted company networks. In connection with the recent passage of the Homeland Security Act, violations committed for purposes of commercial advantage or gain, malicious destruction, or in furtherance of another criminal or tortious act were elevated to felonies.
Significantly, unlike real-time interceptions, which are unlawful without an explicit exception, the review or recording of stored communications is lawful unless coupled with an unauthorized access to the information. For system administrators with root level access to their company’s e-mail servers, accessing these communications for legitimate purposes (doing so on behalf of the company in a manner consistent with the company’s policies) will seldom, if ever, be unauthorized. Reviewing the system logs for non-content, transactional information is even less problematic. Of course, the technical ability to access e-mail is not coextensive with the level of authority to do so.
NOTE For example, a rogue system administrator who peruses a company officer’s e-mail out of curiosity is likely violating company policy, and is potentially violating the ECPA by extension.
While the core cyber crimes are covered under the CFAA and ECPA, there are additional substantive provisions of criminal and civil law that may affect IS professionals in the course of their regular duties, and they should have some understanding of these laws. Each of the offenses discussed in this section are routinely encountered within organizations, and they generally involve the use of the organization’s computer network to some degree. In many cases, the IT manager will be the first person in the organization to become aware of such activity, and he or she should have some basis for evaluating its significance. These offenses include theft of trade secrets, copyright and trademark infringement, and possession of child pornography. Each of the statutes governing this conduct is particularly relevant not only to causes of action against hackers and outsiders, but also to internal investigations.
Criminal theft of trade secrets is punishable under the Economic Espionage Act, codified at 18 U.S.C. Sections 1831-39. A defendant is guilty of economic espionage if, for economic benefit, she steals, or obtains without authorization, proprietary trade secrets related to a product involved in interstate commerce, with the knowledge or intent that the owner of the secret would suffer injury. This statute applies equally to trade secrets stolen by outsiders and those obtained without approval or authorization by employees. Civil cases of trade-secret theft must be filed under state trade-secret law.
Another discomforting problem for network administrators is the discovery of electronic contraband stored on their organization’s network, whether placed there by a hacker or by an internal network user. Two pervasive examples of this issue are intellectual property infringement and child pornography. Intentional electronic reproduction of copyrighted works with a retail value of more than $2,500 is punishable by fine, imprisonment, or both via 18 U.S.C. Section 2319, Criminal Infringement of a Copyright. While this statute can apply to outsiders who copy a company’s products, it also applies to employees of a company who host infringing content on the company’s network. (Criminal trademark infringement—for instance, selling pirated copies of software or musical works with a counterfeited mark—is likewise punishable by fine, imprisonment, or both via 18 U.S.C. Section 2320.) Increasingly, content owners are also targeting private organizations where they identify users of those networks who are actively engaging in the swapping of copyrighted materials via the organization’s network. In such instances, the organization will generally not be held liable for the rogue actions of employees, particularly where they violate the organization’s written policies. To ensure that the company does not risk exposure, however, it is important to respond swiftly upon discovering infringing materials on the network.
18 U.S.C. Section 2252, and 18 U.S.C. Section 2252A prohibit the “knowing” possession of any book, magazine, periodical, film, videotape, computer disk, or other material that contains an image of child pornography that has been mailed or transported interstate by any means, including by computer. Actual knowledge or reckless disregard of the minority of the performers and of the sexually explicit nature of the material is required. Although there is some authority intimating that the intent requirement is satisfied when a defendant is aware of the nature of the material, the requirement that possession of such material is “knowing” was created specifically to protect people who have received child pornography by mistake. Therefore, individuals who unknowingly possess material meant for another are not implicated by the statute.
However, cases interpreting the federal statute have found that a party may “knowingly” possess child pornography if it retains such material for a long period of time without deleting it. Accordingly, it is imperative that an entity take action once it attains sufficient knowledge that it possesses contraband material. In many cases, an IT manager may discover an employee directory containing a number of JPEG files with filenames suggestive of child pornography. If these images are not actually viewed, however, the requisite level of “knowledge” may not have crystallized, despite the suggestive names. Courts have stated that filenames are not necessarily a reliable indicator of the actual content of files, and that it is rarely, if ever, possible to know whether a file contains child pornography without viewing it on a monitor.25 Section 2252A(d) contains an affirmative defense to possession charges for anyone who promptly takes reasonable steps to destroy the images or report them to law enforcement, provided the person is in possession of three or fewer images. Although the defense is limited to three or fewer images, as a practical matter, if an employee is storing child pornography on a company network in violation of the company’s acceptable use policies, that conduct (even where the number of images far exceeds three) will not be imputed to the organization if it promptly takes action to delete the images or report them to the authorities.
Recognizing the categories of network behavior that constitute criminal acts enables IT professionals to take the offensive effectively upon discovery of such conduct. Increasingly, however, chief information officers (CIOs) are focused on the legal issues surrounding their organization’s defensive posture. Specifically, CIOs are growing more concerned about liability arising from their organizations’ efforts to achieve one of the IT and IS staff’s core functions: safeguarding the security of the organization’s information. In the last few years, information security regulation, and the concomitant prospect of incurring liability for falling short of industry standards for preparing for, preventing, and responding to security breaches, have increased exponentially.
This proliferation of federal and state regulations has largely been aimed at protecting electronically stored, personally identifiable information, and the regulations have generally been confined in their application to certain industry sectors. The regulations establish a basis for liability and accountability for entities that fail to apply the requisite safeguards. Although most of the regulations enacted to date are sector-specific, the combination of the regulations and the forthcoming proposals is generating significant momentum toward recognition of a long-elusive “industry standard” for information security.
The first prominent regulation began with the industry-specific safeguards for financial institutions required by the Gramm-Leach-Bliley Act. The protections of these safeguards have been gradually expanded to the health-care industry by the Health Insurance Portability and Accountability Act, and to nonregulated industries through consent decrees entered in connection with enforcement actions brought by both the Federal Trade Commission and state attorneys general. In addition, California has recently enacted its own non-sector-specific reporting requirements for information security breaches. The cumulative effect of these developments is an emerging duty of care for any entity that obtains or maintains personally identifying information electronically, and one that may logically be expected to extend to the government and corporate America’s general information security posture. A discussion of the existing regulations provides some shape and contour to the measures that organizations should now consider essential to secure their systems.
The Gramm-Leach-Bliley Act of 1999 (GLB) was enacted to reform the banking industry, and among its methods was the establishment of standards for financial institution safeguarding of non-public personal information. Each federal agency with authority over financial institutions was charged with establishing standards to ensure the security and confidentiality of customer records and information, to protect against any anticipated threats or hazards to the security or integrity of such records, and to protect against unauthorized access to or use of such records or information that could result in substantial harm or inconvenience to any customer.
Each implementing agency took a slightly different tack. Individual financial agencies, such as the Federal Reserve System and the Federal Deposit Insurance Corporation, acted first, developing interagency banking guidelines in 2001 that apply specifically to the institutions under their jurisdictions. The Federal Trade Commission Safeguards Rule, which became effective in May 2003, is perhaps the most significant because it applies broadly to any financial institution not subject to the jurisdiction of another agency that collects or receives customer information. The defining element of the Safeguards Rule is the requirement that each financial institution “develop, implement, and maintain a comprehensive information security program that is written in one or more readily accessible parts and contains administrative, technical, and physical safeguards that are appropriate to [its] size and complexity, the nature and scope of [its] activities, and the sensitivity of any customer information at issue.”26
The Rule sets forth five specific elements that must be contained in an entity’s information security program:
• Designate an employee or employees to coordinate the information security program to ensure accountability
• Assess risks to customer information in each area of its operations, especially employee training and management, information systems, and attack or intrusion response
• Design and implement safeguards to control the assessed risks, and monitor the effectiveness of the safeguards
• Select service providers that can maintain appropriate safeguards, and include safeguard requirements in service provider contracts
• Evaluate and adjust the information security program based on the results of effectiveness monitoring and on material changes to the organization
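For IT managers tracking these obligations, the five elements can be reduced to a simple checklist. The sketch below is a hypothetical illustration; the element names are informal paraphrases of the Rule’s requirements, not regulatory language:

```python
# Minimal sketch of a checklist for the FTC Safeguards Rule's five program
# elements. The element names are informal paraphrases, not the Rule's text.

REQUIRED_ELEMENTS = [
    "coordinator_designated",     # employee(s) accountable for the program
    "risks_assessed",             # risks assessed in each area of operations
    "safeguards_implemented",     # controls designed, implemented, monitored
    "service_providers_vetted",   # contracts include safeguard requirements
    "program_reevaluated",        # adjusted for test results and changes
]

def missing_elements(program):
    """Return the required elements the program has not yet addressed."""
    return [e for e in REQUIRED_ELEMENTS if not program.get(e)]

# Illustrative program state: two elements still outstanding.
program = {
    "coordinator_designated": True,
    "risks_assessed": True,
    "safeguards_implemented": True,
}
print(missing_elements(program))
# prints ['service_providers_vetted', 'program_reevaluated']
```

A review like this does not itself establish compliance, but it gives the designated coordinator a running record of which elements still lack attention.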
The interagency banking guidelines implementing GLB provide some additional specifics with regard to practical application of safeguards. While they outline risk assessment in the same manner as the FTC Safeguards Rule—entities should identify potential threats, then assess the likelihood of occurrence and the sufficiency of security measures designed to meet those threats—they provide more detailed suggestions for risk management. For instance, the banking guidelines suggest several methods for restricting access to customer information, thereby reducing vulnerability. Among these suggested methods are the following:
• Restrict data access only to authorized individuals
• Prevent authorized individuals from providing the information to unauthorized individuals
• Restrict access to the physical locations that contain customer information
• Encrypt electronic customer information
• Limit access to customer information to employees who have been prescreened with background checks
• Implement dual control procedures that require two or more persons, operating together, to access information
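The dual control procedure in the last bullet can be sketched in a few lines; the names and the approval model below are hypothetical, intended only to show that no single individual can release the data alone:

```python
# Hypothetical sketch of a dual-control check: a sensitive record is
# released only when two or more distinct, authorized individuals approve.

AUTHORIZED = {"alice", "bob", "carol"}  # employees cleared by background check

def dual_control_release(approvers):
    """Grant access only with at least two distinct authorized approvers."""
    return len(set(approvers) & AUTHORIZED) >= 2

print(dual_control_release(["alice", "bob"]))      # prints True
print(dual_control_release(["alice"]))             # prints False
print(dual_control_release(["alice", "mallory"]))  # prints False
```

Note that the set intersection deduplicates approvers, so one person approving twice does not satisfy the two-person requirement.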
While the interagency banking guidelines apply only to financial institutions under the jurisdiction of the promulgating agencies, their guidelines for risk management serve as a useful reference for all entities that collect or receive customer information.
Finally, the Securities and Exchange Commission released its own Regulation S-P in 2001. Regulation S-P requires every broker-dealer, fund, and registered adviser to adopt written policies and procedures that address administrative, technical, and physical safeguards for the protection of customer records and information. Consistent with safeguards promulgated by other agencies, Regulation S-P requires that the adopted policies and procedures be reasonably designed to ensure the security and confidentiality of customer information, protect against any anticipated threats or hazards to the information, and protect against unauthorized access that could result in substantial customer harm or inconvenience. Unlike many of the other agencies, however, the SEC opted not to mandate any particular attributes that should be included in the policies, nor did it provide specific guidelines for ensuring the regulation’s goals were met.
Although each agency took a slightly different approach, when viewed as a whole, it is clear that certain common attributes permeate all of the various agency implementations of the Gramm-Leach-Bliley safeguards—namely that the information security requirements placed on a particular organization should be commensurate with the risks facing that organization, and that written response plans and reporting mechanisms are essential to addressing those risks. Each agency recognized that the duty to safeguard personal information through risk assessment and risk management is directly proportional to the potential vulnerability of the information and to the quantity and quality of the information to be protected. For this reason, both the FTC Safeguards Rule and the interagency banking guidelines are centered around the performance of an initial vulnerability assessment, followed by the implementation of policies and procedures tailored to address the potential risk of compromised customer information.
Although the SEC’s implementing regulations for GLB were the least rigorous of any agency, information security oversight by that agency may nonetheless emerge as a serious issue under the purview of the more general Sarbanes-Oxley Act of 2002. The SEC has placed additional restrictions on public companies as a result of the Sarbanes-Oxley Act, which requires in section 404 that the annual reports of covered entities contain an “internal control report.” This report must indicate management’s responsibility for establishing and maintaining adequate internal controls for the purpose of financial reporting, and must contain an assessment of the effectiveness of those controls.27 Signed into law in the wake of the Enron and WorldCom scandals, Sarbanes-Oxley imposes substantial criminal penalties on officers responsible for failure to accurately report. Internal control report requirements go into effect on June 15, 2004, for publicly traded companies with market capitalization of $75 million or more; smaller businesses and foreign corporations must comply beginning April 15, 2005.
The Act is not entirely clear about whether the “internal control” requirements include a review of information security policies and procedures. The SEC Final Rule promulgated pursuant to section 404 states that registrants must implement “policies and procedures that … [p]rovide reasonable assurance regarding prevention or timely detection of unauthorized acquisition, use or disposition of the registrant’s assets that could have a material effect on the financial statements.”28 The Federal Reserve, Federal Deposit Insurance Corporation, Office of the Comptroller of the Currency, and Office of Thrift Supervision issued a joint policy in March 2003 that characterizes “internal controls” as a process designed to provide reasonable assurances that companies achieve the following internal control objectives: efficient and effective operations, including safeguarding of assets; reliable financial reporting; and compliance with applicable laws and regulations. Among the core management process components identified in the policy are risk assessment and monitoring activities, both key attributes of information security procedures.29 Although neither the SEC rule nor the joint agency guidance singles out information security as a component of “internal controls” reporting, the increasing significance of information security issues to large organizations, coupled with the requirements of officer and board of director oversight of information security in sector-specific regulation, suggests that it will be an issue that makes its way onto the Sarbanes-Oxley checklists for major corporations. Accordingly, the high profile and level of attention placed on Sarbanes-Oxley is likely to significantly increase the scrutiny of information security best practices.
Much as the Gramm-Leach-Bliley Act sought to regulate the protection of personal information in the financial industry, the Health Insurance Portability and Accountability Act (HIPAA) introduced standards for the protection of health-related personal information. Passed in 1996, HIPAA required the Department of Health and Human Services to issue Privacy and Security Rules for the protection of individually identifiable health information maintained electronically by health plans, health-care clearinghouses, and certain health-care providers.
The Privacy Rule, adopted in 2000, contained a general information security provision requiring covered entities to implement “appropriate administrative, technical and physical safeguards” for the protection of personal health information. The Security Rule, published in early 2003 and requiring compliance by April 2005, imposes more specific standards on covered entities. In practice, compliance with the standards of the Security Rule is likely to be the eventual measure for evaluating “appropriate safeguards” under the Privacy Rule. Accordingly, the Security Rule safeguards are the relevant standards that regulated agencies should incorporate into their information security plans.
Like the financial industry safeguards, the HIPAA Security Rule requires covered entities to first perform a risk assessment and then adopt security measures commensurate with the potential risk. The Rule sets out four general requirements:
• Ensure the confidentiality, integrity, and availability of all electronic personal information created, received, maintained, or transmitted by the entity
• Protect against any reasonably anticipated threats or hazards to the information
• Protect against information disclosure prohibited by the Privacy Rule
• Ensure compliance with the Rule by its workforce
Before developing security measures designed to meet these requirements, the entity must first perform an individualized assessment that considers the size of the entity and its infrastructure and security capabilities, the cost of security measures, and the potential likelihood and scope of threats to personal information. The breadth of these considerations suggests that several groups within an organization—IT/IS, legal, risk managers, human resources—may all need to be included in conducting the initial assessment. In other words, a routine prepackaged penetration test or the equivalent from a computer security vendor is unlikely to achieve the specific goals of the assessment.
Once the risk assessment has been completed, the organization must then adopt administrative, physical, and technical safeguards that are defined with a greater level of specificity in the HIPAA Rule than previous information security regulations. The Security Rule’s specific standards include both “required” and “addressable” implementation specifications. Where a specification is “addressable” and not required, the covered entity must assess whether it is a “reasonable and appropriate safeguard in its environment, when analyzed with reference to the likely contribution to protecting the entity’s electronic personally identifiable health information.” The entity must implement the specification if reasonable and appropriate; however, if doing so is not reasonable and appropriate, the entity must document its reasons for this conclusion and implement an “equivalent alternative measure.”
The required safeguards include a number of familiar concepts from the GLB safeguards, as well as more specific, yet still technology-neutral, requirements. For example, the administrative safeguards require the implementation of a security management process that includes written policies and procedures to prevent, detect, contain, and correct security violations. The policies must include a risk analysis, a risk management process, an employee sanction policy, and an emergency contingency plan, and must address information access management. Entities are also required to conduct security awareness training in support of these policies. Physical safeguards include facility access controls, workstation security, and media controls. Technical safeguards require access control and authentication but leave the use of encryption of transmitted data and automatic logoff access controls as “addressable” rather than “required” safeguards. Finally, the HIPAA Security Rule requires that covered entities ensure by written contract that business associates will protect information transmitted by the entity. Because a business associate essentially must agree to comply with the Security Rule’s requirements with respect to any electronic PHI that it creates, receives, maintains, or transmits on behalf of the covered entity, this requirement effectively extends the application of the HIPAA Security Rule beyond the specific regulated sector to all entities sharing data with it.
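The required/addressable logic lends itself to a short compliance-review sketch. The specification names below are illustrative labels rather than the Rule’s official implementation specifications; the point is the decision rule: required specifications must be implemented, while unimplemented addressable specifications must be documented and paired with an equivalent alternative measure:

```python
# Sketch of the Security Rule's required/addressable decision logic.
# Specification names are illustrative, not the Rule's official labels.

def compliance_gaps(specs):
    """Return specs whose treatment fails the required/addressable test."""
    gaps = []
    for s in specs:
        if s["implemented"]:
            continue
        if s["kind"] == "required":
            gaps.append(s["name"])  # required specs must be implemented
        elif not (s.get("rationale") and s.get("alternative_measure")):
            # an unimplemented addressable spec needs a documented rationale
            # plus an equivalent alternative measure
            gaps.append(s["name"])
    return gaps

specs = [
    {"name": "access_control", "kind": "required", "implemented": True},
    {"name": "transmission_encryption", "kind": "addressable",
     "implemented": False, "rationale": "traffic never leaves internal LAN",
     "alternative_measure": "network segmentation"},
    {"name": "automatic_logoff", "kind": "addressable", "implemented": False},
]
print(compliance_gaps(specs))
# prints ['automatic_logoff'] -- the undocumented addressable spec
```

Here only the unimplemented, undocumented addressable specification is flagged; the documented one with an alternative measure passes.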
Thus, the HIPAA Security Rule, like the Gramm-Leach-Bliley safeguards, focuses largely on initial and updated evaluations of vulnerability, followed by steps for developing an information security plan, leaving flexibility on specifics so that the plan can be tailored to the organization and the risk.
As discussed in this chapter, the initial forays into information security regulation, particularly at the federal level, have been focused on specific industry sectors. Recently, however, the state of California began blazing the trail with a general information security law, the first of its kind. Cal. Civ. Code Section 1798.82 is similar to the preceding information security requirements in that it focuses on the protection of personally identifiable information, but it is markedly different in the method by which it seeks to safeguard that interest. Rather than requiring entities to adopt certain best practices or procedures for preventing or responding to an incident, the California law regulates the manner in which entities suffering a security breach report the incident to affected parties, and it provides a private right of action against entities that fail to provide notice in accordance with the statute.
Section 1798.82 requires all entities that do business in California to disclose information security breaches to every California resident whose data was acquired by an unauthorized person.30 The requirement extends to any person or business that conducts business in California, even if the entity has no physical presence in the state. Disclosure must be made “in the most expedient time possible and without unreasonable delay,” according to the law’s specific notice requirements. However, persons or businesses maintaining their own notification procedures as part of an information security policy may provide notice according to those procedures instead, provided notice is given in a timely fashion. As most entities would prefer to provide notice in accordance with a method chosen to reflect the realities of their businesses, this provision creates an incentive to implement a comprehensive information security policy that includes such notification procedures.
Although the statute is limited to entities doing business in California and breaches affecting California residents, the impact of the law is not confined to that state’s borders. In reality, most companies do not, and cannot realistically, segregate data of California residents from other customer data. Moreover, although the statute only requires covered entities to notify California residents, the security breach need not occur in California for the statute to apply. Thus, if a company that does any business in California suffers a computer intrusion in Illinois, the California law would apply if personal information pertaining to California residents was compromised. Nor are companies likely to be eager to test the limits of the “doing business in California” limitation as a defendant in state court in California. The enactment of the California law alone is likely to have a significant effect on how companies across the United States handle information security issues. Even more significantly, the law may be a harbinger of things to come, as similar legislation has already been introduced in Congress.
In addition to the growing body of sector-specific regulation, several movements toward standardizing information security practices on a voluntary basis have also recently emerged.
The National Strategy to Secure Cyberspace, released on February 14, 2003, suggests a general duty for entities with cyberspace presence to ensure that electronically stored information in their care is properly protected. While the National Strategy does not in any way regulate information security measures, and instead seeks only a voluntary commitment from cyberspace entities, it does set forth priorities very similar in nature to the industry-specific safeguards of the GLB and HIPAA, such as developing a security response system, establishing a threat and vulnerability reduction program, and training personnel on security awareness.
With its focus on self-policing, the National Strategy does not impose new requirements. Nonetheless, the Strategy does provide further impetus for expanding flexible risk assessment and management guidelines throughout all industries. As such, it presents a general standard for future legislation expanding the applicability of information security safeguards.
International standard ISO 17799, titled the “Code of Practice for Information Security Management,” provides “recommendations for information security management for use by those who are responsible for initiating, implementing or maintaining security in their organization.” The standard, which was published in 2000 and evolved from the British national information security standard, provides an aspirational framework for entities that want to ensure effective and efficient information security safeguards. One of the more significant uses of the standard has been its adoption by some insurance carriers as a requirement for underwriting or obtaining discounted cyber insurance.
The ISO 17799 framework combines the familiar initial risk assessment with controls essential for compliance with typical regulations plus controls considered to be common best practices for information security. Best practice controls include the creation of an information security policy document, development of an organizational plan with clearly defined security responsibilities, security education and training, proper incident reporting, and development of a disaster recovery plan.
The International Organization for Standardization (ISO) has trumpeted ISO 17799 as a current gold standard and the eventual industry standard for defining information security best practices. At present, there is no universal agreement as to whether this will be the case. In fact, many industry experts and organizations, including the National Institute of Standards and Technology, have expressed concern about limitations in the standard. Indeed, ISO 17799 is currently undergoing a significant revision. Despite its shortcomings, however, ISO 17799 could have an important impact on any universal standard of care that may be created in the future. Like the National Strategy, ISO 17799 has no force as law, but it does provide a detailed roadmap for organizations seeking to implement or update their own information security plan.
The sum effect of the new federal and state information security laws is the emergence, for the first time, of a minimum duty of care for entities that obtain or maintain private information electronically. Identifying a duty of care is significant, because it is a predicate to lawsuits based on cyber security incidents. Before companies can be subject to lawsuits for negligence in failing to prevent information security breaches, or for inadequately responding to them, there must be a recognized standard by which their conduct can be measured. Breach of this new duty of care can potentially create actual liability, and recent legal activity suggests that potential plaintiffs are becoming aware of the duty and are beginning to test the waters with new enforcement actions and lawsuits.
Although nearly all existing information security regulations are sector-specific, the principles contained in those regulations are being extrapolated, with some creativity, to entities not directly subject to them. Both the Federal Trade Commission and state attorney general offices have begun to view information security as an area ripe for enforcement actions, but they have generally needed to identify a hook where no explicit regulation exists. These actions reflect the gradual expansion of the sector-specific safeguards to entities outside the regulated industries, as well as the growing belief that the safeguards are applicable across all industries. In each case, the matter was ultimately settled with a consent decree requiring the defendant to establish and maintain a comprehensive information security plan similar in nature to those required by the industry-specific safeguards.
The FTC has relied on its authority to police deceptive trade practices in order to target shortcomings in protecting information. Specifically, the FTC has initiated action against entities that misrepresent the security of customer information. The first of these cases involved pharmaceutical company Eli Lilly, which unintentionally disclosed the e-mail addresses of 669 subscribers to its prozac.com web site. The company’s January 18, 2002, settlement with the Federal Trade Commission included an agreement to implement a four-stage information security program designed to establish and maintain appropriate administrative, technical, and physical safeguards for the security, confidentiality, and integrity of electronically stored personal information. Another such FTC case involved Guess, Inc., which faced charges of exposing consumers’ credit card information to commonly known hacking attacks, despite claims on its web site that all such information was secure. In a June 18, 2003, settlement, Guess agreed to implement a comprehensive information security program to be certified by an independent professional within one year, and every other year thereafter.31
The New York State Attorney General’s office took a similar approach in an enforcement action against the American Civil Liberties Union (ACLU) in early 2003. In that case, the ACLU had left personal information accessible through its web site’s search function in contravention of its published privacy policy. Once again, the defendant agreed to implement an information security plan, including “appropriate administrative, technical, and physical safeguards,” and to submit to annual independent compliance reviews.
Inherent in each of these settlements is a growing perception that the safeguards originally designed as industry-specific regulations are being extended and used universally as the standard to assess whether measures to protect electronic personal information are “reasonable and appropriate.”
In 2001, CI Host, a web site hosting company, sued its service provider, Exodus, alleging that Exodus’s lack of security measures enabled hackers to launch a successful denial-of-service attack on CI Host’s systems, resulting in downtime for CI Host customers. The court issued a temporary restraining order requiring Exodus to shut down three web servers involved in the attack until it could ensure that the vulnerabilities were corrected. Although this suit appears to have been ahead of its time, it illustrates a claim for which a standard of liability would be far easier to establish now than in 2001, particularly if the defendant had failed to conduct vulnerability assessments, adopt a rigorous incident response plan, or ensure that its outside contractors had sufficient information security safeguards in place.
On January 28, 2003, a class action suit was filed in an Arizona federal district court against TriWest Health Care Alliance for negligence, after the theft of server hard drives containing files on 562,000 military personnel, retirees, and family members with health-care coverage through TriWest. The files contained social security numbers, birth dates, and other personally identifiable information. Significantly, none of the sector-specific regulations discussed here were in force at the time of the incident. Nonetheless, the existence of those standards provides a method for evaluating the propriety of TriWest’s conduct.
These cases are likely not isolated examples, but a harbinger of things to come. As awareness of sector-specific regulations that collectively apply to a broad range of entities continues to increase, the minimum standards embodied in those regulations become more deeply ingrained into the best practices of all organizations. Accordingly, the failure to meet those minimum standards—the performance of a vulnerability assessment, the adoption of an information security and incident response plan tailored specifically to the organization’s risks, the vesting of responsibility for information security in high-level employees, and the periodic revision of policies in response to changes in the company, its security risks, and technology generally—is increasingly likely to subject organizations to potential liability in government enforcement and private civil actions.
The previous sections of this chapter have focused on identifying criminal activity for which an entity can seek redress, and complying with the emerging minimum industry standards for safeguarding electronically stored personal information. This final section provides practical pointers on legal issues that often arise for IT professionals during responses to incidents and litigation.
A key decision faced by any entity responding to an information security incident is whether to contact law enforcement. With the advent of reporting requirements in some states that oblige persons with knowledge of computer crimes to report them to law enforcement officials, an entity may have no choice but to contact law enforcement. But in cases where such contact is optional, there are often pros and cons to involving government officials in an incident.
The following is a list (by no means exhaustive) of potential benefits to contacting law enforcement authorities:
• Doing so sends a powerful message to would-be predators that an organization will report incidents
• It can potentially save money—the government takes on some of the burden of investigation
• It provides access to more powerful investigative tools—the government can use search warrants and the grand jury, while private entities are limited to civil discovery
• It allows for mandatory restitution for damages under the Mandatory Victims Restitution Act, where victims are entitled to recover the “full amount of each victim’s losses”32 for most federal offenses
• It may be the only avenue for redress where there is little or no likelihood of recovery through civil litigation
Of course, there are drawbacks to involving law enforcement as well:
• Doing so cedes control over the process, which can potentially lead to timing, coordination, and interference issues
• It creates some danger of exposing internal information
• It creates potentially bad publicity regarding security of information
• It can disrupt business activity
• It potentially exposes any wrongdoing in which the reporting entity itself may have engaged
• It may result in a waiver of the attorney-client privilege for information disclosed to the government
Any voluntary decision to involve law enforcement necessarily demands a cost-benefit analysis of these issues and others. An entity with its own investigative resources might consider whether those resources are sufficient for the task, whether civil remedies are adequate for the harm suffered, and whether involving law enforcement will limit or entirely deny the opportunity to file a civil suit.
As the masters of their organization’s mail server domains, IT managers are often called upon to design or implement automatic e-mail retention policies. In many sectors, entities are now required by law to maintain copies of certain electronic communications for defined periods of time. Retention issues also arise in the context of civil litigation, where parties are increasingly focused on the opposing side’s e-mail and document management systems, with the result that IT professionals are finding themselves being deposed as fact witnesses.
The Securities and Exchange Commission (SEC), the National Association of Securities Dealers (NASD), and the New York Stock Exchange (NYSE) have each recently imposed obligations on covered entities to retain electronic communications, such as e-mail and instant messaging. While some of the obligations derive from explicit retention requirements, others arise as a practical matter in the course of satisfying employee supervision and control requirements.
SEC Rule 17a-4 requires covered entities, which include exchange members, brokers, and dealers, to “preserve for a period of not less than three years, the first two years in an easily accessible place … originals of all communications received and copies of all communications sent (and any approvals thereof) by the member, broker, or dealer (including inter-office memoranda and communications) relating to its business as such.”33 Subsequent consent decrees and interpretive decisions have consistently applied the three-year retention period to e-mail and other electronic communications.34 Records stored on electronic media must meet a detailed set of format requirements: the media must (1) preserve records exclusively in a non-rewritable, non-erasable format; (2) verify automatically the quality and accuracy of the storage media recording process; (3) serialize the original and duplicate units of storage media; and (4) time-date for the required retention period the information placed on the electronic storage media. In addition, the entity must have the capacity to download indexes and records preserved on the media to other media.35
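To make the two-tier retention period concrete, the following sketch classifies a communication by age into an “easily accessible” tier, an archival tier, or disposal eligibility. It is a hypothetical illustration only: the tier names, thresholds, and function are assumptions for demonstration, not terminology drawn from Rule 17a-4 itself.

```python
from datetime import date, timedelta

# Illustrative tiers modeling Rule 17a-4's two-tier retention period:
# records kept at least three years, the first two in an "easily
# accessible place." Names and day-counts are assumptions, not rule text.
ACCESSIBLE_YEARS = 2   # first two years: easily accessible storage
TOTAL_YEARS = 3        # minimum total retention period

def storage_tier(message_date: date, today: date) -> str:
    """Classify a communication by age into a retention tier."""
    age = today - message_date
    if age <= timedelta(days=365 * ACCESSIBLE_YEARS):
        return "accessible"          # e.g., indexed online storage
    if age <= timedelta(days=365 * TOTAL_YEARS):
        return "archive"             # e.g., non-rewritable (WORM) media
    return "eligible-for-disposal"   # minimum retention period satisfied

# A message roughly 2.5 years old belongs in archival storage
tier = storage_tier(date(2002, 1, 1), date(2004, 7, 1))
```

An actual compliance system would, of course, also have to satisfy the format requirements listed above (non-erasable media, automatic verification, serialization, and time-dating), which this age calculation does not address.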
NASD Rule 3110 incorporates the requirements of Rule 17a-4, and a recent NASD release has indicated that instant messaging communications are covered by its retention requirements.36 The SEC has yet to rule on the retention of instant messaging, but it is reasonable to anticipate that it will follow the lead of the NASD. In addition to these detailed retention requirements, the NASD and NYSE both require members to develop written procedures for reviewing incoming and outgoing communications with the public relating to investment.37 Such communications include electronic communications. Compliance with these procedures is not possible without a retention policy in place so that the communications can be stored for later review.
The SEC, NASD, and NYSE have displayed a willingness to enforce their rules regarding retention of electronic communications, as emphasized by a recent $8.25 million settlement with five large financial services companies, which resulted from their failure to retain e-mails.38 As such, entities in the financial services industry should be on notice that immediate compliance with retention rules is essential. Even those outside of financial services should be aware of the requirements and be prepared for regulation by their own industries, in much the same way that other industries have adopted safeguards similar to Gramm-Leach-Bliley’s information security standards.
In the context of litigation, parties are increasingly mindful that the most meaningful evidence is often maintained in electronic form. For this reason, it is now commonplace to begin the discovery process in litigation (the procedure by which the opposing parties request and produce relevant evidence to each other) with an initial request that the opposing party identify their basic network topology and electronic document retention practices. Rule 26(b)(2) of the Federal Rules of Civil Procedure gives courts the power to limit discovery “if the burden or expense of the proposed discovery outweighs its likely benefit.” The burden of providing information about these systems, and even of restoring documents from backup media, however, is unlikely to be considered overly burdensome. “Upon installing a data storage system, it must be assumed that at some point in the future one may need to retrieve the information previously stored. That there may be deficiencies in the retrieval system … cannot be sufficient to defeat an otherwise good faith request to examine the relevant information.”39
Depending on the party and its counsel’s level of sophistication, these requests may seek information about all software and hardware used in the storage and transfer of documents and electronic communications and about routine back-up and disaster recovery procedures, and they may probe a party’s ability to restore electronic evidence. In nearly all cases, these requests will overtly emphasize identifying the universe of media where relevant evidence might be found, and will implicitly scrutinize the responding party’s forensic and retention practices. This relatively recent development has had the secondary effect of turning IT managers into regular witnesses.
For this reason, IT professionals should be familiar with retention policies and adhere to them. It can be uncomfortable to get caught in a deposition (or preferably in a preparation session with your company’s own counsel) trying to explain why six months’ worth of e-mail that should have been purged still exists on the server and is easily searchable, or worse yet, why documents that should have been preserved have been deleted. IT staff should be prepared to work with in-house counsel to establish a protocol for handling electronic evidence immediately upon counsel becoming aware that the company may be involved in litigation. Finally, IT managers should never delete information in the context of litigation, especially outside of normal practices, and should refuse any suggestion by management to do so. Such actions potentially carry severe consequences in the litigation.
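The two obligations described above—purge on the documented schedule, but never delete material once litigation is anticipated—can be sketched as a purge routine that consults a legal-hold list before deleting anything. This is a hypothetical illustration under assumed names and a made-up six-month retention period; real systems typically track holds per matter and per custodian through dedicated tooling.

```python
from datetime import date

RETENTION_DAYS = 180  # illustrative 6-month e-mail retention period

def purge_mailbox(messages, today, legal_holds):
    """Delete messages past the retention period, but never those whose
    custodian is subject to a litigation hold. Returns (kept, purged)."""
    kept, purged = [], []
    for msg in messages:
        expired = (today - msg["date"]).days > RETENTION_DAYS
        on_hold = msg["custodian"] in legal_holds
        if expired and not on_hold:
            purged.append(msg)   # routine purge under the documented policy
        else:
            kept.append(msg)     # within retention period, or under hold
    return kept, purged

# Hypothetical mailbox: "bob" is a custodian under a litigation hold
mailbox = [
    {"custodian": "alice", "date": date(2004, 1, 2)},
    {"custodian": "bob",   "date": date(2004, 1, 2)},
    {"custodian": "alice", "date": date(2004, 11, 1)},
]
kept, purged = purge_mailbox(mailbox, date(2004, 12, 1), {"bob"})
```

Note that bob’s expired message survives the purge because of the hold, while alice’s expired message is deleted in the ordinary course—precisely the distinction that keeps a routine retention policy from becoming spoliation.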
NOTE In Kucala Enterprises, Ltd. v Auto Wax Co., Inc., 2003 WL 21230605 (N.D. Ill. 2003), the court held that litigants “have a fundamental duty to preserve relevant evidence over which the non-preserving entity had control and reasonably knew or could reasonably foresee was material to the potential legal action” in granting a defendant’s motion to dismiss and for an award of attorneys’ fees as sanctions for the plaintiff’s use of the software program “Evidence Eliminator” to delete over 15,000 potentially relevant computer files.
In the wake of an incident, organizations must, as a matter of course, perform investigations, review responses, and evaluate the effectiveness of their incident response plans. However, reports and documents generated by these processes can be subject to discovery if the organization later faces legal challenges related to the incident. Thus, a company’s ability to keep communications and strategic decisions made during the incident response confidential can be of the utmost importance in any potential litigation that might follow. One helpful legal doctrine that can provide some confidentiality protection is the attorney-client privilege.
The attorney-client privilege is the oldest of the privileges for confidential communications known to the common law. The purpose of the privilege “is to encourage full and frank communication between attorneys and their clients and thereby promote broader public interests in the observance of law and administration of justice.”40 Against this background, the attorney-client privilege ensures that “[w]here legal advice of any kind is sought … from a professional legal adviser in his capacity as such … the communications relating to that purpose … made in confidence … by the client … are at his instance permanently protected … from disclosure by himself or the legal adviser” except when waived.41
Accordingly, where communications exchanged during an incident response are made in the presence of counsel, and for the purpose of soliciting legal advice from counsel on how to proceed, those communications may be protected by the attorney-client privilege. It is imperative, however, to ensure that all significant strategic information exchanged in a privileged setting not be disclosed outside that setting—for example, to any third party, such as law enforcement, a technology vendor, an upstream victim, or someone else in the company outside the presence of an attorney. Disclosure outside of the privilege circle results in a waiver of all communications actually disclosed, and potentially of all other privileged communications concerning that same subject matter.
Written materials prepared by counsel during an incident may also be protected under the attorney work product doctrine. The work product doctrine shields documents prepared in anticipation of litigation as part of a “strong public policy underlying the orderly prosecution and defense of legal claims.”42 It “is intended to preserve a zone of privacy in which a lawyer can prepare and develop legal theories and strategy ‘with an eye toward litigation,’ free from unnecessary intrusion by his adversaries.”43 As a result, “[w]here a document was created because of anticipated litigation, and would not have been prepared in substantially similar form but for the prospect of that litigation,” the work product doctrine bars its discovery.44 Accordingly, relying on counsel to be the member of the incident response team responsible for drafting all memoranda memorializing the gathering of facts and subsequent strategic decisions about third-party notifications, investigative steps, and the like, affords the possibility of claiming work product protection in any later litigation.
Finally, in the wake of an incident, many organizations conduct after-action assessments, in which they evaluate and critique their response to an incident, in hopes of preventing any mistakes from recurring and of identifying improvements that can be made in security protections or response protocols. These exercises are useful and necessary, and often provide the impetus for IT budget increases. Particularly because of the last point, these assessments can contain dire predictions about future consequences if certain problems are not remedied. When reduced to writing and viewed on a detached, cold record in the context of a lawsuit concerning a security breach two years down the road, however, such documents can prove to be a litigation nightmare. Accordingly, organizations should take steps to protect the confidentiality of after-action assessments.
In addition to the previously discussed attorney-client privilege, critical opinions contained in post-incident reports may be privileged and immune from discovery based on the self-critical analysis privilege. This privilege has been recognized by courts in the presence of four factors:
• The information must result from a critical self-analysis undertaken by the party seeking protection
• The public must have a strong interest in preserving the free flow of the type of information sought
• The flow of such information would be curtailed if discovery were allowed
• The document must be produced with an expectation of confidentiality, and the confidentiality must be maintained
It is important to recognize that the self-critical analysis privilege is not recognized by all courts or under all circumstances. Even when recognized, it applies only to the opinions provided in the analysis, and not to the facts and statistics upon which the analysis is based. Therefore, reference to financial data and other factual evidence should be limited in any self-critical analysis intended for internal use only.
The responsibilities of IS and IT professionals continue to expand. In addition to keeping pace with the rapid advancements in security technology, these professionals increasingly must be aware of the emerging spate of information security laws and regulations. Enacting and administering effective information security policies and procedures requires that IS and IT professionals understand the laws governing cybercrime, and these laws continue to evolve. The most significant change to occur in the last few years is that the “techies” are no longer solely responsible for defining “best practices” and “industry standards” for information security. Rather, defining and enforcing information security standards is increasingly becoming the province of Congress, state legislatures, and federal and state law enforcement agencies. In this regulated environment, IT and IS professionals can expect to be working closely with counsel, outside auditors, and corporate boards to ensure that their organizations’ information security practices not only protect the company’s network, but shield the company from potential liability arising from cyber incidents.