CHAPTER 11
Security of Health Care Information Systems


By now it should be clear that much of the information in today’s health care organizations is transmitted, maintained, and stored electronically. Electronic health record (EHR) systems are becoming more common, and as we have seen, even primarily paper-based health care information systems contain a large quantity of data and information that have been created and transmitted electronically. Movement toward health information exchanges (HIEs) and a Nationwide Health Information Network (NwHIN), along with the general increase in the use of Internet-based solutions for transmitting data within and among health care organizations, has led to more confidential health care information than ever before being transmitted electronically. Securing this information and the systems that support it is a crucial function in the operation of health care organizations.

In this chapter we define security, examine the need for establishing an organization-wide security program, and discuss a variety of security-related topics. We look at the various existing threats to health care information. In addition, we outline the components of the Health Insurance Portability and Accountability Act (HIPAA) security regulations, including the recent updates to the regulations under the HITECH legislation. Although security concerns certainly predate the implementation of the HIPAA Security Rule, the standards in this rule provide an excellent and comprehensive outline of the components necessary for securing health information and, to some extent, provide a framework for establishing a viable health care information security program. The chapter then continues with a closer look at these safeguards, including examples of actual practices and procedures.

A specific discussion of business continuity and disaster recovery planning for maintaining and restoring health care information is included in the Administrative Safeguards section. The chapter concludes with a discussion of the special security issues associated with increased use of wireless networks and related devices in health care organizations, along with a discussion of the security issues raised when employees have remote access to health care organizations’ computer networks.

THE HEALTH CARE ORGANIZATION’S SECURITY PROGRAM

Health care organizations must protect their information systems from a range of potential threats. Among these threats are viruses, fire in the data center, untested software, and employee theft of clinical and administrative data. Threats may involve intentional or unintentional damage to hardware, software, or data or the misuse of the organization’s hardware, software, or data. The realization of any of these threats can cause significant damage to the organization. Resorting to manual operations if the computers are down for days can lead to organizational chaos. Theft of organizational data can lead to litigation by the individuals harmed by the disclosure of the data. Viruses can corrupt databases, sometimes beyond recovery. Health care organizations must have programs in place to combat security breaches.

The function of the health care organization’s security program is to identify potential threats and implement processes to remove these threats or mitigate their ability to cause damage. For example, the use of antivirus software is designed to reduce the threat from viruses; the installation of fire protection systems in data centers is intended to reduce the damage that might be caused by a fire.

It is important to understand how patient privacy is related to security. The intentional or unintentional release of patient-identifiable information constitutes a misuse of the organization’s information systems. Security in a health care organization should be designed, however, to protect not only patient-specific information but also the organization’s IT assets—such as the networks, hardware, software, and applications that make up the organization’s health care information systems—from potential threats, both threats that come from human beings and those that come from natural and environmental causes.

The primary challenge of developing an effective security program in a health care organization is balancing the need for security with the cost of security. It is difficult for an organization to calculate the likelihood that a hacker will cause serious damage or that a backhoe will cut through network cables under the street. The organization may not fully understand the consequences of being without its network for four hours or four days. Hence, it may not be sure how much to spend to remove or reduce the risk. This dilemma is similar to the one posed when individuals consider obtaining long-term care insurance. None of us knows whether we will need this insurance, how long we might live in a long-term care facility, or the acuity of the care we may need. How much insurance should we buy?

One aspect of this challenge is maintaining a satisfactory balance between health care information system security and health care data and information availability. As we saw in Chapter One, the major purpose of maintaining health information and health records is to facilitate high-quality care for patients. On the one hand, if an organization’s security measures are so stringent that they prevent appropriate access to the health information needed to care for patients, this important purpose is undermined. On the other hand, if the organization allows unrestricted access to all patient-identifiable information to all its employees, the patients’ rights to privacy and confidentiality would certainly be violated and the organization’s IT assets would be at considerable risk.

As health care organizations develop their security programs, they should be sure to seek input from a wide range of health care providers and other system users as well as legal counsel and technical experts. The balance between access and security should be reasonable—protecting patients’ rights while allowing appropriate access.

THREATS TO HEALTH CARE INFORMATION

What are the threats to health care information systems? In general, threats to health care information systems fall into one of these three categories:

  1. Natural threats, such as floods and fires
  2. Human threats, which may be intentional or unintentional
  3. Environmental threats, such as power outages

Within these categories are multiple potential threats. Threats to health care information systems from human beings can be intentional or unintentional. They can be internal, caused by employees, or external, caused by individuals outside the organization. Intentional threats include theft, intentional alteration of data, and intentional destruction of data. The culprit could be a disgruntled employee, a computer hacker, or a prankster. In a Florida case several years ago, for example, the daughter of a hospital employee accessed confidential information through an unattended computer workstation in the facility’s emergency room. She wrote down names and addresses of recent patients and then called to tell them that they had tested positive for HIV. Several of the recipients of these prank calls became extremely distraught (Associated Press, 1995).

Computer viruses are among the most common and virulent forms of intentional computer tampering. They pose a serious threat to computerized patient data and health care applications. (See the section on virus checking later in this chapter for more information on viruses.) Some of the causes of unintentional damage to health care information systems are lack of training in proper use of the system or human error. When users share passwords or download information from a nonsecure Internet site, for example, they create the potential for a breach in security.

Internal breaches of security are far more common than external breaches. Some of the more common forms of internal breaches of security across all industries are the installation or use of unauthorized software, use of the organization’s computing resources for illegal or illicit communications or activities (porn surfing, e-mail harassment, and so forth), and the use of the organization’s computing resources for personal profit.

Computer hardware used in health care information systems must also be protected from loss. In recent years there have been multiple instances of computer thefts from health care organizations, resulting in exposure of confidential patient information. Data reported by the HHS Office for Civil Rights show that nearly seven million patients were affected by data breaches from September 2008 to December 2010. The majority of these breaches were caused by theft. The most common location for the breached information was a laptop computer (Keckley, Coughlin, & Gupta, 2011).

Electronic health care information is vulnerable to internal and external threats. Whether intentional or unintentional, these threats pose serious security risks. To minimize the risk and protect patients’ sensitive health care information, well-established and well-implemented administrative, physical, and technical security safeguards are essential for any health care organization, regardless of size.

The security standards established by the Department of Health and Human Services under the terms of the Health Insurance Portability and Accountability Act (HIPAA) provide an excellent framework for developing an overall security plan and program for a health care institution. The regulations are designed to be flexible and scalable and are not reliant on specific technologies for implementation, making it possible for health care organizations of all sizes to be compliant.

OVERVIEW OF THE HIPAA SECURITY RULE

The final rule on the HIPAA security standards, known generally as the Security Rule, was published in the Federal Register on February 20, 2003 (68 Fed. Reg. 34, 8333–8381). It was subsequently updated by the HITECH legislation. (In Chapter Three we looked at the various components of the far-reaching HIPAA legislation. In this section we discuss the security component in greater detail. You may wish to refer back to Chapter Three for a description of how the Security Rule fits into the overall Act.) The HIPAA Security Rule is closely connected to the HIPAA Privacy Rule (also discussed in Chapter Three). However, whereas the Privacy Rule governs all protected health information (PHI), the Security Rule governs only electronic PHI (ePHI), defined as protected health information maintained or transmitted in electronic form. The Security Rule does not distinguish among electronic forms of information or transmission mechanisms. ePHI may be stored in any type of electronic media, such as magnetic tapes and disks, optical disks, servers, and personal computers. Transmission may take place over the Internet, on local area networks (LANs), or by disks, for example.

The HIPAA Security Rule was first published, in draft form, in August 1998. At that time one of the complaints was that the standards were too prescriptive and not flexible enough. As a result the standards in the final rule were defined in general terms, focusing on what should be done rather than on how it should be done. According to the Centers for Medicare and Medicaid Services (CMS, 2004), the final rule specifies “a series of administrative, technical, and physical security procedures for covered entities to use to assure the confidentiality of electronic protected health information. The standards are delineated into either required or addressable implementation specifications.”

There are a few key terms to be defined before we examine the content of the HIPAA Security Rule. What is a covered entity? What is the difference between a required implementation specification and an addressable one?

The HIPAA standards govern covered entities (CEs), which are defined as

  1. Health plans
  2. Health care clearinghouses
  3. Health care providers who transmit any health information in electronic form in connection with a HIPAA transaction

The specifications contained in the Security Rule are designated as either required or addressable. A required specification must be implemented by a CE for that organization to be in compliance. However, the CE is in compliance with an addressable specification if it does any one of the following:

  1. Implements the specification as written, having assessed it as reasonable and appropriate
  2. Implements an equivalent alternative security measure that accomplishes the same purpose
  3. Documents why neither the specification nor an alternative is reasonable and appropriate and how the underlying standard is still being met

The standards contained in the HIPAA Security Rule are divided into sections, or categories, the specifics of which we outline here. You will notice overlap among the sections. For example, contingency plans are covered under both administrative and physical safeguards, and access controls are addressed in several standards and specifications. In subsequent sections of this chapter we will look at some actual practices that might be employed by health care organizations in each of the first four categories. As you read through this outline, consider how it would work as a framework or model for a health care organization’s security program.

OUTLINE OF THE HIPAA SECURITY RULE

The Administrative Safeguards section of the Final Rule contains nine standards:

  1. Security management functions. This standard requires the CE to implement policies and procedures to prevent, detect, contain, and correct security violations. There are four implementation specifications for this standard:
    • Risk analysis (required). The CE must conduct an accurate and thorough assessment of the potential risks to and vulnerabilities of the confidentiality, integrity, and availability of ePHI.
    • Risk management (required). The CE must implement security measures that reduce risks and vulnerabilities to a reasonable and appropriate level.
    • Sanction policy (required). The CE must apply appropriate sanctions against workforce members who fail to comply with the CE’s security policies and procedures.
    • Information system activity review (required). The CE must implement procedures to regularly review records of information system activity, such as audit logs, access reports, and security incident tracking reports.
  2. Assigned security responsibility. This standard does not have any implementation specifications. It requires the CE to identify the individual responsible for overseeing development of the organization’s security policies and procedures.
  3. Workforce security. This standard requires the CE to implement policies and procedures to ensure that all members of its workforce have appropriate access to ePHI and to prevent those workforce members who do not have access from obtaining access. There are three implementation specifications for this standard:
    • Authorization and/or supervision (addressable). The CE must have a process for ensuring that the workforce working with ePHI has adequate authorization and supervision.
    • Workforce clearance procedure (addressable). There must be a process to determine what access is appropriate for each workforce member.
    • Termination procedures (addressable). There must be a process for terminating access to ePHI when a workforce member is no longer employed or his or her responsibilities change.
  4. Information access management. This standard requires the CE to implement policies and procedures for authorizing access to ePHI. There are three implementation specifications within this standard. The first (not shown here) applies to health care clearinghouses, and the other two apply to health care organizations:
    • Access authorization (addressable). The CE must have a process for granting access to ePHI through a workstation, transaction, program, or other process.
    • Access establishment and modification (addressable). The CE must have a process (based on the access authorization) to establish, document, review, and modify a user’s right to access a workstation, transaction, program, or process.
  5. Security awareness and training. This standard requires the CE to implement awareness and training programs for all members of its workforce. This training should include periodic security reminders and address protection from malicious software, log-in monitoring, and password management. (These items to be addressed in training are all listed as addressable implementation specifications.)
  6. Security incident procedures. This standard requires the CE to implement policies and procedures to address and report security incidents.
  7. Contingency plan. This standard has five implementation specifications:
    • Data backup plan (required).
    • Disaster recovery plan (required).
    • Emergency mode operation plan (required).
    • Testing and revision procedures (addressable). The CE should periodically test and modify all contingency plans.
    • Applications and data criticality analysis (addressable). The CE should assess the relative criticality of specific applications and data in support of its contingency plan.
  8. Evaluation. This standard requires the CE to periodically perform technical and nontechnical evaluations in response to changes that may affect the security of ePHI.
  9. Business associate contracts and other arrangements. This standard outlines the conditions under which a CE must have a formal agreement with business associates in order to exchange ePHI.
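The information system activity review specification in the first standard lends itself to automation. As a minimal sketch (in Python, with a made-up log format, user names, and business-hours window, none of which come from the rule), a review script might flag ePHI accesses that occur at unusual times:

```python
from datetime import datetime

# Hypothetical audit-log entries: (user_id, patient_id, timestamp).
ACCESS_LOG = [
    ("rn_moore",   "pt-1001", "2023-03-06 14:12:00"),
    ("clerk_diaz", "pt-1001", "2023-03-07 02:45:00"),  # off-hours access
    ("rn_moore",   "pt-2002", "2023-03-07 10:05:00"),
]

BUSINESS_HOURS = range(7, 19)  # 7:00 a.m. through 6:59 p.m.

def flag_off_hours(log):
    """Return log entries whose access time falls outside business hours."""
    flagged = []
    for user, patient, ts in log:
        when = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
        if when.hour not in BUSINESS_HOURS:
            flagged.append((user, patient, ts))
    return flagged

print(flag_off_hours(ACCESS_LOG))
```

In practice such a script would feed a human review process, not replace it; a 2:45 a.m. access may be an emergency department clerk doing her job.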

The Physical Safeguards section contains four standards:

  1. Facility access controls. This standard requires the CE to implement policies and procedures to limit physical access to its electronic information systems and the facilities in which they are housed to authorized users. There are four implementation specifications with this standard:
    • Contingency operations (addressable). The CE should have a process for allowing facility access to support the restoration of lost data under the disaster recovery plan and emergency mode operation plan.
    • Facility security plan (addressable). The CE must have a process to safeguard the facility and its equipment from unauthorized access, tampering, and theft.
    • Access control and validation (addressable). The CE should have a process to control and validate access to facilities based on users’ roles or functions.
    • Maintenance records (addressable). The CE should have a process to document repairs and modifications to the physical components of a facility as they relate to security.
  2. Workstation use. This standard requires the CE to implement policies and procedures that specify the proper functions to be performed and the manner in which those functions are to be performed on a specific workstation or class of workstation that can be used to access ePHI, and that also specify the physical attributes of the surroundings of such workstations.
  3. Workstation security. This standard requires the CE to implement physical safeguards for all workstations that are used to access ePHI and to restrict access to authorized users.
  4. Device and media controls. This standard requires the CE to implement policies and procedures for the movement of hardware and electronic media that contain ePHI into and out of a facility and within a facility. There are four implementation specifications with this standard:
    • Disposal (required). The CE must have a process for the final disposition of ePHI and of the hardware and electronic media on which it is stored.
    • Media reuse (required). The CE must have a process for removal of ePHI from electronic media before the media can be reused.
    • Accountability (addressable). The CE must maintain a record of movements of hardware and electronic media and any person responsible for these items.
    • Data backup and storage (addressable). The CE must create a retrievable, exact copy of ePHI, when needed, before movement of equipment.
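The disposal and media reuse specifications both come down to making ePHI unrecoverable. The Python sketch below illustrates the idea with a simple overwrite-then-delete routine on a throwaway file; it is an illustration only, since real sanitization should follow formal guidance such as NIST SP 800-88 and account for journaling file systems, SSD wear leveling, and backup copies:

```python
import os
import secrets
import tempfile

def overwrite_and_delete(path, passes=1):
    """Overwrite a file with random bytes, then remove it.

    A single-pass overwrite is only an illustration; production media
    sanitization requires methods appropriate to the storage hardware.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())  # force the overwrite to disk
    os.remove(path)

# Demonstration on a throwaway temporary file standing in for ePHI media.
fd, tmp_path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"fake ePHI record")
overwrite_and_delete(tmp_path)
print(os.path.exists(tmp_path))  # False
```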

The Technical Safeguards section has five standards:

  1. Access control. This standard requires the CE to implement technical policies and procedures for electronic information systems that maintain ePHI in order to allow access only to those persons or software programs that have been granted access rights as specified in the administrative safeguards. There are four implementation specifications within this standard:
    • Unique user identification (required). The CE must assign a unique name or number for identifying and tracking each user’s identity.
    • Emergency access procedure (required). The CE must establish procedures for obtaining necessary ePHI in an emergency.
    • Automatic log-off (addressable). The CE must implement electronic processes that terminate an electronic session after a predetermined time of inactivity.
    • Encryption and decryption (addressable). The CE should implement a mechanism to encrypt and decrypt ePHI as needed.
  2. Audit controls. This standard requires the CE to implement hardware, software, and procedures that record and examine activity in the information systems that contain ePHI.
  3. Integrity. This standard requires the CE to implement policies and procedures to protect ePHI from improper alteration or destruction.
  4. Person or entity authentication. This standard requires the CE to implement procedures to verify that a person or entity seeking access to ePHI is in fact the person or entity claimed.
  5. Transmission security. This standard requires the CE to implement technical measures to guard against unauthorized access to ePHI being transmitted across a network. There are two implementation specifications with this standard:
    • Integrity controls (addressable). The CE must implement security measures to ensure that electronically transmitted ePHI is not improperly modified without detection.
    • Encryption (addressable). The CE should encrypt ePHI whenever it is deemed appropriate.
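The integrity controls specification calls for detecting improper modification of transmitted ePHI. A message authentication code is one common mechanism; the Python sketch below uses the standard library's hmac module with an illustrative shared key (the key and record format are invented for the example):

```python
import hashlib
import hmac

# Illustrative shared secret; a real deployment would manage keys securely.
SHARED_KEY = b"illustrative-key-not-for-production"

def sign(message: bytes) -> str:
    """Tag a message with HMAC-SHA256 so the receiver can detect tampering."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Use a constant-time comparison to avoid leaking timing information."""
    return hmac.compare_digest(sign(message), tag)

record = b"patient=pt-1001;result=negative"
tag = sign(record)
print(verify(record, tag))                               # True
print(verify(b"patient=pt-1001;result=positive", tag))   # False: modified in transit
```

Note that an HMAC provides integrity and authentication but not confidentiality; the encryption specification would be met separately, for example with TLS on the wire.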

The Policies, Procedures, and Documentation section has two standards:

  1. Policies and procedures. This standard requires the CE to establish and implement policies and procedures to comply with the standards, implementation specifications, and other requirements.
  2. Documentation. This standard requires the CE to maintain the policies and procedures implemented to comply with the Security Rule in written form. There are three implementation specifications:
    • Time limit (required). The CE must retain the documentation for six years from the date of its creation or the date when it was last in effect, whichever is later.
    • Availability (required). The CE must make the documentation available to those persons responsible for implementing the policies and procedures.
    • Updates (required). The CE must review the documentation periodically and update it as needed.
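The time limit specification reduces to a small date calculation: retention runs six years from the creation date or the last-in-effect date, whichever is later. A sketch in Python (the function name and the leap-day handling are my own choices, not from the rule):

```python
from datetime import date

RETENTION_YEARS = 6

def retain_until(created: date, last_in_effect: date) -> date:
    """Six years from creation or last-in-effect date, whichever is later."""
    anchor = max(created, last_in_effect)
    try:
        return anchor.replace(year=anchor.year + RETENTION_YEARS)
    except ValueError:
        # Anchor was Feb 29 and the target year is not a leap year.
        return anchor.replace(year=anchor.year + RETENTION_YEARS, day=28)

policy_created = date(2015, 4, 1)
policy_retired = date(2018, 9, 30)
print(retain_until(policy_created, policy_retired))  # 2024-09-30
```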

HITECH EXPANSION OF THE HIPAA SECURITY RULE

In 2009, the Health Information Technology for Economic and Clinical Health (HITECH) Act was signed by President Obama. Among other things, this Act expanded the HIPAA Privacy and Security Rules. The expansion to the Security Rule includes

  1. Applying the Security Rule’s requirements directly to business associates
  2. Establishing notification requirements for breaches of unsecured PHI
  3. Strengthening enforcement and increasing penalties for noncompliance

Under HITECH, any HIPAA-covered entity or business associate (including HIEs) that “accesses, maintains, retains, modifies, records, stores, destroys or otherwise holds, uses or discloses unsecured protected health information” must notify each individual affected by a breach of health information (HITECH Act § 13402). This notification must be made by first-class mail within sixty days of discovery. If the entity has insufficient contact information for ten or more of the affected individuals, it must post the facts of the breach on a conspicuous web site or provide notice through major print or broadcast media. If more than five hundred individuals are involved, the entity must also report the breach to “prominent media outlets.” It is important to note that HITECH defines unsecured protected health information as PHI not secured through the use of a technology or methodology specified by HHS guidance (Coppersmith, Gordon, Schermer, & Brockelman, PLC, 2012). By implementing adequate security programs that adhere to HHS guidance, health care organizations can avoid the potentially harmful and expensive public notification required if a theft or loss of PHI occurs.
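These tiered notification duties can be summarized as a checklist. The Python sketch below encodes them as described above; the function and its thresholds are illustrative only and are no substitute for reading the regulation itself:

```python
def notification_steps(affected: int, missing_contact_info: int = 0):
    """Return the HITECH breach-notification duties that apply.

    `affected` is the number of individuals whose unsecured PHI was
    breached; `missing_contact_info` is how many of them the entity
    cannot reach directly. Thresholds are a simplified reading of the
    rule, for illustration only.
    """
    steps = ["Notify each affected individual by first-class mail within 60 days"]
    if missing_contact_info >= 10:
        steps.append("Provide substitute notice on a conspicuous web site "
                     "or through major print or broadcast media")
    if affected > 500:
        steps.append("Notify prominent media outlets")
    return steps

print(notification_steps(25))       # individual notice only
print(notification_steps(600, 40))  # all three duties apply
```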

HITECH gave the responsibility for enforcing the HIPAA Privacy and Security Rules to the Office for Civil Rights (OCR) beginning in July 2009. Since that time it has received over five hundred complaints specific to the Security Rule. The results of these OCR enforcement activities and those for the Privacy Rule, including specific case examples, are available to the public at http://www.hhs.gov/ocr/privacy/hipaa/enforcement/index.html.

This section has provided an outline of the key components of the HIPAA security standards published in 2003 (68 Fed. Reg. 34, 8333–8381, Feb. 20, 2003) and the HITECH expansion. In the next sections we will examine some of the practices that can be employed to address the regulations and ensure that an organization has an effective security program.

ADMINISTRATIVE SAFEGUARDS

As you have seen from the HIPAA security standards outline, administrative safeguards cover a wide range of organizational activities. We do not attempt in this section to give a comprehensive, detailed view of all possible administrative safeguards but rather to present a few practices that can be used as part of a total administrative effort to improve the health care organization’s information security program. We will discuss the following topics:

  1. Risk analysis and management
  2. The role of the chief security officer
  3. System security evaluation
  4. Contingency, business continuity, and disaster recovery planning

Risk Analysis and Management

One of the key components of applying administrative safeguards to protect the organization’s health care information is risk analysis. It is impossible to establish an effective risk management program if the organization is not aware of the risks or threats that exist. Risk analysis is relatively new to health care. Few organizations had implemented formal security risk assessment prior to the publication of the HIPAA rules. This in no way minimizes its importance. However, health care has had to look to other industries for examples of risk assessment processes (Reynolds, 2009; Walsh, 2003).

Steve Weil (2004) defines risk as the “likelihood that a specific threat will exploit a certain vulnerability, and the resulting impact of that event.” He introduces a risk analysis process with eight parts, or steps:

  1. Boundary definition. During the boundary definition step, the organization should develop a detailed inventory of all health information and information systems. This review can be conducted using interviews, inspections, questionnaires, or other means. The important thing in this step is to identify all the patient-specific health information, health care information systems (both internal and external), and users of the information and systems.
  2. Threat identification. Identifying threats will result in a list of all potential threats to the organization’s health care information systems. The three general types of threats that should be considered are
    1. Natural, such as floods and fires
    2. Human, which can be intentional or unintentional
    3. Environmental, such as power outages
  3. Vulnerability identification. In this step the organization identifies all the specific vulnerabilities that exist in its own health care information systems. Generally, vulnerabilities take the form of flaws or weaknesses in system procedures or design. Software packages are available to assist with identifying vulnerabilities, but the organization may also need to conduct interviews, surveys, and the like. Some organizations may employ external consultants to help them identify the vulnerabilities in their systems.
  4. Security control analysis. The organization also needs to conduct a thorough analysis of the security controls that are currently in place. These include both preventive controls, such as access controls and authentication procedures, and controls designed to detect actual or potential breaches, such as audit trails and alarms.
  5. Risk likelihood determination. This step in the process involves assigning a risk rating to each area of the health care information system. A variety of rating systems may be employed. Weil recommends a fairly straightforward system of high-, medium-, and low-risk ratings.
  6. Impact analysis. This is the step in which the organization determines what the actual impact of specific security breaches would be. A breach may affect confidentiality, integrity, or availability. Impact, too, can be rated as high, medium, or low.

    Table 11.1 lists the three security objectives (confidentiality, integrity, and availability), the definition of each, and examples of potential impacts should the objective be lost or compromised (Scholl et al., 2008).

  7. Risk determination. The information gathered up to this point in the risk analysis process is now brought together in order to determine the actual level of risk to specific information and specific information systems. The risk determination is based on
    1. The likelihood that a certain threat will attempt to exploit a specific vulnerability (high, medium, or low)
    2. The level of impact should the threat successfully exploit the vulnerability (high, medium, or low)
    3. The adequacy of planned or existing security controls (high, medium, or low)

      Each specific system or type of information can be assessed for each of these three factors, and then these assessments can be combined to produce an overall risk rating of high (needing immediate attention), medium (needing attention soon), or low (existing controls are acceptable).

  8. Security control recommendations. The final step of the process is to compile a summary report on the findings of the analysis and recommendations for improving security controls.

Table 11.1. Security objectives

Security Objective | Definition | Examples of Impact if Compromised
Confidentiality | Protection of information from unauthorized disclosure | Loss of public confidence, embarrassment, financial loss due to legal action
Integrity | Protection of information from improper modification | Data inaccuracy, fraud, or erroneous decisions; could lead to loss of confidentiality or system availability
Availability | Availability of functioning mission-critical IT systems | Loss of productive time, decreased end users’ performance, compromised organizational mission

The risk analysis should lead to the development of policies and procedures outlining risk management procedures and sanctions or consequences for employees and other individuals who do not follow the established procedures. All health care organizations should have a formal security risk management program in place. In general, this program is administered by the organization’s security officer.
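Weil's risk determination step (step 7) combines three high/medium/low factors into an overall rating, but no specific formula is prescribed. The additive scoring in the Python sketch below is therefore purely illustrative of how an organization might make the combination repeatable:

```python
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_rating(likelihood, impact, control_adequacy):
    """Combine the three step-7 factors into an overall high/medium/low rating.

    The scoring scheme is an illustration only; Weil's process specifies
    the factors and the ratings but not a formula for combining them.
    """
    # Strong existing controls reduce risk, so invert that factor.
    score = LEVELS[likelihood] + LEVELS[impact] + (4 - LEVELS[control_adequacy])
    if score >= 8:
        return "high"    # needs immediate attention
    if score >= 6:
        return "medium"  # needs attention soon
    return "low"         # existing controls are acceptable

print(risk_rating("high", "high", "low"))    # high
print(risk_rating("low", "medium", "high"))  # low
```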

Chief Security Officer

Each health care organization must have a single individual who is responsible for overseeing the information security program. Generally, this individual is identified as the organization’s chief security officer. The chief security officer may report to the chief information officer (CIO) or to another administrator in the health care organization. The role of security officer may be 100 percent of an individual’s job responsibilities or only a fraction, depending on the size of the organization and the scope of its health care information systems. Regardless of the actual reporting structure, it is essential that the chief security officer be given the authority to effectively manage the security program, apply sanctions, and influence employees. As Tom Walsh (2003, p. 15) stated in identifying the importance of the security officer, “Influence can leverage the right people to get the job done.”

System Security Evaluation

Chief security officers must periodically evaluate their organization’s health care information systems and networks for proper technical controls and processes. Several useful resources are available to facilitate this technical security evaluation. The U.S. Department of Commerce National Institute of Standards and Technology (NIST) Computer Security Division publishes over three hundred information security documents, including the Federal Information Processing Standards (FIPS), the Special Publication (SP) 800 series, and others. NIST Special Publication 800-66 provides an introductory resource for implementing the HIPAA Security Rule that references existing federal information security guidance. Although HIPAA-covered entities and business associates are not required to follow this guidance, it provides them with a description of technologies and methodologies for securing PHI. Entities that effectively secure PHI can avoid the extensive HITECH notification requirements in the event of a security breach (AMA, 2010).

The International Organization for Standardization (ISO) has also developed general standards for security technologies that span all types of organizations, published as ISO Standard 15408 (titled Information Technology—Security Techniques—Evaluation Criteria for IT Security). These standards, updated in 2009, allow any organization to use a common set of requirements and thus to compare the results of independent security evaluations (ISO, 2009).

Contingency, Business Continuity, and Disaster Recovery Planning

The National Institute of Standards and Technology (NIST) identifies three types of contingency-related plans: the continuity of operations plan (also known as a business continuity plan), the contingency plan, and the disaster recovery plan. Table 11.2 defines and compares the typical scope of each type of plan (Scholl et al., 2008).

Table 11.2. Types of contingency-related plans

Type of Plan: Continuity of operations plan (aka business continuity plan)
Description: A predetermined set of instructions or procedures that describe how an organization’s essential functions will be sustained for up to thirty days as a result of a disaster event before returning to normal operations
Scope: Addresses the organization’s critical missions; not specifically IT-focused

Type of Plan: Contingency plan
Description: Management policy and procedures designed to maintain or restore business operations, including computer operations, possibly at an alternate location, in the event of emergencies, system failures, or disasters
Scope: Addresses the organization’s IT disruptions

Type of Plan: Disaster recovery plan
Description: A written plan for processing critical applications in the event of a major hardware or software failure or destruction of facilities
Scope: Limited to major disruptions with long-term effects; typically IT-focused

In reality, a health care organization is likely to have a set of plans. According to the NIST, this suite of plans is necessary “to properly prepare response, recovery, and continuity activities for disruptions affecting the organization’s IT systems, business processes, and the facility” (Scholl et al., 2008). These plans must be coordinated to prevent duplication or contradiction of guidance. Ideally, all contingency plans would be coordinated to create a comprehensive, robust, enterprise-wide business continuity plan.

Contingency Plan

The HIPAA Security Rule specifically addresses contingency plans for ePHI under the Administrative Safeguards section. In addition, contingency and disaster recovery planning and related activities for ePHI are mentioned in several other sections of the regulations. As outlined earlier in the chapter, there are specific components of the contingency plan that covered entities must address. The NIST offers detailed guidance in its publication 800–66, An Introductory Resource Guide for Implementing the Health Insurance Portability and Accountability Act (HIPAA) Security Rule, Appendix F. NIST (Scholl et al., 2008) defines contingency planning as generally including one or more of three approaches to restoring disrupted IT services:

The NIST also offers a specific approach to developing contingency plans. The entire approach is beyond the scope of this chapter, but the following provides a brief outline of each step (Scholl et al., 2008):

  1. Develop the contingency planning policy statement
  2. Conduct the business impact analysis (BIA)
    a. Identify critical IT resources
    b. Identify disruption impacts
    c. Identify allowable outage times
    d. Develop recovery priorities
  3. Identify preventive controls (see the Perspective for a list of potential preventive controls)
  4. Develop recovery strategies
  5. Develop an IT contingency plan with the following components:
    a. Supporting information (introduction, background, and so forth)
    b. Notification and activation
    c. Recovery phase
    d. Reconstitution phase
    e. Appendixes, including personnel and vendor contact lists
  6. Conduct plan testing, training, and exercises
  7. Maintain the plan

PHYSICAL SAFEGUARDS

A security program must address physical as well as technical and administrative safeguards. Physical safeguards involve protecting the actual computer hardware, software, data, and information from physical damage or loss due to natural, human, or environmental threats. Several specific issues related to physical security are addressed in this section:

Assigned Security Responsibility

Each component of the health care information system should be secure, and one easily identifiable employee should be responsible for that security. These individuals are in turn accountable to the chief security officer. For example, in a nursing department the department manager might be responsible for ensuring that all employees have been trained to understand and use security measures and that they know the importance of maintaining the security of patient information. The network administrator, however, might be the person responsible for assigning initial passwords and removing access from terminated employees or employees who transfer to other departments (Reynolds, 2009).

Media Controls

The physical media on which health information is stored must be physically protected. Media controls are the policies and procedures that govern the receipt and removal of hardware, software, and computer media such as disks and tapes into and out of the organization and also their movements inside the organization.

Media controls also encompass data storage. Backup tapes, for example, must be stored in a secure area with limited access. The final disposition of electronic media is another aspect of media controls. Policies for the destruction of patient information must address the electronic media and hardware (workstations and servers) that contain patient information. As organizations gather old computers, all patient data must be removed before this equipment goes to surplus or is otherwise disposed of (Reynolds, 2009).

Physical Access Controls

Physical access controls are designed to limit physical access to health information to persons authorized to see that information. Locks and keys are examples of physical access controls. However, it is obvious that not all workstations can be kept under lock and key. This might create a secure system, but it would not be readily available to the health care providers who need patient information. Some of the physical access control components that can be employed are equipment control; a facility security plan; procedures that verify user identity before allowing physical access to an area; a procedure for maintaining records of repairs and modifications to hardware, software, and physical facilities; and a visitor sign-in procedure.

Organizations should have a system, such as an inventory control system, that tells them exactly what equipment is currently in use in their health care information system. An inventory control system generally involves marking or tagging each piece of equipment with a unique number and assigning each piece to a location and a responsible person. When equipment is moved, retired, or destroyed, that action must be documented in the inventory control system. Another form of equipment control is to install antitheft devices, such as chains that attach computers to desks, alarms, and other tools that deter thieves.

A facility security plan is one that ensures that the individuals in a certain area are authorized to have access to that area. The main computer operations of a health care organization will generally be under tight security, including video surveillance and personal security checks. Badges with photographs are common in health care facilities to help identify personnel who are authorized to access certain buildings and facilities. Some secure areas require individuals to punch a code into a keypad or swipe an identification card over a card reader before entry is allowed. The facility security plan should also have procedures for admitting visitors. Each visitor might sign in and be issued a temporary identification badge, for example. There may be areas of the organization that are not open to visitors at all (Reynolds, 2009).

Workstation Security

Workstations that allow access to patient information should be placed in areas that are secure or monitored at all times. The workstations in the reception area or other public areas should be situated so that visitors or others cannot read the screens. Devices can be placed over workstation monitors that prevent people from reading a screen unless they are directly in front of it. Another aspect of workstation security is developing clear policies for workstation use. These policies should delineate, among other things, the appropriate functions to perform on the workstation and rules for sharing workstations.

Organizations that allow personnel to work from home have additional workstation security issues. Employees working from home must be given clear guidance on appropriate use of the organization’s computer resources, whether these resources involve hardware, software, or web access. Employees should access any patient-identifiable information through a secure connection, with adequate monitoring to ensure that the user is in fact the authorized employee. (Specific remote access guidelines are discussed later in this chapter.)

All the aspects of physical security require adequate training of all personnel with potential access to the health care information systems. Employees, agents, and contractors with access to locations that house patient information must all participate in security and confidentiality awareness education (Reynolds, 2009).

TECHNICAL SAFEGUARDS

Many different technical safeguards can be used to help secure health care information systems and the networks on which they reside. Again, we will not provide a comprehensive list of all available safeguards but will present a few representative examples. We will discuss technical safeguards related to the following topics:

Access Control

Only individuals with a need to know should have access to patient-identifiable health information. Modern computer systems, including databases and networks, allow users to access a variety of resources such as individual files, database files, and tapes and to use printers and other peripheral devices. This sharing of resources is an important component of effective health care information systems, but it requires that network administrators and database administrators set appropriate access rights for each resource. Often users of a health care information system have to be assigned network access rights and separate application access rights before they can use the system.

Control over access to health data may involve any of the following methods:

Before we discuss each of these options, a brief explanation of access rights is necessary. Traditional user-based and role-based access rights have two parameters—who and how. The who is a list of the users with rights to access the information or computer resource in question. This list, called an access control list, may be organized by individual users or by groups of users. These groups are generally defined by role or job function. For example, all coders in the health information management department would be granted the same access rights, all registered nurses in a particular job classification would be granted the same access, and so forth.

The how parameter of the access control scheme specifies how a user may access the resource. Typical actions users might be allowed to take are read, write, edit, execute, append, and print. Only so-called owners and administrators will be granted full rights so that they can modify, delete, or create new components for the resource. Clearly, owner and administrative privileges for the use of health care information systems should be carefully monitored.

User-based access control is defined as “a security mechanism used to grant users of a system access based upon the identity of the user.” With role-based access control (RBAC), access decisions are based on the roles individual users have within the organization. “With RBAC, rather than attempting to map an organization’s security policy to a relatively low-level set of technical controls (typically, access control lists), each user is assigned to one or more predefined roles, each of which has been assigned the various privileges needed to perform the role” (63 Fed. Reg. 155, August 12, 1998). One of the benefits of role-based over user-based access is that as new applications are added, privileges are more easily assigned. Discretionary assignment of access by an administrator is limited with RBAC. Users must be assigned to a specific role in order to be assigned access to a specific application.
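As a rough illustration of the RBAC idea, roles can be mapped to privilege sets and users to roles. The role names and privileges below are hypothetical examples, not drawn from any standard or product:

```python
# Minimal sketch of role-based access control (RBAC).
# Role names and privileges are illustrative only.

ROLE_PERMISSIONS = {
    "coder":            {"read"},
    "registered_nurse": {"read", "append"},
    "administrator":    {"read", "write", "edit", "append", "delete"},
}

USER_ROLES = {
    "doej":   {"coder"},
    "smitha": {"registered_nurse"},
}

def is_allowed(user: str, action: str) -> bool:
    """A user may perform an action if any of their assigned roles grants it."""
    return any(
        action in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

print(is_allowed("doej", "read"))    # True
print(is_allowed("doej", "delete"))  # False
```

Note that adding a new application only requires assigning its privileges to the existing roles, which is the ease-of-administration benefit described above.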

Context-based access control is the most stringent of the three options. Harry Smith (2001) describes it this way: “A context-based access control scheme begins with the protection afforded by either a user-based or role-based access control design and takes it one step further … Context-based access control takes into account the person attempting to access the data, the type of data being accessed and the context of the transaction in which the access attempt is made.” In other words, the context-based access has three parameters to consider: the who, the how, and the context in which the data are to be accessed. The Case Study here illustrates the differences among the three types of access control (Reynolds, 2009).

Entity Authentication

Access control mechanisms are effective means of controlling who gains entry to a health care information system only when there is a system for ensuring the identity of the individual attempting to gain access. Entity authentication is defined in the HIPAA Security Rule as “the corroboration that a person is the one claimed.” Entity authentication associated with health care information systems should include at least (1) automatic log-off and (2) a unique user identifier (Reynolds, 2009).

Automatic log-off is a security procedure that causes a computer session to end after a predetermined period of inactivity, such as ten minutes. Multiple software products are available that allow network administrators to set automatic log-off parameters. Once installed, these log-off systems act like any other screensaver on a typical workstation, coming on after a set period of inactivity. Users are then required to enter a network password to deactivate the log-off system screen. Generally, a device driver is also installed that prevents rebooting to deactivate the log-off system. Other security measures that may be included in automatic log-off products are features that prevent users from changing the screensaver and that allow an authorized person to set local password options in case the user is not connected to the network. Failed log-in attempts may be recorded and reported on, along with statistics on user log-ins, elapsed time, and user identification.
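The core of an automatic log-off product is an inactivity timer. The following minimal sketch illustrates the idea only; it is not any vendor's implementation, and a real product would hook operating system input events rather than require explicit calls:

```python
import time

class SessionTimer:
    """Tracks user inactivity for an automatic log-off screen (illustrative only)."""

    def __init__(self, timeout_seconds: float):
        self.timeout = timeout_seconds
        self.last_activity = time.monotonic()

    def record_activity(self) -> None:
        """Called whenever keyboard or mouse input is detected."""
        self.last_activity = time.monotonic()

    def should_lock(self) -> bool:
        """True once the inactivity period has elapsed; the lock screen then
        requires the user's network password to deactivate."""
        return time.monotonic() - self.last_activity >= self.timeout
```

A ten-minute policy would be `SessionTimer(600)`, with `should_lock()` polled periodically by the lock-screen process.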

Each user of a health care information system must be assigned a unique identifier. This identifier is a combination of characters and numbers assigned and maintained by the security system. It is used to track individual user activity. This identifier is commonly called the user ID or log-on ID. It is the public, or known, portion of most user log-on procedures. For example, many organizations will assign a log-on identifier that is the same as the user’s e-mail address or a combination of the user’s last and first name. It is generally fairly easy to identify a user by his or her log-on. John Doe’s log-on identifier might be “doej,” for example. Because of the public nature of the log-in, additional safeguards beyond the log-on ID are needed.
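The last-name-plus-first-initial convention mentioned above can be expressed as a one-line function. This is a sketch of one common convention, not a required scheme, and real systems must also handle collisions between similarly named users:

```python
def make_logon_id(last_name: str, first_name: str) -> str:
    """Derive a log-on ID from last name plus first initial, e.g., Doe, John -> 'doej'."""
    return (last_name + first_name[:1]).lower()

print(make_logon_id("Doe", "John"))  # doej
```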

Entity authentication can be implemented in a number of different ways in a health care information system. The most common entity authentication method is a password system. Other mechanisms include personal identification numbers (PINs), biometric identification systems, telephone callback procedures, and tokens. These implementation methods can be used alone or in combination with other systems. Security experts often encourage layered security systems that use more than one security mechanism. As one security expert has stated, “A series of overlapping solutions works much more effectively, even when you know the solutions are individually fallible. If you line up three security controls that are each 60 percent effective, together they’re something like 90 percent effective against a given attack” (Briney, 2000).

Walsh (2003) recommends a system that uses a two-factor authentication. He identifies these three methods for authentication, and any two of them used together would constitute a two-factor system:

Password Systems

The most common way to control access to a health care information system (or any other computer system, for that matter) is through a combination of the user ID and a password or PIN. User IDs and passwords for a system are maintained either as a part of the access control list for the network or local operating system or in a special database. The list or database is then searched for a match before the user is allowed to access the system requested. Although the user ID is not secret, the password or PIN is. Passwords are generally stored in a one-way encrypted (hashed) form for which no decryption is available (Oz, 2009; White, 2011).

Although password and PIN systems are the most common forms of entity authentication, they also provide the weakest form of security. A password is defined by TechTarget (2012) as an “unspaced sequence of characters used to determine that a computer user requesting access to a computer system is really that particular user.” Typically, a password is made up of four to sixteen characters. One of the biggest problems with passwords is that users may share them or publicly display them. Users will often write down passwords they cannot remember. They may even tape or post the password on the computer workstation. Health care organizations must take steps to prevent this type of password misuse. Clear policies on the use and maintenance of passwords, education for employees, and meaningful sanctions for policy violators are essential.

Another common problem with passwords is that when they are simple enough to remember, they may be simple enough for someone else to guess. Passwords are encrypted, but there are software programs available, called password crackers, that can be used to identify an unknown or forgotten password. Unfortunately, unauthorized persons seeking to gain access to computer systems can also use these applications (TechTarget, 2012; White, 2011). Health care organizations should establish enforceable, clear guidelines for choosing passwords. The following Perspective offers some suggestions (Reynolds, 2009; TechTarget, 2012; White, 2011).
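Because passwords are stored in a form that cannot be decrypted, verification works by re-deriving the stored value from the password the user supplies. A minimal sketch using Python's standard library follows; the salted, iterated PBKDF2 approach shown is one common choice, not necessarily what any given system uses:

```python
import hashlib
import hmac
import os
from typing import Optional

def hash_password(password: str, salt: Optional[bytes] = None):
    """Derive a one-way hash; only the salt and hash are stored, never the password."""
    if salt is None:
        salt = os.urandom(16)  # a fresh random salt per password defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
    return salt, digest

def verify_password(candidate: str, salt: bytes, stored_digest: bytes) -> bool:
    """Re-derive the hash from the candidate and compare in constant time."""
    _, digest = hash_password(candidate, salt)
    return hmac.compare_digest(digest, stored_digest)
```

The salt and the high iteration count are what slow down the password-cracker programs described above; a cracker must repeat the expensive derivation for every guess.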

Biometric Identification Systems

Because of the inherent weaknesses of password systems, other identification systems have been developed. Biometric identification systems employ users’ biological data, in the form, for example, of a voiceprint, fingerprint, handprint, retinal scan, faceprint, or full body scan. Biometrics is beginning to play a role in health care information system security. Biometric devices consist of a reader or scanning device, software that converts the scanned information into digital form, and a database that stores the biometric data for comparison.

Telephone Callback Procedures

Telephone callback procedures are another form of entity authentication in use today. Callback is used primarily when employees have access to a health care information system from home. When a modem dials into the system, a special callback application asks for the telephone number from which the call has been placed. If this number is not an authorized number, the callback application will not allow access.

Tokens

Tokens are devices, such as key cards, that are inserted into doors or computers. With token authentication systems, identification is based on the user’s possession of the token (Eng, 2001). The disadvantage of tokens is that they can be lost, misplaced, or stolen. When tokens are used in combination with a password or PIN, it is essential that the password or PIN not be written on the token or in a location near where the token is stored.

Audit Trails

Webopedia (2012) defines an audit trail as “a record showing who has accessed a computer system and what operations he or she has performed during a given period of time.” Audit trails are generated by specialized software; in addition, separate audit trail software products enable network administrators to monitor use of network resources. Audit trails have multiple uses in securing information systems, which may be categorized as follows (Gopalakrishna, 2000):
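Whatever product generates it, a single audit trail record must capture at least who, what, and when. A minimal sketch, with illustrative field names only:

```python
import datetime

def audit_event(user_id: str, action: str, resource: str) -> dict:
    """One audit trail entry: who performed what operation on which resource, and when.
    Field names are illustrative, not from any particular product."""
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user_id": user_id,     # the unique log-on ID discussed earlier
        "action": action,       # e.g., read, write, print
        "resource": resource,   # e.g., a patient record identifier
    }

audit_log = []
audit_log.append(audit_event("doej", "read", "patient/12345/chart"))
```

Reporting tools then filter and summarize such records, for example to flag users who accessed records outside their normal job function.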

Data Encryption

Data encryption is used to ensure that data transferred from one location on a network to another are secure from anyone eavesdropping or seeking to intercept them. This becomes particularly important when sensitive data, such as health information, are transmitted over public networks such as the Internet or across wireless networks. Secure data are data that cannot be intercepted, copied, modified, or deleted, either while in transit or while stored, such as on a disk or tape.

Cryptography is the study of encryption and decryption techniques. It is a complicated science with a vast number of associated techniques. Only the basic concepts and some current authentication technologies will be discussed in this chapter. Public key infrastructure (PKI), Wired Equivalent Privacy (WEP), and Wi-Fi Protected Access (WPA) are forms of encryption being used in health care organizations today. (WEP and WPA apply specifically to wireless networks and will be discussed later in this chapter.) These protocols are used to authenticate the senders and receivers of messages transmitted over public networks, such as the Internet or wireless networks.

Some basic terms associated with encryption are plaintext, encryption algorithm, ciphertext, and key. Plaintext refers to data before any encryption has taken place. In other words, the original datum or message is recorded in the computer system as plaintext. An encryption algorithm is a computer program that converts plaintext into an enciphered form. The ciphertext is the data after the encryption algorithm has been applied. The key in an encryption and decryption procedure is unique data that are needed both to create the ciphertext and to decrypt the ciphertext back to the original message. Figure 11.1 is a simple diagram of the components of an encryption and decryption system (White, 2011).

c11-fig-0001

Figure 11.1. Encryption procedure

Consider one of the simplest and earliest known forms of encryption—letter shift or Caesar cipher. The Caesar cipher replaces each plaintext letter with the letter found a fixed number of places down the alphabet. The fixed number of places in the alphabet is the “key” to unlocking the resulting ciphertext. Exhibit 11.1 illustrates the basic concepts in encryption using the Caesar cipher. In this example, the fixed number of “shift letters” is three (Felder, 1999).
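The letter-shift scheme just described is straightforward to implement. The following sketch uses the three-letter shift from the example; note how decryption is simply encryption with the negated key:

```python
def caesar_encrypt(plaintext: str, key: int = 3) -> str:
    """Shift each letter 'key' places down the alphabet; non-letters pass through."""
    out = []
    for ch in plaintext:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + key) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

def caesar_decrypt(ciphertext: str, key: int = 3) -> str:
    """Reversing the shift recovers the plaintext."""
    return caesar_encrypt(ciphertext, -key)

print(caesar_encrypt("HELLO"))  # KHOOR
```

With only twenty-six possible keys, the Caesar cipher is trivially breakable; it serves here only to make the plaintext, algorithm, ciphertext, and key concepts concrete.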

Firewall Protection

A firewall is “a system or combination of systems that supports an access control policy between two networks” (White, 2011, p. 369). The term firewall may be used to describe software that protects computing resources or to describe a combination of software, hardware, and policies that protects these resources. The most common place to find a firewall is between the health care organization’s internal network and the Internet. This firewall prevents users who are accessing the health care network via the Internet from using certain portions of that network and also prevents internal users from accessing various portions of the Internet (Oz, 2009).

The basic types of firewalls are (1) packet filters, or network level, and (2) proxy servers, or application level. The packet filter firewall is essentially a router that has been programmed to filter out some types of data and to allow other types to pass through. The early versions of these firewalls were fairly easy to fool. As routers have become more sophisticated, the protection offered by this type of firewall has increased. The proxy server is a more complex firewall device. The proxy server firewall is software that runs on a computer that acts as the gatekeeper to an organization’s network. All external transactions enter the organization’s network through the proxy server. The request for information is actually “stopped” at the proxy server, where a proxy application is created. This proxy is what goes into the organization’s network to retrieve the requested information (White, 2011).
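The first-match, default-deny logic of a simple packet filter can be sketched as follows. The rule format, addresses, and ports here are hypothetical illustrations, not an actual firewall configuration language:

```python
# Illustrative packet-filter logic: rules are checked in order,
# the first match decides, and unmatched traffic is denied by default.

RULES = [
    # (source address prefix, destination port, decision) -- hypothetical rules
    ("10.0.", 443, "allow"),   # internal hosts may use HTTPS
    ("10.0.",  80, "allow"),   # internal hosts may use HTTP
    ("",       23, "deny"),    # telnet blocked from any source
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    """Return the decision for a packet based on the ordered rule list."""
    for prefix, port, decision in RULES:
        if src_ip.startswith(prefix) and dst_port == port:
            return decision
    return "deny"  # default-deny stance

print(filter_packet("10.0.1.5", 443))    # allow
print(filter_packet("203.0.113.9", 23))  # deny
```

A production firewall matches on far more fields (protocol, direction, connection state), but the ordered-rules-plus-default-deny pattern is the same.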

As important as firewalls are to the overall security of health care information systems, they cannot protect a system from all types of attacks. Many viruses, for example, can hide inside documents that will not be stopped by a firewall.

Virus Checking

There are various methods used to maliciously attack computers and networks. One of the most common is a computer virus infection. A computer virus is defined as “a small program that alters the way a computer operates without the knowledge of the computer’s users” (White, 2011, p. 389). Viruses are generally designed to cripple the computer operation by deleting, altering, or corrupting data, files, or operating system components. Following are some common types of viruses:

  1. A Trojan horse hides malicious code within seemingly harmless code; the payload is typically released when the user opens an infected e-mail attachment.
  2. A bot takes over operations of the infected computer; networks of such remotely controlled machines are known as botnets.
  3. A macro virus is programmed into an attached macro file found in spreadsheet, database, and word processing documents.
  4. A boot sector virus resides in a removable media device such as a flash drive.
  5. A polymorphic virus mutates with every infection, making it very difficult to identify.
  6. A file infector virus infects a piece of executable code. When the infected program is executed, the virus spreads to the computer.
  7. A worm copies itself from one system to another over a network without human assistance (White, 2011).

Checking for malicious software and other system attacks is an important component of a health information security program. As discussed earlier, malicious attacks are very common and can cause extensive damage and loss of productivity. Antivirus software is effective as long as the virus catalogue is updated frequently. Most antivirus software packages can be set to automatically scan the user’s computer system periodically to detect and clean any viruses or other malicious software that is found.

SECURITY IN A WIRELESS ENVIRONMENT

As discussed in earlier chapters, wireless technologies are changing the way health care information systems operate. These technologies cover a wide range of capabilities. Wireless LAN (WLAN) devices allow users to move laptops easily from place to place within the health care organization. Bluetooth technologies allow data synchronization and application sharing across a variety of devices, such as keyboards, printers, and other peripheral devices. Handheld devices and tablets allow remote users to synchronize personal data and to access health care organizations’ network services, such as calendars, e-mail, and Internet access. These technologies offer flexibility and new capabilities to health care providers and the individuals who support them (Karygiannis & Owens, 2002). However, the adoption of wireless technologies has been relatively rapid, creating concerns about the level of security they offer in an environment like the health care organization. According to a white paper written by Fluke Networks, the issues with wireless security are “exactly the same as with wired security. The problem with wireless is that it’s difficult to limit the transmission media to just the areas that we control, or just the hosts we want on our network” (2003, p. 1).

WLANs consist of client devices (laptops, smart phones, and tablets) and access points (APs). The APs are designed to connect the client devices to the organization’s overall (wired) network infrastructure. (The components of the WLAN are discussed in more detail in Chapter Nine.) In 2012, the National Institute of Standards and Technology recommended that organizations implement the following guidelines to improve security of WLANs (Souppaya & Scarfone, 2012). These guidelines address two main components of wireless network security—system configuration and system monitoring:

The specific capability recommendations (Souppaya & Scarfone, 2012) for continuous monitoring of WLANs include detection of

Health care organizations that use wireless technologies should pay close attention to risk analysis for these technologies and make safeguards a part of ongoing risk management. As with other networks and information systems, the organization must know where the threats and vulnerabilities are. Securing the handheld devices, tablets, and laptop computers commonly associated with a wireless network also poses challenges for the health care organization. Clear policies, and appropriate sanctions for those violating the policies, should be established to govern the downloading of patient-specific information onto personal devices such as these.

REMOTE ACCESS SECURITY

Health care organizations, like many other modern organizations, allow personnel to work from home. This remote access creates additional security issues. In fact, there have been a number of security incidents related to the remote use of laptops and other portable devices that store ePHI. In response to these incidents and the potential risk of HIPAA violations due to remote access, CMS issued a HIPAA security guidance document. Tables 11.3, 11.4, and 11.5, taken from this guidance document, list potential risks in accessing, storing, and transmitting ePHI when using portable devices in remote locations and describe the management strategies recommended to mitigate these risks.

Table 11.3. CMS recommendations for accessing ePHI remotely

Source: CMS, 2006.

Risk: Log-on or password information is lost or stolen, resulting in potential unauthorized or improper access to or inappropriate viewing or modification of ePHI.
Possible risk management strategies:
- Implement two-factor authentication for granting remote access to systems that contain ePHI. This process requires factors beyond general usernames and passwords to gain access to systems (such as requiring users to answer a security question such as "favorite pet's name").
- Implement a technical process for creating unique usernames and performing authentication when granting remote access to a workforce member. This may be done using Remote Authentication Dial-In User Service (RADIUS) or other similar tools.

Risk: Employees access ePHI when not authorized to do so while working offsite.
Possible risk management strategies:
- Develop and employ proper clearance procedures and verify training of workforce members prior to granting remote access.
- Establish remote access roles specific to applications and business requirements. Different remote users may require different levels of access based on job function.
- Ensure that the issue of unauthorized access to ePHI is appropriately addressed in the required sanction policy.

Risk: Home or other offsite workstations are left unattended, risking improper access to ePHI.
Possible risk management strategies:
- Establish appropriate procedures for session termination (time-out) on inactive portable or remote devices; covered entities can work with vendors to deliver systems or applications with appropriate defaults.

Risk: Contamination of systems by a virus introduced from an infected external device used to gain remote access to systems that contain ePHI.
Possible risk management strategies:
- Install personal firewall software on all laptops that store or access ePHI or connect to networks on which ePHI is accessible.
- Install, use, and regularly update virus-protection software on all portable or remote devices that access ePHI.
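The session-termination (time-out) recommendation in Table 11.3 can be sketched in code. This is a minimal illustration, not a CMS-specified mechanism; the class name, the 15-minute default, and the injectable clock are assumptions made for the example.

```python
import time

# Assumed default idle limit for the sketch; covered entities set their own.
IDLE_TIMEOUT_SECONDS = 15 * 60

class RemoteSession:
    """Hypothetical remote session that expires after a period of inactivity."""

    def __init__(self, username, timeout=IDLE_TIMEOUT_SECONDS, clock=time.monotonic):
        self.username = username
        self.timeout = timeout
        self._clock = clock              # injectable clock keeps the sketch testable
        self._last_activity = clock()
        self.active = True

    def touch(self):
        """Record user activity (a keystroke, a request, and so on)."""
        if self.active:
            self._last_activity = self._clock()

    def is_expired(self):
        """True once the idle time meets or exceeds the configured limit."""
        return (self._clock() - self._last_activity) >= self.timeout

    def enforce_timeout(self):
        """Terminate the session if the idle limit has been exceeded."""
        if self.active and self.is_expired():
            self.active = False          # a real system would also clear cached ePHI
        return self.active
```

In practice this check would run on every request or on a background timer, and the "appropriate defaults" the guidance mentions would be the vendor-shipped value of the timeout parameter.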

Table 11.4. CMS recommendations for storing ePHI on portable devices

Source: CMS, 2006.

Risk: Laptop or other portable device is lost or stolen, resulting in potential unauthorized or improper access to or modification of ePHI housed on or accessible through the device.
Possible risk management strategies:
- Identify the types of hardware and electronic media that must be tracked, such as hard drives, magnetic tapes or disks, optical disks or digital memory cards, and security equipment, and develop inventory control systems.
- Implement a process for maintaining a record of the movements of, and person(s) responsible for or permitted to use, hardware and electronic media containing ePHI.
- Require use of lock-down or other locking mechanisms for unattended laptops.
- Password-protect files.
- Password-protect all portable or remote devices that store ePHI.
- Require that all portable or remote devices that store ePHI employ encryption technologies of the appropriate strength.
- Develop processes to ensure appropriate security updates are deployed to portable devices such as smart phones and PDAs.
- Consider the use of biometrics, such as fingerprint readers, on portable devices.

Risk: Use of an external device to access corporate data results in the loss of operationally critical ePHI on the remote device.
Possible risk management strategies:
- Develop processes to ensure backup of all ePHI entered into remote systems.
- Deploy a policy to encrypt backup and archival media; ensure that policies direct the use of encryption technologies of the appropriate strength.

Risk: Loss or theft of ePHI left on devices after inappropriate disposal by the organization.
Possible risk management strategies:
- Establish ePHI deletion policies and media disposal procedures: at a minimum this involves complete deletion, via specialized deletion tools, of all disks and backup media prior to disposal; for systems at the end of their operational lifecycle, physical destruction may be appropriate.

Risk: Data is left on an external device (accidentally or intentionally), such as in a library or hotel business center.
Possible risk management strategies:
- Prohibit or prevent download of ePHI onto remote systems or devices without an operational justification.
- Ensure the workforce is appropriately trained on policies that require users to search for and delete any files intentionally or unintentionally saved to an external device.
- Minimize use of browser-cached data in web-based applications that manage ePHI, particularly those accessed remotely.

Risk: Contamination of systems by a virus introduced from a portable storage device.
Possible risk management strategies:
- Install virus-protection software on all portable or remote devices that store ePHI.
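The password-protection recommendations in Table 11.4 hinge on never storing the password itself. One common way to satisfy the "appropriate strength" language, sketched here with Python's standard library, is a salted, iterated key derivation (PBKDF2); the iteration count and salt size below are assumptions for the example, not CMS-mandated values.

```python
import hashlib
import hmac
import os

# Assumed parameters for the sketch; production values come from policy.
ITERATIONS = 200_000
SALT_BYTES = 16

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted hash; store (salt, digest), never the plaintext password."""
    salt = os.urandom(SALT_BYTES)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the derivation and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

Even if a stolen laptop yields the stored salt and digest, the attacker must still brute-force the derivation; full-disk encryption, as the table also recommends, protects the data itself.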

Table 11.5. CMS recommendations for transmitting ePHI from remote locations

Source: CMS, 2006.

Risk: Data is intercepted or modified during transmission.
Possible risk management strategies:
- Prohibit transmission of ePHI via open networks, such as the Internet, where appropriate.
- Prohibit the use of offsite devices or wireless access points (such as hotel workstations) for nonsecure access to e-mail.
- Use more secure connections for e-mail via SSL and message-level standards such as S/MIME, SET, PEM, or PGP.
- Implement and mandate appropriately strong encryption solutions for transmission of ePHI, such as SSL or HTTPS.
- SSL should be a minimum requirement for all Internet-facing systems that manage ePHI in any form, including corporate web-mail systems.

Risk: Contamination of systems by a virus introduced from an external device used to transmit ePHI.
Possible risk management strategies:
- Install virus-protection software on portable devices that can be used to transmit ePHI.
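The encrypted-transmission recommendation in Table 11.5 refers to SSL; in current software this is provided as TLS, exposed in Python through the standard ssl module. The sketch below, with an assumed function name and an assumed TLS 1.2 floor, shows the two properties that actually defend against interception or modification in transit: certificate verification and hostname checking.

```python
import ssl

def make_ephi_transport_context() -> ssl.SSLContext:
    """Illustrative context for transmitting ePHI over an open network.

    create_default_context() already enables certificate verification
    (CERT_REQUIRED) and hostname checking; we additionally refuse
    legacy protocol versions (the TLS 1.2 floor is an assumption here).
    """
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    return context

# A client would wrap its socket with this context before sending ePHI:
# with socket.create_connection((host, 443)) as sock:
#     with make_ephi_transport_context().wrap_socket(
#             sock, server_hostname=host) as tls:
#         ...  # only now is it safe to transmit
```

Disabling either check (for example, setting verify_mode to CERT_NONE) would encrypt the traffic but leave it open to man-in-the-middle interception, defeating the purpose of the recommendation.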

SUMMARY

Health information is created, maintained, and stored using computer technology. The use of this technology creates new issues in protecting patients’ rights to privacy and confidentiality, and demands that health care organizations develop comprehensive information security programs. The publication of the final HIPAA Security Rule in 2003 underscores the importance of securing health information and the need for comprehensive security programs. The standards and specifications of the HIPAA rule can serve as a framework for health care organizations as they design their individual security programs.

Information security programs need to be designed to address internal and external threats to health care information systems, whether those threats are intentional or unintentional. Health information security programs should address administrative, physical, and technical safeguards. This chapter not only outlined the HIPAA security requirements but also provided a discussion of many of the common security measures that can be employed to minimize potential risks to health information.

KEY TERMS

  1. Access control
  2. Administrative safeguards
  3. Assigned security responsibility
  4. Audit trail
  5. Automatic log-off
  6. Biometric identification system
  7. Business continuity plan
  8. Chief security officer
  9. Ciphertext
  10. Continuity of operations plan
  11. Contingency plan
  12. Covered entity (CE)
  13. Data encryption
  14. Disaster recovery plan
  15. Encryption algorithm
  16. Entity authentication
  17. Firewall
  18. HIPAA Security Rule
  19. Key
  20. Media controls
  21. Password
  22. Personal identification number (PIN)
  23. Physical access controls
  24. Physical safeguards
  25. Plaintext
  26. Public key infrastructure (PKI)
  27. Remote access
  28. Risk analysis
  29. System security evaluation
  30. Technical safeguards
  31. Telephone callback systems
  32. Tokens
  33. Virus checking
  34. Wireless LAN (WLAN)

LEARNING ACTIVITIES

  1. Do an Internet or library search for recent articles discussing the HIPAA Security Rule. From your research, write a short paper discussing the impact of these security regulations on health care organizations. How have these regulations changed the way organizations view security? Do you think the regulations are too stringent? Not stringent enough? Just right? Explain your rationale.
  2. Interview a chief security officer at a hospital or other health care facility. What are the major job responsibilities of this individual? To whom does he or she report within the organization? What are the biggest challenges of the job?
  3. Contact a physician’s office or clinic and ask if the organization has a security plan. Discuss the process that staff undertook to complete the plan, or develop an outline of a plan for them.
  4. Visit a local hospital or other health care organization’s IT department. Ask to view their IT contingency plan. Determine whether or not the plan meets all of the HIPAA security requirements. Investigate whether the contingency plan is part of a broader business continuity plan.
  5. Visit the Office for Civil Rights (OCR) Enforcement Activities and Results web site. Read at least five case examples involving HIPAA security violations. What do these cases have in common? What are their differences? Do all of the Security Rule violations you read about also involve Privacy Rule violations? What were your impressions of the types of cases you read and their resolutions?

REFERENCES

American Health Information Management Association. (2003). Final rule for HIPAA security standards. Chicago: Author.

American Medical Association. (2010). HIPAA Security Rule: Frequently asked questions regarding encryption of personal health information. Practice Management Center, American Medical Association. Retrieved April 2012 from http://www.ama-assn.org/ama1/pub/upload/mm/368/hipaa-phi-encryption.pdf

Associated Press. (1995, February 28). 7 Get fake HIV-positive calls. Retrieved December 2004 from http://www.aegis.com/news/ap/1995/AP950248.html

Briney, A. (2000). 2000 information security industry survey. Retrieved November 2004 from http://www.infosecuritymag.com

Centers for Medicare and Medicaid Services. (2004). HIPAA administrative simplification: Security—Final rule. Retrieved November 2004 from http://www.cms.hhs.gov/hipaa/hipaa2/regulations/security

Centers for Medicare and Medicaid Services. (2006, December 28). HIPAA security guidance. Retrieved April 2012 from http://www.hhs.gov/ocr/privacy/hipaa/administrative/securityrule/remoteuse.pdf

Coppersmith, Gordon, Schermer, & Brockelman, PLC. (2012). HITECH Act expands HIPAA Privacy and Security Rules. Retrieved March 2012 from http://www.azhha.org/member_and_media_resources/documents/HITECHAct.pdf

Eng, J. (2001). Computer network security for the radiology enterprise. Radiology, 220(2), 304–309.

Felder, R. (1999, November 1). Caesar’s shift. (How a Caesar cipher works). Cobblestone. Retrieved October 2012 from http://www.highbeam.com/doc/1G1-62495672.html

Fluke Networks. (2003). Wireless security notes: A brief analysis of risks. Retrieved December 2004 from http://wp.bitpipe.com/resource/org_1014144860_961/fnet_wireless_security_bpx.pdf

Gopalakrishna, R. (2000). Audit trails. Retrieved July 2004 from http://www.cerias.purdue.edu/homes/rgk/at.html

International Organization for Standardization (ISO). (2009). ISO/IEC 15408–1, 3rd ed.: Information technology—Security techniques—Evaluation criteria for IT security, Part 1, Introduction and general model. Retrieved April 2012 from http://standards.iso.org/ittf/PubliclyAvailableStandards/index.html

Karygiannis, T., & Owens, L. (2002). Wireless network security: 802.11, Bluetooth and handheld devices (Special Publication 800–48). Gaithersburg, MD: National Institute of Standards and Technology.

Keckley, P. H., Coughlin S., & Gupta, S. (2011). Issue brief: Privacy and security in health care—A fresh look. Deloitte Center for Health Solutions. Retrieved April 2012 from http://www.deloitte.com/us/privacyandsecurityinhealthcare

Oz, E. (2009). Management information systems: Instructor edition (6th ed.). Boston: Course Technology.

Reynolds, R. B. (2009). The HIPAA Security Rule. In M. S. Brodnick, M. C. McCain, L. A. Rinehart-Thompson, & R. Reynolds (Eds.), Fundamentals of law for health informatics and information management (pp. 195–236). Chicago: American Health Information Management Association.

Rozek, P., & Groth, D. (2008). Business continuity planning. It’s a critical element of disaster preparedness. Can you afford to keep it off your radar? Health Management Technology, 29(3), 10–12.

Scholl, M., Stine, K., Hash, J., Bowen, P., Johnson, A., Smith, C. D., & Steinberg, D. I. (2008). An introductory resource guide for implementing the Health Insurance Portability and Accountability Act (HIPAA) Security Rule. Gaithersburg, MD: Computer Security Division, Information Technology Laboratory, National Institute of Standards and Technology. Retrieved April 2012 from http://csrc.nist.gov/publications/nistpubs/800-66-Rev1/SP-800-66-Revision1.pdf

Smith, H. (2001). A context-based access control model for HIPAA privacy and security compliance. SANS Institute, Information Security Reading Room. Retrieved December 2004 from http://www.sans.org/rr/whitepapers/legal/44.php

Souppaya, M., & Scarfone, K. (2012). Guidelines for securing wireless local area networks (WLANs). Recommendations of the National Institute of Standards and Technology (NIST Special Publication 800–153). Retrieved April 2012 from http://csrc.nist.gov/publications/nistpubs/800-153/sp800-153.pdf

TechTarget. (2012). Password definition. SearchSecurity. Retrieved April 2012 from http://searchsecurity.techtarget.com/definition/password

Walsh, T. (2003). Best practices for compliance with the final Security Rule. Journal of Healthcare Information Management, 17(3), 14–18.

Webopedia. (2012). Audit trail definition. Retrieved April 2012 from http://www.webopedia.com/TERM/A/audit_trail.html

Weil, S. (2004). The final HIPAA Security Rule: Conducting effective risk analysis. Retrieved April 2004 from http://www.hipaadvisory.com/action/Security/riskanalysis.htm

White, C. (2011). Data communications and computer networks: A business user’s approach. Instructor edition (6th ed.). Boston: Course Technology.