This chapter provides a study guide for the Security+ Exam SY0-101. Each section of this chapter is designed to cover specific objectives of the exam. Each section heading identifies the exam domain it covers, and the section discusses the key details that you should grasp before taking the exam.
An overview of the sections in this chapter that cover the objectives of the Security+ exam is as follows:
This section covers the details of general concepts and terms related to IT security. These concepts include methods of access control, authentication, and auditing. This section also includes a study of various types of attacks and malicious code, and of identifying and disabling nonessential services and protocols to reduce the vulnerability of computers and networks.
This section covers a study of security concepts related to computer communications such as remote access, email, Internet-based services, directory services, and file transfer protocols. You will also learn about the security risks involved in wireless networks.
This section includes a study of implementing security in the IT infrastructure by creating security baselines, implementing Intrusion Detection Systems (IDS), and other security topologies. This also includes a study of vulnerable points in the network, such as network devices and media.
This section includes a study of concepts related to encryption methods that are used to provide confidentiality, integrity, authentication, and non-repudiation. These encryption methods protect the transfer of data from one location to another in a network. You will learn how encryption algorithms and digital certificates are used to create a Public Key Infrastructure (PKI). The PKI is responsible for the creation, distribution, storage, expiration, and revocation of digital certificates.
This section covers concepts related to operational and organizational security. This includes a study of the physical security of the network, as well as creating backup and disaster recovery policies, security policies, and incident response policies. You will also learn about privilege management, computer forensics and risk identification, and guidelines for training end users on how to create documentation related to security practices in an organization.
The sections in this chapter are designed to follow the exam objectives as closely as possible. This Study Guide should be used to reinforce your knowledge of key concepts tested in the exam. If you study a topic and do not understand it completely, I recommend that you go over it again and memorize key facts until you feel comfortable with the concepts. The chapter contains a number of terms, notes, bulleted points, and tables that you will need to review multiple times. Pay special attention to new terms and acronyms (the ones you are not familiar with) because these may be tested in the exam.
Studying for the Security+ certification exam is easier if you have access to a computer network. Although it is not essential, it is good to have a Windows-based computer network on which to perform the exercises included in this chapter; these exercises are an important part of your preparation for the exam. A small network with a Windows XP desktop and a Windows 2000 Server or Windows Server 2003 server would serve the purpose well. Needless to say, you will also need an active Internet connection.
The exercises included in this Study Guide should be part of your preparation for the exam. Do not perform any exercises in a production environment. Instead, create a test environment where you can work without having to worry about the security risks while performing the given exercises.
The first section of this chapter deals mainly with fundamental knowledge of authentication, access control, and auditing, also known as AAA in the computer security arena. Along with this, you will learn about different types of attacks and about malicious code that can cause significant damage to the organization's security setup. The concepts discussed in the following section are as follows:
Access control methods
Authentication methods
Auditing and logging
System scanning
Types of attacks
Types of malicious code
Risks involved in social engineering
Identifying and disabling nonessential services and protocols
Each of these concepts is discussed in the following sections.
In this section, you will learn about different types of access control methods. These methods are used to grant or deny access to a network or computer resource by means of security policies and hardware or software applications. In its simplest form, access control to files, folders, and other shared network resources is achieved by means of assigning permissions. Smart cards and biometric devices are examples of hardware devices used for access control. Access control can also be implemented by means of network devices, such as routers and wireless Access Points (APs). You can also achieve access control by implementing security policies, such as remote access policies and rules for connecting to a virtual private network (VPN). The following are the main models or mechanisms employed for access control:
Mandatory Access Control
Discretionary Access Control
Role-Based Access Control
Mandatory Access Control (MAC) is a mechanism, usually hardcoded into an operating system, that protects computer processes, data, and system devices from unauthorized use. Once implemented, MAC is applied universally to all objects on the system. It may also be built into an application to grant or deny permissions. The basic concept behind MAC is that the access policy cannot be changed by any user. Moreover, the control of access can be defined at multiple levels to provide granular control.
All operating systems, such as Microsoft's Windows, Unix/Linux, and NetWare, include MAC mechanisms. The operating systems hardcode access control individually on each object, and even the owners of the object or resource cannot change the implemented level of access. In other words, MAC is nondiscretionary, and the users who create an object may not have so-called "full control" over the object they create.
The main purpose of MAC is to define a security architecture in which access decisions are based on security labels. In a nutshell, MAC is hardcoded and nondiscretionary, is universally applied to all objects by the operating system, and is sometimes also known as label-based access control.
Discretionary Access Control (DAC) is a mechanism that is usually implemented by the operating system. Administrators or users who are creators/owners of an object or resource are the main users of DAC, which allows them to grant or deny permissions. NTFS permissions (used in Windows-based computers) are a good example of DAC. It is also possible to change ownership of objects or resources when DAC is used.
Control of access to an object or resource rests mainly with its owner or an administrator. As with MAC, you can have multiple levels of access control with DAC. At the same time, DAC does not provide the level of access control that is available with MAC, and it is not hardcoded into any operating system.
To get an idea of how DAC is applied, we will perform the following exercise on a Windows XP Professional computer that has a drive formatted with the NTFS file system:
Click Start → Programs → Accessories → Windows Explorer.
Locate a user data folder.
Right-click the folder and select Properties.
Click on the Security tab.
The NTFS permissions that have been set on the folder are displayed, as shown in Figure 11-1. The shared folder in this case is the network resource and the permissions assigned to the folder are termed the DAC list.
Click Cancel to close the dialog box.
Role-Based Access Control (RBAC) is a mechanism used to implement security on objects based on the roles or job functions of individual users or user groups. Employees of an organization are categorized by their need to perform different types of roles (jobs) within the organization, and permissions to computer or network resources are granted to these users based on their roles.
RBAC offers the most flexibility in defining access control to available network resources. For example, users in a network can be classified into various categories or groups based on their job functions, and access permissions to objects can be granted to these groups. The job functions and access permissions can be modified at any point in time based on the requirements of the organization. RBAC thus provides simplified and centralized administration of network resources. It is more flexible than MAC and is highly configurable.
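To make the role-based model concrete, the following minimal Python sketch maps roles to permissions and users to roles. The role names, users, and resources are purely hypothetical, and a real implementation would live in the operating system or directory service rather than in application code.

```python
# A minimal, hypothetical sketch of role-based access control.
# Role names, resources, and permissions below are illustrative only.

ROLE_PERMISSIONS = {
    "accounting": {"payroll_db": {"read"}, "invoices": {"read", "write"}},
    "helpdesk":   {"tickets": {"read", "write"}},
    "admin":      {"payroll_db": {"read", "write"}, "tickets": {"read", "write"}},
}

USER_ROLES = {
    "alice": {"accounting"},
    "bob": {"helpdesk"},
}

def is_allowed(user: str, resource: str, action: str) -> bool:
    """Grant access if any of the user's roles carries the permission."""
    for role in USER_ROLES.get(user, set()):
        if action in ROLE_PERMISSIONS.get(role, {}).get(resource, set()):
            return True
    return False

print(is_allowed("alice", "invoices", "write"))  # True
print(is_allowed("bob", "payroll_db", "read"))   # False
```

Because access is granted to roles rather than to individual accounts, changing a user's job function only requires changing the role assignment, which is what makes RBAC administration simple and centralized.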
Authentication is the process of confirming that someone or something is authentic, which means that the claim made about it is true. In the context of computer security, authentication is the method of verifying that the identity of a person or an application seeking access to a system, object, or resource is genuine. For example, if a user wants to access a network domain, the authentication of the user (or the user's digital identity) is usually verified by the username and password supplied by the user. These data items are also known as the credentials of the user. If the username and password of the user match those stored in the security database of the computer, the user is allowed access. This process is known as the authentication process.
Authentication can be a one-way or two-way process. In one-way authentication, only one of the entities verifies the identity of the other, while in two-way authentication, both entities verify each other's identity before a secure communication channel is established. In the previous example, you learned about the simplest form of one-way authentication, wherein the identity of the user is verified by the system.
Authentication is termed the first point of controlling access to a system. Further access can be controlled by using authorization, a term very closely related to authentication. Authorization is provided as part of the operating system and is the process of allowing a user access only to those resources that the user is permitted to use. These resources may include system services and devices, data, and application programs.
User credentials sent by the user during the authentication process can be transmitted either in clear text or in encrypted form. Some applications, such as File Transfer Protocol (FTP) and Telnet, transmit usernames and passwords in clear text. User credentials transmitted in clear text are considered security risks, as anyone monitoring the network transmissions can easily capture these credentials and misuse them. There are several methods, as you will learn in the following pages, that can be used to encrypt and secure user credentials as they are transmitted over the network.
The following sections discuss a number of authentication mechanisms that are used in computer networks.
Kerberos is a cross-platform authentication protocol used for mutual authentication of users and services in a secure manner. The protocol was created and is maintained by the Massachusetts Institute of Technology (MIT) and is defined in RFC 1510. Kerberos v5 is the current version. The protocol ensures the integrity of data as it is transmitted over the network. Microsoft's Windows-based network operating systems (Windows 2000 and later) use Kerberos v5 as the default authentication protocol. It is also widely used in other operating systems, such as Unix and Cisco IOS. The authentication process is the same in all operating system environments.
The Kerberos protocol is built upon symmetric key cryptography and requires a trusted third party. In Windows Server 2003 environments, Kerberos is implemented within Active Directory domains. Kerberos relies on a Key Distribution Center (KDC), which is usually a network server used to issue secure encrypted keys and tokens (tickets) to authenticate a user or a service. The tickets carry a timestamp and expire as soon as the user or the service logs off.
Let's look at how Kerberos authentication works. Consider a Kerberos realm that includes a KDC (also known as the authentication server), a client (a user, service, or a computer), and a resource server. Consider that the client needs to access a resource or shared object on the resource server. The following steps are carried out to complete the authentication process:
The client presents its credentials to the KDC for authentication by means of username and password, smart card, or biometrics.
The KDC issues a Ticket Granting Ticket (TGT) to the client. The TGT is associated with an access token that remains active for as long as the client is logged on. This TGT is cached locally and is reused if the session remains active.
When the client needs to access the resource server, it presents the cached TGT to the KDC. The KDC grants a session ticket to the client.
The client presents the session ticket to the resource server and the client is then granted access to the resources on the resource server.
The environment in which this Kerberos authentication process takes place is known as a realm, as shown in Figure 11-2.
The TGT remains active for the entire active session. It carries a timestamp to ensure that it is not misused to launch replay, or spoofing, attacks against the network. Replay attacks happen when someone captures network transmissions, modifies this information, and then retransmits the modified information on the network to gain unauthorized access to resources. You will learn more about security attacks later in this section.
Kerberos is heavily dependent on the synchronization of clocks on the clients and servers. Session tickets granted by the KDC to the client must be presented to the server within the established time limits, or else they may be discarded. The TGT, by contrast, remains valid for as long as the client is logged on; it is cached locally by the client and can be reused while the user session remains active.
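The following is a highly simplified Python sketch of the ticket exchange described in the steps above. It models tickets as HMAC-protected records purely to illustrate the flow; real Kerberos encrypts tickets with symmetric keys as defined in RFC 1510, and the names and keys used here are illustrative only.

```python
# A conceptual sketch of the Kerberos ticket exchange. Tickets are modeled as
# signed dictionaries; real Kerberos uses symmetric-key encryption.
import hmac, hashlib, json, time

KDC_SECRET = b"kdc-long-term-key"          # known only to the KDC (illustrative)
SERVER_SECRET = b"resource-server-key"     # shared by the KDC and resource server

def issue_ticket(payload: dict, key: bytes) -> dict:
    data = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "mac": hmac.new(key, data, hashlib.sha256).hexdigest()}

def verify_ticket(ticket: dict, key: bytes) -> bool:
    data = json.dumps(ticket["payload"], sort_keys=True).encode()
    return hmac.compare_digest(ticket["mac"],
                               hmac.new(key, data, hashlib.sha256).hexdigest())

# Step 2: after authenticating the client, the KDC issues a TGT.
tgt = issue_ticket({"client": "alice", "issued": time.time()}, KDC_SECRET)

# Step 3: the client presents the cached TGT; the KDC returns a session ticket
# that the resource server can verify with its own key.
assert verify_ticket(tgt, KDC_SECRET)
session_ticket = issue_ticket({"client": "alice", "service": "fileserver",
                               "expires": time.time() + 600}, SERVER_SECRET)

# Step 4: the resource server validates the session ticket and grants access.
print(verify_ticket(session_ticket, SERVER_SECRET))  # True
```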
The Challenge Handshake Authentication Protocol (CHAP) is widely used for remote access in conjunction with the Point-to-Point Protocol (PPP). CHAP periodically verifies the authenticity of the remote user using a three-way handshake, even after the communication channel has been established. CHAP authentication involves the following steps (a small code sketch of the exchange appears after the list):
When the communication link is established, the authentication server sends a "challenge" message to the peer.
The peer responds with a value calculated using a one-way hash function such as Message Digest 5 (MD5).
The authentication server checks the response to ensure that the value is equal to its own calculation of the hash value. If the two values match, the authentication server acknowledges the authentication; otherwise, the connection is terminated.
The authentication server sends the challenge message to the peer at random intervals and repeats steps 1 to 3.
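Here is the small sketch mentioned above: a minimal Python illustration of one CHAP challenge/response round using MD5 from the standard library. It is a simplification (real CHAP also includes a session identifier in the hash), and the shared secret is illustrative.

```python
# A minimal sketch of one CHAP challenge/response round.
import hashlib, os

SHARED_SECRET = b"remote-user-password"   # known to both peer and server (illustrative)

# Step 1: the authentication server sends a random challenge.
challenge = os.urandom(16)

# Step 2: the peer returns a one-way hash of the challenge and the secret.
response = hashlib.md5(challenge + SHARED_SECRET).hexdigest()

# Step 3: the server computes the same hash and compares the two values.
expected = hashlib.md5(challenge + SHARED_SECRET).hexdigest()
print("authenticated" if response == expected else "connection terminated")
```

Because only the hash travels over the link, the shared secret itself is never transmitted, which is what makes CHAP stronger than PAP.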
One drawback of CHAP is that it cannot work with encrypted password databases and is considered a weak authentication protocol. It is still better than Password Authentication Protocol (PAP), in which passwords are transmitted in clear text. Microsoft has implemented its own version of CHAP, known as MS-CHAP, which is currently in version 2.0 and is the preferred authentication protocol for remote access services.
Certificates, or Public Key Certificates, use digital signatures to bind a public key to the identity of a person or a computer. The certificates are used to ensure that the public key belongs to the individual. Certificates are widely used for Internet-based authentications, as well as for authenticating users and computers in network environments, to access network resources and services where directory services are implemented. They are also used when data transmissions are secured using Internet Protocol Security (IPSec) protocol. All of these are parts of the PKI, which is discussed later in this chapter.
In a PKI, certificate servers are used to create, store, distribute, validate, and expire digital certificates and the identity information they carry about users and systems. Certificates are created by a trusted third party known as the Certification Authority, or Certificate Authority (CA). Examples of commercially available CAs are VeriSign and Thawte. It is also a common practice to create a CA within an organization to manage certificates for users and systems within the organization or with trusted business partners. In Windows 2000 and later operating systems, certificates are used for authenticating users and granting access to Active Directory objects. A CA used within an organization is known as an Enterprise CA or a Standalone CA.
Another common use of certificates is for software signing. Software is digitally signed to assure the user who downloads it that it is legitimate and has been developed by a trusted software vendor. Digitally signed software ensures that the software has not been tampered with since it was developed and made available for download. Certificates are also implemented in Internet services to authenticate users and verify their identity. Web servers must have a certificate installed in order to use the Secure Sockets Layer (SSL).
A certificate essentially includes the following information (a short sketch that inspects a live server's certificate follows the list):
The public key being signed.
A name that can be that of a user, a computer, or an organization.
The name of the CA issuing the certificate.
The validity period of the certificate.
The digital signature of the certificate, which is generated using the CA's private key.
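As promised above, the following Python sketch connects to a public HTTPS server and prints the subject, issuer, and validity period of its certificate using only the standard library. The host name is an example, and note that getpeercert() exposes the identity and validity fields but not the raw public key or signature bytes.

```python
# Retrieve and print certificate fields from a live HTTPS server.
import socket, ssl

HOST = "www.example.com"                       # illustrative host
context = ssl.create_default_context()

with socket.create_connection((HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()

print(cert["subject"])                          # the name the certificate was issued to
print(cert["issuer"])                           # the CA that signed the certificate
print(cert["notBefore"], "to", cert["notAfter"])  # validity period
```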
The combination of username and password is one of the most common methods of authenticating users in a computer network. Almost all network operating systems implement some kind of authentication mechanism wherein users can simply use a locally created username and password to get access to the network and shared resources within that network. These include Microsoft's Windows, Unix/Linux, NetWare, and Mac OS X. This is the simplest form of authentication and can be implemented easily, but it also comes with its own limitations. In a secure network environment, simply using the combination of a username and password may not be enough to protect the network against unauthorized access.
Many organizations document and implement password policies that control how users can create and manage their passwords in order to secure network resources. If any user does not follow these policies, her user account may be locked until the administrator manually unlocks it. The following is an example of a strong password policy:
Passwords must be at least seven characters long.
Passwords must contain a combination of upper- and lowercase letters, numbers, and special characters.
Passwords must not contain the full or partial first or last name of the user.
Passwords must not contain anything to do with personal identity such as birthdays, Social Security numbers, name of their hometown, names of pets, etc.
Users must change their passwords every six weeks.
Users must not reuse old passwords.
With a properly enforced password policy, an organization can attain a basic level of security for its network resources.
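The following Python sketch shows how a policy like the one above might be checked programmatically. The thresholds mirror the bulleted rules; the function and test values are hypothetical, and real systems enforce these rules in the operating system or directory service rather than in application code.

```python
# A minimal sketch of a password-policy check mirroring the rules above.
import re

def check_password(password: str, first_name: str, last_name: str) -> bool:
    if len(password) < 7:                                      # minimum length
        return False
    if not (re.search(r"[a-z]", password) and re.search(r"[A-Z]", password)
            and re.search(r"\d", password) and re.search(r"[^A-Za-z0-9]", password)):
        return False                                           # character mix
    lowered = password.lower()
    if first_name.lower() in lowered or last_name.lower() in lowered:
        return False                                           # no names
    return True

print(check_password("Summer#2024", "Jeff", "Smith"))  # True
print(check_password("jeffsmith1", "Jeff", "Smith"))   # False
```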
An authentication token (also known as a security token or a hardware token) is considered the most trusted method to verify the identity of a user or a system. Tokens provide a very high level of security for authenticating users because of the multiple factors employed to verify the identity. It is almost impossible to duplicate the information contained in a security token in order to gain unauthorized access to a secure network. Figure 11-3 shows different types of security tokens.
In its simplest form, an authentication token consists of the following two parts:
A hardware device that is coded to generate token values at predetermined intervals.
A software-based component that tracks and verifies that these codes are valid.
Hardware tokens are small enough to be carried on a key chain or in a wallet. Some security tokens may contain cryptographic keys while others may contain biometrics data such as the user's fingerprints. Some tokens have a built-in keypad, and the user is required to key in a Personal Identification Number (PIN).
Authentication tokens come in a variety of packages and with a variety of features. RSA's SecurID is one type of security token that employs a two-factor authentication mechanism. Other vendors employ digital signature methods, while still others use single sign-on software mechanisms. Some tokens utilize one-time password technology. With single sign-on software, the user need not remember his passwords, as they are stored on tokens and are regularly changed. With one-time password technology, the password changes after each successful login or after a specified interval of time.
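As an illustration of the one-time password technique, the following Python sketch implements a time-based one-time password in the style of RFC 6238 using only the standard library. The shared secret is illustrative, and commercial tokens such as SecurID use their own algorithms, so this is a conceptual sketch rather than a description of any particular product.

```python
# A minimal sketch of a time-based one-time password (TOTP, RFC 6238 style).
import hmac, hashlib, struct, time

SECRET = b"shared-token-secret"            # provisioned to token and server (illustrative)

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    counter = int(time.time()) // interval             # current time step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The hardware token and the verification software compute the same value,
# so a matching code proves possession of the shared secret for this interval.
print(totp(SECRET))
```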
When using secure methods in computer authentication, a factor is a piece of information that is presented to prove the identity of a user. In a multifactor authentication mechanism, any combination of the following types of factors may be utilized:
A something you know factor, such as your password or PIN.
A something you have factor, such as your hardware token or a smart card.
A something you are factor, such as your fingerprints, your eye retina, or other biometrics that can be used for identity.
A something you do factor, such as your handwriting or your voice patterns.
Multifactor authentication is considered acceptably secure because it employs multiple factors to verify the identity of the user or service requesting authentication. For example, when withdrawing money from a bank's ATM, you need a debit card, which is a something you have factor. You will also need to know the correct PIN to complete the transaction, which is a something you know factor.
Mutual authentication, or two-way authentication, is the process where both parties authenticate each other before the communication link can be established. In case the communication is to be set up between a client and a server, both the client and server would authenticate each other using a mutually acceptable authentication protocol. This ensures that both the client and the server can verify each other's identity. In a typical setup, the process is carried out in the background without any user intervention.
In secure web transactions, such as online banking, mutual authentication may use SSL and certificates for authentication. However, due to the complexity and cost involved, most web applications are built so that clients are not required to have certificates. This leaves the transaction or the communication open to Man-in-the-Middle (MITM) attacks.
Almost all network operating systems provide ways for mutual authentication when offering remote access to clients. Remote Authentication Dial-in User Service (RADIUS) is one of the commonly used authentication protocols employed in remote access. RADIUS provides mutual authentication to verify and authenticate both sides of the communication.
Biometrics refers to the authentication technology used to verify the identity of a user by measuring and analyzing human physical and behavioral characteristics. This is done with the help of advanced biometric authentication devices that can read, measure, and analyze fingerprints, scan the retina of the eye and facial patterns, and/or measure body temperature. Handwriting and voice patterns are also commonly used in biometrics. Biometric authentication provides the highest level of assurance about a person's identity, and it is much more reliable than a simple username and password combination. It is nearly impossible to impersonate a person when biometric authentication is used.
Auditing is the process of tracking and logging the activities of users and processes on computer systems and networks. It can be useful in multiple scenarios, such as troubleshooting a failed process, detecting a security breach by an internal or external user, and tracking unauthorized access to secure data. Auditing and logging enable administrators to link desired or undesired processes to specific user accounts and system processes. When linked to user accounts, it is possible to track a security breach such as unauthorized access to confidential data by identifying the user who made the attempt. When linked to processes, auditing is helpful in diagnosing problems related to process failures. Auditing and logging, in certain situations, may also be helpful in collecting evidence that can be used against an unauthorized user during criminal investigations.
System auditing is the process of tracking usage and authorized or unauthorized access to system services and data. This may also be helpful in diagnosing problems related to application failures during the development or implementation phase. Since auditing puts a significant processing load on servers, you must first make sure that the benefits of auditing are clearly understood and visible.
While administrators should implement certain audits manually, network operating systems include processes that automatically audit system processes and log audit data that can be analyzed later in order to troubleshoot system failures. Administrators usually have to manually configure auditing of network and system resources, as well as of the privileges assigned to users. Auditing is essentially a two-step process: first, auditing is enabled on resources; second, administrators must view and analyze the data collected by the audits.
In its basic form, a secure computing environment can be established by splitting duties of employees within an organization. This ensures that whatever actions are taken by an employee are consistently supervised or controlled by someone superior in the organizational hierarchy. Some of the basic guidelines are as follows:
The same person should not be authorized to both originate a request and approve it.
Access to classified and confidential data must be restricted.
Conversion, copying, and concealment of data must not be allowed.
Almost all network operating systems include methods to audit system processes and user activities. These audits can be logged in special log files, which is a process called event logging. The log files can be viewed and analyzed to track problems related to security breaches and to troubleshoot process problems. Operating systems such as Microsoft Windows Server 2003 include a management console named Event Viewer, where you can view the logs related to system processes, security, and applications.
Log files essentially contain confidential data that a typical user must not be able to access. It is a common practice to send log files to a secure location where it is not possible to modify the data, and only authorized personnel can view and analyze information.
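The following Python sketch illustrates the idea of event logging in its simplest form: security-relevant events are written as timestamped records to a log file that only authorized personnel should be able to read. The events and filename are hypothetical; real systems use facilities such as the Windows Event Log or syslog rather than application-level files.

```python
# A conceptual sketch of event logging: audit records are appended to a
# protected file with a timestamp, severity, and description.
import logging

logging.basicConfig(
    filename="security_audit.log",                    # illustrative filename
    format="%(asctime)s %(levelname)s %(message)s",
    level=logging.INFO,
)

logging.info("logon succeeded user=alice workstation=WS042")
logging.warning("access denied user=guest resource=\\\\server\\payroll")
```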
System scanning is the process of analyzing the current security settings of a system or a network to identify and repair potential vulnerabilities. These vulnerabilities weaken the system security and open it up for possible attacks or security breaches against a particular system or against the entire network. System scanning is performed by software utilities, usually included with network operating systems. Third-party software tools may also be used for this purpose.
Apart from identifying weak or vulnerable areas of the system, system-scanning utilities can also be useful in ensuring system reliability and performance. These utilities make sure that password and account policies are strong enough to prevent unauthorized access. They test the response of a system or the network in scenarios that could lead to a potential attack by an outsider. An example of such an attack is the Denial of Service (DoS) attack.
System scanning tools are generally used to make sure that the system is accessible only through acceptable means of inside and outside access. They are also used to stage false attacks against the network to ensure that the network is capable of detecting the attacks and taking appropriate corrective action. Some of the popular tools used for system scanning are the Security Administrator Tool for Analyzing Networks (SATAN) and Nessus. Both of these tools work in Unix and Linux environments. SATAN is mainly used to detect known vulnerabilities in a system and fix them. Nessus is a client/server-based tool that can even launch a false attack against a network. Nessus is very useful for scanning remote systems.
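In its most basic form, a system scan checks which well-known ports on a host accept connections. The following Python sketch performs such a check with the standard socket module; the target address and port list are illustrative, and a scan like this should be run only against systems you are authorized to test.

```python
# A minimal sketch of port scanning, the simplest form of system scanning:
# attempt a TCP connection to each well-known port and report which accept.
import socket

TARGET = "192.0.2.10"                      # illustrative (TEST-NET) address
PORTS = [21, 22, 23, 25, 53, 80, 139, 443]

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)                  # don't hang on filtered ports
        if s.connect_ex((TARGET, port)) == 0:
            print(f"port {port} is open")
```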
At the time of this writing, Microsoft plans to release Windows Defender, a real-time spyware monitoring tool for Windows-based systems. This tool will mainly be used to detect and block pop-up windows and detect performance problems.
Attacks on computer systems and networks are launched in several different ways and with several different techniques. An attack may be targeted at an application, a service, or the entire network, and it may be active or passive. Broadly, attacks can be classified into the following categories:
When the person attacking a system or a network is actively involved in the process, the attack is said to be active. Active attacks can be detected relatively easily. In most active attacks, the attacker interferes with data transmitted on network wires and attempts to cause a partial or complete shutdown of a network service. DoS and Distributed Denial of Service (DDoS) attacks are examples of active attacks.
When the person trying to attack a system or network is quietly monitoring the network for some condition to be met, or just collecting information to launch an attack, the attack is said to be passive. Examples of passive attacks include sniffing, eavesdropping, and vulnerability scanning.
These attacks are launched using one or more methods of guessing the password of a legitimate user of the network. Examples of password attacks include dictionary-based attacks, password guessing, and brute force attacks.
When the attacker uses applications written specifically to cause damage to a system or network, the attack is said to be a code attack. Examples of code attacks include viruses, Trojan horses, worms, and logic bombs.
Brief descriptions of the different types of attacks are given in the following sections.
In computer security, a DoS attack is an attack on computer systems, services, resources, or the entire network that results in the unavailability of a network or its resources to its legitimate users. Potential targets of DoS attacks are the main components of Internet services such as high-profile web servers and DNS servers. The intent is to bring down an organization's web site(s). The attacker may use any of the following methods to launch a DoS attack:
Try to flood the network in order to prevent legitimate network traffic from passing through.
Try to disrupt the connection between two systems in order to prevent access to a service.
Try to prevent a legitimate user from accessing a service or a resource.
Try to disrupt service from a particular system or a part of the network.
DoS attacks generally do not cause an outage of all network services. They are targeted at specific services, such as the Domain Name System (DNS) service. If the attack on a DNS server is successful, users may not be able to resolve domain names or even connect to the Internet.
DoS attacks usually result in the following:
A significant consumption of system or network resources such as CPU time, disk space, or network bandwidth. This is also termed as a resource consumption attack.
The modification or change in the configuration of network hardware such as network servers, routers, and switches.
The disruption of the network services and applications such as databases, applications, and web servers.
All of these outcomes of a DoS attack prevent legitimate users from using a system, network services, or shared resources. The following are some examples of DoS attacks:
A SYN flood attack is carried out by sending a flood of TCP/SYN packets with forged information about the sender. SYN flood attacks are discussed later in this section.
An ICMP flood attack includes smurf attacks and ping floods. A smurf attack is launched by using misconfigured network devices: malicious packets are sent to all hosts on a particular network by using broadcast messages, and the packets carry the forged IP address of the intended victim as the sender, so the replies from all hosts flood the victim. A ping flood attack is launched by sending a large number of ping requests to network hosts, which may consume a significant amount of network bandwidth.
A UDP flood attack is carried out by sending a large number of UDP echo packets to a large number of network hosts. The attacker uses a fake source IP address.
A land attack involves sending a spoofed (having false information) TCP SYN packet to a target network host. The packet contains the host's own IP address as its source and destination. The result is that after receiving the packet, the host continues to reply to itself until it crashes.
Nukes are malformed or specially crafted packets. They usually exploit an open TCP port on a network host to launch an attack. For example, WinNuke uses the NetBIOS open port 139 to send out-of-band data to a network host, causing it to crash.
Application-level floods usually cause buffer overflows in a system. The system becomes so confused that it consumes all of its resources—such as CPU time or disk space—and then eventually crashes. Buffer overflow is discussed later in this section.
An amplified form of DoS is the DDoS, which is explained in the next section.
A DDoS attack is an amplified form of a DoS attack that is targeted at the entire network instead of at a single system or service. This is a two-step attack. The attacker first compromises a number of computers spread across the Internet and installs a specially created software application on them; these computers are known as masters. The application installed on the masters then helps the attacker further by installing the application on several more computers, known as zombies. The attack is then launched from the zombies, which collectively attack a particular Internet host to make it unavailable to legitimate users. In the event of a DDoS attack, it is nearly impossible to detect the originator of the attack because the attack takes place in multiple steps and several Internet hosts are involved in attacking a particular Internet host.
Usually, the attacker first employs some kind of technique to detect vulnerabilities in Internet hosts. The applications that detect these vulnerabilities are known as Rootkits. Figure 11-4 illustrates the basic structure of a DDoS attack setup.
In a nutshell, the computers or hosts involved in a DDoS attack include the following:
DDoS attacks are essentially targeted at computers directly connected to the Internet. While some of the target computers become masters, others become zombies. Zombies act upon instructions from masters to launch a collective DDoS attack against the target Internet host.
The following describes the components of a DDoS:
Client: This is the software application that is used by the attacker to initiate the DDoS attack. The client sends instructions to its subordinates to launch the attack.
Daemon: This is the component of the application that is installed and run on zombies to further launch attacks on target Internet hosts. The target Internet host becomes the victim of a simultaneous attack from multiple zombies.
A reflected DDoS attack happens when large numbers of computers receive forged requests that otherwise appear to be legitimate. The IP address of the sender is forged using spoofing methods. All computers that receive the request reply to it. The replies go to the target or the victim computer. When the victim computer receives the (many) responses, it becomes flooded and unable to service legitimate clients. ICMP Echo requests are one of the several types of requests that can be used in reflected DDoS attacks. Other types of DDoS attacks that can be launched by zombies include SYN floods, UDP floods, etc.
The SYN flood, or TCP/SYN, attack exploits a common weakness of TCP/IP. A TCP/IP session between two hosts is established using the exchange of TCP/SYN, TCP/SYN-ACK, and TCP/ACK messages. The attacker sends a large number of TCP/SYN messages to the target host with forged source IP addresses. The server receiving these requests treats them as connection requests and sends TCP/SYN-ACK messages to the forged IP addresses, which do not exist. The server then leaves the ports open waiting for TCP/ACK messages that never arrive. These half-open connections consume server resources that would otherwise be available to legitimate users connecting to the server. The server ultimately appears busy and denies connections to actual clients.
If you are pretending to be someone you are not, you are spoofing. In other words, spoofing is the process of presenting a false identity in order to gain unauthorized access to secure resources on a system or network. In computer security, attackers use IP spoofing to gain access to secure system resources or networks by sending IP packets that contain a false source IP address.
Computer attacks using IP spoofing can be categorized as follows:
Blind IP spoofing occurs when the attacker just sends IP packets to the target computer and does not usually wait for a response. The attacker is only making a guess that at some point he may be able to get a response from the target computer. If the attempt is successful, the attacker may further cause damage to the computer or get confidential information from the person communicating with the attacker.
Informed IP spoofing, or non-blind IP spoofing, occurs when the attacker is sure about getting a response from the target computer to begin a communication session. This may result in significant loss to the target computer or to the person communicating with the attacker. For example, the attacker may pose as a bank or an employee of a credit company and ask for confidential information from the victim.
Most IP spoofing occurs between trusted computers on the Internet or on internal networks of large organizations. Trust relationships between networks or domains usually allow users to log on to other domains without supplying credentials. By spoofing the IP address of a trusted computer, the attacker may be able to connect to the target computer without authentication.
The best protection against IP spoofing is to use packet filtering in networks. Packet filtering allows administrators to block packets that originate from outside the network but carry IP addresses of hosts inside the network. Network routers usually handle this task. TCP/IP has some built-in protection against IP spoofing because it uses sequence numbers when computers communicate with each other. Using encryption and mutual authentication can also prevent IP spoofing.
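The following Python sketch illustrates the ingress-filtering rule just described: a packet arriving on the outside interface that claims an inside source address is treated as spoofed and dropped. The internal address range is illustrative, and in practice this check is performed by routers and firewalls rather than application code.

```python
# A minimal sketch of ingress filtering against IP spoofing.
import ipaddress

INTERNAL_NET = ipaddress.ip_network("192.168.0.0/16")   # illustrative internal range

def drop_spoofed(source_ip: str, arrived_on_external_interface: bool) -> bool:
    """Return True if the packet should be dropped as spoofed."""
    return (arrived_on_external_interface and
            ipaddress.ip_address(source_ip) in INTERNAL_NET)

print(drop_spoofed("192.168.4.20", True))   # True: outside packet, inside address
print(drop_spoofed("203.0.113.8", True))    # False: legitimate external source
```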
A MITM attack occurs when the attacker is actively listening or monitoring the communications between two hosts. The attacker is able to read, insert, or modify the messages being exchanged between the two hosts, without any of them knowing that the information is being compromised.
As noted in the previous section, a TCP/IP communication session is established after a successful three-way handshake. The computer requesting a connection sends a TCP/SYN packet to the server, the server responds with a TCP/SYN-ACK message, and the requesting computer accepts it by sending a TCP/ACK message. This is illustrated in Figure 11-5.
When host A wants to communicate with host B, which is a server, it sends a TCP/SYN packet to host B. This packet contains the IP address of the source, which is host A. The attacker can place himself somewhere between hosts A and B and monitor the communication taking place between the two hosts. He can intercept the TCP sequence numbers and successfully use these to falsify information going to host B. From then onwards, the communication takes place between host B and the attacker, and host A keeps waiting for a response from host B.
MITM attacks remain a serious threat to many organizations, even those that use encrypted communications between systems and networks. The best protection against MITM attacks is to use encrypted messaging systems so that the attacker is not able to decrypt or intercept the communication taking place. Other methods of preventing MITM attacks include the following:
Use strong mutual authentication.
Use strong passwords.
Use advanced techniques of authentication, such as biometrics.
Use public key cryptography to encrypt information exchange.
A replay attack is usually launched against an entire network; valid data transmitted across the network is captured and then repeated or delayed. This attack is the result of poor security in the TCP/IP protocol, wherein TCP sequence numbers can be predicted and reused. For example, consider that Jeff wants to do some online banking. An attacker named Adam is monitoring the entire exchange of messages between Jeff's computer and the bank's server. Adam captures a significant amount of data during these transmissions and tries to repeat the transactions on the bank's server using this information.
If the attacker is not able to capture the correct TCP sequence numbers, he tries many different numbers until he finds a valid sequence number that gains him access to the secure server. This may cause the legitimate user's connection to drop.
To prevent replay attacks, session tokens can be used. In the preceding example, if session tokens are used, the bank's server would generate a session token for Jeff that would expire as soon as Jeff completed the transaction. Any replay by the attacker would not be successful. Other safeguards against replay attacks include use of timestamping, Secure Shell (SSH), IPSec, more randomization of TCP sequence numbers, and so on.
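The following Python sketch illustrates the session-token safeguard: the server hands out an unpredictable, single-use token with an expiry, so a captured transaction cannot simply be replayed. The token length, lifetime, and names are illustrative.

```python
# A minimal sketch of single-use session tokens as a replay safeguard.
import secrets, time

active_tokens = {}

def issue_session_token(user: str, lifetime: int = 300) -> str:
    token = secrets.token_hex(32)                      # unpredictable value
    active_tokens[token] = (user, time.time() + lifetime)
    return token

def redeem(token: str) -> bool:
    """Valid only once and only before expiry; a replayed token is rejected."""
    entry = active_tokens.pop(token, None)
    return entry is not None and time.time() < entry[1]

t = issue_session_token("jeff")
print(redeem(t))   # True: first (legitimate) use
print(redeem(t))   # False: replayed token is rejected
```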
TCP/IP hijacking, or session hijacking, refers to the capture of session information by an attacker to gain unauthorized access to the information. The attacker generally is able to hijack insecure TCP/IP sessions such as FTP, Telnet, Rlogin, or other unencrypted TCP/IP sessions. Internet cookies that store personal information about a user can also lead to an attacker getting confidential information and hijacking an active TCP/IP session. Cookies, which are stored locally on a user's computer, normally contain a user's login credentials such as the username and password. Several Internet-based applications heavily rely on cookies to initiate and maintain a communication session between the user's computer and the web server. The attacker would simply steal a user's cookie and hijack the TCP/IP communication session, while the legitimate user would get a "session expired" or a "session timeout" message. The user might consider it normal, and the attacker would continue to use the hijacked session for his personal gains.
TCP/IP hijacking can be prevented by using secure session keys, which are normally encrypted and randomized. In the case of Internet-based applications, SSL encryption should be used along with strong random session keys.
The term weak key refers to a key that, when used with a particular encryption algorithm, makes the resulting encryption exhibit undesirable behavior. It is always preferable that the encryption algorithm used have no weak keys. The following encryption algorithms are said to contain weak keys:
The Data Encryption Standard (DES) is known to have a few weak keys, which cause the DES algorithm to behave identically in encryption and decryption processes.
The weak Initialization Vectors (IV) in an RC4 algorithm can expose a wireless system to plaintext attacks. RC4 is very commonly used in popular protocols such as SSL.
It is easy to identify the weak keys used in the International Data Encryption Algorithm (IDEA) in a plain-text attack.
The Blowfish algorithm is known to use weak keys that result in production of bad substitution boxes (S-Boxes).
No encryption algorithm is actually designed to have weak keys. When all the keys that can be used with an algorithm are equally strong, the algorithm is said to have a flat keyspace.
Password attacks, also known as password cracking, occur when an attacker attempts to obtain a user's password by guessing it or by recovering it from a stored password database using a dictionary attack or a brute force attack. These attacks are discussed in the following sections.
Many users do not understand the purpose and usefulness of strong passwords, and they choose weak passwords that anyone can easily guess. It is also a common practice to use passwords that contain very few characters or that contain the name of a hometown or pet, or a date of birth. Sometimes, users leave their passwords blank for quick logon. Most newer network operating systems do not allow blank or weak passwords. In particular, when a password policy is enforced in a Windows Server 2003 domain, users are not allowed to keep blank passwords or passwords that contain all or part of their usernames, and they are forced to change their passwords at regular intervals.
It is also very common for users to keep the default password assigned to them. This makes it easy for an attacker to guess the user password to gain unauthorized access to the system. This is particularly true with network hardware when administrators forget to change the default passwords used to configure the hardware.
A dictionary attack also exploits the tendency of users to choose weak passwords. Password-cracking applications come with built-in "dictionaries", or lists of words, that can be used to guess a weak password. The cracking program encrypts (or hashes) each word in the dictionary, each time checking whether the result matches the stored encrypted password. These applications are so efficient that they can try thousands of combinations per second.
A brute force attack is the process of defeating an encryption scheme by trying a large number of possibilities. The applications written to launch brute force attacks try to use different combinations of keys to decrypt an encrypted message. In the context of password guessing, the brute force attack is perhaps a last resort that an attacker can use to crack a password. Brute force techniques usually speed up the process of guessing passwords.
Most modern network operating systems store passwords in an encrypted form. The encryption is carried out by using a one-way hashing function. Message Digest 5 (MD5) is one of the common hashing functions used to create a hash of stored passwords. One-way hashing ensures that once the password is hashed, it cannot be restored. When a user enters a password, it again goes through the same hashing function and the output is compared to the stored value. If the values match, the user is allowed access. An attacker using the brute force technique will not be able to launch a password attack unless he obtains a copy of the username and the hashed passwords. If the attacker is able to guess a password using a brute force attack, the password is considered cracked.
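The following Python sketch shows the one-way hashing scheme described above and why obtaining the stored hashes enables an offline dictionary or brute force attack. MD5 is used here only because the text mentions it; the password and word list are illustrative, and modern systems add salting and deliberately slow hash functions to make such attacks harder.

```python
# A minimal sketch of one-way password hashing and an offline dictionary attack.
import hashlib

def hash_password(password: str) -> str:
    return hashlib.md5(password.encode()).hexdigest()

stored_hash = hash_password("letmein")           # what the system keeps on disk

# Normal login: hash the supplied password and compare with the stored value.
print(hash_password("letmein") == stored_hash)   # True

# Dictionary attack: hash each candidate word and look for a match.
wordlist = ["password", "qwerty", "letmein", "dragon"]
for word in wordlist:
    if hash_password(word) == stored_hash:
        print("cracked:", word)
        break
```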
A buffer overflow is a system condition that causes a breach in system security or a memory usage exception resulting in a system crash. It can be a result of either a programming error or an active attack on the system. An attacker may launch a buffer overflow attack by writing malicious code specifically aimed at filling all the memory buffers of the target system. Buffer overflow may also be due to an incorrect choice of a programming language that cannot handle memory buffers appropriately. Buffer overflows may cause systems to produce undesired results or even crash.
Software exploitation refers to taking undue advantage of a software bug, glitch, or vulnerability in an application code to gain unauthorized access to a system or to launch a DoS attack against the system. Software exploitation is closely related to buffer overflows. Software written by in-house programmers may leave security holes that could be used by attackers to launch such attacks as the buffer overflow attack. Software exploitation may also result in escalated privileges being granted to an unauthorized user.
A back door is a means of bypassing the normal authentication process of a computer to gain access to its resources. There are several applications specifically designed to gain back door access to systems and networks. A slight modification to an installed application can also create a back door into a system. Even legitimate applications can act as back doors and remain invisible to a normal computer user. Examples of such applications are pcAnywhere (Symantec) and Back Orifice, both used for remote administration of computer systems.
Trojan horses and rootkits are also termed back doors. While a Trojan horse appears to be a useful application to the unsuspecting user, rootkits are designed to look for vulnerabilities in a system. An attacker can easily mask his presence using a rootkit. These applications grant remote access to the attacker.
Back doors are of two types: symmetric and asymmetric. A symmetric back door is the traditional type, and anyone who finds one can use it to exploit the system. The asymmetric back door is specially designed to allow system access to only the creator of the program.
It is possible to detect malicious software, including back doors, but when a legitimate application acts as a back door or is configured to act as one, the task becomes difficult.
Malicious code, or malware, is a software application designed to infiltrate a user's computer without his knowledge or permission. Malware includes viruses, Trojan horses, worms, and applications such as adware, spyware, botnets, and loggers. The following are the main categories of malware:
Viruses and worms: These applications are written to infect a system without any obvious commercial gains.
Trojan horses: These applications are written to infect the target system and conceal the identity of the attacker. They appear to the user as if they are in his interest. If the user installs the application, he becomes a victim.
Spyware and adware: These applications are written specifically to gather information about the active user on the system in order to gain some kind of commercial profit. These applications generally appear as pop-up windows on the user's computer.
A computer virus is a self-replicating application that inserts itself into other executables on the computer and spreads itself using those executables. A computer virus is essentially malware created for the sole purpose of destroying a user's data. The executable file into which the virus inserts itself is called the virus host. A virus needs an executable file to spread itself. In order for a virus to work or infect a computer, it must first be loaded into the system's memory, and the system must then execute the instruction code contained in the virus program.
A computer virus can travel from one computer to another, and infects every computer on its way—just like a real life infection. A virus can infect data stored on floppy disks, in email, on hard disks, and even on network storage devices. Remember that the infected program must be executed before the virus can spread to infect other parts of the system or data.
The following are different types of viruses:
A boot sector, or bootstrap, virus infects the first sector of the hard disk. This sector is used to boot, or start up, the computer. If this sector is infected with a virus, the virus becomes active as soon as the computer starts.
A parasitic virus infects an executable file or an application on a computer. The infected file actually remains intact, but when the file is run, the virus runs first.
A worm is a computer virus that does not infect any particular executable or application but resides in the active memory of computers. This virus usually keeps scanning the network for vulnerabilities and then replicates itself onto other computers using those security holes. The effects of worms are not easily noticeable until entire system or network resources appear to have been consumed by the virus.
The most common type of worm is the email virus that uses email addresses from the address book of a user to spread itself.
A Trojan horse, or simply a Trojan, is a malicious code that is embedded inside a legitimate application. The application appears to be very useful or interesting and harmless to the user until it is executed. Trojans are different from other computer viruses because they must be executed by the victim user who falls for the interesting "software."
Trojans fall into the following two categories:
Software applications that are otherwise useful but have been corrupted by a malicious user by inserting code into the application that triggers itself when the application is executed.
Software applications that are specifically created to cause damage to the user's computer when executed. These types of Trojans are usually hidden inside games, image files, or software that appears to give access to some free stuff to the user. The purpose of the Trojan is to somehow trick the user into executing the application.
Most of the modern Trojans contain code that is basically used to gather information about the user. These Trojans fall into the category of spyware and appear as pop-up windows on the user's computer screen. Some Trojans are written very precisely to allow the user's computer to be controlled remotely by the attacker.
The main difference between a virus and a Trojan is that viruses are self-replicating programs while Trojans need some action on the part of the user. If the user does not fall into the trap of the Trojan, it does not execute. So, the next time you notice a pop-up window offering you free emoticons or desktop screen savers, be careful. A Trojan may be waiting to execute in order to steal personal information stored on your computer.
To protect computers from Trojan horses, the following precautions can be taken:
Keep your operating system updated with the latest service packs, security patches, and hotfixes offered by the manufacturer.
Install antivirus software on your system and keep it updated.
Configure your email settings so that attachments contained in incoming mail do not open automatically. Some Trojans come embedded within email attachments.
Do not use peer-to-peer sharing networks such as Kazaa or Limewire. These leave open ports on your computer when you are sharing your data with others on the Internet. These networks are generally unprotected from Trojans and other viruses.
Some of the well-known Trojans include Back Orifice (and Back Orifice 2000), Beast Trojan, NetBus, SubSeven, and Downloader EV.
Logic bombs and time bombs are types of specially written malicious code that reside in a particular system and wait for some condition to be met or for a specific event to happen before triggering. A logic bomb is a virus, and a time bomb is a Trojan. For example, a programmer may plant special code that deletes all data and other files from a system as soon as he leaves the company (a logic bomb); the action may trigger as soon as the administrator deletes or disables the programmer's account on the network. Another programmer may write code that waits for a specific date, such as April 1st (April Fools' Day), to trigger (a time bomb).
Wardialing is used against remote access networks to gain access to a remote access server by dialing a large block of known telephone numbers. The attacker uses an application known as a war dialer to automatically dial the block of numbers in search of a server that will respond. These applications also log whatever information they find on the remote servers. It is uncommon to find modems connected without the knowledge of administrators, but the attacker works on the theory of probability: if he is able to access any server that has a connected modem, and it responds to his dialing attempts, the attacker has succeeded in penetrating the organization's network.
Dumpster diving is the process of physically "diving" into trash containers and collecting pieces of information from corporate or domestic waste. People often throw away pieces of paper or other items that contain personal information such as their name, address, phone number, date of birth, Social Security number, etc. A dumpster diver may collect this information and use it for his benefit. In large organizations, users even throw away pieces of paper that contain their usernames and passwords. To prevent dumpster diving, it is useful to get a good paper shredder so that papers are destroyed before they are thrown into trash.
Social engineering refers to the process of getting personal or confidential information about someone by taking him into confidence. The so-called "social engineer" generally tricks the victim over the telephone or on the Internet into revealing sensitive information. Instead of exploiting any security vulnerabilities in computer systems, the attacker capitalizes on the victim's own tendency of trusting someone.
Social engineering also involves face-to-face interactions between a computer user and an attacker, in which the attacker gains access to the computer by winning the victim's confidence. It may also come in the form of an email attachment that asks the user to give away confidential information to the sender of the message. Phishing attacks are a very common outcome of social engineering. In a phishing attack, users of computer systems indulge in seemingly interesting exchanges over the Internet or over the phone with unknown attackers, and unwittingly reveal sensitive information such as their passwords or credit card numbers.
Unfortunately, no technical configuration of systems or networks can protect an organization from social engineering. There is no firewall that can stop social engineering attacks. The best protection against social engineering is to train the users about the security policies of the organization.
When you install an operating system, several services and protocols are installed by default. Chances are good that most of these services or protocols will never be used, but they may leave the system vulnerable to outside attacks. In order to protect and secure the system from potential attacks, it is necessary that any nonessential services and protocols be identified and disabled or be completely removed from the system. This not only improves system performance but also helps to fill in possible security holes.
Nonessential services include those system services and applications that are not used on a server or a desktop computer. For example, services such as the Dynamic Host Configuration Protocol (DHCP), DNS, FTP, Telnet, or the Remote Access Service (RAS) are mostly configured on servers and may never be used on desktops. These services not only consume system resources but also make the system vulnerable to outside attacks. These and other services that are not required on a system should be disabled as part of your actions to maintain a secure network environment. If a system is not part of an Active Directory domain, you may remove the directory services and DNS from a Windows system. Similarly, if a system does not require file and print services, these may be disabled or removed.
Nonessential protocols are those that are not used on a particular desktop or server. For example, if a system does not connect to legacy Windows systems, you may remove the NetBIOS protocol. Similarly, you may disable the Internet Control Message Protocol (ICMP) if you do not want the system to respond to system management queries. If there are no NetWare servers in the network, there is no reason to keep the Internetwork Packet Exchange/Sequenced Packet Exchange (IPX/SPX) protocol installed on any system.
There may be situations where you need some protocols and services installed on certain systems but not on others. A thorough study of all services and protocols that are installed by default on each system is a good way to help decide which services and protocols are not required and can be disabled.
You must be very careful when disabling or removing nonessential services and protocols on a system. Some services depend on other services that may otherwise seem to be nonessential. Removing a service that other services depend on may leave those dependent services, or the system itself, unusable.