Network Ports, Services, and Threats
Network Design Elements and Components
In today’s network infrastructures, it is critical to know the fundamentals of basic security infrastructure. Before any computer is connected to the Internet, planning must occur to make sure that the network is designed in a secure manner. Many of the attacks that hackers use succeed because of an insecure network design. That is why it is so important for a security professional to use secure topologies and tools such as intrusion detection and prevention. Another example is virtual local area networks (VLANs), which confine a broadcast domain to a group of switch ports. This relates directly to secure topologies, because different Internet Protocol (IP) subnets can be put on different port groupings and separated, either by routing or by applying an access control list (ACL). This allows for separation of network traffic; for example, the executive group can be isolated from the general user population on a network.
Other items related to topology that we examine in this chapter include demilitarized zones (DMZs). We will explore how DMZs can be used in conjunction with network address translation (NAT) and extranets to help build a more secure network. By understanding each of these items, you will see how they can be used to build a layered defense against attack.
This chapter also covers intrusion detection. It is important to understand not only the concepts of intrusion detection, but also the use and placement of intrusion detection systems (IDSes) within a network infrastructure. The placement of an IDS is critical to deployment success. We will also cover intrusion prevention systems (IPSes), honeypots, honeynets, and incident response and how they each have a part to play in protecting your network environment.
TEST DAY TIP An ACL is a list of users who have permission to access a resource or modify a file. ACLs are used in nearly all modern-day operating systems (OSes) to determine what permissions a user has on a particular resource or file.
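As a rough illustration of the concept in the tip above, an ACL can be modeled as a mapping from resources to per-user permission sets. The structure and names below are hypothetical, not any real OS's API:

```python
# Toy model of an ACL: each resource maps users to their granted permissions.
# The resource and user names here are purely illustrative.
ACL = {
    "payroll.xlsx": {"alice": {"read", "write"}, "bob": {"read"}},
}

def has_permission(user, resource, permission):
    """Return True only if the ACL explicitly grants the permission."""
    entries = ACL.get(resource, {})
    return permission in entries.get(user, set())
```

Note the deny-by-default behavior: a user or resource absent from the list gets no access at all, which mirrors how most OS permission checks work.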
All networks contain services that provide some type of functionality. Some of these services are essential to the health of the network or required for user functionality, but others can be disabled or removed because they are superfluous. When services that are not actively being used exist on a network, the chances of exploitation increase. Simply having a service enabled offers hackers an additional opportunity to attempt entrance into your infrastructure. If a service is required and utilized in the organization, it becomes your job as the administrator to safeguard the service and ensure that all is in working order. When a network service is installed and made available but is not in use or required by the organization, there is a tendency for the service to fall out of view. It may not be noticed or monitored by system administrators, which provides a perfect mechanism for malicious attackers: they can hammer away at your environment, seemingly without your knowledge, in an attempt to breach it.
When you are considering whether to enable and disable services, there are things that must be considered to protect the network and its internal systems. It is important to evaluate the current needs and conditions of the network and infrastructure, and then begin to eliminate unnecessary services. This leads to a cleaner network structure, which then becomes less vulnerable to attack.
Not all networks are created the same; thus, not all networks should be physically laid out in the same fashion. The judicious usage of differing security topologies in a network can offer enhanced protection and performance. We will discuss the components of a network and the security implications of each. By understanding the fundamentals of each component and being able to design a network with security considerations in mind, you will be able to better prepare yourself and your environment for the inevitable barrage of attacks that take place every day. With the right planning and design you will be able to minimize the impact of attacks, while successfully protecting important data.
Many tools that exist today can help you to better manage and secure your network environment. We will focus on a few specific tools that give you the visibility that is needed to keep your network secure, especially intrusion detection and protection, firewalls, honeypots, content filters, and protocol analyzers. These tools will allow network administrators to monitor, detect, and contain malicious activity in any network environment. Each of these tools plays a different part in the day-to-day work of a network administrator and makes sure that you are well armed and well prepared to handle whatever malicious attacks might come your way.
A successful security strategy requires many layers and components. One of these components is the intrusion detection system (IDS) and the newer derivation of this technology, the intrusion prevention system (IPS). Intrusion detection is an important piece of security in that it acts as a detective control. A simple analogy for an intrusion detection system is a house alarm: when an intruder enters the house through an entry point that is monitored by the alarm, the siren sounds and the police are alerted. Similarly, an intrusion prevention system would not only sound the siren and alert the police, but would also kick the intruders out of the house and keep them out by closing the window and locking it automatically. The big distinction between an IDS/IPS and a firewall or other edge screening device is that the latter are not capable of the detailed inspection of network traffic patterns and behavior needed to match known attack signatures. They are therefore unable to reliably detect or prevent developing or in-progress attacks.
The simplest definition of an IDS is “a specialized tool that can detect and identify malicious traffic or activity in a network or on a host.” To achieve this an IDS often utilizes a database of known attack signatures which it can use to compare patterns of activity, traffic, or behavior it sees in the network or on a host. Once an attack has been identified the IDS can issue alarms or alerts or take a variety of actions to terminate the attack. These actions typically range from modifying firewall or router access lists to block the connection from the attacker to using a TCP reset to terminate the connection at both the source and the target. In the end the final goal is the same—interrupt the connection between the attacker and the target and stop the attack.
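The signature-matching step can be sketched as a search for known byte patterns in captured traffic. The signatures below are toy examples; real IDS rule languages (such as Snort's) are far richer, matching on headers, offsets, and stateful conditions as well as content:

```python
# Toy signature database: byte pattern -> human-readable alert name.
# These two patterns are illustrative placeholders, not real IDS rules.
SIGNATURES = {
    b"/etc/passwd": "path traversal attempt",
    b"' OR '1'='1": "SQL injection attempt",
}

def inspect_payload(payload: bytes):
    """Return an alert for every known signature found in the payload."""
    return [name for pattern, name in SIGNATURES.items() if pattern in payload]
```

Once `inspect_payload` returns a nonempty list, the IDS would raise its alarm and, optionally, trigger one of the responses described above (ACL update, TCP reset, and so on).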
Like firewalls, intrusion detection systems may be software-based or may combine hardware and software (in the form of preinstalled and preconfigured standalone IDS devices). There are many opinions as to what is the best option. For the exam what’s important is to understand the differences. Often, IDS software runs on the same devices or servers where firewalls, proxies, or other boundary services operate. Although such devices tend to operate at the network periphery, IDS systems can detect and deal with insider attacks as well as external attacks as long as the sensors are placed appropriately to detect such attacks.
As we explained in Chapter 4, intrusion prevention systems (IPSes) are a possible line of defense against system attacks. By being proactive and defensive in your approach, as opposed to reactive, you stop more attempts at network access at the door. IPSes typically exist at the boundaries of your network infrastructure and function much like a firewall. The big distinction between IPSes and firewalls is that IPSes are smarter devices, in that they make determinations based on content as opposed to ports and protocols. By examining content at the application layer, an IPS can do a better job of protecting your network from things like worms and Trojans, before the destructive content is allowed into your environment.
An IPS is capable of responding to attacks when they occur. This behavior is desirable from two points of view. For one thing, a computer system can track behavior and activity in near-real time and respond much more quickly and decisively during the early stages of an attack. Because automation helps hackers mount attacks, it stands to reason that it should also help security professionals fend them off as they occur. For another thing, an IPS can stand guard 24 hours a day, 7 days a week, whereas network administrators may not be able to respond as quickly during off hours as they can during peak hours. By automating the response and moving these systems from detection to prevention, they gain the ability to block incoming traffic from one or more addresses from which an attack originates. This allows the IPS to halt an attack in progress and block future attacks from the same address.
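The automatic-blocking behavior described above can be sketched as a blocklist consulted on every packet. The function and its return values are illustrative only; a real IPS would update router ACLs or firewall rules rather than an in-memory set:

```python
# In-memory blocklist of source addresses that have triggered an alert.
blocked_sources = set()

def handle_packet(src_ip: str, is_malicious: bool) -> str:
    """Drop packets from blocked sources; block a source on its first alert."""
    if src_ip in blocked_sources:
        return "dropped"       # future traffic from the attacker is refused
    if is_malicious:
        blocked_sources.add(src_ip)  # halt the attack in progress
        return "blocked"
    return "forwarded"
```

The key point is the second line of the function: once an address is on the list, everything it sends is refused, even if later packets look benign.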
Network intrusion detection systems (NIDSes) and network intrusion prevention systems (NIPSes) are similar in concept, and at first glance an NIPS seems to be simply an extension of an NIDS, but in actuality the two systems are complementary and behave in a cooperative fashion. An NIDS exists for the purpose of catching malicious activity once it has arrived in your world. Whether the NIDS in your DMZ or the one in your intranet captures the offending activity is immaterial; in both instances the activity is occurring within your network environment. With an NIPS, the activity is typically detected at the perimeter and disallowed from entering the network.
By deploying an NIDS and an NIPS you provide for a multilayered defense and ideally your NIPS is able to thwart attacks approaching your network from the outside in. Anything that makes it past the NIPS ideally would then be caught by the NIDS inside the network. Attacks originating from inside the network would also be addressed by the NIDS.
Head of the Class
Weighing IDS Options
In addition to the various IDS and IPS vendors mentioned in the list below, judicious use of a good Internet search engine can help network administrators to identify more potential suppliers than they would ever have the time or inclination to investigate in detail. That is why we also urge administrators to consider an alternative: deferring some or all of the organization’s network security technology decisions to a special type of outsourcing company. Known as managed security services providers (MSSPs), these organizations help their customers select, install, and maintain state-of-the-art security policies and technical infrastructures to match. For example, Guardent is an MSSP that includes comprehensive firewall, IDS and IPS services among its many customer offerings; visit www.guardent.com for a description of the company’s various service programs and offerings.
A huge number of potential vendors can provide IDS and IPS products to companies and organizations. Without specifically endorsing any particular vendor, the following products offer some of the most widely used and best-known solutions in this product space:
Cisco Systems It is best known for its switches and routers, but offers significant firewall and intrusion detection products as well (www.cisco.com).
GFI LANguard It is a family of monitoring, scanning, and file integrity check products that offer broad intrusion detection and response capabilities (www.gfi.com/languard/).
TippingPoint It is a division of 3Com that makes an inline IPS device that is considered one of the first IPS devices on the market.
Internet Security Systems (ISS) (a division of IBM) ISS offers a family of enterprise-class security products called RealSecure, which includes comprehensive intrusion detection and response capabilities (www.iss.net).
McAfee It offers the IntruShield IPS systems that can handle gigabit speeds and greater (www.mcafee.com).
Sourcefire It is the best known vendor of open source IDS software as it is the developer of Snort, which is an open source IDS application that can be run on Windows or Linux systems (www.snort.org).
Head of the Class
Getting Real Experience Using an IDS
One of the best ways to get some experience using IDS tools like TCPDump and Snort is to check out one of the growing number of bootable Linux OSes. Because all of the tools are precompiled and ready to run right off the CD, you only have to boot the computer to the disk. One good example of such a bootable disk is Backtrack. This CD-based Linux OS actually has more than 300 security tools that are ready to run. Learn more at www.remote-exploit.org/backtrack.html.
A clearinghouse for ISPs known as ISP-Planet offers all kinds of interesting information online about MSSPs, plus related firewall, virtual private networking (VPN), intrusion detection, security monitoring, antivirus, and other security services. For more information, visit any or all of the following URLs:
ISP-Planet Survey Managed Security Service Providers, participating provider’s chart, www.isp-planet.com/technology/mssp/participants_chart.html.
Managed firewall services chart, www.isp-planet.com/technology/mssp/firewalls_chart.html.
Managed VPN chart, www.isp-planet.com/technology/mssp/services_chart.html.
Managed intrusion detection and security monitoring, www.isp-planet.com/technology/mssp/monitoring_chart.html.
Managed antivirus and managed content filtering and URL blocking, www.isp-planet.com/technology/mssp/mssp_survey2.html.
Managed vulnerability assessment and emergency response and forensics, www.isp-planet.com/technology/mssp/mssp_survey3.html.
Exercise 1 introduces you to WinDump. This tool is similar to the Linux tool TCP-Dump. It is a simple packet-capture program that can be used to help demonstrate how IDS systems work. All IDS systems must first capture packets so that the traffic can be analyzed.
1. Go to www.winpcap.org/windump/install/
2. At the top of the page you will see a link for WinPcap. This program will need to be installed first, as it allows the capture of low-level packets.
3. Next, download and install the WinDump program from the link indicated on the same Web page.
4. You’ll now need to open a command prompt by clicking Start, Run, and entering cmd in the Open Dialog box.
5. With a command prompt open, you can now start the program by typing WinDump from the command line. By default, it will use the first Ethernet adapter found. You can display the help screen by typing windump -h. The example below specifies the second adapter.
C:\>windump -i 2
6. You should now see the program running. If there is little traffic on your network, you can open a second command prompt and ping a host such as www.yahoo.com. The results should be seen in the screen you have open that is running WinDump as seen below.
windump: listening on \Device\eth0
14:07:02.563213 IP earth.137 > 192.168.123.181.137: UDP, length 50
14:07:04.061618 IP earth.137 > 192.168.123.181.137: UDP, length 50
14:07:05.562375 IP earth.137 > 192.168.123.181.137: UDP, length 50
A firewall is the most common device used to protect an internal network from outside intruders. When properly configured, a firewall blocks access to an internal network from the outside, and blocks users of the internal network from accessing potentially dangerous external networks or ports.
There are three firewall technologies examined in the Security+ exam:
Packet filtering
Application layer gateways
Stateful inspection
Head of the Class
What Is a Firewall?
A firewall is a security system that is intended to protect an organization’s network against external threats, such as hackers, coming from another network, such as the Internet.
In simple terms, a firewall is a hardware or software device used to keep undesirables electronically out of a network the same way that locked doors and secured server racks keep undesirables physically away from a network. A firewall filters traffic crossing it (both inbound and outbound) based on rules established by the firewall administrator. In this way, it acts as a sort of digital traffic cop, allowing some (or all) of the systems on the internal network to communicate with some of the systems on the Internet, but only if the communications comply with the defined rule set.
All of these technologies have advantages and disadvantages, but the Security+ exam specifically focuses on their abilities and the configuration of their rules. A packet-filtering firewall works at the network layer of the Open Systems Interconnect (OSI) model and is designed to operate rapidly by either allowing or denying packets. The second generation of firewalls, called “circuit-level firewalls,” has been largely abandoned, as later generations of firewalls absorbed their functions. An application layer gateway operates at the application layer of the OSI model, analyzing each packet and verifying that it contains the correct type of data for the specific application it is attempting to communicate with. A stateful inspection firewall checks each packet to verify that it is an expected response to a current communications session. This type of firewall operates at the network layer, but is aware of the transport, session, presentation, and application layers and derives its state table based on these layers of the OSI model. Another term for this type of firewall is a “deep packet inspection” firewall, indicating its use of all layers within the packet, including examination of the data itself.
To better understand the function of these different types of firewalls, we must first understand what exactly the firewall is doing. The highest level of security requires that firewalls be able to access, analyze, and utilize communication information, communication-derived state, and application-derived state, and be able to perform information manipulation. Each of these terms is defined below:
Communication Information Information from all layers in the packet.
Communication-derived State The state as derived from previous communications.
Application-derived State The state as derived from other applications.
Information Manipulation The ability to perform logical or arithmetic functions on data in any part of the packet.
Different firewall technologies support these requirements in different ways. Again, keep in mind that some circumstances may not require all of these, but only a subset. In that case, it is best to go with a firewall technology that fits the situation rather than one that is simply the newest technology. Table 6.1 shows the firewall technologies and their support of these security requirements.
A proxy server is a server that sits between an intranet and its Internet connection. Proxy servers provide features such as document caching (for faster browser retrieval) and access control. Proxy servers can provide security for a network by filtering and discarding requests that are deemed inappropriate by an administrator. Proxy servers also protect the internal network by masking all internal IP addresses—all connections to Internet servers appear to be coming from the IP address of the proxy servers.
A network layer firewall or a packet-filtering firewall works at the network layer of the OSI model and can be configured to deny or allow access to specific ports or IP addresses. The two policies that can be followed when creating packet-filtering firewall rules are allow by default and deny by default. Allow by default allows all traffic to pass through the firewall except traffic that is specifically denied. Deny by default blocks all traffic from passing through the firewall except for traffic that is explicitly allowed.
Deny by default is the best security policy, because it follows the general security concept of restricting all access to the minimum level necessary to support business needs. The best practice is to deny access to all ports except those that are absolutely necessary. For example, if configuring an externally facing firewall for a demilitarized zone (DMZ), Security+ technicians may want to deny all ports except port 443 (the Secure Sockets Layer [SSL] port) to require all connections coming in to the DMZ to use Hypertext Transfer Protocol Secure (HTTPS) to connect to the Web servers. Although it is not practical to assume that only one port will be needed, the idea is to keep access to a minimum by following the best practice of denying by default.
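A deny-by-default rule set can be sketched as a whitelist of permitted (port, protocol) pairs; anything not explicitly listed is dropped. This is a toy model, not a real firewall API:

```python
# Explicitly allowed (destination port, protocol) pairs.
# Here only HTTPS into the DMZ is permitted, matching the example above.
ALLOWED = {(443, "tcp")}

def filter_packet(dst_port: int, protocol: str) -> bool:
    """Return True to pass the packet, False to drop it (deny by default)."""
    return (dst_port, protocol) in ALLOWED
```

Because the default branch is a drop, adding a new service means consciously adding a rule, which keeps the exposed surface as small as the business need allows.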
A firewall works in two directions. It can be used to keep intruders at bay, and it can be used to restrict access to an external network from its internal users. Why do this? A good example is found in some Trojan horse programs. When Trojan horse applications are initially installed, they report back to a centralized location to notify the author or distributor that the program has been activated. Some Trojan horse applications do this by reporting to an Internet Relay Chat (IRC) channel or by connecting to a specific port on a remote computer. By denying access to these external ports in the firewall configuration, Security+ technicians can prevent these malicious programs from compromising their internal network.
The Security+ exam extensively covers ports and how they should come into play in a firewall configuration. The first thing to know is that of the 65,536 total ports (numbered 0 through 65,535), ports 0 through 1,023 are considered well-known ports. These ports are used for specific network services and are generally the only ports that should be allowed to transmit traffic through a firewall. Ports outside the range of 0 through 1,023 are either registered ports or dynamic/private ports.
Registered (user) ports range from 1,024 through 49,151.
Dynamic/private ports range from 49,152 through 65,535.
If there are no specialty applications communicating with a network, any connection attempt to a port outside the well-known ports range should be considered suspect. Although there are some network applications that work outside of this range that may need to go through a firewall, they should be considered the exception and not the rule. With this in mind, ports 0 through 1,023 still should not be enabled. Many of these ports also offer vulnerabilities; therefore, it is best to continue with the best practice of denying by default and only opening the ports necessary for specific needs.
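The three IANA ranges described above translate directly into a simple classification function (a sketch for review purposes):

```python
def classify_port(port: int) -> str:
    """Classify a TCP/UDP port number into the IANA ranges listed above."""
    if not 0 <= port <= 65535:
        raise ValueError("port number out of range")
    if port <= 1023:
        return "well-known"
    if port <= 49151:
        return "registered"
    return "dynamic/private"
```

A connection attempt whose destination classifies as anything other than "well-known" (and is not one of your known specialty applications) is worth flagging for review.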
For a complete list of assigned ports, visit the Internet Assigned Numbers Authority (IANA) at www.iana.net. The direct link to their list of ports is at www.iana.org/assignments/port-numbers. The IANA is the centralized organization responsible for assigning IP addresses and ports. They are also the authoritative source for which ports applications are authorized to use for the services the applications are providing.
Damage and Defense
Denial-of-Service Attacks
A port is a connection point into a device. Ports can be physical, such as serial ports or parallel ports, or they can be logical. Logical ports are ports used by networking protocols to define a network connection point to a device. Using Transmission Control Protocol/Internet Protocol (TCP/IP), both TCP and User Datagram Protocol (UDP) logical ports are used as connection points to a network device. Because a network device can have thousands of connections active at any given time, these ports are used to differentiate between the connections to the device.
A port is described as well known for a particular service when it is normal and common to find that particular software running at that particular port number. For example, Web servers run on port 80 by default, and File Transfer Protocol (FTP) file transfers use ports 20 and 21 on the server when it is in active mode. In passive mode, the server uses a random port for data connection and port 21 for the control connection.
To determine what port number to use, technicians need to know what port number the given software is using. To make that determination easier, there is a list of common services that run on computers along with their respective well-known ports. This allows the technician to apply the policy of denying by default, and only open the specific port necessary for the application to work. For example, if they want to allow the Siebel Customer Relationship Management application from Oracle to work through a firewall, they would check against a port list (or the vendor’s documentation) to determine that they need to allow traffic to port 2,320 through the firewall. A good place to search for port numbers and their associated services online is Wikipedia. This list is fairly up to date and can help you find information on a very large number of services running on all ports (http://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers). You will notice that even Trojan horse applications have well-known port numbers. A few of these are listed in Table 6.2.
EXAM WARNING The Security+ exam requires that you understand how the FTP process works. There are two modes in which FTP operates: active and passive.
Active Mode
1. The FTP client initializes a control connection from a random port higher than 1,023 to the server’s port 21.
2. The FTP client sends a PORT command instructing the server to connect to a port on the client one higher than the client’s control port. This is the client’s data port.
3. The server sends data to the client from server port 20 to the client’s data port.
Passive Mode
1. The FTP client initializes a random port higher than 1,023 as the control port, and initializes the port one higher than the control port as the data port.
2. The FTP client sends a PASV command instructing the server to open a random data port.
3. The server responds, notifying the client of the data port number that was just initialized.
4. The FTP client then sends data from the data port it initialized to the data port the server instructed it to use.
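The port arithmetic in the active-mode steps above can be sketched as follows; the function is purely illustrative and opens no real connections:

```python
import random

def active_mode_ports():
    """Sketch of the port choices made during FTP active mode.

    Returns (client control, client data, server control, server data).
    No sockets are opened; this only models the numbering rules."""
    client_control = random.randint(1024, 65534)  # random port above 1,023
    client_data = client_control + 1              # one higher than control
    server_control, server_data = 21, 20          # well-known server ports
    return client_control, client_data, server_control, server_data
```

This is also why active mode is firewall-unfriendly on the client side: the server initiates the data connection inbound from port 20 to an arbitrary high port on the client.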
Unfortunately for nearly every possible port number, there is a virus or Trojan horse application that could be running there. For a more comprehensive list of Trojans listed by the port they use, go to the SANS Institute Web site at www.sans.org/resources/idfaq/oddports.php.
EXAM WARNING The Security+ exam puts a great deal of weight on your knowledge of specific well-known ports for common network services. The most important ports to remember are:
20 FTP Active Mode Data Port (see the Security+ exam warning on FTP for further information)
21 FTP Control Port (see the Security+ exam warning on FTP for further information)
22 Secure Shell (SSH)
23 Telnet
25 Simple Mail Transfer Protocol (SMTP)
80 HTTP
110 Post Office Protocol 3 (POP3)
119 Network News Transfer Protocol (NNTP)
143 Internet Message Access Protocol (IMAP)
443 HTTPS (HTTP over SSL)
Memorizing these ports and the services that run on them will help you with firewall and network access questions on the Security+ exam.
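For review, the ports listed above can be collected into a simple lookup table:

```python
# The well-known ports from the exam warning above, as a quick lookup table.
WELL_KNOWN = {
    20: "FTP data", 21: "FTP control", 22: "SSH", 23: "Telnet",
    25: "SMTP", 80: "HTTP", 110: "POP3", 119: "NNTP",
    143: "IMAP", 443: "HTTPS",
}

def service_for(port: int) -> str:
    """Return the service name for a well-known port, or 'unknown'."""
    return WELL_KNOWN.get(port, "unknown")
```

Quizzing yourself against such a table (in either direction, port to service and service to port) is a quick way to prepare for the firewall questions.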
Packet filtering has both benefits and drawbacks. One of the benefits is speed. Because only the header of a packet is examined and a simple table of rules is checked, this technology is very fast. A second benefit is ease of use. The rules for this type of firewall are easy to define and ports can be opened or closed quickly. In addition, packet-filtering firewalls are transparent to network devices. Packets can pass through a packet-filtering firewall without the sender or receiver of the packet being aware of the extra step. A major bonus of using a packet-filtering firewall is that most current routers support packet filtering.
There are two major drawbacks to packet filtering:
A port is either open or closed. With this configuration, there is no way of simply opening a port in the firewall when a specific application needs it and then closing it when the transaction is complete. When a port is open, there is always a hole in the firewall waiting for someone to attack.
The second major drawback to packet filtering is that it does not understand the contents of any packet beyond the header. Therefore, if a packet has a valid header, it can contain any payload. This is a common failing point that is easily exploited.
To expand on this: because only the header is examined, packets can be filtered only by IP address, not by user name. With some network services, such as Trivial File Transfer Protocol (TFTP) or various UNIX “r” commands (rsh, rcp, and so forth), this can cause a problem. Because the port for these services is either open or closed for all users, the options are either to restrict system administrators from using the services, or to invite the possibility of any user connecting and using them. The operation of this firewall technology is illustrated in Figure 6.1.
Referring to Figure 6.1 the sequence of events is as follows:
1. Communication from the client starts by going through the seven layers of the OSI model.
2. The packet is then transmitted over the physical media to the packet-filtering firewall.
3. The firewall works at the network layer of the OSI model and examines the header of the packet.
4. If the packet is destined for an allowed port, the packet is sent through the firewall over the physical media and up through the layers of the OSI model to the destination address and port.
FIGURE 6.1
Packet Filtering Technology
The second firewall technology is called application filtering or an application-layer gateway. This technology is more advanced than packet filtering, as it examines the entire packet and determines what should be done with the packet based on specific defined rules. For example, with an application-layer gateway, if a Telnet packet is sent through the standard FTP port, the firewall can determine this and block the packet if a rule is defined disallowing Telnet traffic through the FTP port. It should be noted that this technology is used by proxy servers to provide application-layer filtering to clients.
One of the major benefits of application-layer gateway technology is its application-layer awareness. Because application-layer gateway technology can determine more information from a packet than a simple packet filter can, application-layer gateway technology uses more complex rules to determine the validity of any given packet. These rules take advantage of the fact that application-layer gateways can determine whether data in a packet matches what is expected for data going to a specific port. For example, the application-layer gateway can tell if packets containing controls for a Trojan horse application are being sent to the HTTP port (80) and thus, can block them.
Although application-layer gateway technology is much more advanced than packet-filtering technology, it does have its drawbacks. Because every packet is disassembled completely and then checked against a complex set of rules, application-layer gateways are much slower than the packet filters. In addition, only a limited set of application rules are predefined, and any application not included in the predefined list must have custom rules defined and loaded into the firewall. Finally, application-layer gateways process the packet at the application layer of the OSI model. By doing so, the application-layer gateway must then rebuild the packet from the top down and send it back out. This breaks the concept behind client/server architecture and slows the firewall down even further.
Client/server architecture is based on the concept of a client system requesting the services of a server system. This was developed to increase application performance and cut down on the network traffic created by earlier file sharing or mainframe architectures. When using an application-layer gateway, the client/server architecture is broken as the packets no longer flow between the client and the server. Instead, they are deconstructed and reconstructed at the firewall. The client makes a connection to the firewall at which point the packet is analyzed, then the firewall creates a connection to the server for the client. By doing this, the firewall is acting as a proxy between the client and the server. The operation of this technology is illustrated in Figure 6.2.
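The content checks an application-layer gateway performs can be sketched as per-port payload validators. The checks below are deliberately naive placeholders, nothing like a production gateway's full protocol parsers:

```python
# Per-port validators: does this payload "look like" the expected protocol?
# The command lists are simplified illustrations of FTP and HTTP traffic.
EXPECTED = {
    21: lambda p: p.split(b" ")[0] in {b"USER", b"PASS", b"RETR",
                                       b"STOR", b"PORT", b"PASV"},
    80: lambda p: p.startswith((b"GET ", b"POST ", b"HEAD ")),
}

def gateway_allows(dst_port: int, payload: bytes) -> bool:
    """Allow only payloads matching the application expected on the port."""
    check = EXPECTED.get(dst_port)
    return bool(check and check(payload))
```

This mirrors the Telnet-over-port-21 example above: a Telnet negotiation sent to the FTP port fails the FTP validator and is blocked, even though a packet filter would have passed it.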
A honeypot is a computer system that is deliberately exposed to public access—usually on the Internet—for the express purpose of attracting and distracting attackers. In other words, these are the technical equivalent of the familiar police “sting” operation. Although the strategy involved in luring hackers to spend time investigating attractive network devices or servers can cause its own problems, finding ways to lure intruders into a system or network improves the odds of being able to identify those intruders and pursue them more effectively. Figure 6.3 shows a graphical representation of the honeypot concept in action.
FIGURE 6.2
Application-Layer Gateway Technology
Notes from the Field
Walking the Line between Opportunity and Entrapment
Most law enforcement officers are aware of the fine line they must walk when setting up a “sting”—an operation in which police officers pretend to be victims or participants in crime, with the goal of getting criminal suspects to commit an illegal act in their presence. Most states have laws that prohibit entrapment, that is, law enforcement officers are not allowed to cause a person to commit a crime and then arrest him or her for doing it. Entrapment is a defense to prosecution; if the accused person can show at trial that he or she was entrapped, the result must be an acquittal.
FIGURE 6.3
A Honeypot in Use to Keep Attackers from Affecting Critical Production Servers
Courts have traditionally held, however, that providing a mere opportunity for a criminal to commit a crime does not constitute entrapment. To entrap involves using persuasion, duress, or other undue pressure to force someone to commit a crime that the person would not otherwise have committed. Under this holding, setting up a honeypot or honeynet would be like the (perfectly legitimate) police tactic of placing an abandoned automobile by the side of the road and watching it to see if anyone attempts to burglarize, vandalize, or steal it. It should also be noted that entrapment only applies to the actions of law enforcement or government personnel. A civilian cannot entrap, regardless of how much pressure is exerted on the target to commit the crime. (However, a civilian could be subject to other charges, such as criminal solicitation or criminal conspiracy, for causing someone else to commit a crime.)
The following characteristics are typical of honeypots:
Systems or devices used as lures are set up with only “out of the box” default installations, so that they are deliberately made subject to all known vulnerabilities, exploits, and attacks.
The systems or devices used as lures do not include sensitive information (for example, passwords, data, applications, or services an organization depends on or must absolutely protect), so these lures can be compromised, or even destroyed, without causing damage, loss, or harm to the organization that presents them to be attacked.
Systems or devices used as lures often also contain deliberately tantalizing objects or resources, such as files named password.db, folders named Top Secret, and so forth—often consisting only of encrypted garbage data or log files of no real significance or value—to attract and hold an attacker’s interest long enough to give a backtrace a chance of identifying the attack’s point of origin.
Systems or devices used as lures also include or are monitored by passive applications that can detect and report on attacks or intrusions as soon as they start, so the process of backtracing and identification can begin as soon as possible.
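The passive monitoring described in the last characteristic can be sketched in a few lines of socket code: listen, record the source address and the first bytes of any probe, and hand the record off for backtracing. This loopback-only sketch is illustrative; a real honeypot sensor such as KFSensor does far more.

```python
# Sketch of a passive lure: accept one connection on an ephemeral
# loopback port and record who connected and what they sent first.
import socket
import threading
import queue

def lure(events: "queue.Queue"):
    """Listen on an ephemeral loopback port, record one probe, then exit."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))           # ephemeral port: nothing real runs here
    srv.listen(1)
    events.put(("port", srv.getsockname()[1]))
    conn, addr = srv.accept()
    data = conn.recv(1024)               # passively capture the first bytes sent
    events.put(("probe", addr[0], data))
    conn.close()
    srv.close()
```

Run in a background thread, the lure simply reports `(source address, payload)` tuples; the reporting and alerting layers (the red tray icon in the KFSensor exercise below, for instance) are built on exactly this kind of record.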
EXAM WARNING A honeypot is a computer system that is deliberately exposed to public access—usually on the Internet—for the express purpose of attracting and distracting attackers. Likewise, a honeynet is a network set up for the same purpose, where attackers not only find vulnerable services or servers, but also find vulnerable routers, firewalls, and other network boundary devices, security applications, and so forth. You must know these for the Security+ exam.
The honeypot technique is best reserved for use when a company or organization employs full-time Information Technology (IT) security professionals who can monitor and deal with these lures on a regular basis, or when law enforcement operations seek to target specific suspects in a “virtual sting” operation. In such situations, the risks are sure to be well understood, and proper security precautions, processes, and procedures are far more likely to already be in place (and properly practiced). Nevertheless, for organizations that seek to identify and pursue attackers more proactively, honeypots can provide valuable tools to aid in such activities.
Exercise 2 outlines the basic process to set up a Windows honeypot. Although there are many vendors of honeypots that run on both Windows and Linux computers, this exercise describes the installation of a commercial honeypot that can be used on a corporate network.
1. KFSensor is a Windows-based honeypot IDS that can be downloaded as a demo from www.keyfocus.net/kfsensor/.
2. Fill out the required information for download.
3. Once the program downloads, accept the install defaults and allow the program to reboot the computer to finish the install.
4. Once installed, the program will step you through a wizard process that will configure a basic honeypot.
5. Allow the system to run for some time to capture data. The program will install a sensor in the program tray that will turn red when the system is probed by an attacker.
A honeynet is a network that is set up for the same purpose as a honeypot: to attract potential attackers and distract them from your production network. In a honeynet, attackers will not only find vulnerable services or servers but also find vulnerable routers, firewalls, and other network boundary devices, security applications, and so forth.
The following characteristics are typical of honeynets:
Network devices used as lures are set up with only “out of the box” default installations, so that they are deliberately made subject to all known vulnerabilities, exploits, and attacks.
The devices used as lures do not include sensitive information (for example, passwords, data, applications, or services an organization depends on or must absolutely protect), so these lures can be compromised, or even destroyed, without causing damage, loss, or harm to the organization that presents them to be attacked.
Devices used as lures also include or are monitored by passive applications that can detect and report on attacks or intrusions as soon as they start, so the process of backtracing and identification can begin as soon as possible.
The Honeynet Project at www.honeynet.org is probably the best overall resource on the topic online; it not only provides copious information on the project’s work to define and document standard honeypots and honeynets, but it also does a great job of exploring hacker mindsets, motivations, tools, and attack techniques.
Although this technique of using honeypots or honeynets can help identify the unwary or unsophisticated attacker, it also runs the risk of attracting additional attention from savvier attackers. Honeypots or honeynets, once identified, are often publicized on hacker message boards or mailing lists, and thus become more subject to attacks and hacker activity than they otherwise might be. Likewise, if the organization that sets up a honeypot or honeynet is itself identified, its production systems and networks may also be subjected to more attacks than might otherwise occur.
Content filtering is the process used by various applications to examine content passing through and make a decision about the data based on a set of criteria. Actions are based on the analysis of the content, and the result is a decision to either block or allow the traffic.
Content filtering is commonly performed on e-mail and often applies to Web page access as well. Filtering out gambling or gaming sites from company machines may be a desired effect of management and can be achieved through content filtering. Examples of commercial content filters include WebSense and Secure Computing’s WebWasher/SmartFilter; open source examples include DansGuardian and Squid.
A protocol analyzer is used to examine network traffic as it travels along your Ethernet network. Protocol analyzers are called by many names, such as packet analyzer, network analyzer, and sniffer, but all function in the same basic way. As traffic moves across the network from machine to machine, the protocol analyzer takes a capture of each packet. This capture is essentially a photocopy, and the original packet is not harmed or altered. Capturing the data allows a malicious hacker to obtain your data and potentially piece it back together to analyze the contents.
Different protocol analyzers function differently, but the overall principle is the same. A sniffer is typically software installed on a machine that can then capture all the traffic on a designated network. Much of the traffic on the network will be destined for all machines, as in the case of broadcast traffic. These packets will be picked up and saved as part of the capture. Also, all traffic destined to and coming from the machine running the sniffer will be captured. To capture traffic addressed to or from another machine on the network, the sniffer should be run in promiscuous mode. If a hub exists on the network, this allows the capturing of all packets on the network regardless of their source or destination. Be aware that not all protocol analyzers support promiscuous mode, and having switches on the network makes promiscuous mode difficult to use because of the nature of switched traffic. In cases where a sniffer that runs in promiscuous mode is not available or is infeasible, it might make sense to use the built-in monitor port on the switch instead, if one exists. The monitor port exists to allow the capture of all data that passes through the switch. Depending on your network architecture, this could encompass one or many subnets.
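What the analyzer does with each captured packet is decode its fixed-format headers field by field. The sketch below decodes the first 20 bytes of an IPv4 header with Python’s struct module; the sample values used to exercise it are hand-built for illustration, not taken from a real capture.

```python
# Sketch of the decoding step inside a protocol analyzer: unpack the
# fixed 20-byte IPv4 header into its named fields.
import struct

def parse_ipv4_header(packet: bytes) -> dict:
    """Decode the first 20 bytes of an IPv4 header."""
    ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, cksum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", packet[:20])
    return {
        "version": ver_ihl >> 4,
        "header_len": (ver_ihl & 0x0F) * 4,   # IHL field is in 32-bit words
        "ttl": ttl,
        "protocol": proto,                    # 6 = TCP, 17 = UDP, 1 = ICMP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }
```

Full-featured analyzers apply the same idea recursively: once the IP header reveals protocol 6, the bytes that follow are decoded as a TCP header, and so on up the stack.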
In this section, we will discuss network ports, network services, and potential threats to your network. To properly protect your network, you need to first identify the existing vulnerabilities. As we will discuss, knowing what exists in your network is the best first defense. By identifying ports that are open but may not be in use, you will be able to begin to close the peepholes into your network from the outside world. By monitoring required services and removing all others, you reduce the opportunity for attack and begin to make your environment more predictable.
Also, by becoming familiar with the common network threats that exist today, you can take measures to prepare your environment to stand against them. The easiest way for a hacker to get into your environment is to exploit known vulnerabilities. By understanding how these threats work, you will be able to safeguard against them as well as possible and be ready when new threats arise.
As discussed earlier in Chapter 2, OS Hardening, unnecessary network ports and protocols in your environment should be eliminated whenever possible. Many internal networks today utilize TCP/IP as the primary protocol. This has resulted in the partial or complete elimination of such protocols as Internetwork Packet Exchange (IPX), Sequenced Packet Exchange (SPX), and/or NetBIOS Extended User Interface (NetBEUI). It is also important to look at the specific operational protocols used in a network such as Internet Control Messaging Protocol (ICMP), Internet Group Management Protocol (IGMP), Service Advertising Protocol (SAP), and the Network Basic Input/Output System (NetBIOS) functionality associated with Server Message Block (SMB) transmissions in Windows-based systems.
Notes from the Field
Eliminate External NetBIOS Traffic
One of the most common methods of obtaining access to a Windows-based system and then gaining control of that system is through NetBIOS traffic. Windows-based systems use NetBIOS in conjunction with SMB to exchange service information and establish secure channel communications between machines for session maintenance. If file and print sharing is enabled on a Windows computer, NetBIOS traffic can be viewed on the external network unless it has been disabled on the external interface. With the proliferation of digital subscriber line (DSL), Broadband, and other “always on” connections to the Internet, it is vital that this functionality be disabled on all interfaces exposed to the Internet.
When considering the removal of nonessential protocols, it is important to look at every area of the network to determine what is actually occurring and running on the system. The appropriate tools are needed to do this, and the Internet contains a wealth of resources for tools and information to analyze and inspect systems.
A number of functional (and free) tools can be found at sites such as www.foundstone.com/knowledge/free_tools.html. Among these, tools like SuperScan 3.0 are extremely useful in the evaluation process. Monitoring a mixed environment of Windows, UNIX, Linux, and/or NetWare machines can be accomplished using tools such as Big Brother, which may be downloaded and evaluated (or in some cases used without charge) by visiting www.bb4.com, or Nagios, which can be found at www.nagios.org. Another useful tool is Nmap, a port scanner, which is available at http://insecure.org/nmap/. These tools can be used to scan, monitor, and report on multiple platforms, giving a better view of what is present in an environment. In UNIX- and Linux-based systems, nonessential services can be controlled in a variety of ways, depending on the distribution being worked with. This may include editing or making changes to configuration files like xinetd.conf or inetd.conf, the use of graphical administration tools like linuxconf or webmin in Linux, or the use of facilities like svcadm in Solaris. It may also include the use of ipchains, iptables, pf, or ipfilter in various versions to restrict the options available for connection at a firewall.
NOTE As you begin to evaluate the need to remove protocols and services, make sure that the items you are removing are within your area of control. Consult with your system administrator on the appropriate action to take, and make sure you have prepared a plan to back out and recover if you found that you have removed something that is later deemed necessary or if you make a mistake.
EXAM WARNING The Security+ exam can ask specific questions about ports and what services they support. It’s advisable to learn common ports before attempting the exam. Here are some common ports and services:
21 FTP
22 Secure Shell (SSH)
23 Telnet
25 Simple Mail Transfer Protocol (SMTP)
53 DNS
80 HTTP
110 Post Office Protocol (POP)
161 Simple Network Management Protocol (SNMP)
443 HTTPS (HTTP over SSL)
Memorizing these will help you with the Security+ exam.
Modern Windows-based platforms allow the configuration of OS and network services from provided administrative tools. This can include a service applet in a control panel or a Microsoft Management Console (MMC) tool in a Windows XP/ Vista/2003/2008 environment. It may also be possible to check or modify configurations at the network adaptor properties and configuration pages. In either case, it is important to restrict access and thus limit vulnerability due to unused or unnecessary services or protocols.
Let’s take a moment to use a tool to check what protocols and services are running on systems in a network. This will give you an idea of what you are working with. Exercise 3 uses Nmap to look at the configuration of a network, specifically to generate a discussion and overview of the services and protocols that might be considered when thinking about restricting access at various levels. Nmap is used to scan ports, and while it is not a full-blown security scanner, it can identify additional information about a service that can be used to determine an exploit that could be effective. Security scanners that can be used to detail existing vulnerabilities include products like Nessus and LANguard Network Security Scanner. If using a UNIX-based platform, a number of evaluation tools have been developed, such as Amap, p0f, and Nessus, which can perform a variety of port and security scans. In Exercise 3, you will scan a network to identify potential vulnerabilities.
In this exercise, you will examine a network to identify open ports and what could be potential problems or holes in specific systems. In this exercise, you are going to use Nmap, which you can download and install for free prior to starting the exercise by going to http://insecure.org/nmap/download.html and selecting the download tool. This tool is available for Windows or Linux computers.
To begin the exercise, launch Nmap from the command line. You will want to make sure that you install the program into a folder that is in the path or that you open it from the installed folder. When you have opened a command line prompt, complete the exercise by performing the following steps:
1. From the command line type Nmap. This should generate the following response:
C:\>nmap
Nmap V. 4.20 Usage: nmap [Scan Type(s)] [Options] <host or net list>
Some Common Scan Types ('*' options require root privileges)
* -sS TCP SYN stealth port scan (default if privileged (root))
  -sT TCP connect() port scan (default for unprivileged users)
* -sU UDP port scan
  -sP ping scan (Find any reachable machines)
* -sF,-sX,-sN Stealth FIN, Xmas, or Null scan (experts only)
  -sR/-I RPC/Identd scan (use with other scan types)
Some Common Options (none are required, most can be combined):
* -O Use TCP/IP fingerprinting to guess remote operating system
  -p <range> ports to scan. Example range: '1-1024,1080,6666,31337'
  -F Only scans ports listed in nmap-services
  -v Verbose. Its use is recommended. Use twice for greater effect.
  -P0 Don't ping hosts (needed to scan www.microsoft.com and others)
* -Ddecoy_host1,decoy2[,...] Hide scan using many decoys
  -T <Paranoid|Sneaky|Polite|Normal|Aggressive|Insane> General timing policy
  -n/-R Never do DNS resolution/Always resolve [default: sometimes resolve]
  -oN/-oX/-oG <logfile> Output normal/XML/grepable scan logs to <logfile>
  -iL <inputfile> Get targets from file; Use '-' for stdin
* -S <your_IP>/-e <devicename> Specify source address or network interface
  --interactive Go into interactive mode (then press h for help)
  --win_help Windows-specific features
Example: nmap -v -sS -O www.my.com 192.168.0.0/16 '192.88-90.*.*'
2. This should give you some idea of some of the types of scans that Nmap can perform. Notice the first and second entries. The -sS is a Transmission Control Protocol (TCP) stealth scan, and the -sT is a TCP full connect. The difference in these is that the stealth scan does only two of the three steps of the TCP handshake, while the full connect scan does all three steps and is slightly more reliable.
Now run Nmap with the -sT option and configure it to scan the entire subnet. The following gives an example of the proper syntax.
C:\>nmap -sT 192.168.1.1-254
3. The scan may take some time. On a large network expect the tool to take longer as there will be many hosts for it to scan.
4. When the scan is complete the results will be returned that will look similar to those shown here.
Interesting ports on (192.168.1.17):
(The 1600 ports scanned but not shown below are in state: filtered)
Interesting ports on (192.168.1.18):
(The 1594 ports scanned but not shown below are in state: filtered)
Interesting ports on (192.168.1.19):
(The 1594 ports scanned but not shown below are in state: filtered)
Interesting ports on VENUS (192.168.1.20):
(The 1596 ports scanned but not shown below are in state: filtered)
Interesting ports on PLUTO (192.168.1.21):
(The 1596 ports scanned but not shown below are in state: filtered)
Interesting ports on (192.168.1.25):
(The 1598 ports scanned but not shown below are in state: filtered)
Nmap run completed -- 254 IP addresses (6 hosts up) scanned in 2528 seconds
In the example mentioned earlier, notice how you can see the ports that were identified on each system. Although this is the same type of tool that would be used by an attacker, it’s also a valuable tool for the security professional. You can see from the example that there are a number of ports open on each of the hosts that were probed. Remember that these machines are in an internal network, so some of these ports should be allowed.
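The full-connect behavior behind Nmap’s -sT option can be approximated in a few lines: attempt a complete TCP handshake on each port and record the ones that accept. This is a hedged sketch, not a replacement for Nmap; the timeout value is an assumption, and it should only ever be pointed at hosts you are authorized to scan.

```python
# Sketch of a full-connect (nmap -sT style) scan: a port is "open"
# if a complete TCP three-way handshake succeeds on it.
import socket

def connect_scan(host: str, ports) -> list:
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)               # illustrative timeout
        try:
            s.connect((host, port))     # full three-way handshake
            open_ports.append(port)
        except OSError:
            pass                        # closed, refused, or filtered
        finally:
            s.close()
    return open_ports
```

A SYN stealth scan (-sS) differs only in stopping after the second step of the handshake, which requires raw-socket privileges and is why Nmap marks that option as needing root.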
TEST DAY TIP Spend a few minutes reviewing port and protocol numbers for standard services provided in the network environment. This will help when you are analyzing questions that require configuration of ACLs and determination of appropriate blocks to install to secure a network.
The question as to “should the ports be open” should lead us back to our earlier discussion of policy and risk assessment. If nothing else this type of tool can allow us to see if our hardening activities have worked and verify that no one has opened services on a system that is not allowed. Even for ports that are allowed and have been identified by scanning tools, decisions must be made as to which of these ports are likely to be vulnerable, and then the risks of the vulnerability weighed against the need for the particular service connected to that port. Port vulnerabilities are constantly updated by various vendors and should be reviewed and evaluated for risk at regular intervals to reduce potential problems. It is important to remember that scans of a network should be conducted initially to develop a baseline of what services and protocols are active on the network. Once the network has been secured according to policy, these scans should be conducted on a periodic basis to ensure that the network is in compliance with policy.
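The periodic compliance check described above reduces to a set difference: any port observed open that the approved baseline does not allow deserves investigation. The baseline contents here are made-up values for illustration.

```python
# Sketch of a baseline compliance check: flag any open port that
# policy does not allow. Baseline ports are illustrative.
BASELINE = {22, 80, 443}

def audit(observed_ports) -> list:
    """Return ports open now that the policy baseline does not permit."""
    return sorted(set(observed_ports) - BASELINE)
```

Feeding the output of each periodic scan through a check like this turns raw port listings into an actionable list of policy violations.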
Network threats exist in today’s world in many forms. It seems as if the more creative network administrators become in protecting their environments, the more creative hackers and script kiddies become at innovating ways to get past the most admirable security efforts.
One of the more exciting and dynamic aspects of network security relates to the threat of attacks. A great deal of media attention and many vendor product offerings have been targeting attacks and attack methodologies. This is perhaps the reason that CompTIA has been focusing many questions in this particular area. Although there are many different varieties and methods of attack, they can generally all be grouped into several categories:
By the general target of the attack (application, network, or mixed)
By whether the attack is active or passive
By how the attack works (for example, via password cracking, or by exploiting code and cryptographic algorithms)
It’s important to realize that the boundaries between these three categories aren’t fixed. As attacks become more complex, they tend to be both application-based and network-based, which has spawned the new term mixed threat applications. An example of such an attack can be seen in the MyDoom worm, which targeted Windows machines in 2004. Victims received an e-mail indicating a delivery error, and if they executed the attached file, MyDoom would take over. The compromised machine would reproduce the attack by sending the e-mail to contacts in the user’s address book and copying the attachment to peer-to-peer (P2P) sharing directories. It would also open a backdoor on port 3127, and try to launch a denial of service (DoS) attack against The SCO Group or Microsoft. So, as attackers get more creative, we have seen more and more combined and sophisticated threats. In the next few sections, we will detail some of the most common network threats and attack techniques so that you can be aware of them and understand how to recognize their symptoms and thereby devise a plan to thwart attack.
Head of the Class
Attack Methodologies in Plain English
In this section, we’ve listed network attacks, application attacks, and mixed threat attacks, and within those are included buffer overflows, distributed denial of service (DDoS) attacks, fragmentation attacks, and theft of service attacks. Although the list of descriptions might look overwhelming, generally the names are self-explanatory. For example, consider a DoS attack. As its name implies, this attack is designed to do just one thing—render a computer or network nonfunctional so as to deny service to its legitimate users. That’s it. So, a DoS could be as simple as unplugging machines at random in a data center or as complex as organizing an army of hacked computers to send packets to a single host to overwhelm it and shut down its communications. Another term that has caused some confusion is a mixed threat attack. This simply describes any type of attack that is comprised of two different, smaller attacks. For example, an attack that goes after Outlook clients and then sets up a bootleg music server on the victim machine is classified as a mixed threat attack.
TCP/IP hijacking, or session hijacking, is a problem that has appeared in most TCP/IP-based applications, ranging from simple Telnet sessions to Web-based e-commerce applications. To hijack a TCP/IP connection, a malicious user must first have the ability to intercept a legitimate user’s data, and then insert himself or herself into that session, much like a man-in-the-middle (MITM) attack. A tool known as Hunt (www.packetstormsecurity.org/sniffers/hunt/) is very commonly used to monitor and hijack sessions. It works especially well on basic Telnet or FTP sessions.
A more interesting and malicious form of session hijacking involves Web-based applications (especially e-commerce and other applications that rely heavily on cookies to maintain session state). The first scenario involves hijacking a user’s cookie, which is normally used to store login credentials and other sensitive information, and using that cookie to then access that user’s session. The legitimate user will simply receive a “session expired” or “login failed” message and probably will not even be aware that anything suspicious happened. The other issue with Web server applications that can lead to session hijacking is incorrectly configured session timeouts. A Web application is typically configured to timeout a user’s session after a set period of inactivity. If this timeout is too large, it leaves a window of opportunity for an attacker to potentially use a hijacked cookie or even predict a session ID number and hijack a user’s session.
To prevent these types of attacks, as with other TCP/IP-based attacks, the use of encrypted sessions is key; in the case of Web applications, unique and pseudorandom session IDs and cookies should be used along with SSL encryption. This makes it harder for attackers to guess the appropriate sequence to insert into connections, or to intercept communications that are encrypted during transit.
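The "unique and pseudorandom session IDs" advice can be illustrated with Python’s secrets module, which draws from a cryptographically strong source so an attacker cannot predict or increment the next ID the way they could with sequential session numbers. The token length chosen here is an assumption for the example.

```python
# Sketch of unpredictable session ID generation using a CSPRNG,
# in contrast to guessable sequential or timestamp-based IDs.
import secrets

def new_session_id() -> str:
    """Return a 128-bit, URL-safe session identifier."""
    return secrets.token_urlsafe(16)    # 16 bytes = 128 bits of entropy
```

Because each ID is drawn independently from 128 bits of entropy, an attacker who captures one session ID learns nothing useful about any other user’s ID, closing off the prediction attack described above.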
Null sessions are unauthenticated connections. When someone attempts to connect to a Windows machine and does not present credentials, they can potentially successfully connect as an anonymous user, thus creating a Null session.
Null sessions present a vulnerability in that once someone has connected to a machine, there is a lot to be learned about it. The more that is exposed about the machine, the more ammunition a hacker will have to attempt to gain further access. For instance, in Windows NT/2000, the contents of the local machine’s SAM database were potentially accessible from a null session. Once someone has obtained information about local usernames, they can then launch a brute force or dictionary attack in an attempt to gain additional access to the machine.
Null sessions can be controlled to some degree with registry hacks that can be deployed to your machines, but the version of the Windows operating system will dictate what null session behavior can be configured on your machine.
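To see why exposed usernames matter, consider how little a dictionary attack requires once an account name is known: hash each candidate word and compare it to the stolen credential. The wordlist and the MD5 hash below are purely illustrative; a real attack would use a large wordlist and the target system’s actual credential format.

```python
# Sketch of a dictionary attack: hash candidate words until one
# matches the stolen hash. MD5 is used here only for illustration.
import hashlib

def dictionary_attack(target_hash, wordlist):
    """Return the word whose MD5 hash matches target_hash, or None."""
    for word in wordlist:
        if hashlib.md5(word.encode()).hexdigest() == target_hash:
            return word
    return None
```

This is also why the defense is twofold: restrict null sessions so usernames are not leaked in the first place, and enforce password policies so dictionary words are never valid credentials.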
The most classic example of spoofing is IP spoofing. TCP/IP requires that every host fills in its own source address on packets, and there are almost no measures in place to stop hosts from lying. Spoofing, by definition, is always intentional. However, the fact that some malfunctions and misconfigurations can cause the exact same effect as an intentional spoof causes difficulty in determining whether an incorrect address indicates a spoof.
Spoofing is a result of some inherent flaws in TCP/IP. TCP/IP basically assumes that all computers are telling the truth. There is little or no checking done to verify that a packet really comes from the address indicated in the IP header. When the protocols were being designed in the late 1960s, engineers didn’t anticipate that anyone would or could use the protocol maliciously. In fact, one engineer at the time described the system as flawless because “computers don’t lie.” There are different types of IP spoofing attacks. These include blind spoofing attacks in which the attacker can only send packets and has to make assumptions or guesses about replies, and informed attacks in which the attacker can monitor, and therefore participate in, bidirectional communications.
There are ways to combat spoofing, however. Stateful firewalls usually have spoofing protection whereby they define which IPs are allowed to originate in each of their interfaces. If a packet claimed to be from a network specified as belonging to a different interface, the packet is quickly dropped. This protects from both blind and informed attacks. An easy way to defeat blind spoofing attacks is to disable source routing in your network at your firewall, at your router, or both. Source routing is, in short, a way to tell your packet to take the same path back that it took while going forward. This information is contained in the packet’s IP Options, and disabling this will prevent attackers from using it to get responses back from their spoofed packets.
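The per-interface check a stateful firewall applies can be sketched as follows. The interface names and network prefixes are made-up values for illustration; a real firewall would derive them from its interface configuration.

```python
# Sketch of stateful-firewall anti-spoofing: each interface is bound
# to the source networks allowed to originate on it, and a packet
# claiming an address from the wrong side is dropped.
import ipaddress

ALLOWED_SOURCES = {                       # illustrative topology
    "inside": [ipaddress.ip_network("192.168.1.0/24")],
    "dmz":    [ipaddress.ip_network("10.0.0.0/24")],
}

def permit_ingress(interface: str, src_ip: str) -> bool:
    """Drop packets whose source address cannot legitimately arrive here."""
    src = ipaddress.ip_address(src_ip)
    nets = ALLOWED_SOURCES.get(interface, [])
    return any(src in net for net in nets)
```

A packet arriving on the DMZ interface but claiming an internal 192.168.1.x source address fails this check immediately, which is what defeats both blind and informed spoofing from outside the protected segment.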
Spoofing is not always malicious. Some network redundancy schemes rely on automated spoofing to take over the identity of a downed server. This is because the networking technologies never accounted for the need for one server to take over for another.
Technologies and methodologies exist that can help safeguard against spoofing. These include:
Using firewalls to guard against unauthorized transmissions.
Not relying on security through obscurity, the expectation that using undocumented protocols will protect you.
Using various cryptographic algorithms to provide differing levels of authentication.
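As one concrete instance of the last bullet, a keyed hash (HMAC) lets a receiver verify that a message really came from a holder of the shared key, so a spoofed or altered message fails verification no matter what source address it claims. The key and messages below are illustrative values.

```python
# Sketch of message authentication with HMAC-SHA256: a forger without
# the shared key cannot produce a valid tag for a spoofed message.
import hmac
import hashlib

KEY = b"shared-secret-key"             # illustrative; distribute out of band

def sign(message: bytes) -> bytes:
    """Compute the authentication tag for a message."""
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(message), tag)
```

Note the use of a constant-time comparison; comparing tags byte by byte with `==` can leak timing information that helps an attacker forge tags incrementally.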
Subtle attacks are far more effective than obvious ones. Spoofing has an advantage in this respect over a straight vulnerability exploit. The concept of spoofing includes pretending to be a trusted source, thereby increasing the chances that the attack will go unnoticed.
TEST DAY TIP Knowledge of TCP/IP is really helpful when dealing with spoofing and sequence attacks. Having a good grasp of the fundamentals of TCP/IP will make the attacks seem less abstract. Additionally, knowledge of not only what these attacks are, but how they work, will better prepare you to answer test questions.
If the attacks use just occasional induced failures as part of their subtlety, users will often chalk it up to normal problems that occur all the time. By careful application of this technique over time, users’ behavior can often be manipulated.
Address Resolution Protocol (ARP) spoofing can be quickly and easily done with a variety of tools, most of which are designed to work on UNIX OSes. One of the best all-around suites is a package called dsniff. It contains an ARP spoofing utility and a number of other sniffing tools that can be beneficial when spoofing.
To make the most of dsniff you’ll need a Layer 2 switch into which all of your lab machines are plugged. It is also helpful to have various other machines doing routine activities such as Web surfing, checking POP mail, or using Instant Messenger software.
1. To run dsniff for this exercise, you will need a UNIX-based machine. To download the package and to check compatibility, visit the dsniff Web site at www.monkey.org/~dugsong/dsniff.
2. After you’ve downloaded and installed the software, you will see a utility called arpspoof. This is the tool that we’ll be using to impersonate the gateway host. The gateway is the host that routes the traffic to other networks.
3. You’ll also need to make sure that IP forwarding is turned on in your kernel. If you’re using *BSD UNIX, you can enable this with the sysctl command (sysctl -w net.inet.ip.forwarding=1). After this has been done, you should be ready to spoof the gateway.
4. arpspoof is a really flexible tool. It will allow you to poison the ARP of the entire local area network (LAN), or target a single host. Poisoning is the act of tricking the other computers into thinking you are another host. The usage is as follows:
home# arpspoof -i fxp0 10.10.0.1
This will start the attack using interface fxp0, and will intercept any packets bound for 10.10.0.1. The output will show you the current ARP traffic.
5. Congratulations, you’ve just become your gateway.
You can leave the arpspoof process running, and experiment in another window with some of the various sniffing tools which dsniff offers. Dsniff itself is a jack-of-all-trades password grabber. It will fetch passwords for Telnet, FTP, HTTP, Instant Messaging (IM), Oracle, and almost any other password that is transmitted in the clear. Another tool, mailsnarf, will grab any and all e-mail messages it sees, and store them in a standard Berkeley mbox file for later viewing. Finally, one of the more visually impressive tools is WebSpy. This tool will grab Uniform Resource Locator (URL) strings sniffed from a specified host, and display them on your local terminal, giving the appearance of surfing along with the victim.
You should now have a good idea of the kind of damage an attacker can do with ARP spoofing and the right tools. This should also make clear the importance of using encryption to handle data. Additionally, any misconceptions about the security or sniffing protection provided by switched networks should now be alleviated thanks to the magic of ARP spoofing!
As you have probably already begun to realize, the TCP/IP protocols were not designed with security in mind and contain a number of fundamental flaws that simply cannot be fixed due to the nature of the protocols. One issue that has resulted from IPv4’s lack of security is the MITM attack. To fully understand how a MITM attack works, let’s quickly review how TCP/IP works.
TCP/IP was formally introduced in 1974 by Vinton Cerf and Robert Kahn. The original purpose of TCP/IP was not to provide security; rather, it was to provide high-speed, reliable communication links between networks.
A TCP/IP connection is formed with a three-way handshake. As seen in Figure 6.4, a host (Host A) that wants to send data to another host (Host B) will initiate communications by sending a SYN packet. The SYN packet contains, among other things, the source and destination IP addresses as well as the source and destination port numbers. Host B will respond with a SYN/ACK. The SYN/ACK from Host B prompts Host A to respond with an ACK, and the connection is established.
FIGURE 6.4
A Standard TCP/IP Handshake
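The handshake in Figure 6.4 can be sketched as a simple state trace. The following is a toy model (not a real network stack); the sequence numbers are illustrative, chosen only to show how each side acknowledges the other's initial sequence number (ISN) plus one.

```python
# Toy trace of the TCP three-way handshake described above.
# Each log entry is (direction, segment type, sequence field, ack field).

def three_way_handshake():
    log = []
    # Host A sends SYN carrying its initial sequence number (ISN)
    a_isn = 1000
    log.append(("A->B", "SYN", a_isn, None))
    # Host B replies with SYN/ACK: its own ISN, acknowledging A's ISN + 1
    b_isn = 5000
    log.append(("B->A", "SYN/ACK", b_isn, a_isn + 1))
    # Host A completes the handshake by acknowledging B's ISN + 1
    log.append(("A->B", "ACK", a_isn + 1, b_isn + 1))
    return log

for step in three_way_handshake():
    print(step)
```

Notice that an attacker who can predict the ISN values in this exchange can forge the final ACK without ever seeing the SYN/ACK, which is the root of the sequence-prediction attacks discussed in this section.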
If a malicious individual can place himself or herself between Host A and Host B (for example, by compromising an upstream router belonging to the ISP of one of the hosts), he or she can then monitor the packets moving between the two hosts. It is then possible for the malicious individual to analyze and change packets coming and going to the host. It is quite easy for a malicious person to perform this type of attack on Telnet sessions, but the attacker must first be able to predict the right TCP sequence number and properly modify the data for this type of attack to actually work, all before the session times out waiting for the response. Obviously, doing this manually is hard to pull off; however, tools designed to watch for and modify specific data have been written and work very well.
There are a few ways in which you can prevent MITM attacks from happening, like using a TCP/IP implementation that generates TCP sequence numbers that are as close to truly random as possible.
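To illustrate the defense just mentioned, the sketch below generates hard-to-predict initial sequence numbers in the style of RFC 6528: a keyed hash of the connection 4-tuple plus a clock component. The key handling and layout here are illustrative assumptions, not a production TCP implementation.

```python
import hashlib
import secrets
import time

# Illustrative RFC 6528-style ISN generator: ISN = clock component +
# keyed hash of the connection 4-tuple. The secret key is random per boot.
_SECRET_KEY = secrets.token_bytes(16)

def initial_sequence_number(src_ip, src_port, dst_ip, dst_port):
    # Hash the 4-tuple with a secret key so ISNs differ per connection
    # and cannot be predicted by an off-path attacker.
    material = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(_SECRET_KEY + material).digest()
    hashed_part = int.from_bytes(digest[:4], "big")
    # Clock component keeps ISNs monotonic for a given 4-tuple.
    clock_part = (time.monotonic_ns() // 4000) & 0xFFFFFFFF
    return (hashed_part + clock_part) & 0xFFFFFFFF  # wrap to 32 bits

print(initial_sequence_number("10.0.0.1", 40000, "10.0.0.2", 80))
```

Because the attacker knows neither the secret key nor the exact clock value, guessing the next ISN for a connection becomes impractical, which is exactly what defeats the blind-injection variant of the MITM attack.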
In a replay attack, a malicious person captures an amount of sensitive traffic, and then simply replays it back to the host in an attempt to replicate the transaction. For example, consider an electronic money transfer. User A transfers a sum of money to Bank B. Malicious User C captures User A’s network traffic, then replays the transaction in an attempt to cause the transaction to be repeated multiple times. Obviously, this attack has no benefit to User C, but could result in User A losing money. Replay attacks, while possible in theory, are quite unlikely due to multiple factors such as the level of difficulty of predicting TCP sequence numbers. However, it has been proven that the formula for generating random TCP sequence numbers, especially in older OSes, isn’t truly random or even that difficult to predict, which makes this attack possible.
Another potential scenario for a replay attack is this: an attacker replays the captured data with all potential sequence numbers, in hopes of getting lucky and hitting the right one, thus causing the user’s connection to drop, or in some cases, to insert arbitrary data into a session.
As with MITM attacks, the use of random TCP sequence numbers and encryption like SSH or Internet Protocol Security (IPSec) can help defend against this problem. The use of timestamps also helps defend against replay attacks.
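The timestamp defense can be sketched as follows. This is a toy scheme (the shared key, freshness window, and message layout are illustrative assumptions): each message carries a timestamp and a keyed MAC, and the receiver rejects anything stale, tampered with, or already seen.

```python
import hashlib
import hmac

# Toy timestamp-plus-MAC scheme for rejecting replayed messages.
SECRET = b"shared-secret"   # illustrative pre-shared key
WINDOW = 30                 # seconds a message is considered fresh
seen_macs = set()           # MACs already accepted

def sign(message, now):
    payload = f"{now}:{message}".encode()
    mac = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (now, message, mac)

def verify(packet, now):
    ts, message, mac = packet
    expected = hmac.new(SECRET, f"{ts}:{message}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        return False   # tampered: MAC does not match
    if now - ts > WINDOW:
        return False   # stale: likely a delayed replay
    if mac in seen_macs:
        return False   # exact duplicate within the window: replay
    seen_macs.add(mac)
    return True

t = 1000
pkt = sign("transfer $100 to B", t)
print(verify(pkt, t + 1))    # fresh, first delivery: accepted
print(verify(pkt, t + 2))    # same packet again: rejected as a replay
print(verify(pkt, t + 60))   # outside the window: rejected as stale
```

Real protocols such as IPSec implement the same idea with per-packet sequence numbers and a sliding anti-replay window, but the principle is identical: a captured transaction cannot simply be played back later.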
Even with the most comprehensive filtering in place all firewalls are still vulnerable to DoS attacks. These attacks attempt to render a network inaccessible by flooding a device such as a firewall with packets to the point that it can no longer accept valid packets. This works by overloading the processor of the firewall by forcing it to attempt to process a number of packets far past its limitations. By performing a DoS attack directly against a firewall, an attacker can get the firewall to overload its buffers and start letting all traffic through without filtering it. If a technician is alerted to an attack of this type, they can block the specific IP address that the attack is coming from at their router.
An alternative attack that is more difficult to defend against is the DDoS attack. This attack is worse, because it can come from a large number of computers at the same time. This is accomplished either by the attacker having a large distributed network of systems all over the world (unlikely) or by infecting normal users’ computers with a Trojan horse application, which allows the attacker to force the systems to attack specific targets without the end user’s knowledge. These end-user computers are systems that have been attacked in the past and infected with a Trojan horse by the attacker. By doing this, the attacker is able to set up a large number of systems (called zombies) to perform a DoS attack at the same time. This type of attack constitutes a DDoS attack. Performing an attack in this manner is more effective due to the number of packets being sent. In addition, it introduces another layer of systems between the attacker and the target, making the attacker more difficult to trace.
Domain name kiting is when someone purchases a domain name and then deletes the registration soon after, only to immediately reregister it. Because many domain name registrars offer a 5-day registration grace period, domain kiters abuse this grace period by canceling domain name registrations to avoid paying for them. This way they can use the domain names without cost.
Because the grace period offered by registrars allows the registration of a domain name to be canceled without cost or penalty as long as the cancellation comes within 5 days of the registration, you can effectively “own” and use a domain name during this short timeframe without actually paying for it.
It has become relatively easy to drop a domain name and claim the refund at the end of the grace period. By taking advantage of this process, abusers are able to keep the registrations active on their most revenue-generating sites by cycling through cancellations and an endless refresh of their choice domain name registrations. Because no cost is involved in turning over the domain names, domain kiters make money from domains they are not paying for.
Domain Name Tasting
Another concept that is very similar to domain name kiting is called domain name tasting. The two are similar in that they are both the abuse of domain names and the grace period associated with them. Domain name tasters register a domain name to exploit the Web site names for profit.
Domain name investors will register groups of domain names to determine which namespaces will generate revenue through search engine queries and pay-per-click advertising mechanisms. They will often register typos of legitimate business sites hoping for human error to land Internet travelers on their Web sites, which in turn increases their bottom line.
If it is determined that a specific domain name is not returning profit for the tasters then they will simply drop the domain name, claim a refund, and continue on to the next group of names.
DNS poisoning, or DNS cache poisoning, occurs when a server is fed altered or spoofed records that are then retained in the DNS server cache. Because servers use their cache as the first mechanism to respond to incoming requests, once the DNS cache on a server has been “poisoned” in this fashion, all additional queries for the same record will be answered with the falsified information.
Attackers can use this method to redirect valid requests to malicious sites. The malicious sites may be controlled by the offender and distribute viruses or worms, or they may simply be offensive sites already in existence on the Internet. For example, imagine if your child were to type in www.barbie.com and, instead of connecting to a pretty pink site with Barbie dolls and Barbie games, ended up on an adult pornographic Web site.
DNS poisoning is a real threat that can be reduced by taking a few security precautions. First, by ensuring that your DNS server is up to date on patches and updates for known vulnerabilities you will help to ensure the safety of your DNS cache. Also, by taking advantage of Secure DNS whenever possible and employing digital signatures you will help to reduce the threat of DNS poisoning.
ARP is a broadcast-based protocol that functions at Layer 2 of the OSI model. Its purpose is to map a known IP address to its corresponding Media Access Control (MAC) address so that a packet can be properly addressed. A MAC address is a unique number assigned to network interface cards (NICs) by their manufacturers. ARP poisoning occurs when a client machine sends out an ARP request for another machine’s MAC address information and is sent falsified information instead. The spoofed ARP message allows the attacker to associate a MAC address of their choosing with a particular IP address, which means any traffic meant for that IP address would be mistakenly sent to the attacker instead. This opens the door for numerous attack mechanisms to be employed. Once the data has been intercepted, the attacker could choose to modify it before forwarding it (a man-in-the-middle attack), or even launch a DoS attack against a victim by associating a nonexistent MAC address with the IP address of the victim’s default gateway.
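The flaw being abused here is that classic ARP implementations update their cache on any reply, solicited or not. The toy model below makes that explicit; the IP and MAC addresses are illustrative.

```python
# Toy ARP cache, showing how an unsolicited (spoofed) reply rebinds the
# gateway's IP address to the attacker's MAC address.

arp_cache = {}

def handle_arp_reply(ip, mac):
    # Classic ARP updates the cache without checking whether a request
    # was ever sent, and later replies overwrite earlier ones. That is
    # the weakness ARP poisoning exploits.
    arp_cache[ip] = mac

handle_arp_reply("192.168.1.1", "aa:aa:aa:aa:aa:aa")  # legitimate gateway
handle_arp_reply("192.168.1.1", "ee:ee:ee:ee:ee:ee")  # attacker's spoofed reply

# The victim now sends all "gateway" traffic to the attacker's MAC.
print(arp_cache["192.168.1.1"])
```

Countermeasures such as static ARP entries, dynamic ARP inspection on switches, and encrypting traffic end to end all work by removing the attacker's ability to profit from this blind cache update.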
When you are designing a network it is a good idea to have security in mind from the beginning. As you piece things together to meet your needs, there is a good probability that security will be among the things you must consider. Understanding the components and elements used in network design and how they work together is a good first step to building an effective design. In this section, we will discuss the following components of network design:
DMZs
Subnets
Network Address Translation
Network Access Control/Network Access Protection
IP Telephony
Although differing components can be effectively used together, in some instances they need to be used completely separately from each other. You must imagine the different pieces that make up a network as discrete network segments holding systems that share common requirements. They are sometimes called security zones and some of these common requirements can be:
The types of information the zone handles
Who uses the zone
What levels of security the zone requires to protect its data
EXAM WARNING A security zone is defined as any portion of a network that has specific security concerns or requirements. Intranets, extranets, DMZs, and VLANs are all security zones.
It is possible to have systems in a zone running different OSes, such as Windows Vista and NetWare 6.5. The type of computer, whether a PC, server, or mainframe, is not as important as the security needs of the computer. For example, there is a network that uses Windows 2003 servers as domain controllers, Domain Name System (DNS) servers, and Dynamic Host Configuration Protocol (DHCP) servers. There are also Windows XP Professional clients and NetWare 6.5 file servers on the network. Some users may be using Macintosh computers running OS X or OS 9, while others may be running one or more types of Linux or UNIX. This is an extremely varied network, but it may still only have one or two security zones. The key is that the type of a computer and its operating system are not as important with regard to security zones and where the machines may fall. Each of these components helps to make up your network topology and, if used correctly, can assist you in creating a safe and effective network design.
For example, suppose you have an e-commerce application that uses Microsoft’s Internet Information Server (IIS) running a custom Active Server Page (ASP) application, which calls on a second set of servers hosting custom COM+ components, which in turn interact with a third set of servers that house a Structured Query Language (SQL) 2005 database. Figure 6.5 provides an example of this concept.
This is a fairly complex example, but helps illustrate the need for differing security topologies on the same network. Under no circumstances should COM+ servers or SQL 2005 servers be exposed to the Internet directly—they should be protected by placing them behind a strong security solution. At the same time, you do not want to leave IIS servers exposed to every hacker and script kiddie out there, so they should be placed in a DMZ or behind the first firewall or router. The idea here is to layer security so that a breach of one set of servers such as the IIS servers does not directly expose COM+ or SQL servers.
FIGURE 6.5
The Complex N-tier Arrangement
In the early days of business Internet connectivity, the concept of security zones was developed to separate systems available to the public Internet from private systems available for internal use by an organization. A device called a firewall was utilized to separate the zones. Figure 6.6 shows a visual representation of the basic firewall concept.
Many of these early firewalls had only basic abilities and usually functioned only as a packet filter. Packet filters rely on ACLs. ACLs allow the packet filter to be configured to block or allow traffic based on attributes such as IP address and source and destination port. Packet filters are considered stateless, while more advanced modern firewalls are considered to be stateful. Regardless of what type of firewall you are working with, most provide the ability to:
Block traffic based on certain rules. The rules can block unwanted, unsolicited, spurious, or malicious traffic (Figure 6.3).
Mask the presence of networks or hosts to the outside world. Firewalls can also ensure that unnecessary information about the makeup of the internal network is not available to the outside world.
Log and maintain audit trails of incoming and outgoing traffic.
Provide additional authentication methods.
FIGURE 6.6
A Basic Firewall Installation
FIGURE 6.7
A Sample Firewall Rule Set
As you can see in Figure 6.7, you have quite a lot of flexibility when creating firewall rules. If you examine the row across the top of the image, you will notice the different components that we can configure when creating a new firewall rule. For instance, the source and destination columns allow you to specify the source and destination IP addresses; the action column indicates what to do with traffic that matches a particular rule; the time column allows you to specify when the rule is in effect; and so on. When firewalls process rules, they typically move through the rule set from top to bottom, looking for a match for the traffic they are processing. Once a match is found, the action in the matching rule is performed on the data packets. The last rule in the firewall configuration is oftentimes a catch-all type of rule: if the data doesn’t match any other rule, it will match the last rule, which is normally a drop or deny rule. For instance, the last rule in the image shows a source and destination of ANY, which indicates all traffic will match this rule. The action says drop, which means all traffic that matches this rule will be immediately dropped.
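The top-to-bottom, first-match evaluation just described can be sketched in a few lines. The rules below are illustrative, not taken from Figure 6.7; note how the final ANY/ANY rule implements the deny-all catch-all.

```python
# First-match rule evaluation, as in a typical packet-filter rule set:
# rules are checked top to bottom, and the final ANY/ANY rule drops
# everything that no earlier rule accepted.

RULES = [
    {"src": "ANY",      "dst": "203.0.113.10", "port": 80,    "action": "ACCEPT"},
    {"src": "10.0.0.5", "dst": "ANY",          "port": "ANY", "action": "ACCEPT"},
    {"src": "ANY",      "dst": "ANY",          "port": "ANY", "action": "DROP"},
]

def matches(rule_value, packet_value):
    return rule_value == "ANY" or rule_value == packet_value

def filter_packet(src, dst, port):
    for rule in RULES:  # top to bottom; first match wins
        if (matches(rule["src"], src) and matches(rule["dst"], dst)
                and matches(rule["port"], port)):
            return rule["action"]
    return "DROP"       # implicit deny if nothing matched at all

print(filter_packet("198.51.100.7", "203.0.113.10", 80))  # hits rule 1
print(filter_packet("198.51.100.7", "203.0.113.10", 22))  # falls to catch-all
```

Because evaluation stops at the first match, rule ordering matters: an overly broad accept rule placed above a narrower deny rule silently disables the deny, which is a common real-world firewall misconfiguration.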
Some newer firewalls include more advanced features, such as integrated VPN applications that allow remote users to access local systems through a secure, encrypted tunnel. Some firewalls have integrated IDSes in their product and can make firewall rule changes based on the detection of suspicious events happening at the network gateway. (IDS products and their use are covered later in this chapter.) These new hybrid technologies have much promise and make great choices for creating a “defense in depth” strategy, but remember that the more work the firewall is doing to support these other functions, the more chance there is that these additional tools may impact the throughput of the firewall device.
Notes from the Field
Using a Defense-in-Depth Strategy
The defense-in-depth strategy specifies the use of multiple layers of network security. In this way, you avoid depending on one single protective measure deployed on your network. In other words, to eliminate the false feeling of security because you implemented a firewall on your Internet connection, you should implement other security measures such as an IDS, auditing, and biometrics for access control. You need many levels of security (hence, defense in depth) to be able to feel safe from potential threats. A possible defense-in-depth matrix with auditing included could look like the graphic in Figure 6.8.
In addition, when a number of these features are implemented on any single device, it creates a wide opportunity for a successful attacker if that device is ever compromised. If one of these hybrid information security devices is chosen, it is important to stay extra vigilant about applying patches and to include in the risk mitigation planning how to deal with a situation in which this device falls under the control of an attacker.
FIGURE 6.8
A Graphical Representation of Defense in Depth
Although the installation of a firewall or hybrid device protects the internal systems of an organization, it does nothing to protect the systems that are made available to the public Internet. A different type of implementation is needed to add basic protection for those systems that are offered for public use. Thus enters the concept of the DMZ. The servers that are located in the DMZ reside outside of the protected internal network. We will discuss DMZs in more detail later in the chapter. The rest of the internal network is called the intranet, which means a private internal network. The intranet, therefore, is every part of a network that lies on the inside of the last firewall from the Internet. Figure 6.9 gives an example of an intranet.
TEST DAY TIP Risk mitigation, according to the Project Management Institute (PMI), seeks to reduce the probability and/or impact of a specific risk below an acceptable threshold. For more information on risk and project management, see the PMI online at www.pmi.org.
TEST DAY TIP The terminology can be confusing to beginners. One might think the internal network would be the Internet, but this is not the case. An Internet (including the global Internet) refers to communications between different networks, while the intranet refers to communications within a network. It may help to use a comparison: interstate commerce refers to business transacted across state lines (between different states), while intrastate commerce refers to business transacted within one state.
It is expected that all traffic on the intranet will be secure and safe from the prying eyes on the Internet. It is the network security professional’s job to make sure that this happens. Although a security breach of a DMZ system can be costly to a company, a breach that occurs inside an intranet could be extraordinarily costly and damaging. If this happens, customers and business partners might lose faith in the company’s ability to safeguard sensitive information, and other attackers will likely make the network a favorite target for future attacks.
FIGURE 6.9
A Simple Intranet Example
To ensure that all traffic on the intranet is secure, the following issues should be addressed:
Make sure that the firewall is configured properly to stop attack attempts at the firewall. There are many different opinions on how to do this, but the majority of security professionals agree that you should start with a deny all or “block everything” mentality and then open the firewall on a case-by-case basis, thereby only allowing specific types of traffic to cross it (regardless of which direction the traffic is flowing). It’s important to remember that each open port and service offers the attacker an additional path from which he may potentially target the network.
Make sure that the firewall is configured properly to prevent unauthorized network traffic, such as file sharing programs (for example, BitTorrent, Gnutella, or Morpheus) from being used on the internal network.
Make sure the firewall will watch traffic that egresses or leaves the network from trusted hosts, and ensure that it is not intercepted and altered en route; steps should also be taken to try to eliminate spoofing from attackers.
Make sure that the antivirus software is in use and up to date. Consider implementing an enterprise-level solution, consisting of a central server responsible for coordinating and controlling the identification and collection of viruses on your network.
Educate users on the necessity of keeping their computers logged out when not in use.
Implement IPSec on the intranet between all clients and servers to prevent eavesdropping; note that more often than not, the greatest enemy lies on the inside of the firewall.
Conduct regular, but unannounced, security audits and inspections. Be sure to closely monitor all logs that are applicable.
Do not allow the installation of modems or unsecured wireless access points on any intranet computers. Do not allow any connection to the Internet except through the firewall and proxy servers, as applicable.
Of course, there are literally hundreds of other issues that may need to be addressed, but these are some of the easiest ones to take care of and among the most commonly exploited.
NOTE All of the Internet security measures listed here should be used at your discretion, based on what is available and what meets the business needs of your company. You can use any one of these, all of these, or continue with an almost infinite list of applied security measures that are covered in this book.
Extranets are a special implementation of the intranet topology. Creating an extranet allows for access to a network or portions of the network by trusted customers, partners, or other users. These users, who are external to the network—they are on the Internet side of the firewalls and other security mechanisms—can then be allowed to access private information stored on the internal network that they would not want to place on the DMZ for general public access. The amount of access that each user or group of users is allowed to have to the intranet can be easily customized to ensure that each user or group gets what they need and nothing more. Additionally, some organizations create extranets to allow their own employees to have access to certain internal data while away from the private network.
NOTE You must have a functional intranet setup before attempting to create an extranet.
The following is an example of how two companies might each choose to implement an extranet solution for their mutual benefit. Company A makes door stoppers and has recently entered into a joint agreement with Company B. Company B makes cardboard boxes. By partnering together, both companies are hoping to achieve some form of financial gain. Company A is now able to get cardboard boxes (which it needs to ship its product) made faster, cheaper, and to exact specification; Company B benefits from newfound revenue from Company A. Everybody wins and both companies are very happy. After some time, both companies realize that they could streamline this process even more if they each had access to certain pieces of data about the other company. For example, Company A wants to keep track of when its cardboard boxes will be arriving. Company B, on the other hand, wants to be able to control box production by looking at how many orders for door stoppers Company A has. What these two companies need is an extranet. By implementing an extranet solution, both companies will be able to get the specific data they need to make their relationship even more profitable, without either company having to grant full, unrestricted access to its private internal network. Figure 6.10 depicts this extranet solution.
Users attempting to gain access to an extranet require some form of authentication before they are allowed access to resources. The type of access control implemented can vary, but some of the more common include usernames/passwords and digital certificates. Once an extranet user has been successfully authenticated, they can gain access to the resources that are allowed for their access level. In the previous example, a user from Company B’s production department might need to see information about the number of door stoppers being ordered, while a user from Company A’s shipping department might need to see information detailing when the next shipment of boxes is expected.
FIGURE 6.10
A Simple Extranet Example
EXAM WARNING Be able to readily define an extranet. You must know the difference between the Internet, intranet, and extranet.
In computer security, the DMZ is a “neutral” network segment where systems accessible to the public Internet are housed, which offers some basic levels of protection against attacks. The term “DMZ” is derived from the military and is used to describe a “safe” or buffer area between two countries where, by mutual agreement, no troops or war-making activities are allowed. In the next sections we will explore this concept in more detail.
There are usually strict rules regarding what is allowed within a zone. When applying this term to the IT security realm, DMZ segments are usually created in one of two ways:
Layered DMZ implementation
Multiple interface firewall implementation
In the first method, the systems that require protection are placed between two firewall devices with different rule sets, which allow systems on the Internet to connect to the offered services on the DMZ systems, but prevents them from connecting to the computers on the internal segments of the organization’s network (often called the protected network).
FIGURE 6.11
A Multiple Interface Firewall DMZ Implementation
The second method is to add a third interface to the firewall and place the DMZ systems on that network segment (Figure 6.11). As an example, this is the way Cisco PIX firewalls are designed. This design allows the same firewall to manage the traffic between the Internet, the DMZ, and the protected network. Using one firewall instead of two lowers the costs of the hardware and centralizes the rule sets for the network, making it easier to manage and troubleshoot problems. Currently, this multiple interface design is a common method for creating a DMZ segment.
In either case, the DMZ systems are offered some level of protection from the public Internet while they remain accessible for the specific services they provide to external users. In addition, the internal network is protected by a firewall from both the external network and the systems in the DMZ. Because the DMZ systems still offer public access, they are more prone to compromise and thus they are not trusted by the systems in the protected network. A good first step in building a strong defense is to harden the DMZ systems by removing all unnecessary services and unneeded components. The result is a bastion host. This scenario allows for public services while still maintaining a degree of protection against attack.
EXAM WARNING Hosts located in a DMZ are generally accessed from both internal network clients and public (external) Internet clients. Examples of DMZ bastion hosts are DNS servers, Web servers, and FTP servers. A bastion host is a system on the public side of the firewall, which is exposed to attack. The word bastion comes from a sixteenth-century French word, meaning the projecting part of a fortress wall that faces the outside and is exposed to attackers.
The role of the firewall in all of these scenarios is to manage the traffic between the network segments. The basic idea is that other systems on the Internet are allowed to access only the services of the DMZ systems that have been made public. If an Internet system attempts to connect to a service not made public, the firewall drops the traffic and logs the information about the attempt (if configured to do so). Systems on a protected network are allowed to access the Internet as they require, and they may also access the DMZ systems for managing the computers, gathering data, or updating content. In this way, systems are exposed only to attacks against the services that they offer, and not to underlying processes that may be running on them.
The systems in the DMZ can host any or all of the following services:
Internet Web Site Access IIS or Apache servers that provide Web sites for public and private usage. Examples would be www.microsoft.com or www.netserverworld.com. Both of these Web sites have both publicly and privately available content.
FTP Services FTP file servers that provide public and private downloading and uploading of files. Examples would be the FTP servers used by popular download providers at www.downloads.com or www.tucows.com. FTP is designed for faster file transfer with less overhead, but does not have all of the special features that are available in HTTP, the protocol used for Web page transfer.
EXAM WARNING Remember that FTP has significant security issues in that username and password information is passed in clear text and can easily be sniffed.
E-mail Relaying A special e-mail server that acts as a middleman of sorts. Instead of e-mail passing directly from the source server to the destination server (or the next hop in the path), it passes through an e-mail relay that then forwards it. E-mail relays are a double-edged sword and most security professionals prefer to have this function disabled on all publicly accessible e-mail servers. On the other hand, some companies have started offering e-mail relaying services to organizations as a means of providing e-mail security.
DNS Services A DNS server might be placed in the DMZ to point incoming access requests to the appropriate server within the DMZ. This can alternatively be provided by the Internet Service Provider (ISP), usually for a nominal extra service charge. If DNS servers are placed in the DMZ, it is important to ensure that they cannot be made to conduct a zone transfer (a complete transfer of all DNS zone information from one server to another) to an arbitrary server. This is a common security hole found in many publicly accessible DNS servers. Attackers typically look for this vulnerability by scanning to see if TCP port 53 is open. When you are placing a DNS server into the DMZ, it is often a good idea to examine the use of split horizon DNS. Split horizon DNS is when there are two authoritative sources for your domain namespace, and the contents of the databases differ depending on whether the server is serving internal or external queries. Split horizon DNS adds security to the environment because the external database that may reside in the DMZ would contain only records that are appropriate to expose, while the internal database would be protected on the LAN.
Intrusion Detection The placement of an IDS (discussed later in this chapter) in the DMZ is difficult and depends on the network requirements. IDSes placed in the DMZ tend to give more false positive results than those inside the private internal network, because of the nature of Internet traffic and the large number of script kiddies out there. To reduce the number of false positives, the administrator must perform IDS tuning, the process of adjusting the settings on an IDS so that it is more appropriately configured to recognize normal traffic patterns in the environment. This allows the system to better detect truly unusual traffic and raise fewer false-positive alerts. Still, placing an IDS on the DMZ can give administrators early warning of attacks taking place against their network resources.
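As a quick illustration of the scanning technique mentioned above, the following Python sketch tests whether a single TCP port (such as port 53 on a DMZ DNS server) accepts connections. The hostname in the example comment is a placeholder; a real assessment would use a full scanner such as nmap, and this only demonstrates the basic connect test.

```python
import socket

def is_tcp_port_open(host, port, timeout=3):
    """Return True if a full TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception
        return s.connect_ex((host, port)) == 0

# Example (hypothetical hostname): a DNS server answering on TCP 53 is
# worth checking for unrestricted zone transfers.
# if is_tcp_port_open("dns.example.com", 53):
#     print("TCP 53 open - verify zone transfers are restricted")
```

Seeing TCP 53 open is not itself a vulnerability, but it is the signal that prompts an attacker, or an auditor, to attempt the zone transfer.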
The rise of e-commerce and the increased demand for online transactions have increased the need for secure architectures and well-designed DMZs. E-commerce requires more attention to be paid to securing transaction information that flows between consumers and the sites they use, as well as between e-commerce businesses themselves. Customer names, addresses, order information, and especially financial data need greater care and handling to prevent unauthorized access. This greater care is accomplished through the creation of the specialized segments mentioned earlier (which are similar to the DMZ) called security zones. Other items such as the use of encryption and the use of secure protocols like SSL and transport layer security (TLS) are also important when designing a more secure architecture.
Security requirements for storing customer information and financial data are different from the requirements for storing routine, less sensitive information that businesses handle. Because this data requires processing and much of the processing is done over the Internet, more complicated network structures must be created. Many organizations choose to implement a multiple segment structure to better manage and secure their different types of business information.
This multisegment approach allows flexibility, because new segments with specific purposes and security requirements can be easily added to the model. In general, the two segments that are widely accepted are as follows:
A segment dedicated to information storage
A segment specifically for the processing of business information
Each of these two new segments has special security and operability concerns above and beyond those of the rest of the organizational intranet. In reality, everything comes down to dollars: what it will cost to implement a security solution versus what it will cost if the system is breached by attackers. Thus, the value of raw data is different from the value of the financial processing system. Each possible solution has its pluses and minuses, but in the end a balance is struck between cost and expected results; hence the creation of different zones (segments) for different purposes. Note that in this example the Web and e-mail servers would likely receive the least amount of spending and security measures. This is not to say that they will be completely ignored; they just would not receive as much as the financial servers might.
Creation of multiple segments changes a network structure to look like the drawing in Figure 6.12.
Remember that by adding additional zones you are also adding additional overhead. In this scenario all traffic must traverse firewall rules to move between zones. The diagram shown in Figure 6.12 includes the following two new zones:
The data storage network
The financial processing network
The data storage zone is used to hold information that the e-commerce application requires, such as inventory databases, pricing information, ordering details, and other nonfinancial data. The Web servers in the DMZ segment serve as the interface to the customers; they access the servers in the other two segments to gather the required information and to process the users’ requests.
FIGURE 6.12
A Modern e-commerce Implementation
When an order is placed, the business information in these databases is updated to reflect the real-time sales and orders of the public. These business-sensitive database systems are protected from the Internet by the firewall, and they are restricted from general access by most of the systems in the protected network. This helps to protect the database information from unauthorized access by an insider or from accidental modification by an inexperienced user.
TEST DAY TIP You will not need to know how an e-commerce DMZ is set up to pass the Security+ exam; however, it is important to know this information for real-world security work.
The financial information from an order is transferred to the financial processing segment. Here, the systems validate the customer’s information and then process the payment requests to a credit card company, a bank, or a transaction clearinghouse. After the information has been processed, it is stored in the database for batch transfer into the protected network, or it is transferred in real time, depending on the setup. The financial segment is also protected from the Internet by the firewall, as well as from all other segments in the setup. This system of processing the data in a location separate from the user interface creates another layer that an attacker must penetrate to gather financial information about customers. In addition, the firewall protects the financial systems from access by all but specifically authorized users inside a company.
Access controls also regulate the way network communications are initiated. For example, if a financial network system can process credit information in a store-and-forward mode, it can batch those details for retrieval by a system from the protected network. To manage this situation, the firewall permits only systems from the protected network to initiate connections with the financial segment. This prevents an attacker from being able to directly access the protected network in the event of a compromise. On the other hand, if the financial system must use real-time transmissions or data from the computers on the protected network, the financial systems have to be able to initiate those communications. In this event, if a compromise occurs, the attacker can use the financial systems to attack the protected network through those same channels. It is always preferable that DMZ systems not initiate connections into more secure areas, but that systems with higher security requirements initiate those network connections. Keep this in mind as you design your network segments and the processes that drive your site.
TEST DAY TIP The phrase store-and-forward refers to a method of delivering transmissions in which the messages are temporarily held by an intermediary before being sent on to their final destination. Some switches and many e-mail servers use the store-and-forward method for data transfer.
EXAM WARNING DMZ design is covered on the Security+ exam. You must know the basics of DMZ placement and what components the DMZ divides.
In large installations, these segments may vary in placement, number, and/or implementation, but this serves to generally illustrate the ideas behind the process. An actual implementation may vary from this design. For example, an administrator may wish to place all the financial processing systems on the protected network. This is acceptable as long as the requisite security tools are in place to adequately secure the information. Other possible implementations include segmenting business information off an extension of the DMZ as well as discrete DMZ segments for development and testing. Specific technical requirements will impact actual deployment, so administrators may find that what they currently have in place on a network (or the need for a future solution) may deviate from the diagrams shown earlier. The bottom line is to ensure that systems are protected.
Some common problems do exist with multiple-zone networks. By their very nature they are complex to implement, protect, and manage. Firewall rule sets are often large, dynamic, and confusing, and the implementation can be arduous and resource intensive.
Creating and managing security controls such as firewall rules, IDS signatures, and user access regulations is a large task. These processes should be kept as simple as possible without compromising security or usability. It is best to start with a deny-all strategy and permit only the services and network transactions required to make the site function, then carefully monitor the site's performance, making small changes to the access controls so that the rule sets remain manageable. Using these guidelines, administrators should be able to quickly get the site up and running without creating obvious security holes in the systems.
EXAM WARNING The concept of a deny-all strategy will be covered on the Security+ exam. A deny-all strategy means that all services and ports are disabled by default, and then only the minimum level of service is activated as a valid business case is made for each service.
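A deny-all strategy can be pictured as a first-match rule list with an implicit deny at the end. The sketch below is purely illustrative; the permitted services are hypothetical and no particular firewall product's rule syntax is assumed.

```python
# A minimal first-match packet filter with an implicit deny-all default.
# Only the services the site needs are permitted; anything unmatched drops.
RULES = [
    # (protocol, destination port, action)
    ("tcp", 80,  "permit"),   # public Web
    ("tcp", 443, "permit"),   # TLS Web
    ("tcp", 53,  "permit"),   # DNS queries to the DMZ server
]

def evaluate(protocol, port):
    """Return the action for a packet; default-deny when no rule matches."""
    for rule_proto, rule_port, action in RULES:
        if rule_proto == protocol and rule_port == port:
            return action
    return "deny"  # deny-all: everything not explicitly permitted is dropped

print(evaluate("tcp", 80))   # permit
print(evaluate("tcp", 23))   # deny - telnet was never opened
print(evaluate("udp", 80))   # deny - wrong protocol, no match
```

The important property is the final return: a new service added to the site generates no traffic until an explicit permit rule is written for it.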
As a site grows and offers new features, new zones may have to be created. The abovementioned process should be repeated for creating the rule sets governing these new segments. As always, it is important to audit and inspect any changes and keep backups of the old rule sets in case they are needed again.
As long as services are hosted onsite and need to be accessible from the Internet or from other organizations, DMZs will continue to be designed and deployed.
EXAM WARNING Make sure that you know the definitions and the differences between a firewall and a DMZ.
A subnet is a group of computers that have been logically grouped together and assigned a common network address. Subnets can be arranged in many ways in the network environment, and the one thing to understand is that a machine's IP address dictates what subnet it is a member of. For a subnet to function appropriately, all machines on the same subnet must be connected via the same switch or hub backbone and share the same network prefix in their IP addresses. A group of machines on the same subnet can send network traffic among themselves in a single hop, while routers are used to pass network traffic between subnets and form the basis of subnet boundaries.
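Python's standard ipaddress module makes the shared-prefix relationship easy to see. The addresses below are illustrative only:

```python
import ipaddress

# Hosts belong to a subnet when their addresses fall within its prefix.
net = ipaddress.ip_network("192.168.5.0/24")

host_a = ipaddress.ip_address("192.168.5.20")
host_b = ipaddress.ip_address("192.168.5.200")
host_c = ipaddress.ip_address("192.168.6.20")

print(host_a in net, host_b in net)  # True True - one hop apart on the subnet
print(host_c in net)                 # False - a router must forward this traffic
```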
A VLAN can be thought of as the equivalent of a broadcast domain.
TEST DAY TIP A broadcast domain consists of a group of nodes (computers) that receive Layer 2 broadcasts sent by other members of the same group. Typically, broadcast domains are separated by creating additional network segments or by adding a router.
Do not confuse broadcast domains with collision domains. Collision domains refer specifically to Ethernet networks. The area of network cabling between Layer 2 devices is known as a collision domain. Layer 2 devices typically include switches that rely on the physical address (MAC address) of computers to route traffic.
VLANs are a way to segment a network, as discussed earlier. When thinking of a VLAN, think of taking a switch and physically cutting it into two or more pieces with an axe. Special software features found in newer, more expensive switches allow administrators to logically split one physical switch into multiple virtual switches, thus creating multiple network segments that are completely separate from one another.
The VLAN is thus a logical local area network that uses a basis other than a physical location to map the computers that belong to each separate VLAN (for example, each department within a company could comprise a separate VLAN, regardless of whether or not the department’s users are located in physical proximity). This allows administrators to manage these virtual networks individually for security and ease of configuration.
Let’s look at an example of using VLANs. There is an Engineering section consisting of 14 computers and a Research section consisting of 8 computers, all on the same physical subnet. Users typically communicate only with other systems within their respective sections. Both sections share the use of one Cisco Catalyst 2950 switch. To diminish the size of the necessary broadcast domain for each section, the administrator can create two VLANs, one for the Engineering section and one for the Research section. After creating the two VLANs, all broadcast traffic for each section will be isolated to its respective VLAN. But what happens when a node in the Engineering section needs to communicate with a node in the Research section? Do the two systems connect from within the Catalyst 2950 switch? No; this cannot occur because the two sections have been set up on two different VLANs. For traffic to be passed between VLANs (even when they are on the same switch) a router must be used.
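The forwarding decision in that example comes down to a subnet test: a host delivers directly to destinations on its own subnet and hands everything else to the router. A minimal sketch, using made-up VLAN subnets and a made-up gateway address:

```python
import ipaddress

def next_hop(source_net, destination, gateway):
    """Deliver directly when the destination is on our subnet; otherwise
    hand the packet to the router, as inter-VLAN traffic must be."""
    if ipaddress.ip_address(destination) in ipaddress.ip_network(source_net):
        return destination   # same VLAN/subnet: delivered in one hop
    return gateway           # different VLAN: the router forwards it

engineering_net = "10.1.10.0/24"   # illustrative Engineering VLAN subnet
gateway = "10.1.10.1"              # illustrative router interface

print(next_hop(engineering_net, "10.1.10.55", gateway))  # local delivery
print(next_hop(engineering_net, "10.1.20.7", gateway))   # sent via the router
```

This is why the Research node is unreachable from within the switch itself: the Engineering host's own routing logic sends cross-VLAN traffic to a router, and without one the packet has nowhere to go.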
Figure 6.13 graphically depicts the previous example of splitting one switch into two VLANs. Note that two switches can also be split into two VLANs or more, depending on the need. The following example shows how to split two switches into multiple VLANs with each VLAN acting as its own physically separated network segment. In reality, many more VLANs can be created; they are only limited by port density (the number of ports on a switch) and the feature set of the switch’s software.
FIGURE 6.13
Using VLANs to Segment Network Traffic
Each VLAN functions like a separate network due to the combination of hardware and software features built into the switch itself. Thus, the switch must be capable of supporting VLANs to use them. The following are typical characteristics of VLANs when implemented on a network:
Each VLAN is the logical equivalent of a physically separate network as far as traffic is concerned.
A VLAN can span multiple switches, limited only by imagination and the capabilities of the switches being used.
Trunks carry the traffic between each switch that is part of a VLAN. A trunk is defined as a point-to-point link from one switch to another switch. The purpose of a trunk is to carry the traffic of multiple VLANs over a single link.
Cisco switches, for example, use the Cisco-proprietary Inter-Switch Link (ISL) protocol and the standards-based IEEE 802.1Q protocol as their trunking protocols.
EXAM WARNING Know that VLANs implement security at the switch level. Hosts on different VLANs cannot communicate with one another unless traffic is explicitly routed between the VLANs, so VLAN separation can be used to secure communications between groups of hosts.
A complete description of VLANs, which is beyond the scope of the Security+ exam, can be found at www.ciscopress.com/articles/article.asp?p=29803&rl=1. The IEEE 802.1Q standard can be downloaded at www.ieee802.org/1/pages/802.1Q.html.
NAT was developed because of the explosive growth of the Internet and the increase in home and business networks—the number of available IP addresses was simply not enough. A computer must have an IP address to communicate with other computers on the Internet. NAT allows a single device, such as a router, to act as an agent between the Internet and the local network. This device or router provides a pool of addresses to be used by your local network. Only a single, unique IP address is required to represent this entire group of computers. The outside world is unaware of this division and thinks that only one computer is connected. Common types of NAT include:
Static NAT Used by businesses to connect Web servers to the Internet
Dynamic NAT Larger businesses use this type of NAT because it can operate with a pool of public addresses.
Port Address Translation (PAT) Most home networks using DSL or cable modems use this type of NAT.
NAT is a feature of many routers, firewalls, and proxies. NAT has several benefits, one of which is its ability to hide the IP addresses and network design of the internal network. The ability to hide the internal network from the Internet reduces the risk of intruders gleaning information about the network and exploiting that information to gain access. If an intruder does not know the structure of a network, the network layout, the names and IP addresses of systems, and so on, it is very difficult to gain access to that network. NAT enables internal clients to use nonroutable IP addresses, such as the private IP addresses defined in RFC 1918, but still enables them to access Internet resources. The three ranges of IP addresses that RFC 1918 reserves are:
10.0.0.0 - 10.255.255.255 (10/8 prefix)
172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
192.168.0.0 - 192.168.255.255 (192.168/16 prefix)
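A quick way to test an address against those three ranges is with Python's ipaddress module. The sample addresses below are arbitrary:

```python
import ipaddress

# The three private address blocks reserved by RFC 1918
RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(address):
    """Return True if the address falls in any RFC 1918 private range."""
    ip = ipaddress.ip_address(address)
    return any(ip in net for net in RFC1918)

print(is_rfc1918("10.27.3.56"))    # True
print(is_rfc1918("172.31.9.1"))    # True  - inside 172.16/12
print(is_rfc1918("172.32.0.1"))    # False - just outside 172.16/12
print(is_rfc1918("129.12.14.2"))   # False - a public address
```

Note that the 172.16/12 block is the one most often misjudged by eye; the /12 prefix ends at 172.31.255.255, not 172.16.255.255.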
NAT can be used when there are many internal private IP addresses and there are only a few public IP addresses available to the organization. In this situation, the company can share the few public IP addresses among all the internal clients. NAT can also aid in security as outsiders cannot directly see internal IP addresses. Finally, NAT restricts traffic flow so that only traffic requested or initiated by an internal client can cross the NAT system from external networks.
When using NAT, the internal addresses are reassigned to private IP addresses and the internal network is identified on the NAT host system. Once NAT is configured, external malicious users are only able to access the IP address of the NAT host that is directly connected to the Internet, but they are not able to “see” any of the internal computers that go through the NAT host to access the Internet.
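To make the translation step concrete, here is a toy model of the table a PAT device keeps, mapping each outbound internal connection to a port on its single public address. This is an illustration of the concept only, not any real NAT implementation; the addresses and the port pool are made up.

```python
import itertools

class PortAddressTranslator:
    """Toy PAT: many private hosts share one public address, tracked in a
    translation table keyed by the assigned public-side port."""
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.table = {}                      # public port -> (private ip, port)
        self.ports = itertools.count(40000)  # arbitrary high-port pool

    def outbound(self, private_ip, private_port):
        """Record an internal connection; return what the Internet sees."""
        public_port = next(self.ports)
        self.table[public_port] = (private_ip, private_port)
        return self.public_ip, public_port

    def inbound(self, public_port):
        """Map return traffic back to the originating host. Unsolicited
        packets have no table entry and are dropped (None)."""
        return self.table.get(public_port)

nat = PortAddressTranslator("203.0.113.10")   # documentation-range public IP
src = nat.outbound("192.168.5.20", 51515)
print(src)                # ('203.0.113.10', 40000) - internal host is hidden
print(nat.inbound(src[1]))  # ('192.168.5.20', 51515) - reply finds its way back
print(nat.inbound(49999))   # None - no internal host ever asked for this
```

The last line is the security property described above: an outside host cannot reach an internal computer unless that computer initiated the conversation.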
Damage and Defense
Deploying a NAT Solution
NAT is relatively easy to implement, and there are several ways to do so. Many broadband hardware devices (cable and DSL modems) are called cable/DSL "routers," because they allow you to connect multiple computers. However, they are actually combination modem/NAT devices rather than routers, because they require only one external (public) IP address. You can also buy NAT devices that attach your basic cable or DSL modem to the internal network. Alternatively, the computer that is directly connected to a broadband modem can use NAT software to act as the NAT device itself. This can be an add-on software program or the NAT software that is built into some OSes. For example, Windows XP and Vista include a fully configurable NAT as part of their routing and remote access services. Even older versions of Microsoft products such as Windows 98SE, Me, and 2000 Professional include a "lite" version of NAT called Internet connection sharing (ICS).
For a quick, illustrated explanation of how NAT works with a broadband connection, see the HomeNetHelp article at www.homenethelp.com/web/explain/about-NAT.asp.
FIGURE 6.14
NAT Hides the Internal Addresses
When NAT is used to hide internal IP addresses (see Figure 6.14), it is sometimes called a NAT firewall; however, do not let the word firewall give you a false sense of security. NAT by itself solves only one piece of the security perimeter puzzle. A true firewall does much more than link private IP addresses to public ones, and vice versa.
Head of the Class
Public and Private Addressing
Certain IP address ranges are classified as private IP addresses, meaning they are not to be routed on the Internet. These addresses are intended only for use on private internal networks. There are three groups of private IP addresses under the IPv4 standard as outlined here:
10.0.0.0 - 10.255.255.255 (10/8 prefix)
172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
192.168.0.0 - 192.168.255.255 (192.168/16 prefix)
The network segment shown in Figure 6.11 uses private IP addresses on the internal network from the 192.168.5.x subnet. The allowable addresses in this subnet would then be 192.168.5.1 through 192.168.5.254. The 192.168.5.255 address is considered to be a broadcast address—one that would be used if a computer needed to send a transmission to all other computers on that subnet. Typically, the gateway or router will occupy the first address in a given range (as is the case in Figure 6.11), where the router has been assigned the address of 192.168.5.1 on its LAN interface.
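The broadcast, gateway, and usable-host addresses described above can be computed directly with Python's ipaddress module:

```python
import ipaddress

net = ipaddress.ip_network("192.168.5.0/24")
hosts = list(net.hosts())        # every assignable host address in the subnet

print(net.broadcast_address)     # 192.168.5.255
print(hosts[0])                  # 192.168.5.1  - conventionally the gateway
print(hosts[-1])                 # 192.168.5.254 - last assignable address
print(net.num_addresses - 2)     # 254 usable hosts (network and broadcast excluded)
```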
For a complete discussion on private IP addresses, see RFC 1918 at ftp://ftp.rfc-editor. org/in-notes/rfc1918.txt. The IANA maintains a current listing of all IPv4 IP address range assignments at www.iana.org/assignments/ipv4-address-space. You can also examine all of the special IPv4 IP address assignments at ftp://ftp.rfc-editor.org/in-notes/rfc3330.txt.
As seen in this chapter, hardening is an important process. Another way to harden the network is to use network access control (NAC). As a brief aside, there is a bit of semantics that needs to be dealt with. NAC is a technology and concept that has existed for several years. When Microsoft began to look at including a similar feature in Windows Vista and Windows Server 2008, they chose the term network access protection (NAP). The bottom line is that both NAC and NAP achieve the same goal: ensuring that the endpoint system is a valid system and meets specific health requirements (patches, anti-virus protection, system settings, and so forth) to be allowed on the network according to a defined policy. For the sake of this section we will use the term NAC in its most generic sense rather than based on a specific vendor's interpretation. There are several different incarnations of NAC available. These include infrastructure-based NAC, endpoint-based NAC, and hardware-based NAC.
1. Infrastructure-based NAC requires an organization to be running the most current hardware and OSes. OSes, such as Microsoft Vista, have the ability to perform NAC.
2. Endpoint-based NAC requires the installation of software agents on each network client. These devices are then managed by a centralized management console.
3. Hardware-based NAC requires the installation of a network appliance. The appliance monitors for specific behavior and can limit device connectivity should noncompliant activity be detected.
NAC offers administrators a way to verify that devices meet certain health standards before they’re allowed to connect to the network. Laptops, desktop computers, or any device that doesn’t comply with predefined requirements, can be prevented from joining the network or can even be relegated to a controlled network where access is restricted until the device is brought up to the required security standards.
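The admission decision can be sketched as a simple policy check. The requirements and field names below are invented for illustration; real NAC/NAP products use their own policy engines and health attributes.

```python
# Hypothetical health policy: every requirement must be met for full access.
REQUIRED_POLICY = {
    "antivirus_enabled": True,
    "firewall_enabled": True,
    "min_patch_level": 7,
}

def admission_decision(endpoint):
    """Grant full network access only when every health requirement is met;
    otherwise relegate the device to a restricted remediation network."""
    compliant = (
        endpoint.get("antivirus_enabled") == REQUIRED_POLICY["antivirus_enabled"]
        and endpoint.get("firewall_enabled") == REQUIRED_POLICY["firewall_enabled"]
        and endpoint.get("patch_level", 0) >= REQUIRED_POLICY["min_patch_level"]
    )
    return "grant" if compliant else "quarantine"

healthy = {"antivirus_enabled": True, "firewall_enabled": True, "patch_level": 9}
stale   = {"antivirus_enabled": True, "firewall_enabled": False, "patch_level": 3}
print(admission_decision(healthy))  # grant
print(admission_decision(stale))    # quarantine
```

The quarantine outcome corresponds to the restricted network mentioned above, where the device stays until it is brought up to the required security standards.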
One area that is often overlooked in the IT security field is telecommunications. A company’s business can be just as easily disrupted by having its telecommunications disabled as it can by having its computer network disabled. That makes this an important area to be aware of when developing an overall security plan.
Typically, most small companies use a small number of dedicated telephone lines for both incoming and outgoing calls, which keeps the responsibility of providing telephone service on the service provider. In larger companies, however, having dedicated lines for hundreds or thousands of employees is both inefficient and expensive.
The solution to this problem is to install a Private Branch eXchange (PBX), which is a device that handles routing of internal and external telephone lines. This allows a company to have a limited number of external lines and an unlimited (depending on the resources of the PBX) number of internal lines. By limiting the number of external lines, a company is able to control the cost of telephone service while still providing for the communications needs of its employees. For example, a company may have 200 internal lines or extensions but only 20 external lines. When an employee needs to communicate outside of the company, one of the external lines is used, but when two employees communicate via the telephone system, the routing is done completely by the PBX and no external lines are used.
PBX systems offer a great cost benefit to large companies, but they also have their own vulnerabilities. Many PBXs are designed to be maintained by an off-site vendor, and therefore have some method of remote access available. This can be in the form of a modem or, on newer models, a connection to a LAN. The best practice is to disable these remote access methods until the vendor has been notified that they need to perform maintenance or prepare an update. This limits the susceptibility to direct remote access attacks.
PBXes are also vulnerable to DoS attacks against their external phone lines. There is also the possibility of them being taken over remotely and used to make unauthorized phone calls via the company’s outgoing lines. Voicemail capability can also be abused. Hackers who specialize in telephone systems, called phreakers, like to take control over voicemail boxes that use simple passwords, and change the passwords or the outgoing messages.
Many smaller organizations are now using PBXes for telephony needs. This is due to the availability of cheap or free PBX systems running software released under the GPL license. An example of this is the Asterisk open source PBX available at www.asterisk.org/. With the high availability of this type of software at low costs, it is natural for smaller companies to adopt these solutions. Software like this suffers from the same types of vulnerabilities as standard PBXes if not properly configured; therefore it should be closely examined as a security risk.
In today’s networking world, networks no longer have to be designed the same way. There are many options available as to how to physically and logically design a network. All of these options can be used to increase the security of the internal network by keeping untrusted and unauthorized users out. The usage of DMZs to segment traffic into a protected zone between external and internal firewalls helps prevent attacks against your Internet facing servers.
A NAT device can be used to hide the private intranet from the public Internet. NAT devices work by translating all private IP addresses into one or more public IP addresses, therefore making it look as if all traffic from the internal network is coming from one computer (or a small group of computers). The NAT device maintains a routing table of all connection requests, and therefore is able to ensure that all returning packets get directed to the correct originating host. Extranets can be established using VPN tunnels to provide secure access to intranet resources from different geographic locations. VPNs are also used to allow remote network users to securely connect back to the corporate network.
To further reduce risk in your environment, application and service hardening should be considered. Be familiar with the ports required by various services, so that you can uninstall or disable unused services and reduce unnecessary exposure. Include evaluation of network services such as DNS and DHCP, and specific types of application services such as e-mail, databases, NNTP servers, and others.
IDSes are used to identify and respond to attacks on the network. Several types of IDSes exist, each with its own unique pros and cons. Which type you choose depends on your needs, and ultimately on your budget. An IPS is a newer type of system that can quickly respond to perceived attacks. Honeypots are decoy systems designed to entice attackers to select them over the real targets on the network. Honeypots can be used to distract attackers from real servers and keep them occupied while you collect information on the attack and its source.
After an attack has occurred, the most important thing to do is to collect all of the evidence of the attack and its methods. You will also want to take steps to ensure that the same type of attack cannot be successfully performed on the network in the future.
Eliminate unused and unnecessary protocols and services to limit exposure to attacks.
Create and build strong ACLs for control of devices and network operations.
Keep up with device-specific hotfixes, patches, and firmware upgrades to maintain high availability and security.
Intrusion Detection Systems can be deployed to alert administrators of unusual or suspicious activity on the network.
Honeypots and honeynets can be useful tools to redirect the attention of attacks to decoy systems to prevent damage to production components.
Firewalls can be deployed to segment the network and add additional security with firewall rules.
Follow best practices for hardening specific application-type servers such as e-mail, FTP, and Web servers.
Be aware of common network threats and take measures to prepare for them.
Application-specific fixes, patches, and updates are used in addition to OS and NOS fixes.
Create DMZs and establish security zones in your network design to isolate sensitive systems and traffic.
VLANs are virtual local area networks that are used to logically group machines that may not be on the same physical network.
NAT is a method used to map internal private IP addresses to external addresses, thus reducing the number of required external addresses.
Q: What protocols should I eliminate?
A: This depends on your system needs. Unnecessary protocols often include NetBEUI, IPX/SPX, and NetBIOS dependent functions. Do not forget to evaluate the underlying protocols, such as ICMP and IGMP, for removal as well.
Q: Is network security really important?
A: This depends on your environmental needs. In some circumstances security is highly regarded and a large amount of money and effort will be put into securing the environment. In other companies security is lower on the importance list and isn’t given as much consideration.
Q: What is NAT and why would I use it?
A: NAT is when you map internal IP addresses to external IP addresses. One benefit of utilizing NAT is that an organization can reduce its requirement for public IP addresses.
Q: What is a proxy server?
A: A proxy server is a device that sits between the Internet and the intranet and funnels traffic through it. It can provide access control and document caching. Depending on the implementation, a proxy server can cache Web page content, which makes browsing of common sites faster, and it can publish internal Web site content to the Internet.
Q: How do I find out which port numbers are used by a specific application?
A: One of the easiest ways is to consult product documentation when it is available, but other ways include examining listening ports on the machine, utilizing a packet sniffer to capture data transmitted by the application, and viewing the configuration information in the application.
1. Your company is considering implementing a VLAN. As you have studied for your Security+ exam, you have learned that VLANs offer certain security benefits because they can segment network traffic. The organization would like to set up three separate VLANs: one for management, one for manufacturing, and one for engineering. How would traffic move from the engineering VLAN to the management VLAN?
A. The traffic is passed directly as both VLANs are part of the same collision domain.
B. The traffic is passed directly as both VLANs are part of the same broadcast domain.
C. Traffic cannot move from the management to the engineering VLAN.
D. Traffic must be passed to the router and then back to the appropriate VLAN.
2. You have been asked to protect two Web servers from attack. You have also been tasked with making sure that the internal network is also secure. What type of design could be used to meet these goals while also protecting all of the organization?
A. Implement IPSec on the Web servers to provide encryption.
B. Create a DMZ and place the Web server in it while placing the intranet behind the internal firewall.
C. Place a honeypot on the internal network.
D. Remove the Cat 5 cabling and replace it with fiber-optic cabling.
3. You have been asked to put your Security+ certification skills to use by examining some network traffic. The traffic is from an internal host whose IP address falls into an RFC 1918 range, and you must identify the correct address. Which of the following should you choose?
A. 127.0.0.1
B. 10.27.3.56
C. 129.12.14.2
D. 224.0.12.10
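RFC 1918 reserves exactly three private ranges: 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. A short Python sketch that tests an address against those ranges (the helper name is ours, and note that loopback and multicast addresses are deliberately excluded because they are reserved by other RFCs):

```python
import ipaddress

# The three private address blocks reserved by RFC 1918
RFC1918_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(ip):
    """True only for addresses inside one of the three RFC 1918 ranges."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in RFC1918_NETS)
```

For example, `is_rfc1918("192.168.1.10")` is true, while `is_rfc1918("8.8.8.8")` is false; 127.0.0.0/8 (loopback) and 224.0.0.0/4 (multicast) also fall outside the RFC 1918 ranges.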
4. You have been running security scans against the DMZ Web server and have obtained the following results. The Web server is also the externally facing DNS server. How should these results be interpreted?
C:\>nmap -sT 192.168.1.2
Starting nmap V. 3.91
Interesting ports on (192.168.1.2):
(The 1,598 ports scanned but not shown below are in state: filtered)
Port      State    Service
53/tcp    open     DNS
80/tcp    open     http
111/tcp   open     sunrpc
Nmap run completed -- 1 IP address (1 host up) scanned in 409 seconds
A. Ports 80 and 53 are expected, but TCP port 111 should not be open
B. Ports 80 and 111 should not be open, but TCP port 53 should be open
C. UDP port 80 should be open to the DMZ
D. TCP port 25 should be open to the DMZ
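The `-sT` flag in the scan above performs a TCP connect scan: a full three-way handshake is attempted against each target port. A minimal Python sketch of the same technique (the host and port choices in any real use are up to you; this is an illustration, not a replacement for nmap):

```python
import socket

def tcp_connect_scan(host, ports, timeout=1.0):
    """Minimal TCP connect scan: try a full handshake on each port and
    report the ports that accept the connection as open."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the handshake succeeds (port open)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Because a connect scan completes the handshake, it is easy for the target to log; that is one reason stealthier scan types (such as SYN scans) exist.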
5. You have been asked to use an existing router and utilize it as a firewall. Management would like you to use it to perform address translation and block some known bad IP addresses that previous attacks have originated from. With this in mind, which of the following statements are accurate?
A. You have been asked to perform NAT services
B. You have been asked to set up a proxy
C. You have been asked to set up stateful inspection
D. You have been asked to set up a packet filter
6. Which security control can best be described by the following? Because normal user behavior can change easily and readily, this security control system is prone to false positives where attacks may be reported based on changes to the norm that are “normal,” rather than representing real attacks.
A. Anomaly-based IDS
B. Signature-based IDS
C. Honeypot
D. Honeynet
7. You have been asked to install a SQL database on the intranet and recommend ways to secure the data that will reside on this server. While traffic will be encrypted when it leaves the server, your company is concerned about potential attacks. With this in mind, which type of IDS should you recommend?
A. A network-based IDS with the sensor placed in the DMZ
B. A host-based IDS that is deployed on the SQL server
C. A network-based IDS with the sensor placed in the intranet
D. A host-based IDS that is deployed on a server in the DMZ
8. Your network is configured to use an IDS to monitor for attacks. The IDS is network-based and has several sensors located in the internal network and the DMZ. No alarm has sounded. You have been called in on a Friday night because someone is claiming their computer has been hacked. What can you surmise?
A. The misconfigured IDS recorded a positive event
B. The misconfigured IDS recorded a negative event
C. The misconfigured IDS recorded a false positive event
D. The misconfigured IDS recorded a false negative event
9. You have installed an IDS that is being used to actively match incoming packets against known attacks. Which of the following technologies is being used?
A. Stateful inspection
B. Protocol analysis
C. Anomaly detection
D. Pattern matching
10. You have been reading about the ways in which a network-based IDS can be attacked. Which of these methods would you describe as an attack in which the attacker delivers the payload across multiple packets over a long period of time?
A. Evasion
B. IP fragmentation
C. Session splicing
D. Session hijacking
11. You have been asked to explore what would be the best type of IDS to deploy at your company site. Your company is deploying a new program that will be used internally for data mining. The IDS will need to access the data mining application’s log files and needs to be able to identify many types of attacks or suspicious activity. Which of the following would be the best option?
A. Network-based IDS that is located in the internal network
B. Host-based IDS
C. Application-based IDS
D. Network-based IDS that has sensors in the DMZ
12. You are about to install WinDump on your Windows computer. Which of the following should be the first item you install?
A. LibPcap
B. WinPcap
C. IDSCenter
D. A honeynet
13. You must choose what type of IDS to recommend to your company. You need an IDS that can be used to look into packets to determine their composition. What type of signature type do you require?
A. File-based
B. Context-based
C. Content-based
D. Active
14. You have decided to implement split horizon DNS. You install two instances of DNS, and place one in the DMZ and one in the LAN. Which of these two DNS servers will become authoritative for your domain namespace?
A. Both the DMZ- and the LAN-based servers will be authoritative for your domain namespace
B. Only the LAN-based DNS
C. Only the DMZ-based DNS
D. Neither, the ISP is the only one who can be authoritative for a domain namespace
15. One of your servers has a host-based IDS installation in place. The system has been generating many false positives and you would like to examine the network traffic that is going to and from the server. Which of the following tools is going to be able to successfully capture this data off the wire for you to analyze?
A. A protocol analyzer
B. An IDS snuffler
C. An NIDS system
D. A protocol stealer
1. D
2. B
3. B
4. A
5. D
6. A
7. B
8. D
9. D
10. C
11. C
12. B
13. C
14. A
15. A