Chapter 6
IN THIS CHAPTER
Designing secure networks
Working with secure network components
Securing network communications
Understanding network attacks and countermeasures
The Communication and Network Security domain requires a thorough understanding of network fundamentals, secure network design, concepts of network operation, networking technologies and network management techniques. This domain represents 14 percent of the CISSP certification exam.
A solid understanding of networking concepts and fundamentals is essential for creating a secure network architecture. This requires knowledge of network topologies, IP addressing, various networking protocols (including multilayer and converged protocols), wireless networks, communication security, and new and evolving networking trends, such as software-defined networks, micro-segmentation, and cloud computing.
Data networks are commonly classified as local area networks (LANs) and wide area networks (WANs). Although these are basic classifications, you should understand the fundamental distinctions between these two types of networks.
A local area network (LAN) is a data network that operates across a relatively small geographic area, such as a single building or floor. A LAN connects workstations, servers, printers, and other devices so that network resources, such as files and email, can be shared. Key characteristics of LANs include the following:
A wide area network (WAN) connects multiple LANs and other WANs by using telecommunications devices and facilities to form an internetwork. Key characteristics of WANs include the following:
Examples of WANs include
The OSI and TCP/IP models define standard protocols for network communication and interoperability by using a layered approach. This approach divides complex networking issues into simpler functional components that help the understanding, design, and development of networking solutions and provides the following specific advantages:
In 1984, the International Organization for Standardization (ISO) adopted the Open Systems Interconnection (OSI) Reference Model (or simply, the OSI model) to facilitate interoperability between network devices independent of the manufacturer.
The OSI model consists of seven distinct layers that describe how data is communicated between systems and applications on a computer network, as shown in Figure 6-1. These layers include
In the OSI model, data is passed from the highest layer (Application; Layer 7) downward through each layer to the lowest layer (Physical; Layer 1), and is then transmitted across the network medium to the destination node, where it’s passed upward from the lowest layer to the highest layer. Each layer communicates only with the layer immediately above and below it (adjacent layers). This communication is achieved through a process known as data encapsulation. Data encapsulation wraps protocol information from the layer immediately above in the data section of the layer immediately below. Figure 6-2 illustrates this process.
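The encapsulation idea can be illustrated with a deliberately simplified sketch; the header contents below are placeholders, not real protocol headers:

```python
# Conceptual illustration of data encapsulation: each lower layer wraps the
# unit of data handed down from the layer above. Header contents are fake.
def encapsulate(app_data: bytes) -> bytes:
    segment = b"[TCP header]" + app_data                 # Layer 4: segment
    packet = b"[IP header]" + segment                    # Layer 3: packet
    frame = b"[Ethernet header]" + packet + b"[FCS]"     # Layer 2: frame (header plus trailer)
    return frame                                         # Layer 1 transmits the frame as bits

print(encapsulate(b"GET / HTTP/1.1"))
```

On the receiving side the process runs in reverse: each layer strips its own header (decapsulation) and passes the remaining data up the stack.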
The Application Layer (Layer 7) is the highest layer of the OSI model. It supports the components that deal with the communication aspects of an application that requires network access, and it provides an interface to the user. So, both the Application Layer and the end-user interact directly with the application.
The Application Layer is responsible for the following:
Don’t confuse the Application Layer with software applications such as Microsoft Word or Excel. Applications that function at the Application Layer include
Secure HyperText Transfer Protocol (S-HTTP): S-HTTP is an Internet protocol that provides a method for secure communications with a web server. S-HTTP is a connectionless protocol that encapsulates data after security properties for the session have been successfully negotiated.
Do not confuse HTTPS and S-HTTP. They are two distinct protocols. For example, HTTPS encrypts an entire communications session and is commonly used in VPNs, whereas S-HTTP encrypts individual messages between a client and server pair.
The Presentation Layer (Layer 6) provides coding and conversion functions that are applied to data being presented to the Application Layer (Layer 7). These functions ensure that data sent from the Application Layer of one system are compatible with the Application Layer of the receiving system.
Tasks associated with this layer include
Some examples of Presentation Layer protocols include
The Session Layer (Layer 5) establishes, coordinates, and terminates communication sessions (service requests and service responses) between networked systems.
A communication session is divided into three distinct phases:
Some examples of Session Layer protocols include
Secure Shell (SSH and SSH-2): SSH provides a secure alternative to Telnet (discussed in the section “Application Layer (Layer 7)” earlier in this chapter) for remote access. SSH establishes an encrypted tunnel between the client and the server and can also authenticate the client to the server, protecting both the confidentiality and integrity of network communications. SSH version 1 is still widely used but has inherent vulnerabilities that are easily exploited; SSH-2 should be used instead.
SSH-2 (or simply SSH) is an Internet security application that provides secure remote access.
The Transport Layer (Layer 4) provides transparent, reliable data transport and end-to-end transmission control. The Transport Layer hides the details of the lower layer functions from the upper layers.
Specific Transport Layer functions include
Several important protocols defined at the Transport Layer include
Transmission Control Protocol (TCP): A full-duplex (capable of simultaneous transmission and reception), connection-oriented protocol that provides reliable delivery of packets across a network. A connection-oriented protocol requires a direct connection between two communicating devices before any data transfer occurs. In TCP, this connection is accomplished via a three-way handshake. The receiving device acknowledges packets, and packets are retransmitted if an error occurs. The following characteristics and features are associated with TCP:
TCP is a connection-oriented protocol.
A three-way handshake is the method used to establish a TCP connection. A PC attempting to establish a connection with a server initiates the connection by sending a TCP SYN (Synchronize) packet. This is the first part of the handshake. In the second part of the handshake, the server replies to the PC with a SYN ACK packet (Synchronize Acknowledgement). Finally, the PC completes the handshake by sending an ACK or SYN-ACK-ACK packet, acknowledging the server’s acknowledgement, and the data communications commence.
A socket is a logical endpoint on a system or device used to communicate over a network to another system or device (or even on the same device). A socket usually is expressed as an IP address and port number, such as 192.168.100.2:25.
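To see these ideas in practice, the short sketch below opens a TCP connection and prints both socket endpoints. The hostname and port are illustrative, and the operating system carries out the three-way handshake automatically:

```python
import socket

# The operating system performs the SYN / SYN-ACK / ACK exchange inside
# create_connection(); the application then sees an established byte stream.
with socket.create_connection(("example.com", 80), timeout=5) as conn:
    print("local socket :", conn.getsockname())    # local address and port
    print("remote socket:", conn.getpeername())    # remote address and port
```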
User Datagram Protocol (UDP): A connectionless protocol that provides fast best-effort delivery of datagrams across a network. A connectionless protocol doesn’t guarantee delivery of transmitted packets (datagrams) and is thus considered unreliable. It doesn’t
Perform error checking or recovery.
A datagram is a self-contained unit of data that is capable of being routed between a source and a destination. Similar to a packet, which is used in the Internet Protocol (IP), datagrams are commonly used in UDP and other protocols.
The term Protocol Data Unit (PDU) describes the unit of data used at a particular layer of a protocol. For instance, in the OSI model, the Layer 1 PDU is a bit, the Layer 2 PDU is a frame, the Layer 3 PDU is a packet, the Layer 4 PDU is a segment (TCP) or datagram (UDP), and the PDU at Layers 5 through 7 is simply referred to as data.
UDP is ideally suited for data that requires fast delivery, as long as that data isn’t sensitive to packet loss and doesn’t need to be fragmented. Examples of applications that use UDP include Domain Name System (DNS), Simple Network Management Protocol (SNMP), and streaming audio or video. The following characteristics and features are associated with UDP:
Jitter in streaming audio and video is caused by variations in the delay of received packets, which is a negative characteristic of UDP.
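By contrast with the TCP example shown earlier, sending a UDP datagram requires no connection setup at all, as this minimal sketch shows (the destination address and port are placeholders):

```python
import socket

# Connectionless, best-effort delivery: the datagram is sent without any
# handshake, acknowledgement, or retransmission.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"status ping", ("192.0.2.10", 5005))
sock.close()
```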
Sequenced Packet Exchange (SPX): The protocol used to guarantee data delivery in older Novell NetWare IPX/SPX networks. SPX sequences transmitted packets, reassembles received packets, confirms all packets are received, and requests retransmission of packets that aren’t received. SPX is to IPX as TCP is to IP, though it might be confusing because the order is stated as IPX/SPX, rather than SPX/IPX (as in TCP/IP): SPX and TCP are Layer 4 protocols, and IPX and IP are Layer 3 protocols. Just think of it as yang and yin, rather than yin and yang!
Several examples of connection-oriented and connectionless protocols are identified in Table 6-1.

TABLE 6-1 Connection-Oriented and Connectionless Protocols

| Protocol | Layer | Type |
|---|---|---|
| TCP (Transmission Control Protocol) | 4 (Transport) | Connection-oriented |
| UDP (User Datagram Protocol) | 4 (Transport) | Connectionless |
| IP (Internet Protocol) | 3 (Network) | Connectionless |
| ICMP (Internet Control Message Protocol) | 3 (Network) | Connectionless |
| IPX (Internetwork Packet Exchange) | 3 (Network) | Connectionless |
| SPX (Sequenced Packet Exchange) | 4 (Transport) | Connection-oriented |
The Network Layer (Layer 3) provides routing and related functions that enable data to be transported between systems on the same network or on interconnected networks (or internetworks). Routing protocols, such as the Routing Information Protocol (RIP), Open Shortest Path First (OSPF), and Border Gateway Protocol (BGP), are defined at this layer. Logical addressing of devices on the network is accomplished at this layer by using routed protocols, including the Internet Protocol (IP) and Internetwork Packet Exchange (IPX).
Routing protocols are defined at the Network Layer and specify how routers communicate with one another on a WAN. Routing protocols are classified as static or dynamic.
A static routing protocol requires an administrator to create and update routes manually on the router. If the route is down, the network is down. The router can’t reroute traffic dynamically to an alternate destination (unless a different route is specified manually). Also, if a given route is congested, but an alternate route is available and relatively fast, the router with static routes can’t route data dynamically over the faster route. Static routing is practical only in very small networks or for very limited, special-case routing scenarios (for example, a destination that’s reachable only via a single router). Despite the limitations of static routing, it has a few advantages, such as low bandwidth requirements (routing information isn’t broadcast across the network) and some built-in security (users can only get to destinations that are specified in the routing table).
A dynamic routing protocol can discover routes and determine the best route to a given destination at any given time. The routing table is periodically updated with current routing information. Dynamic routing protocols are further classified as link-state and distance-vector (for intra-domain routing) and path-vector (for inter-domain routing) protocols.
A distance-vector protocol makes routing decisions based on two factors: the distance (hop count or other metric) and vector (the egress router interface). It periodically informs its peers and/or neighbors of topology changes. Convergence, the time it takes for all routers in a network to update their routing tables with the most current information (such as link status changes), can be a significant problem for distance-vector protocols. Without convergence, some routers in a network may be unaware of topology changes, causing the router to send traffic to an invalid destination. During convergence, routing information is exchanged between routers, and the network slows down considerably.
Routing Information Protocol (RIP) is a distance-vector routing protocol that uses hop count as its routing metric. In order to prevent routing loops, in which packets effectively get stuck bouncing between various router nodes, RIP implements a hop limit of 15, which significantly limits the size of networks that RIP can support. After a data packet crosses 15 router nodes (hops) between a source and a destination, the destination is considered unreachable. In addition to hop limits, RIP employs three other mechanisms to prevent routing loops:
RIP uses UDP (port 520) as its transport protocol and is therefore connectionless. Other disadvantages of RIP include slow convergence and insufficient security (RIPv1 has no authentication, and RIPv2 transmits passwords in cleartext). RIP is a legacy protocol, but despite its limitations it's still in widespread use on networks today because of its simplicity.
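The core of any distance-vector protocol is a simple update rule: when a neighbor advertises a route, the router adds the cost of reaching that neighbor (one hop, in RIP's case) and keeps the route only if it is better than what it already knows. The sketch below is a simplified illustration of that rule, not an implementation of RIP itself:

```python
INFINITY = 16  # RIP treats a hop count of 16 as "unreachable"

def merge_advertisement(my_table, neighbor, advertised):
    """Both tables map destination network -> (hop count, next hop)."""
    for dest, (hops, _) in advertised.items():
        new_hops = min(hops + 1, INFINITY)           # distance: one extra hop via the neighbor
        current_hops, _ = my_table.get(dest, (INFINITY, None))
        if new_hops < current_hops:
            my_table[dest] = (new_hops, neighbor)    # vector: forward via that neighbor

# Router A learns a route to 10.0.2.0/24 from neighbor B, which is 1 hop from it.
table_a = {"10.0.1.0/24": (0, "direct")}
merge_advertisement(table_a, "B", {"10.0.2.0/24": (1, "C")})
print(table_a)   # {'10.0.1.0/24': (0, 'direct'), '10.0.2.0/24': (2, 'B')}
```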
A link-state protocol requires every router to calculate and maintain a complete map, or routing table, of the entire network. Routers that use a link-state protocol periodically transmit updates that contain information about adjacent connections (these are called link states) to all other routers in the network. Link-state protocols are computation-intensive but can calculate the most efficient route to a destination, taking into account numerous factors such as link speed, delay, load, reliability, and cost (an arbitrarily assigned weight or metric). Convergence occurs very rapidly (within seconds) with link-state protocols; distance-vector protocols usually take longer (several minutes, or even hours in very large networks). Two examples of link-state routing protocols are:
A path-vector protocol is similar in concept to a distance-vector protocol, but without the scalability issues associated with limited hop counts. Border Gateway Protocol (BGP) is an example of a path-vector protocol.
BGP is a path-vector routing protocol used between separate autonomous systems (ASs). It’s considered an Exterior Gateway Protocol (EGP) because it performs routing between separate autonomous systems. It’s the core protocol used by Internet service providers (ISPs), network service providers (NSPs), and on very large private IP networks. When BGP runs between autonomous systems (such as between ISPs), it’s called external BGP (eBGP). When BGP runs within an AS (such as on a private IP network), it’s called internal BGP (iBGP).
Routed protocols are Network Layer protocols, such as Internetwork Packet Exchange (IPX) and Internet Protocol (IP), that address packets with routing information and allow those packets to be transported across networks using routing protocols (discussed in the preceding section).
Internetwork Packet Exchange (IPX) is a connectionless protocol used primarily in older Novell NetWare networks for routing packets across the network. It’s part of the IPX/SPX (Internetwork Packet Exchange/Sequenced Packet Exchange) protocol suite, which is analogous to the TCP/IP suite.
Internet Protocol (IP) contains addressing information that enables packets to be routed. IP is part of the TCP/IP (Transmission Control Protocol/Internet Protocol) suite, which is the language of the Internet. IP has two primary responsibilities:
IP Version 4 (IPv4), which is currently the most commonly used version, uses a 32-bit logical IP address that’s divided into four 8-bit sections (octets) and consists of two main parts: the network number and the host number. The first four bits in an octet are known as the high-order bits and the last four bits in an octet are known as the low-order bits. The first bit in the octet is referred to as the most significant bit, and the last bit in the octet is referred to as the least significant bit. Each bit position represents its value (see Table 6-2) if the bit is “on” (1); otherwise, its value is zero (“off” or 0).
TABLE 6-2 Bit Position Values in an IPv4 Address
| High-Order Bits |  |  |  | Low-Order Bits |  |  |  |
|---|---|---|---|---|---|---|---|
| Most significant bit |  |  |  |  |  |  | Least significant bit |
| 128 | 64 | 32 | 16 | 8 | 4 | 2 | 1 |
Each octet contains an 8-bit number with a value of 0 to 255. Table 6-3 shows a partial list of octet values in binary notation.
TABLE 6-3 Binary Notation of Octet Values
| Decimal | Binary | Decimal | Binary | Decimal | Binary |
|---|---|---|---|---|---|
| 255 | 1111 1111 | 200 | 1100 1000 | 9 | 0000 1001 |
| 254 | 1111 1110 | 180 | 1011 0100 | 8 | 0000 1000 |
| 253 | 1111 1101 | 160 | 1010 0000 | 7 | 0000 0111 |
| 252 | 1111 1100 | 140 | 1000 1100 | 6 | 0000 0110 |
| 251 | 1111 1011 | 120 | 0111 1000 | 5 | 0000 0101 |
| 250 | 1111 1010 | 100 | 0110 0100 | 4 | 0000 0100 |
| 249 | 1111 1001 | 80 | 0101 0000 | 3 | 0000 0011 |
| 248 | 1111 1000 | 60 | 0011 1100 | 2 | 0000 0010 |
| 247 | 1111 0111 | 40 | 0010 1000 | 1 | 0000 0001 |
| 246 | 1111 0110 | 20 | 0001 0100 | 0 | 0000 0000 |
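A few rows of Table 6-3 can be reproduced with Python's binary formatting, which is a quick way to check your own conversions:

```python
# Convert decimal octet values to 8-bit binary strings.
for value in (255, 200, 120, 9, 0):
    print(f"{value:>3} = {value:08b}")   # e.g. 200 = 11001000
```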
IPv4 addressing supports five different address classes, indicated by the high-order (leftmost) bits in the IP address, as listed in Table 6-4.
TABLE 6-4 IP Address Classes
| Class | Purpose | High-Order Bits | Address Range | Maximum Number of Hosts |
|---|---|---|---|---|
| A | Large networks | 0 | 1 to 126 | 16,777,214 (2²⁴ − 2) |
| B | Medium networks | 10 | 128 to 191 | 65,534 (2¹⁶ − 2) |
| C | Small networks | 110 | 192 to 223 | 254 (2⁸ − 2) |
| D | Multicast | 1110 | 224 to 239 | N/A |
| E | Experimental | 1111 | 240 to 254 | N/A |
Several IPv4 address ranges are also reserved for use in private networks, including:
These addresses aren’t routable on the Internet and are thus often implemented behind firewalls and gateways by using Network Address Translation (NAT) to conserve IP addresses, mask the network architecture, and enhance security. NAT translates private, non-routable addresses on internal network devices to registered IP addresses when communication across the Internet is required. The widespread use of NAT and private network addresses somewhat delayed the inevitable depletion of the IPv4 address space, which is limited to approximately 4.3 billion addresses by its 32-bit format (2³² = 4,294,967,296 possible addresses). But the thing about inevitability is that it’s, well, inevitable. Factors such as the proliferation of mobile devices worldwide, always-on Internet connections, inefficient use of assigned IPv4 addresses, and the spectacular miscalculation of IBM’s Thomas Watson — who, in 1943, predicted that there would be a worldwide market for “maybe five computers” (he was no Nostradamus) — have led to the depletion of IPv4 addresses.
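As a quick check of whether a given address falls within one of these private ranges, Python's standard ipaddress module flags them directly (the sample addresses below are arbitrary):

```python
import ipaddress

for addr in ("10.1.2.3", "172.16.5.9", "192.168.100.2", "8.8.8.8"):
    print(addr, ipaddress.ip_address(addr).is_private)
# The first three fall in RFC 1918 private ranges (True); 8.8.8.8 is publicly routable (False).
```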
The Asia-Pacific Network Information Centre (APNIC) was the first regional Internet registry to run out of IPv4 addresses, on April 15, 2011. The Réseaux IP Européens Network Coordination Centre (RIPE NCC), the regional Internet registry for Europe, exhausted its pool of IPv4 addresses on September 14, 2012, followed by the Latin America and Caribbean Network Information Centre (LACNIC) on June 10, 2014, and the American Registry for Internet Numbers (ARIN) on September 24, 2015. The African Network Information Centre (AFRINIC) was expected to deplete its IPv4 pools in 2018.
In 1998, the IETF formally defined the IP Version 6 (IPv6) specification as the replacement for IPv4. IPv6 uses a 128-bit IP address (versus 32 bits for IPv4), written in hexadecimal, and incorporates additional functionality to provide security, multimedia support, plug-and-play compatibility, and backward compatibility with IPv4. The main reason for developing IPv6 was to provide infinitely more network addresses than are available with IPv4. Okay, it’s not infinite, but it is ginormous — 2¹²⁸ or approximately 3.4 × 10³⁸ (that’s 340 undecillion) unique addresses!
IPv6 addresses consist of 32 hexadecimal digits grouped into eight blocks (sometimes referred to as hextets) of four hexadecimal digits each, separated by colons. Remember: A hexadecimal digit is represented by 4 bits (see Table 6-5), so each hextet is 16 bits (four 4-bit hexadecimal digits) and eight 16-bit hextets equal 128 bits. An IPv6 address is further divided into two 64-bit segments: The first (also referred to as the “top” or “upper”) 64 bits represent the network part of the address, and the last (also referred to as the “bottom” or “lower”) 64 bits represent the node or interface part of the address. The network part is further subdivided into a 48-bit global network address and a 16-bit subnet. The node or interface part of the address is based on the IEEE Extended Unique Identifier (EUI-64) format, derived from the physical or media access control (MAC) address (discussed in the “Data Link Layer (Layer 2)” section later in this chapter) of the node or interface.
TABLE 6-5 Decimal, Hexadecimal, and Binary Notation
| Decimal | Hexadecimal | Binary |
|---|---|---|
| 0 | 0 | 0000 |
| 1 | 1 | 0001 |
| 2 | 2 | 0010 |
| 3 | 3 | 0011 |
| 4 | 4 | 0100 |
| 5 | 5 | 0101 |
| 6 | 6 | 0110 |
| 7 | 7 | 0111 |
| 8 | 8 | 1000 |
| 9 | 9 | 1001 |
| 10 | A | 1010 |
| 11 | B | 1011 |
| 12 | C | 1100 |
| 13 | D | 1101 |
| 14 | E | 1110 |
| 15 | F | 1111 |
The basic format for an IPv6 address is:
xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
where x represents a hexadecimal digit (0–f).
The following is an example of an IPv6 address:
2001:0db8:0000:0000:0008:0800:200c:417a
There are several rules the IETF has defined to shorten an IPv6 address:
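As a quick illustration of these shortening rules, Python's standard ipaddress module can compress and expand the example address shown earlier:

```python
import ipaddress

addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0008:0800:200c:417a")
print(addr.compressed)   # 2001:db8::8:800:200c:417a (leading zeros dropped; one run of zero hextets becomes ::)
print(addr.exploded)     # 2001:0db8:0000:0000:0008:0800:200c:417a
```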
Security features in IPv6 include network-layer security via Internet Protocol Security (IPsec) and requirements defined in Request For Comments (RFC) 7112 to prevent fragmentation exploits in IPv6 headers.
Multilayer protocols are groups of protocols that are purpose-built for some type of specialized communications need. Multilayer protocols have their own schemes for encapsulation, just like TCP/IP itself.
One good example of a multilayer protocol is DNP3 (Distributed Network Protocol), which is used in industrial control systems (ICS) and supervisory control and data acquisition (SCADA) networks. DNP3 has a layer 2 framing layer, a layer 4 transport layer, and a layer 7 application layer.
DNP3’s original design lacks security features, such as authentication and encryption. Recent updates to the standard have introduced security protocols. Without security features, relatively simple attacks (such as eavesdropping, spoofing, and perhaps denial of service) can be easily carried out on specialized multiprotocol networks.
The term converged protocols refers to an implementation of two or more protocols for a specific communications purpose. Some examples of converged protocols include
Software-defined networks, or SDN, represent the ability to create, configure, manage, secure, and monitor network elements rapidly and efficiently. SDN utilizes an open standards architecture that enables intelligent network functions, such as routing, switching, and load balancing (the overlay function), to be performed on virtual software that is installed on commodity network hardware (the physical underlay), similar to server virtualization. In SDN, network elements and network architectures are virtual; this enables organizations to quickly build and modify their networks and network elements.
Like other virtualization technologies, SDN requires policy, process, and discipline to manage it correctly, in order to avoid network sprawl (the phenomenon where undisciplined administrators bypass change control processes and unilaterally create virtual network elements).
Other protocols defined at the Network Layer include the Internet Control Message Protocol (ICMP) and Simple Key Management for Internet Protocols (SKIP).
ICMP is a Network Layer protocol that is used for network control and diagnostics. Commonly used ICMP commands include ping and traceroute. Although ICMP is very helpful in troubleshooting routing and connectivity issues in a network, it is also commonly used by attackers for network reconnaissance, device discovery, and denial-of-service (DoS) attacks (such as an ICMP flood).
SKIP is a Network Layer key management protocol used to share encryption keys. An advantage of SKIP is that it doesn’t require a prior communication session to be established before it sends encrypted keys or packets. However, SKIP is bandwidth-intensive because of the size of additional header information in encrypted packets.
The primary networking equipment defined at Layer 3 are routers and gateways.
Routers are intelligent devices that link dissimilar networks and use logical or physical addresses to forward data packets only to the destination network (or along the network path). Routers employ various routing algorithms (for example, RIP, OSPF, and BGP) to determine the best path to a destination, based on different variables that include bandwidth, cost, delay, and distance.
Gateways are created with software running on a computer (workstation or server) or router. Gateways link dissimilar programs and protocols by examining the entire layer 7 data packet so as to translate incompatibilities. For example, a gateway can be used to link an IP network to an IPX network or a Microsoft Exchange mail server to a Lotus Notes server (a mail gateway).
The Data Link Layer ensures that messages are delivered to the proper device across a physical network link. This layer also defines the networking protocol (for example, Ethernet, USB, and token ring) used to send and receive data between individual devices. The Data Link Layer formats messages from layers above into frames for transmission, handles point-to-point synchronization and error control, and can perform link encryption.
The IEEE 802 standards and protocols further divide the Data Link Layer into two sub-layers: the Logical Link Control (LLC) and Media Access Control (MAC) sub-layers (see Figure 6-3).
The LLC sub-layer operates between the Network Layer above and the MAC sub-layer below. The LLC sub-layer performs the following three functions:
The MAC sub-layer operates between the LLC sub-layer above and the Physical Layer below. It’s primarily responsible for framing and has the following three functions:
Common LAN protocols are defined at the Data Link (and Physical) Layer. They include the following:
LAN data transmissions are classified as
WLAN (wireless LAN) technologies, commonly known as Wi-Fi, function at the lower layers of the OSI Reference Model. WLAN protocols define how frames are transmitted over the air. See Table 6-6 for a description of the most common IEEE 802.11 WLAN standards.
TABLE 6-6 Wireless LAN Standards
| Type | Speed | Description |
|---|---|---|
| 802.11a | 54 Mbps | Operates at 5 GHz (less interference than at 2.4 GHz) |
| 802.11b | 11 Mbps | Operates at 2.4 GHz (first widely used protocol) |
| 802.11g | 54 Mbps | Operates at 2.4 GHz (backward-compatible with 802.11b) |
| 802.11n | 600 Mbps | Operates at 5 GHz or 2.4 GHz |
| 802.11ac | 1 Gbps | Operates at 5 GHz |
WLAN networks were first encrypted with the WEP (Wired Equivalent Privacy) protocol, which was soon proven to be insufficient. Newer encryption standards include WPA (Wi-Fi Protected Access) and WPA2. WPA using TKIP (Temporal Key Integrity Protocol) is also considered insufficient; AES (Advanced Encryption Standard) should be used instead. Wireless Networks For Dummies, by our friends Barry Lewis and Peter T. Davis, is a great book for more information on wireless networks.
WAN technologies function at the lower three layers of the OSI Reference Model (the Physical, Data Link, and Network Layers), primarily at the Data Link Layer. WAN protocols define how frames are carried across a single data link between two devices. These protocols include
Circuit-switched networks: In a circuit-switched network, a dedicated physical circuit path is established, maintained, and terminated between the sender and receiver across a carrier network for each communications session (the call). This network type is used extensively in telephone company networks and functions similarly to a regular telephone call. Examples include
Integrated Services Digital Network (ISDN): ISDN is a communications protocol that operates over analog phone lines that have been converted to use digital signaling. ISDN lines are capable of transmitting both voice and data traffic. ISDN defines a B-channel for data, voice, and other services, and a D-channel for control and signaling information. Table 6-8 describes the two levels of ISDN service that are currently available.
With the introduction and widespread adoption of DSL and DOCSIS, ISDN has largely fallen out of favor in the United States and is no longer available in many areas.
Circuit-switched networks are ideally suited for always-on connections that experience constant traffic.
Packet-switched networks: In a packet-switched network, devices share bandwidth (by using statistical multiplexing) on communications links to transport packets between a sender and receiver across a carrier network. This type of network is more resilient to error and congestion than circuit-switched networks. We compare packet-switched and circuit-switched networks in Table 6-9.
Examples of packet-switched networks include
Packet-switched networks are ideally suited for on-demand connections that have bursty traffic.
TABLE 6-7 xDSL Examples
| Type | Characteristics | Description |
|---|---|---|
| ADSL and ADSL2 | Downstream: 1.5 to 12 Mbps; upstream: 0.5 to 3.5 Mbps; operating range: up to 14,400 ft | Asymmetric Digital Subscriber Line; designed to deliver higher bandwidth downstream (as from a central office to a customer site) than upstream |
| SDSL | Downstream: 1.544 Mbps; upstream: 1.544 Mbps; operating range: up to 10,000 ft | Single-line Digital Subscriber Line; designed to deliver high bandwidth both upstream and downstream over a single copper twisted pair |
| HDSL | Downstream: 1.544 Mbps; upstream: 1.544 Mbps; operating range: up to 12,000 ft | High-rate Digital Subscriber Line; designed to deliver high bandwidth both upstream and downstream over two copper twisted pairs; commonly used to provide local access to T1 services |
| VDSL | Downstream: 13 to 52 Mbps; upstream: 1.5 to 2.3 Mbps; operating range: 1,000 to 4,500 ft | Very high Data-rate Digital Subscriber Line; designed to deliver extremely high bandwidth over a single copper twisted pair; VDSL2 provides simultaneous upstream and downstream data rates in excess of 100 Mbps |
TABLE 6-8 ISDN Service Levels
| Level | Description |
|---|---|
| Basic Rate Interface (BRI) | One 16-Kbps D-channel and two 64-Kbps B-channels (maximum data rate of 128 Kbps) |
| Primary Rate Interface (PRI) | One 64-Kbps D-channel and either 23 64-Kbps B-channels (U.S.) or 30 64-Kbps B-channels (EU), with a maximum data rate of 1.544 Mbps (U.S.) or 2.048 Mbps (EU) |
TABLE 6-9 Circuit Switching versus Packet Switching
| Circuit Switching | Packet Switching |
|---|---|
| Ideal for always-on connections, constant traffic, and voice communications | Ideal for bursty traffic and data communications |
| Connection-oriented | Connectionless |
| Fixed delays | Variable delays |
TABLE 6-10 Common Telecommunications Circuits
| Type | Speed | Description |
|---|---|---|
| DS0 | 64 Kbps | Digital Signal Level 0. Framing specification used in transmitting digital signals over a single channel at 64 Kbps on a T1 facility. |
| DS1 | 1.544 Mbps or 2.048 Mbps | Digital Signal Level 1. Framing specification used in transmitting digital signals at 1.544 Mbps on a T1 facility (U.S.) or at 2.048 Mbps on an E1 facility (EU). |
| DS3 | 44.736 Mbps | Digital Signal Level 3. Framing specification used in transmitting digital signals at 44.736 Mbps on a T3 facility. |
| T1 | 1.544 Mbps | Digital WAN carrier facility. Transmits DS1-formatted data at 1.544 Mbps (24 DS0 user channels at 64 Kbps each). |
| T3 | 44.736 Mbps | Digital WAN carrier facility. Transmits DS3-formatted data at 44.736 Mbps (672 DS0 user channels at 64 Kbps each). |
| E1 | 2.048 Mbps | Wide-area digital transmission scheme used primarily in Europe that carries data at a rate of 2.048 Mbps. |
| E3 | 34.368 Mbps | Wide-area digital transmission scheme used primarily in Europe that carries data at a rate of 34.368 Mbps (16 E1 signals). |
| OC-1 | 51.84 Mbps | SONET (Synchronous Optical Networking) Optical Carrier WAN specification |
| OC-3 | 155.52 Mbps | SONET |
| OC-12 | 622.08 Mbps | SONET |
| OC-48 | 2.488 Gbps | SONET |
| OC-192 | 9.9 Gbps | SONET |
| OC-768 | 39 Gbps | SONET |
WAN protocols and technologies are implemented over telecommunications circuits. Refer to Table 6-10 for a description of common telecommunications circuits and speeds.
Networking devices that operate at the Data Link Layer include bridges, switches, DTEs/DCEs, and wireless equipment:
Wireless Access Points (APs) are transceivers that connect wireless clients to the wired network. Access points are base stations for the wireless network. They’re essentially hubs (or routers) operating in half-duplex mode — they can only receive or transmit at a given time; they can’t do both at the same time (unless they have multiple antennas). Wireless access points use antennas to transmit and receive data. The four basic types of wireless antennas include
Client devices in a Wi-Fi network include desktop and laptop PCs, as well as mobile devices and other endpoints (such as smartphones, medical devices, barcode scanners, and many so-called “smart” devices such as thermostats and other home automation devices). Wireless network interface cards (WNICs), or wireless cards, come in a variety of form factors such as PCI adapters, PC cards, and USB adapters, or they are built into wireless-enabled devices, such as laptop PCs, tablets, and smartphones.
Access points and the wireless cards that connect to them must use the same WLAN 802.11 standard or be backward-compatible. See the section “WLAN technologies and protocols,” earlier in this chapter, for a list of the 802.11 specifications.
Access points (APs) can operate in one of four modes:
The Physical Layer sends and receives bits across the network medium (cabling or wireless links) from one device to another.
It specifies the electrical, mechanical, and functional requirements of the network, including network topology, cabling and connectors, and interface types, as well as the process for converting bits to electrical (or light) signals that can be transmitted across the physical medium. Various network topologies, made from copper or fiber-optic wires and cables, hubs, and other physical materials, comprise the Physical Layer.
There are four basic network topologies defined at the Physical Layer. Although there are many variations of the basic types — such as Fiber Distributed Data Interface (FDDI), star-bus (or tree), and star-ring — we stick to the basics here:
Cables carry the electrical or light signals that represent data between devices on a network. Data signaling is described by several characteristics, including type (see the sidebar “Analog and digital signaling,” in this chapter), control mechanism (see the sidebar “Asynchronous and synchronous communications,” in this chapter), and classification (either baseband or broadband). Baseband signaling uses a single channel for transmission of digital signals and is common in LANs that use twisted-pair cabling. Broadband signaling uses many channels over a range of frequencies for transmission of analog signals, including voice, video, and data. The four basic cable types used in networks include
Twinaxial cable. Twinaxial (also known as twinax) cable is very similar to coax cable, but it consists of two solid copper-wire cores, rather than a single core. Twinax is used to achieve high data transmission speeds (for example, 10 Gb Ethernet [abbreviated as GE, GbE or GigE]) over very short distances (for example, 10 meters) at a relatively low cost. Typical applications for twinax cabling include SANs and top-of-rack network switches that connect critical servers to a high-speed core. Other advantages of twinax cabling include lower transceiver latency (delay in transmitter/receiver devices) and power consumption (compared to 10 GbE twisted-pair cables), and low bit error ratios (BERs).
Bit error ratio (BER) is the ratio of incorrectly received bits to total received bits over a specified period of time.
Twisted-pair cable. Twisted-pair cable is the most popular LAN cable in use today. It’s lightweight, flexible, inexpensive, and easy to install. One easily recognized example of twisted-pair cable is common telephone wire.
Twisted-pair cable consists of four copper-wire pairs that are twisted together to improve the transmission quality of the cable by reducing crosstalk and attenuation. The tighter the twisted pairs, the better the transmission speed and quality.
Crosstalk occurs when a signal transmitted over one channel or circuit negatively affects the signal transmitted over another channel or circuit. An (ancient) example of crosstalk occurred over analog phone lines when you could hear parts of other conversations over the phone. Attenuation is the gradual loss of intensity of a wave (for example, electrical or light) while it travels over (or through) a medium.
Currently, ten categories of twisted-pair cabling exist, but only Cat 5/5e, Cat 6/6a, and Cat 7/7a cable are typically used for networking today (see Table 6-11).
Twisted-pair cable can be either unshielded (UTP) or shielded (STP). UTP cabling is more common because it’s easier to work with and less expensive than STP. STP is used when noise is a problem or when security is a major concern, and was popular in IBM Token Ring networks. Noise is produced by external sources and can distort or otherwise impair the quality of a signal. Examples of noise include RFI and EMI from sources such as electrical motors, radio signals, fluorescent lights, microwave ovens, and electronic equipment. Shielded cabling also reduces electromagnetic emissions that may be intercepted by an attacker.
TEMPEST is a (previously classified) U.S. military term that refers to the study of electromagnetic emissions from computers and related equipment.
Twisted-pair cable is terminated with an RJ-type terminator. The three common types of RJ-type connectors are RJ-11, RJ-45, and RJ-49. Although these connectors are all similar in appearance (particularly RJ-45 and RJ-49), only RJ-45 connectors are used for LANs. RJ-11 connectors are used for analog phone lines, and RJ-49 connectors are commonly used for Integrated Services Digital Network (ISDN) lines and WAN interfaces.
TABLE 6-11 Commonly Used Twisted-Pair Cable Categories
| Category | Use and Speed | Example |
|---|---|---|
| 5 (not a TIA/EIA standard) | Data (up to 100 Mbps) | Fast Ethernet |
| 5e | Data (up to 1000 Mbps at 100 MHz) | Gigabit Ethernet |
| 6 | Data (up to 1000 Mbps at 250 MHz) | Gigabit Ethernet |
| 6a | Data (up to 10 Gbps at 500 MHz) | 10 Gigabit Ethernet |
| 7 | Data (up to 10 Gbps at 600 MHz up to 100 meters) | 10 Gigabit Ethernet |
| 7a | Data (up to 100 Gbps at 1000 MHz up to 15 meters) | 40 Gigabit Ethernet |
TABLE 6-12 Cable Types and Characteristics
| Cable Type | Ethernet Designation | Maximum Length | EMI/RFI Resistance |
|---|---|---|---|
| RG58 (thinnet) | 10Base-2 | 185 m | Good |
| RG8/11 (thicknet) | 10Base-5 | 500 m | Better |
| UTP | 10Base-T, 100Base-TX, 1000Base-T, 10GbE | 100 m | Poor |
| STP | 10Base-T, 100Base-TX, 1000Base-T, 10GbE | 100 m | Fair to good |
| Fiber-optic | 100Base-F | 2,000 m | Best (EMI and RFI have no effect on fiber-optic cable) |
The interface between the Data Terminal Equipment (DTE) and Data Communications Equipment (DCE), which we discuss in the following section, is specified at the Physical Layer.
Common interface standards include
Networking devices that operate at the Physical Layer include network interface cards (NICs), network media (cabling, connectors, and interfaces, all of which we discuss in the section “Cable and connector types,” earlier in this chapter), repeaters, and hubs.
Network interface cards (NICs) are used to connect a computer to the network. NICs may be integrated on a computer motherboard or installed as an adapter card, such as an ISA, PCI, or PC card. Similar to a NIC, a WIC (WAN interface card) contains a built-in CSU/DSU and is used to connect a router to a digital circuit. Variations of WICs include HWICs (high-speed WAN interface cards) and VWICs (voice WAN interface cards).
A repeater is a non-intelligent device that simply amplifies a signal to compensate for attenuation (signal loss), so that the length of a cable segment can be extended.
A hub (or concentrator) is used to connect multiple LAN devices together, such as servers and workstations. The two basic types of hubs are
A switch is used to connect multiple LAN devices together. Unlike a hub, a switch doesn’t send outgoing packets to all devices on the network, but instead sends packets only to actual destination devices.
The Transmission Control Protocol/Internet Protocol (TCP/IP) Model is similar to the OSI Reference Model. It was originally developed by the U.S. Department of Defense and actually preceded the OSI model. However, the TCP/IP model is not used as widely as the OSI model for learning and troubleshooting today. The most notable difference between the TCP/IP model and the OSI model is that the TCP/IP model consists of only four layers, rather than seven (see Figure 6-4).
Communication between devices often passes over networks that have varying risks of eavesdropping and interference by adversaries. While the endpoints involved in a communication session may be protected, the communication itself might not be. For this reason, cryptography is often employed to make communications unreadable by anyone (or anything) that may be able to intercept them. Like the courier running an encrypted message through a battlefield in ancient times, an encrypted message in the modern context of computers and the Internet cannot be read by others.
Because there are so many different contexts and types of cryptography in data communication, cryptography is discussed throughout this chapter. Chapter 5 contains an extended section on cryptography.
Network equipment, such as routers, switches, wireless access points, and other network components, must be securely operated and maintained. The CISSP candidate must understand general security principles and unique security considerations associated with different types of network equipment.
Network equipment, such as routers and switches (discussed earlier in this chapter), as well as firewalls, intrusion detection systems, wireless access points and other components (discussed in the following sections) must be securely deployed, operated, and maintained. Aspects of proper operation of hardware include
Network transmission media include wired media (for example, copper and fiber) and wireless media. Wired transmission media are defined at the Physical Layer of the OSI model (discussed previously in this chapter). Wireless transmission media are defined at the Data Link Layer of the OSI model (discussed previously in this chapter). Additionally, the CISSP candidate must understand Wi-Fi security techniques and protocols.
Aside from the use of encryption to render any intercepted communications unreadable by unauthorized parties, it’s also important to protect communication media from eavesdropping and sabotage. Techniques available to protect wired network media include
Security on wireless networks, as with all security, is best implemented by using a defense-in-depth approach. Security techniques and protocols include SSIDs, authentication, and encryption using WEP, WPA, and WPA2.
An SSID is a name (up to 32 characters) that uniquely identifies a wireless network. A wireless client must know the SSID to connect to the WLAN. However, most APs broadcast their SSID (or the SSID can be easily sniffed), so the security provided by an SSID is largely inconsequential.
As its name implies, WEP was originally conceived as a security protocol to provide the same level of confidentiality that wired networks have. However, significant weaknesses were quickly uncovered in the WEP protocol.
WEP uses an RC4 stream cipher for confidentiality and a CRC-32 checksum for integrity. WEP uses either a 40-bit or 104-bit key with a 24-bit initialization vector (IV) to form a 64-bit or 128-bit key. Because of the relatively short initialization vector used (and other flaws), WEP keys can be easily cracked by readily available software in a matter of minutes.
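One way to appreciate how small a 24-bit IV space is: by the birthday bound, an IV value can be expected to repeat (with roughly 50 percent probability) after only a few thousand frames, after which keystream-reuse attacks become possible. A quick back-of-the-envelope calculation:

```python
import math

iv_space = 2 ** 24                                    # 16,777,216 possible 24-bit IVs
frames_for_likely_reuse = math.sqrt(2 * iv_space * math.log(2))
print(iv_space, round(frames_for_likely_reuse))       # 16777216, about 4,823 frames
```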
WEP supports two methods of authentication:
Despite its many security flaws, WEP is still used in both residential and business networks as the default security protocol. WEP security can be enhanced by using tunneling protocols such as IPsec and SSH, but other security protocols are available to enhance WLAN security, as discussed in the following section.
WPA and WPA2 provide significant security enhancements over WEP and were introduced as a quick fix to address the flaws in WEP while the 802.11i wireless security standard was being developed.
WPA uses the Temporal Key Integrity Protocol (TKIP) to address some of the encryption problems in WEP. TKIP combines a secret root key with the initialization vector by using a key-mixing function. WPA also implements a sequence counter to prevent replay attacks and a 64-bit message integrity check. Despite these improvements, WPA that uses TKIP is now considered insufficient because of some well-known attacks.
WPA and WPA2 also support various EAP extensions (see the section “Remote access,” later in this chapter) to further enhance WLAN security. These extensions include EAP-TLS (Transport Layer Security), EAP-TTLS (Tunneled Transport Layer Security), and Protected EAP (PEAPv0 and v1).
Further security enhancements were introduced in WPA2. WPA2 uses the AES-based algorithm Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP), which replaces TKIP and WEP to produce a WLAN protocol that is far more secure.
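CCMP is built on AES in CCM mode, an authenticated-encryption (AEAD) construction. The sketch below shows only that underlying primitive, not the full 802.11i key hierarchy or frame format, and it assumes the third-party cryptography package is installed:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = AESCCM.generate_key(bit_length=128)     # CCMP uses a 128-bit AES key
aesccm = AESCCM(key)
nonce = os.urandom(13)                        # unique per frame; never reuse with the same key
header = b"frame header (authenticated but not encrypted)"
ciphertext = aesccm.encrypt(nonce, b"frame payload", header)
print(aesccm.decrypt(nonce, ciphertext, header))   # b'frame payload'
```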
Network access control (NAC) devices include firewalls (and proxies), intrusion detection systems (IDSs) and intrusion prevention systems (IPSs).
A firewall controls traffic flow between a trusted network (such as a home network or corporate LAN) and an untrusted or public network (such as the Internet). A firewall can comprise hardware, software, or a combination of both hardware and software.
There are three basic classifications of firewalls: packet-filtering, circuit-level gateway, and application-level gateway.
A packet-filtering firewall (or screening router), one of the most basic (and inexpensive) types of firewalls, is ideally suited for a low-risk environment. A packet-filtering firewall permits or denies traffic based solely on the TCP, UDP, ICMP, and IP headers of the individual packets. It examines the traffic direction (inbound or outbound), the source and destination IP addresses, and the source and destination TCP or UDP port numbers. This information is compared with predefined rules that have been configured in an access control list (ACL) to determine whether each packet should be permitted or denied. A packet-filtering firewall typically operates at the Network Layer or Transport Layer of the OSI model. Some advantages of a packet-filtering firewall are
Disadvantages of packet-filtering firewalls are
A more advanced variation of the packet-filtering firewall is the dynamic packet-filtering firewall. This type of firewall supports dynamic modification of the firewall rule base by using context-based access control (CBAC) or reflexive ACLs — both of which create dynamic access list rules for individual sessions as they are established. For example, an ACL might be automatically created to allow a user working from the corporate network (inside the firewall) to connect to an FTP server outside the firewall in order to upload and download files between her PC and the FTP server. When the file transfer is completed, the ACL is automatically deleted from the firewall.
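To make the rule-matching concept concrete, here is a minimal, hypothetical sketch of a packet filter comparing each packet against an ordered rule list; the rule format and field names are illustrative only:

```python
# First matching rule wins; the final rule implements an implicit "deny all".
RULES = [
    {"action": "permit", "proto": "tcp", "dst_port": 443, "direction": "outbound"},
    {"action": "permit", "proto": "udp", "dst_port": 53,  "direction": "outbound"},
    {"action": "deny",   "proto": "any", "dst_port": None, "direction": "any"},
]

def filter_packet(packet):
    for rule in RULES:
        if (rule["proto"] in ("any", packet["proto"])
                and rule["dst_port"] in (None, packet["dst_port"])
                and rule["direction"] in ("any", packet["direction"])):
            return rule["action"]
    return "deny"

print(filter_packet({"proto": "tcp", "dst_port": 443, "direction": "outbound"}))  # permit
print(filter_packet({"proto": "tcp", "dst_port": 23,  "direction": "inbound"}))   # deny
```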
A circuit-level gateway controls access by maintaining state information about established connections. When a permitted connection is established between two hosts, a tunnel (or virtual circuit) is created for the session, allowing packets to flow freely between the two hosts without the need for further inspection of individual packets. This type of firewall operates at the Session Layer (Layer 5) of the OSI model.
Advantages of this type of firewall include
Disadvantages of this type of firewall include
A stateful inspection firewall is a type of circuit-level gateway that captures data packets at the Network Layer and then queues and analyzes (examines the state and context of) these packets at the upper layers of the OSI model.
An application-level (or Application Layer) gateway operates at the Application Layer of the OSI model, processing data packets for specific IP applications. This type of firewall is generally considered the most secure and is commonly implemented as a proxy server. In a proxy server, no direct communication between two hosts is permitted. Instead, data packets are intercepted by the proxy server, which analyzes the packet’s contents and — if permitted by the firewall rules — sends a copy of the original packet to the intended host.
Advantages of this type of firewall include
Disadvantages of this type of firewall include
A web application firewall (WAF) is used to protect a web server (or group of web servers) from various types of web application attacks such as script injection and buffer overflow attacks. A WAF examines the contents of each packet being sent to a web server and employs rules to determine whether each packet is considered routine and friendly, or hostile.
The basic firewall types that we discuss in the preceding sections may be implemented by using one of the firewall architectures described in the following sections. The four basic types of firewall architectures are screening router, dual-homed gateway, screened-host gateway, and screened-subnet.
A screening router is the most basic type of firewall architecture employed. An external router is placed between the untrusted and trusted networks, and a security policy is implemented by using ACLs. Although a router functions as a choke point between a trusted network and an untrusted network, an attacker — after gaining access to a host on the trusted network — may potentially be able to compromise the entire network.
Advantages of a screening router architecture include these:
Disadvantages of the screening router architecture include these:
Still, using a screening router architecture is better than using nothing.
Another common firewall architecture is the dual-homed gateway. A dual-homed gateway (or bastion host) is a system that has two network interfaces (NICs) and sits between an untrusted network and a trusted network. A bastion host is a general term often used to refer to proxies, gateways, firewalls, or any server that provides applications or services directly to an untrusted network. Because it’s often the target of attackers, a bastion host is sometimes referred to as a sacrificial lamb.
However, this term is misleading because a bastion host is typically a hardened system that employs robust security mechanisms. A dual-homed gateway is often connected to the untrusted network via an external screening router. The dual-homed gateway functions as a proxy server for the trusted network and may be configured to require user authentication. A dual-homed gateway offers a more fail-safe operation than a screening router does because, by default, data isn’t normally forwarded across the two interfaces. Advantages of the dual-homed gateway architecture include
Disadvantages of the dual-homed gateway architecture include
A screened-host gateway architecture employs an external screening router and an internal bastion host. The screening router is configured so that the bastion host is the only host accessible from the untrusted network (such as the Internet). The bastion host provides any required web services to the untrusted network, such as HTTP and FTP, as permitted by the security policy. Connections to the Internet from the trusted network are routed via an application proxy on the bastion host or directly through the screening router.
Here are some of the advantages of the screened-host gateway:
Here are some disadvantages of the screened-host gateway:
The screened-subnet is perhaps the most secure of the currently designed firewall architectures. The screened-subnet employs an external screening router, a dual-homed (or multi-homed) host, and a second internal screening router. This implements the concept of a network DMZ (or demilitarized zone). Publicly available services are placed on bastion hosts in the DMZ.
Advantages of the screened-subnet architecture include these:
Disadvantages of a screened-subnet architecture, compared to other firewall architectures, include these:
Next-generation firewalls (often termed next-gen firewalls or NGFWs) and unified threat management devices (often called UTMs) are similar terms describing firewalls with multiple functions, including combinations of the following security devices:
The main advantage of next-gen firewalls and UTM is greater simplicity. Rather than having to manage many separate security systems, all of these security functions are performed within a single device.
Intrusion detection is defined as real-time monitoring and analysis of network activity and data for potential vulnerabilities and attacks in progress. One major limitation of current intrusion-detection-system (IDS) technologies is the requirement to filter false alarms to prevent the operator (the system or security administrator) from being overwhelmed with data. IDSs are classified in many different ways, including active and passive, network-based and host-based, and knowledge-based and behavior-based.
Commonly known as an intrusion prevention system (IPS) or as an intrusion detection and prevention system (IDPS), an active IDS is a system that’s configured to automatically block suspected attacks in progress without requiring any intervention by an operator. IPS has the advantage of providing real-time corrective action in response to an attack, but it has many disadvantages as well. An IPS must be placed inline along a network boundary; thus the IPS itself is susceptible to attack. Also, if false alarms and legitimate traffic haven’t been properly identified and filtered, authorized users and applications may be improperly denied access. Finally, the IPS itself may be used to effect a Denial of Service (DoS) attack, which involves intentionally flooding the system with alarms that cause it to block connections until no connection or bandwidth is available.
A passive IDS is a system that’s configured to monitor and analyze network traffic activity and alert an operator to potential vulnerabilities and attacks. It can’t perform any protective or corrective functions on its own. The major advantages of passive IDS are that these systems can be easily and rapidly deployed and aren’t normally susceptible to attack themselves. Passive IDS is usually connected to a network segment via a tap (physical or virtual) or switched port analyzer (SPAN) port.
A network-based IDS (NIDS) usually consists of a network appliance (or sensor) that includes a Network Interface Card (NIC) operating in Promiscuous mode (meaning it listens to, or “sniffs,” all traffic on the network, not just traffic addressed to a specific host) and a separate management interface. The IDS is placed along a network segment or boundary, and it monitors all traffic on that segment.
A host-based IDS (HIDS) requires small programs (or agents) to be installed on the individual systems that are to be monitored. The agents monitor the operating system and write data to log files and/or trigger alarms. A host-based IDS can monitor only the individual host systems on which the agents are installed; it doesn’t monitor the entire network.
A knowledge-based (or signature-based) IDS references a database of previous attack profiles and known system vulnerabilities to identify active intrusion attempts. Knowledge-based IDSs are currently more common than behavior-based IDSs. Advantages of knowledge-based systems include
Disadvantages of knowledge-based systems include
A behavior-based (or statistical anomaly-based) IDS references a baseline or learned pattern of normal system activity to identify active intrusion attempts. Deviations from this baseline or pattern cause an alarm to be triggered. Advantages of behavior-based systems include that they
Disadvantages of behavior-based systems include
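The baseline concept behind behavior-based detection can be sketched in a few lines; the traffic numbers and the three-standard-deviation threshold below are purely illustrative:

```python
import statistics

baseline = [120, 131, 118, 125, 129, 122, 127]   # e.g., observed connections per minute
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(observed, threshold=3.0):
    """Alert when activity deviates sharply from the learned baseline."""
    return abs(observed - mean) > threshold * stdev

print(is_anomalous(126))   # False: within normal variation
print(is_anomalous(900))   # True: possible attack (or a false alarm after a legitimate change)
```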
A web content filter is typically an inline device that monitors and controls internal users’ access to Internet web sites. Web content filters can be configured to block access to both specific web sites and categories of web sites (for example, blocking access to sites that discuss polka music).
Organizations that use web content filters to block access to categories of web sites are often trying to keep employees from accessing sites that are not related to work. The use of web content filters also helps to enforce policies and protect the organization from potential liability. (For example, blocking access to pornographic and hate-related websites helps to enforce sexual harassment and racial discrimination/safe working environment policies, and can help to demonstrate due diligence).
Web content filters typically employ large databases of websites that are constantly evaluated and updated by the security vendor of the content filtering software. These databases inevitably contain classification errors, so organizations need policies and procedures that allow employees to request access to misclassified websites or to blocked websites needed for legitimate work purposes. These processes can be frustrating for employees, particularly if it takes more than a few minutes for the security team to respond to a request. An alternative policy used by many organizations is “trust but verify”: websites are not blocked, but users are warned before navigating to a potentially suspicious, dangerous, offensive, or otherwise inappropriate website, and each user must positively acknowledge that they understand the risk and are visiting the site for a legitimate purpose. The website visit is logged and reported, and appropriate security or human resources personnel follow up with the employee, if necessary.
Tech-savvy users often use various proxy software programs in an attempt to circumvent web content filters. Proxy software is a significant risk to enterprise security and should be explicitly forbidden by policy. Next-generation firewalls and certain advanced web content filters can detect proxy software in the enterprise.
Data loss prevention (DLP) refers to a class of security products that are designed to detect and (optionally) prevent the exfiltration of sensitive data over an organization’s network connections. DLP systems work by performing pattern matching (for example, XXX-XX-XXXX representing a Social Security Number, or XXXX XXXX XXXX XXXX representing a credit card number) against data transmitted over the network. Depending on the type of DLP system and its configuration, the DLP system can either generate an alert describing the suspected data exfiltration or block the transmission altogether.
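As a rough illustration of the pattern-matching idea, the following Python sketch scans a chunk of outbound data for Social Security Number and payment card patterns. The regular expressions and the Luhn check are simplified assumptions; real DLP products use far more sophisticated rules, context, and validation to limit false positives.

```python
import re

# Simplified, hypothetical patterns; production DLP systems use far more
# precise rules plus context and validation to limit false positives.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_PATTERN = re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum, used to weed out random 16-digit strings."""
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
    return total % 10 == 0

outbound = "Order notes: card 4111 1111 1111 1111, applicant SSN 078-05-1120"
for match in SSN_PATTERN.finditer(outbound):
    print("Possible Social Security Number:", match.group())   # alert or block
for match in CARD_PATTERN.finditer(outbound):
    if luhn_valid(match.group()):
        print("Possible payment card number:", match.group())  # alert or block
```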
There is another class of DLP products that are used to scan file servers and database management systems in search of sensitive data. The idea is that people sometimes extract sensitive data from sanctioned repositories and then make copies of that data and store it in less secure locations.
Cloud access security broker (CASB) systems are used to monitor and control end-user access to cloud-based services. For instance, if an organization uses Box.com for unstructured file storage, a CASB system can be configured to block end-user access to alternative storage services such as Dropbox and Skydrive.
Organizations generally use CASB systems to limit the exfiltration of sensitive information and steer personnel to officially sanctioned applications. They can be thought of as security policy enforcement points.
It's often said that security is only as strong as its weakest link. And that weakest link often is the endpoint. Endpoints, including desktop and laptop computers, smartphones, tablets, and other mobile equipment (such as medical devices, barcode scanners, and other so-called “smart” devices), have become very attractive targets for cybercriminals. Endpoints are particularly vulnerable to attack for many reasons, including:
At its most basic level, endpoint security consists of anti-malware (or antivirus) software. Signature-based software is the most common type of antivirus software used on endpoints. Signature-based antivirus software scans an endpoint’s hard drive and memory in real time and at scheduled times. If a known malware signature is detected, the software performs an action, such as:
Signature-based antivirus software must be kept up to date to be effective, and it can only detect known threats. The endpoint is vulnerable to any new “zero-day” malware threats until a signature is created by the software vendor and uploaded to the endpoint.
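The following Python sketch illustrates the signature-matching concept at its simplest, assuming a hypothetical database of SHA-256 hashes of known-bad files. Commercial products match byte patterns inside files and in memory, apply heuristics, and update signatures continuously, so treat this only as a conceptual illustration.

```python
import hashlib
from pathlib import Path

# Hypothetical signature database: SHA-256 hashes of known-bad files.
# (Real products also match byte patterns inside files and in memory.)
KNOWN_BAD_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder
}

def scan(directory: str) -> None:
    for path in Path(directory).rglob("*"):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in KNOWN_BAD_HASHES:
            # A real product would quarantine, clean, or delete per policy.
            print(f"MATCH: {path} ({digest})")

scan(".")
```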
Application whitelisting is another common anti-malware approach used for endpoint protection. This approach requires a positive control model on the endpoint — only applications that have been explicitly authorized can be run on the endpoint. Trends such as “bring your own device” (BYOD) that allow end users to use their personal devices for work-related purposes make application whitelisting approaches difficult to implement in the enterprise. Another limitation of application whitelisting is that an application (such as Microsoft Word or Adobe Acrobat) that has already been whitelisted can be run on an endpoint, even if that application is exploited (for example, with a malicious Word document or Adobe PDF).
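A minimal sketch of the positive control model follows, assuming a hypothetical allowlist of approved executable hashes. Real application whitelisting is enforced by the operating system or an endpoint agent rather than by a wrapper script like this, so the example only shows the decision logic.

```python
import hashlib
import subprocess
import sys

# Hypothetical allowlist: SHA-256 hashes of explicitly approved executables.
ALLOWED_HASHES = {
    "replace-with-hash-of-approved-application-1",
    "replace-with-hash-of-approved-application-2",
}

def run_if_whitelisted(executable_path: str) -> None:
    with open(executable_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest not in ALLOWED_HASHES:
        print(f"Blocked: {executable_path} is not on the application whitelist")
        return
    subprocess.run([executable_path], check=False)  # approved, so launch it

if __name__ == "__main__":
    run_if_whitelisted(sys.argv[1])
```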
Behavior-based (also known as heuristics-based or anomaly-based) endpoint protection attempts to create a baseline of “normal” activity on the endpoint. Any unusual activity (as determined by the baseline) is detected and stopped. Unfortunately, behavior-based software is prone to high false positives and typically requires significant computing resources.
Container-based endpoint protection isolates any vulnerable processes running on an endpoint by creating virtual barriers around individual processes. If a malicious process is detected, the software kills the process before the malicious process can infect any legitimate processes on the endpoint. Container-based approaches typically require significant computing resources and extensive knowledge of any applications running on the endpoint.
In addition to anti-malware prevention, endpoint protection should include
Content distribution (or delivery) networks (CDNs) are large distributed networks of servers that cache web content, such as static web pages, downloadable objects, on-demand and streaming music and video, and web applications for subscriber organizations, and serve that content to Internet users over the most optimal network path available.
CDNs offload much of the performance demand on Internet-facing systems for subscriber organizations and many offer optional security services, such as distributed denial-of-service (DDoS) attack mitigation.
CDNs operate data centers throughout a large geographic region, or worldwide, and must ensure the security of their data center systems and networks for their customers. Service-level agreements (SLAs) and applicable regulatory compliance must be addressed when evaluating CDN providers.
Some CDN providers include optional web application firewall (WAF) capabilities that protect web servers from application layer attacks.
It’s often said in the information security profession: If an adversary obtains physical access to a target system, it’s game over. In other words, an adversary with physical access to a device is often able to take complete control of the device, to the detriment of its owner.
More than that, an adversary who gains physical access to a device can also use that device as a means to access other devices, systems, and data in an organization’s network.
The topic of physical access security is discussed in several areas of this book:
The CISSP exam requires knowledge of secure design principles and implementation of various communication technologies, including voice, email, Web, fax, multimedia collaboration, remote access, data, and virtualized networks.
PBX (Private Branch Exchange) switches, POTS (Plain Old Telephone Service), and VoIP (Voice over Internet Protocol) switches are some of the most overlooked and costly aspects of a corporate telecommunications infrastructure. Many employees don’t think twice about using a company telephone system for extended personal use, including long-distance calls. Personal use of company-supplied mobile phones is another area of widespread abuse. Perhaps the simplest and most effective countermeasure against internal abuse is to publish and enforce a corporate telephone-use policy. Regular auditing of telephone records is also effective for deterring and detecting telephone abuse. Similarly, as both voice communications and the global workforce have become increasingly mobile, organizations need to define and implement appropriate bring your own device (BYOD), choose your own device (CYOD), or corporate owned personally enabled (COPE) mobile device policies.
In recent years, cloud communications (also known as Unified Communications as a Service, or UCaaS) has become a viable alternative to PBX and on-premises VoIP systems for many organizations, from small and midsize businesses to large enterprises. Many cloud communications providers offer the same advanced features and functionality of on-premises PBX and VoIP systems, with all the business and technical benefits of the cloud.
Similarly, over-the-top (OTT) services, such as Skype, Jabber, Vonage, Vimeo, and Zoom, are increasingly common in business communications.
Finally, mobile operators are introducing innovations such as Voice over Long-Term Evolution (VoLTE), Voice over Wi-Fi (VoWiFi), and Wi-Fi Calling. 5G networks will become commercially available in the U.S. beginning in 2019, enabling new opportunities — and security challenges — in mobile communications, the Internet of Things (IoT), and machine-to-machine (M2M) communications, among other applications.
Types of attacks on voice communications systems include
Email has emerged as one of the most important communication mediums in our global economy, with over 50 billion email messages sent worldwide every day. Unfortunately, spam and phishing account for as much as 85 percent of that email volume. Spam is more than a minor nuisance — it’s a serious security threat to all organizations worldwide.
The Simple Mail Transfer Protocol (SMTP) is used to send and receive email across the Internet. It operates on TCP port 25 and contains many well-known vulnerabilities. Most SMTP mail servers are configured by default to forward (or relay) all mail, regardless of whether the sender’s or recipient’s address is valid.
Failing to secure your organization’s mail servers may allow spammers to misuse your servers and bandwidth as an open relay to propagate their spam. The bad news is that you’ll eventually (it usually doesn’t take more than a few days) get blacklisted by a large number of organizations that maintain real-time blackhole lists (RBLs) against open relays, effectively preventing most (if not all) email communications from your organization reaching their intended recipients. It usually takes several months to get removed from those RBLs after you’ve been blacklisted, and it does significant damage to your organization’s communications infrastructure and credibility.
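Open relay behavior can be verified (only on mail servers you are authorized to test) with a short Python sketch using the standard smtplib module. The host name and email addresses below are hypothetical placeholders.

```python
import smtplib

# Hypothetical host name; test only mail servers you are authorized to assess.
MAIL_SERVER = "mail.example.com"

with smtplib.SMTP(MAIL_SERVER, 25, timeout=10) as smtp:
    smtp.ehlo()
    # Neither address belongs to the server's own domain, so a properly
    # configured server should refuse the RCPT command (for example, 550 or 554).
    smtp.mail("outsider@sender.example")
    code, response = smtp.rcpt("victim@recipient.example")
    if code in (250, 251):
        print("Server accepted relaying for a foreign recipient; possible open relay")
    else:
        print(f"Relaying refused: {code} {response.decode(errors='replace')}")
```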
Failure to make a reasonable effort towards spam prevention in your organization is a failure of due diligence. An organization that fails to implement appropriate countermeasures may find itself a defendant in a sexual harassment lawsuit from an employee inundated with pornographic emails sent by a spammer to his or her corporate email address. Plus, the failure to block spam and phishing exposes an organization to attack. Over 90 percent of security breaches begin with phishing messages, so it makes good sense to block spam and phishing by any means available.
Other risks associated with spam email include
Countering these threats requires an arsenal of technical solutions and user-awareness efforts and is — at least, for now — a never-ending battle. Begin by securing your servers, end-user PCs, mobile devices, and IoT devices. Mail servers should always be placed in a DMZ or outsourced, and unnecessary or unused services should be disabled — and change that default relay setting! Most other servers, and almost all client PCs, should have port 25 disabled. Implement a spam filter or other secure mail gateway. Also, consider the following user-awareness tips:
Never unsubscribe or reply to spam email. Unsubscribe links in spam emails are often used to confirm the legitimacy of your email address, which can then be added to mass-mailing lists that are sold to other spammers. And, as tempting as it is to tell a spammer what you really think of his or her irresistible offer to enhance your social life or improve your financial portfolio, most spammers don’t actually read your replies and (unfortunately) aren’t likely to follow your suggestion that they jump off a cliff.
Although legitimate offers from well-known retailers or newsletters from professional organizations may be thought of as spam by many people, it’s likely that, at some point, a recipient of such a mass mailing actually signed up for that stuff — so it’s technically not spam. Everyone seems to want your email address whenever you fill out an application for something, and providing your email address often translates to an open invitation for them to tell you about every sale from here to eternity. In such cases, senders are required by U.S. law to provide an Unsubscribe hyperlink in their mass mailings, and clicking it does remove the recipient from future mailings.
Spam is only the tip of the iceberg. Cryptocurrency (such as Bitcoin) mining requires vast amounts of computing power to verify cryptocurrency transactions and issue new cryptocurrency, and illicit mining (cryptojacking) has become an enormous blight. Compromised computers and networks provide attackers with a free source of distributed computing power (as well as free electricity) to solve computationally intensive proof-of-work puzzles and earn transaction fees and newly issued cryptocurrency.
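The toy Python sketch below shows the kind of brute-force proof-of-work computation that mining (and cryptojacking) grinds through. The header value and difficulty are illustrative assumptions; real networks demand vastly more work, which is exactly why stolen computing power is attractive to attackers.

```python
import hashlib
import itertools

# Toy stand-in for a proof-of-work puzzle: find a nonce such that the hash of
# (block header + nonce) starts with a given number of zero hex digits.
block_header = b"previous-hash|merkle-root|timestamp"   # illustrative placeholder
difficulty = 5  # leading zero hex digits; real networks demand vastly more work

for nonce in itertools.count():
    digest = hashlib.sha256(block_header + str(nonce).encode()).hexdigest()
    if digest.startswith("0" * difficulty):
        print(f"nonce={nonce} digest={digest}")
        break
```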
Other email security considerations include malicious code contained in attachments, lack of privacy, and lack of authentication. These considerations can be countered by implementing antivirus scanning software, encryption, and digital signatures, respectively.
Several applications employing various cryptographic techniques have been developed to provide confidentiality, integrity, authentication, non-repudiation, and access control for email communications. For example, Microsoft Office 365 Message Encryption (OME) is a popular solution for many organizations.
Pretty Good Privacy (PGP): PGP is a popular email encryption application. It provides confidentiality and authentication by using the IDEA Cipher for encryption and the RSA asymmetric system for digital signatures and secure key distribution. Instead of a central Certificate Authority (CA), PGP uses a decentralized trust model (in which the communicating parties implicitly trust each other) which is ideally suited for smaller groups to validate user identity (instead of using PKI infrastructure, which can be costly and difficult to maintain).
Today, two basic versions of PGP software are available: a commercial version from Symantec Corporation (www.symantec.com), and an open-source version, GPG (www.gnupg.org).
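To illustrate the hybrid model PGP uses (a symmetric session key for the message body, asymmetric encryption for key distribution, and a digital signature for authentication and non-repudiation), here is a sketch built on the third-party cryptography package. It substitutes AES-based Fernet and RSA-OAEP/PSS for PGP's actual algorithms and packet formats, so treat it as a conceptual stand-in rather than PGP itself.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

message = b"Quarterly results attached."

# Recipient's key pair; in PGP, the recipient's public key would be obtained
# and validated through the web of trust rather than a central CA.
recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# 1. Encrypt the message body with a fresh symmetric session key (confidentiality).
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(message)

# 2. Encrypt the session key with the recipient's public key (secure key distribution).
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = recipient_key.public_key().encrypt(session_key, oaep)

# 3. Sign the plaintext with the sender's private key (authentication, non-repudiation).
sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = sender_key.sign(message, pss, hashes.SHA256())

# Recipient side: unwrap the session key, decrypt the body, and verify the signature.
recovered_key = recipient_key.decrypt(wrapped_key, oaep)
plaintext = Fernet(recovered_key).decrypt(ciphertext)
sender_key.public_key().verify(signature, plaintext, pss, hashes.SHA256())
print(plaintext.decode())
```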
The two principal technologies that make up the World Wide Web are the HyperText Transfer Protocol (HTTP) and the HyperText Markup Language (HTML). HTTP is the command-and-response language used by browsers to communicate with web servers, and HTML is the display language that defines the appearance of web pages.
HyperText Transfer Protocol Secure (HTTPS) is the secure version of HTTP, which includes protocols for authenticating users (not used often) and web servers (used quite often) as well as for encrypting web traffic between web servers and end users’ browsers.
HTTP, HTTPS, and HTML5 are the means used to facilitate all sorts of high-value activities, such as online banking and business applications. It should be of no surprise, then, to know that these protocols are under constant attack by hackers. Some of the types of attacks are
These and other types of attacks have made web security testing a necessity. Many organizations that have web applications, especially ones that facilitate high-value activities (such as banking, travel, and information management), employ tools and other methods to make sure that no vulnerabilities exist that could permit malicious attacks to expose sensitive information or cause the application to malfunction.
Facsimile transmissions are often taken for granted, but they definitely present major security issues. Many organizations still use fax machines to regularly conduct business (including attorneys, Realtors, and pizza delivery restaurants, to name a few!), and multifunction printers often have built-in fax machines. Even if you don’t configure the fax capability on a multifunction printer, it can still be a security risk.
In many organizations, email-based fax services have replaced traditional fax machines. Email security concepts (discussed in the preceding section) should be applied in such cases.
A fax transmission, like any other electronic transmission, can be easily intercepted or re-created. General administrative and technical controls for fax security include
Multimedia collaboration includes remote meeting software, certain voice over Internet Protocol (VoIP) applications (see the earlier section, “Voice”), and instant messaging, among others.
Remote meeting software (Skype, WebEx, Zoom, and GoToMeeting, for example) has become immensely popular and enables rich collaboration over the Internet. Potential security issues associated with remote meeting software include downloading and installing potentially vulnerable add-on components or other required software. Other security issues arise from capabilities inherent to remote meeting software, such as remote desktop control, file sharing, sound, and video. An unauthorized user who connects to an endpoint via remote meeting software could potentially have access to all of these capabilities.
Instant messaging (IM) applications enable simple and convenient communications within an organization and can significantly boost productivity. However, IM has long been a favorite attack vector for cybercriminals. Users need to be aware that IM is no more secure than any other communication method. Communications can be intercepted (IMs are rarely encrypted) and malware can be spread via instant messages.
Remote access to corporate networks has become more ubiquitous over the past decade. Trends such as telecommuting and mobile computing blur the distinction between work lives and personal lives for many people today. Safely enabling ubiquitous access to corporate network resources from any device requires extensive knowledge of various remote access security methods, protocols, and technologies.
Remote access security methods include restricted allowed addresses, geolocation, caller ID, callback, and multi-factor authentication.
Multi-factor authentication: Requiring users to authenticate with a user ID and password, plus an additional factor such as a one-time passcode (for example, sent to a mobile device via SMS text message), token, or biometric, reduces the risk of compromised login credentials. (A short one-time passcode sketch follows below.)
One limitation of callback is that it can be easily defeated by using call forwarding.
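Returning to the multi-factor authentication method mentioned above, the sketch below implements a time-based one-time password (TOTP, RFC 6238), the scheme used by most authenticator apps, as one concrete example of a one-time passcode factor. The base32 secret shown is a hypothetical enrollment value.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(shared_secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1, 30-second steps)."""
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // interval
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Hypothetical base32 secret shared with the user's authenticator app at enrollment.
print(totp("JBSWY3DPEHPK3PXP"))
```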
Remote access security technologies include RAS servers that utilize various authentication protocols associated with PPP, RADIUS, and TACACS.
RADIUS: The Remote Authentication Dial-In User Service (RADIUS) protocol is an open standard, UDP-based (usually ports 1812 and 1813, and sometimes ports 1645 and 1646) client-server protocol that provides authentication, authorization, and accounting (AAA). A user provides username/password information to a RADIUS client by using PAP or CHAP.
The RADIUS client encrypts the password and sends the username and encrypted password to the RADIUS server for authentication.
Note: Passwords exchanged between the RADIUS client and the RADIUS server are encrypted, but passwords exchanged between the PC client and the RADIUS client aren’t necessarily encrypted — if using PAP authentication, for example. However, if the PC client happens to also be the RADIUS client, all password exchanges are encrypted, regardless of the authentication protocol being used.
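The password hiding performed between the RADIUS client and server is defined in RFC 2865 (an MD5-based XOR construction rather than strong encryption). The following Python sketch shows the mechanism for a single User-Password attribute; the shared secret and password values are hypothetical.

```python
import hashlib
import os

def hide_user_password(password: bytes, shared_secret: bytes, authenticator: bytes) -> bytes:
    """User-Password hiding per RFC 2865, section 5.2 (MD5-based XOR, not strong encryption)."""
    padded = password + b"\x00" * (-len(password) % 16)  # pad to a multiple of 16 octets
    hidden, prev = b"", authenticator  # 16-octet Request Authenticator from the Access-Request
    for i in range(0, len(padded), 16):
        digest = hashlib.md5(shared_secret + prev).digest()
        block = bytes(a ^ b for a, b in zip(padded[i:i + 16], digest))
        hidden += block
        prev = block
    return hidden

# Hypothetical values for illustration only.
authenticator = os.urandom(16)
print(hide_user_password(b"s3cret-password", b"radius-shared-secret", authenticator).hex())
```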
A Virtual Private Network (VPN) creates a secure tunnel over a public network, such as the Internet. Encrypting the data as it’s transmitted across the VPN creates a secure tunnel. The two ends of a VPN are commonly implemented by using one of the following methods:
The Point-to-Point Tunneling Protocol (PPTP) was developed by Microsoft to enable the Point-to-Point Protocol (PPP) to be tunneled through a public network. PPTP uses native PPP authentication and encryption services (such as PAP, CHAP, and EAP). PPTP is commonly used for dial-up connections. PPTP operates at the Data Link Layer (Layer 2) of the OSI model and is designed for individual client-server connections.
The Layer 2 Forwarding Protocol (L2F) was developed by Cisco and provides similar functionality to PPTP. As its name implies, L2F operates at the Data Link Layer of the OSI model and permits tunneling of Layer 2 WAN protocols such as HDLC and SLIP.
The Layer 2 Tunneling Protocol (L2TP) is an IETF standard that combines Microsoft (and others’) PPTP and Cisco L2F protocols. Like PPTP and L2F, L2TP operates at the Data Link Layer of the OSI model to create secure VPN connections for individual client-server connections. L2TP addresses the following end-user requirements:
Internet Protocol Security (IPsec) is an IETF open standard for VPNs that operates at the Network Layer (Layer 3) of the OSI model. It’s the most popular and robust VPN protocol in use today. IPsec ensures confidentiality, integrity, and authenticity by using Layer 3 encryption and authentication to provide an end-to-end solution. IPsec operates in two modes:
The two main protocols used in IPsec are
Each pair of hosts communicating in an IPsec session must establish a security association.
A security association (SA) is a one-way connection between two communicating parties; thus, two SAs are required for each pair of communicating hosts. Additionally, each SA supports only a single protocol (AH or ESP). Therefore, using both an AH and an ESP between two communicating hosts will require a total of four SAs. An SA has three parameters that uniquely identify it in an IPsec session:
Key management is provided in IPsec by using the Internet Key Exchange (IKE). IKE is actually a combination of three complementary protocols: the Internet Security Association and Key Management Protocol (ISAKMP), the Secure Key Exchange Mechanism (SKEME), and the Oakley Key Exchange Protocol. IKE operates in three modes: Main mode, Aggressive mode, and Quick mode.
The Secure Sockets Layer (SSL) protocol, developed by Netscape Communications in 1994, provides session-based encryption and authentication for secure communication between clients and servers on the Internet. SSL operates at the Transport Layer (Layer 4) of the OSI model. SSL VPNs (using TLS 1.0 through 1.2) have rapidly gained widespread popularity and acceptance in recent years because of their ease of use and low cost. An SSL VPN requires no special client hardware or software (other than a web browser), and little or no client configuration. SSL VPNs provide secure access to web-enabled applications and thus are somewhat more granular in control — a user is granted access to a specific application, rather than to the entire private network. This granularity can also be considered a limitation of SSL VPNs; not all applications will work over an SSL VPN, and many convenient network functions (file and print sharing) may not be available over an SSL VPN.
SSL uses the RSA asymmetric key system; IDEA, DES, and 3DES symmetric key systems; and the MD5 hash function. The final version, SSL 3.0, was published in 1996. In 2014, the POODLE vulnerability, which affects all block ciphers in SSL, was discovered, and the RC4 stream cipher used in SSL 3.0 is also vulnerable to attack. RFC 6176, published in 2011, deprecates and prohibits the use of SSL 2.0; similarly, RFC 7568, published in 2015, deprecates and prohibits the use of SSL 3.0.
TLS 1.0, released by the IETF in 1999, standardized SSL 3.0 with only minor modifications to the original specification. TLS 1.2 is the most current version of TLS, and TLS 1.3, at the time of this writing, remains a draft standard, although the final specification is expected in 2018.
Network data communications are secured using a number of technologies and protocols.
Virtual LANs (VLANs) are used to logically segment a network, for example by department or resource. VLANs (see the sidebar “Fill-in-the-blank area networks (__AN)” earlier in this chapter) are configured on network switches and restrict VLAN access to devices that are connected to ports that are configured on the switch as VLAN members.
The Transport Layer Security/Secure Sockets Layer (TLS/SSL) protocol (discussed in the preceding section) is commonly used to encrypt network communications.
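For example, a minimal TLS client using Python's standard ssl module looks like the following. The host name and the plain HTTP request are placeholders; ssl.create_default_context() validates the server's certificate chain and host name before any application data is exchanged.

```python
import socket
import ssl

HOST = "www.example.com"  # placeholder host

context = ssl.create_default_context()  # validates certificate chain and hostname

with socket.create_connection((HOST, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("Negotiated:", tls_sock.version())  # e.g. TLSv1.2 or TLSv1.3
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: " + HOST.encode() +
                         b"\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(200))
```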
Virtualized networks are a big part of the cloud computing revolution that is sweeping the world. The earliest form of network virtualization was the virtual local area network (VLAN), discussed earlier in this chapter. But the big advancement is in software-defined networking (SDN), software-defined security (SD-S), and network functions virtualization (NFV), in which network components such as routers, switches, firewalls, intrusion detection systems (IDSs), and more are no longer hardware appliances but virtual machines that live in virtual environments (see the next section).
Virtualization has been one of the hottest and most disruptive computing trends of the past decade and is a key enabling technology in cloud computing. Virtualization technology emulates physical computing resources, such as desktop computers and servers, processors, memory, storage, networking, and individual applications. The core component of virtualization technology is the hypervisor, which runs between a hardware kernel and an OS and enables multiple “guest” virtual machines (VMs) to run on a single physical “host” machine.
Two commonly defined types of hypervisors are Type 1 (native or bare metal) hypervisors that run directly on host hardware, and Type 2 (hosted) hypervisors that run within an operating system environment (OSE).
In addition to virtualized servers, virtualization technology is increasingly being used for
Security in virtualized environments begins with the hypervisor. A compromised hypervisor can potentially give an attacker access to and control of an entire virtualized environment.
Operational security issues associated with virtualized environments include
A recent innovation in virtualization is known as containerization. In the same way that a hypervisor facilitates the use of multiple operating system instances, containerization facilitates the use of multiple application instances within a single operating system. Each application executes in a container, which is isolated from other containers. Containerization is useful in environments where applications are designed to run by themselves within a running operating system. Docker is a popular container technology, and Kubernetes is widely used to orchestrate and manage containers at scale.
Most attacks against networks are Denial of Service (DoS) or Distributed Denial of Service (DDoS) attacks in which the objective is to consume a network’s bandwidth so that network services become unavailable. But several other types of attacks exist, some of which are discussed in the following sections.
With Bluetooth technology becoming wildly popular, several attack methods have evolved, including bluejacking (sending anonymous, unsolicited messages to Bluetooth-enabled devices) and bluesnarfing (stealing personal data, such as contacts, pictures, and calendar information from a Bluetooth-enabled phone). Even worse, in a bluesnarfing attack, information about your cellular phone (such as its serial number) can be downloaded, then used to clone your phone.
In an ICMP flood attack, large numbers of ICMP packets (usually Echo Request) are sent to the target network to consume available bandwidth and/or system resources. Because ICMP isn’t required for normal network operations, the easiest defense is to drop ICMP packets at the router or filter them at the firewall.
A Smurf attack is a variation of the ICMP flood attack. In a Smurf attack, ICMP Echo Request packets are sent to the broadcast address of a target network by using a spoofed IP address on the target network. The target, or bounce site, then transmits the ICMP Echo Request to all hosts on the network. Each host then responds with an Echo Reply packet, overwhelming the available bandwidth and/or system resources. Countermeasures against Smurf attacks include dropping ICMP packets at the router.
A Fraggle attack is a variant of a Smurf attack that uses UDP Echo packets (UDP port 7) rather than ICMP packets. Cisco routers can be configured to disable the TCP and UDP services (known as TCP and UDP small servers) that are most commonly used in Fraggle attacks.
There are various attacks that can be carried out against DNS servers, which are designed to cause targeted DNS servers to provide erroneous responses to end users, resulting in end users being sent to imposter systems (usually web sites). Defenses against DNS server attacks include DNS server hardening (including Domain Name System Security Extensions, or DNSSEC) and application firewalls.
In a man-in-the-middle (MITM) attack, an attacker intercepts and potentially alters communications between two parties through impersonation. A common MITM technique attacks the establishment of a TLS session, so that the attacker can easily decrypt encrypted communications between the two endpoints (for example, on a coffee shop Wi-Fi network).
Defenses against MITM attacks include stronger authentication, implementation of DNSSEC, latency examination, and out-of-band verification.
IP spoofing involves altering a TCP packet so that it appears to be coming from a known, trusted source, thus giving the attacker access to the network.
Session hijacking typically involves a Wi-Fi network without encryption, where an attacker is able to intercept another user’s HTTP session cookie. The attacker then uses the same cookie to take over the victim user’s HTTP session. This has been demonstrated with the Firesheep Firefox extension.
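One common countermeasure is to mark session cookies so that the browser sends them only over encrypted connections and keeps them out of reach of page scripts. The sketch below, using Python's standard http.cookies module, shows the Secure and HttpOnly attributes; the cookie name and value are hypothetical.

```python
from http.cookies import SimpleCookie

# Hypothetical session cookie issued by a web application.
cookie = SimpleCookie()
cookie["sessionid"] = "hypothetical-session-token"
cookie["sessionid"]["secure"] = True    # only send over HTTPS, never over plain Wi-Fi HTTP
cookie["sessionid"]["httponly"] = True  # keep the cookie out of reach of page scripts

# Emitted as a Set-Cookie header by the server.
print(cookie.output())
```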
In a SYN flood attack, TCP packets with a spoofed source address request a connection (SYN bit set) to the target network. The target responds with a SYN-ACK packet, but the spoofed source never replies. Half-open connections are incomplete communication sessions awaiting completion of the TCP three-way handshake. These connections can quickly overwhelm a system’s resources while the system waits for the half-open connections to time out, which causes the system to crash or otherwise become unusable.
SYN floods are countered on Cisco routers by using two features: TCP Intercept, which effectively proxies for the half-open connections, and Committed Access Rate (CAR), which limits the bandwidth available to certain types of traffic. Check Point’s FW-1 firewall has a feature known as SYN Defender that functions in a way similar to the Cisco TCP Intercept feature. Other defenses include changing the default maximum number of TCP half-open connections and reducing the timeout period on networked systems.
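On an individual host, half-open connections can be observed directly. The following sketch, assuming the third-party psutil package is installed (and the script has sufficient privileges to see all sockets), counts sockets sitting in the SYN_RECV state, which can serve as a rough indicator of a SYN flood in progress; the threshold is an illustrative assumption.

```python
import psutil  # third-party package; may require elevated privileges to see all sockets

half_open = [conn for conn in psutil.net_connections(kind="tcp")
             if conn.status == psutil.CONN_SYN_RECV]
print(f"{len(half_open)} half-open (SYN_RECV) TCP connections")

if len(half_open) > 100:  # illustrative threshold, tune for your environment
    print("Unusually many half-open connections; possible SYN flood")
```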
In a Teardrop attack, the Length and Fragmentation offset fields of sequential IP packets are modified, causing some target systems to become confused and crash.
In a UDP flood attack, large numbers of UDP packets are sent to the target network to consume available bandwidth and/or system resources. UDP floods can generally be countered by dropping unnecessary UDP packets at the router. However, if the attack uses a required UDP port (such as DNS port 53), other countermeasures need to be employed.
Eavesdropping is the act of listening to network traffic, generally because an attacker wants to learn something about the communications session or its content. The attacker may be looking for login credentials passed in cleartext or for other sensitive information of interest (using a tool such as SIPVicious, for example, to listen in on VoIP traffic). Whatever the reason, we need to reduce attackers’ ability to learn from network traffic: Eavesdropping can be defeated through encryption (to protect sensitive content), as well as encapsulation (to conceal additional information about the communication).