Chapter 5
Communications and Network Security

How do we build trust and confidence into the globe-spanning communications that our businesses, our fortunes, and our very lives depend on? Whether by in-person conversation, videoconferencing, or the World Wide Web, people and businesses communicate. Communications, as we saw in earlier chapters, involves exchanging ideas to achieve a common pool of understanding—it is not just about data or information. Effective communication requires three basic ingredients: a system of symbols and protocols, a medium or a channel in which those protocols exchange symbols on behalf of senders and receivers, and trust. Not that we always trust every communications process 100%, nor do we need to!

We also have to grapple with the convergence of communications and computing technologies. People, their devices, and their ways of doing business no longer accept old-fashioned boundaries that used to exist between voice, video, TXT and SMS, data, or a myriad of other computer-enabled information services. This convergence transforms what we trust when we communicate and how we achieve that trust. As SSCPs, we need to know how to gauge the trustworthiness of a particular communications system, keep it operating at the required level of trust, and improve that trustworthiness if that's what our stakeholders need. Let's look in more detail at how communications security can be achieved and, based on that, get into the details of securing the network-based elements of our communications systems.

To do this, we'll need to grow the CIA triad of earlier chapters—confidentiality, integrity, and availability—into a more comprehensive framework that adds four key ideas to our stack of security needs. This is just one way you'll start thinking in terms of protocol stacks—as system descriptors, as roadmaps for diagnosing problems, and as models of the threat and risk landscape.

Trusting Our Communications in a Converged World

It's useful to reflect a bit on the not-too-distant history of telecommunications, computing, and information security. Don't panic—we don't have to go all the way back to the invention of radio or the telegraph! Think back, though, to the times right after World War II and what the communications and information systems of that world were like. Competing private companies with competing technical approaches, and very different business models, often confounded users' needs to bring local communications systems into harmony with ones in another city, in another country, or on a distant continent. Radio and telephones didn't connect very well; mobile two-way radios and their landside systems were complex, temperamental, and expensive to operate. Computers didn't talk with each other, except via parcel post or courier delivery of magnetic tapes or boxes of punched cards. Mail was not electronic.

By the 1960s, however, many different factors were pushing each of the different communications technologies to somehow come together in ways that would provide greater capabilities, more flexibility, and growth potential, and at lower total cost of ownership. Communications satellites started to carry hundreds of voice-grade analog channels, or perhaps two or three broadcast-quality television signals. At the same time, military and commercial users needed better ways to secure the contents of messages, and even secure or obscure their routing (to defeat traffic analysis attacks). The computer industry centered on huge mainframe computers, which might cost a million dollars or more—and which sat idle many times each day, and especially over holiday weekends! Mobile communications users wanted two-way voice communication that didn't require suitcase-sized transceivers that filled the trunk of their cars.

Without going too far into the technical, economic, or political, what transformed all of these separate and distinct communications media into one world-spanning Web and Internet? In 1969, in close cooperation with these (and other) industries and academia, the U.S. Department of Defense Advanced Research Projects Agency started its ARPANet project. By some accounts, the scope of what it tried to achieve was audacious in the extreme. The result of ARPANet is all around us today, in the form of the Internet, cell phone technology, VoIP, streaming video, and everything we take for granted over the Web and the Internet. And so much more.

One simple idea illustrates the breadth and depth of this change. Before ARPANet, we all thought of communications in terms of calls we placed. We set up a circuit or a channel, had our conversation, then took the circuit down so that some other callers could use parts of it in their circuits. ARPANet's packet-based communications caused us all to forget about the channel, forget about the circuit, and focus on the messages themselves. (You'll see that this had both good and bad consequences for information security later in this chapter.)

One of the things we take for granted is the convergence of all of these technologies, and so many more, into what seems to us to be a seamless, cohesive, purposeful, reliable, and sometimes even secure communications infrastructure. The word convergence is used to sum up the technical, business, economic, political, social, and perceptual changes that brought so many different private businesses, public organizations, and international standards into a community of form, function, feature, and intent. What we sometimes ignore, to our peril, is how that convergence has drastically changed the ways in which SSCPs need to think about communications security, computing security, and information assurance.

Emblematic of this change might be Chester Gould's cartoon character Dick Tracy and his wristwatch two-way radio, first introduced to American readers in 1946. It's credited with inspiring the invention of the smartphone, and perhaps even the smartwatches that are all but taken for granted today. What Gould's character didn't explore for us were the information security needs of a police force whose detectives had such devices—nor the physical, logical, and administrative techniques they'd need to use to keep their communications safe, secure, confidential, and reliable.

To keep those and any other communications trustworthy, think about some key ingredients that we find in any communications system or process:

  • Purpose or intent. Somebody has something they want to accomplish, whether it is ordering a pizza to be delivered or commanding troops into battle. This intention should shape the whole communication process. With a clear statement of intent, the sender can better identify who the target audience is, and whether the intention can be achieved by exchanging one key idea or a whole series of ideas woven together into some kind of story or narrative. This purpose or intent also contains a sense of who not to include in the communication, which may (or should) dictate choices about how the communication is accomplished and protected.
  • Senders and recipients. The actual people or groups on both ends of the conversation or the call; sometimes called the parties to the communication.
  • Protocols that shape how the conversation or communication can start, how it is conducted, and how it is brought to a close. Protocols include a choice of language, character or symbol set, and maybe even a restricted domain of ideas to communicate about. Protocols provide for ways to detect errors in transmission or receipt, and ways to confirm that the recipient both received and understood the message as sent. Other protocols might also verify whether the true purpose of the communication got across as well.
  • Message content, which is the ideas we wish to exchange encoded or represented in the chosen language, character or symbol sets, and protocols.
  • A communications medium, which is what makes transporting the message from one place to another possible. Communications media are physical—such as paper, sound waves, radio waves, electrical impulses sent down a wire, flashes of light or puffs of smoke, or almost anything else.

For example, a letter or holiday greeting might be printed or written on paper or a card, which is placed in an envelope and mailed to the recipient via a national or international postal system. Purpose, the communicating parties, the protocols, the content, and the medium all have to work together to convey “happy holidays,” “come home soon,” or “send lawyers, guns, and money” if the message is to get from sender to receiver with its meaning intact.

At the end of the day (or at the end of the call), both senders and receivers have two critical decisions to make: how much of what was communicated was trustworthy, and what if anything should they do as a result of that communication? The explicit content of what was exchanged has a bearing on these decisions, of course, but so does all of the subtext associated with the conversation. Subtext is about context: about “reading between the lines,” drawing inferences (or suggesting them) regarding what was not said by either party.

The risk of getting the subtext wrong is great! The “Hot Line” illustrates this potential for disaster. During the Cold War, this communications system connected the U.S. national command authority with their counterparts in the Soviet Union; it was created to reduce the risk of accidental misunderstandings that could lead to nuclear war between the two superpowers. Both parties insisted that it be a plain text teletype circuit, with messages simultaneously sent in English and Russian, to prevent either side from trying to read too much into the voice or mannerisms of translators and speakers at either end. People and organizations need to worry about getting the subtext wrong or missing it altogether. So far, as an SSCP, you won't have to worry about how to “secure the subtext.”

Communications security is about data in motion—as it is going to and from the endpoints and the other elements or nodes of our systems, such as servers. It's not about data at rest or data in use, per se. Chapter 8, “Hardware and Systems Security,” and Chapter 9, “Applications, Data, and Cloud Security,” will show you how to enhance the security of data at rest and in use, whether inside the system or at its endpoints. Chapter 11, “Business Continuity via Information Security and People Power,” will also look at how we keep the people layer of our systems communicating in effective, safe, secure, and reliable ways, not only in their roles as users and managers of their company's IT infrastructures but also as people performing their broader roles within the company or organization and its place in the market and in society at large.

CIANA+PS: Applying Security Needs to Networks

Chapter 2, “Information Security Fundamentals,” introduced the concepts of confidentiality, integrity, and availability as the three main attributes or elements of information security and assurance. We also saw that before we can implement plans and programs to achieve that triad, we have to identify what information must be protected from disclosure (kept confidential), its meaning kept intact and correct (ensure its integrity), and that it's where we need it, when we need it (that is, the information is available). As we dig further into what information security entails, we'll have to add four additional and very important attributes to our CIA triad: nonrepudiation, authentication, privacy, and safety.

To repudiate something means to attempt to deny an action that you've performed or something you said. You can also attempt to deny that you ever received a particular message or didn't see or notice that someone else performed an action. In most cases, we repudiate our own actions or the actions of others so as to attempt to deny responsibility for them. “They didn't have my informed consent,” we might claim; “I never got that letter,” or “I didn't see the traffic light turn yellow.” Thus, nonrepudiation is the characteristic of a communications system that prevents a user from claiming that they never sent or never received a particular message. This communications system characteristic sets limits on what senders or receivers can do by restricting or preventing any attempt by either party to repudiate a message, its content, or its meaning.

Authentication, in this context, also pertains to senders and receivers. Authentication (or authenticity) is the verification that the sender or receiver is who they claim to be, and then the further validation that they have been granted permission to use that communications system. Authentication might also go further by validating that a particular sender has been granted the privilege of communicating with a particular recipient, regarding the content or intent of the message itself. These privileges—use of the system, and connection with a particular party—can also be defined with further restrictions, as we'll see later in Chapter 6, “Identity and Access Control.” Authentication as a process has one more “A” associated with it, and that is accountability. This requires that the system keep records of who attempts to access the system, who was authenticated to use it, and what communications or exchanges of messages they had with whom.
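
One building block behind many message authentication schemes is the keyed digest. As a minimal sketch (using Python's standard hmac module, with an entirely hypothetical key and message, and not modeled on any particular network protocol), it shows how a receiver can verify that a message came from a holder of a shared key and arrived unaltered:

```python
import hashlib
import hmac

key = b"shared-secret-key"                       # hypothetical pre-shared key
message = b"transfer 100 credits to account 42"  # hypothetical message

# Sender computes a keyed digest (a MAC) and sends it along with the message.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the digest over what arrived and compares in constant
# time; a match authenticates the sender (as a key holder) and the content.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))        # True
```

Note that because both parties hold the same key, a shared-secret digest supports authentication but not nonrepudiation; either party could have produced the tag. Nonrepudiation requires asymmetric digital signatures, where only one party holds the signing key.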

Adding safety to our security needs mnemonic reminds us that, whether they involve operational technologies or not, far more modern IT systems can directly place people or property at risk of damage, injury, or death than many of us realize. We've already seen one death attributed to a ransomware attack (which crippled a hospital in Germany in 2020) and attempts to contaminate drinking water supplies in several countries. Cybercrime has also demonstrated the need for greater awareness of the need to protect the privacy of individuals. Initially, this was seen as protecting the ways in which personally identifiable information (PII) was gathered, stored, used, and shared; increasingly, the need to protect a person's location and the data about what they are doing on the Internet is growing in urgency and importance. (Some analysts refer to this as “the death of the third-party cookie,” signifying the sea change in online tracking and advertising systems that we saw begin in 2021.)

Thus CIANA+PS: confidentiality, integrity, availability, nonrepudiation, authentication, privacy, and safety. As we'll see in this chapter, networks and their protocols provide significant support to the first five characteristics; safety and privacy are (so far) largely left to the care of applications, programs, and organizational practices that run on top of the Internet's protocol stack. As a result, throughout this chapter, we'll primarily refer to network security needs via CIANA and bring privacy and safety in where it makes sense.

Recall from earlier chapters that CIANA+PS crystallizes our understanding of what information needs what kinds of protection. Most businesses and organizations find that it takes several different but related thought processes to bring this all together in ways that their IT staff and information security team can appreciate and carry out. Several key sets of ideas directly relate to, or help set, the information classification guidelines that should drive the implementation of information risk reduction efforts:

  • Strategic plans define long-term goals and objectives, identify key markets or target audiences, and focus on strategic relationships with key stakeholders and partners.
  • The business impact analysis (BIA) links high-priority strategic goals, objectives, and outcomes with the business logic, processes, and information assets vital to achieving those outcomes.
  • A communications strategy guides how the organization talks with its stakeholders, customers, staff, and other target audiences so that mutual understanding leads to behaviors that support achieving the organization's strategic goals.
  • Risk management plans, particularly information and IT risk management plans, provide the translation of strategic thinking into near-term tactical planning.

The net result should be that the organization combines those four viewpoints into a cohesive and effective information risk management plan, which provides the foundation for “all things CIANA+PS” that the information security team needs to carry out. This drives the ways that SSCPs and others on that information security team conduct vulnerability assessments, choose mitigation techniques and controls, configure and operate them, and monitor them for effectiveness.

Threat Modeling for Communications Systems

With that integrated picture of information security needs, it's time to do some threat modeling of our communications systems and processes. Chapter 4, “Operationalizing Risk Mitigation,” introduced the concepts of threat modeling and the use of boundaries or threat surfaces to segregate parts of our systems from each other and from the outside world. Let's take a quick review of the basics:

  • Notionally, the total CIANA+PS security needs of information assets inside a threat surface are greater than what actors, subjects, or systems elements outside of that boundary should enjoy.
  • Subjects access objects to use or change them; objects are information assets (or people or processes) that exist for subjects to use, invoke, or otherwise interact with. A person reads from a file, possibly by invoking a display process that accesses that file, and presents it on their endpoint device's display screen. In that case, the display process is both an object (to the person invoking it) and a subject (as it accesses the file).
  • The threat surface is a boundary that encapsulates objects that require a degree of protection to meet their CIANA+PS needs.
  • Controlled or trusted paths are deliberately created by the system designers and builders and provide a channel or gateway that subjects on one side of the threat surface use to access objects on the other side. Such paths or portals should contain features that authenticate subjects prior to granting access.

Note that this subject-object access can be bidirectional; there are security concerns in both reading and writing across a security boundary or threat surface. We'll save the theory and practice of that for Chapter 6.

The threat surface frames the problem from the defensive perspective: what do I need to protect and defend from attack? By contrast, threat modeling also defines the attack surface as the set of entities, information assets, features, or elements that are the focus of reconnaissance, intrusion, manipulation, and misuse as part of an attack on an information system. Typically, attack surfaces are at the level of vendor-developed systems or applications; thus, Microsoft Office Pro 2021 is one attack surface, while Microsoft Office 365 Home is another. Other attack surfaces can be specific operating systems, or the hardware and firmware packages that are our network hardware elements. Even a network intrusion detection system (NIDS) can be an attack surface!

Applying these concepts to the total set of organizational communications processes and systems could be a daunting task for an SSCP. Let's peel that onion a layer at a time, though, by separating it into two major domains: that which runs on the internal computer networks and systems, and that which is really people-to-people in nature. We'll work with the people-to-people domain more closely in Chapter 11.

For now, let's combine this concept of threat modeling with the most commonly used sets of protocols, or protocol stacks, that we use in tying our computers, communications, and endpoints together.

Internet Systems Concepts

As an SSCP, you'll need to focus your thinking about networks and security on one particular kind of network—the one that links together most of the computers and communications systems that businesses, governments, and people use. This is “the Internet,” capitalized as a proper name. It's almost everywhere; almost everybody uses it, somehow, in their day-to-day work or leisure pursuits. It is what the World Wide Web (also a proper noun) runs on. It's where we create most of the value of e-commerce, and where most information security threats expose people and businesses to loss or damage. This section will introduce the basic concepts of the Internet and its protocols; then, layer by layer, we'll look at more of their innermost secrets, their common vulnerabilities, and some potential countermeasures you might need to use. The OSI 7-layer reference model will be our framework and guide along the way, as it reveals some critical ideas about vulnerabilities and countermeasures you'll need to appreciate.

Communications and network systems designers talk about protocol stacks as the layers or nested sets of different protocols that work together to define and deliver a set of services to users. An individual protocol or layer defines the specific characteristics, the form, features, and functions that make up that protocol or layer. For example, almost since the first telephone services were made available to the public, the Bell Telephone Company in the U.S. defined a set of connection standards for basic voice-grade telephone service; today, one such standard is the RJ-11 physical and electrical connector for four-wire telephone services. The RJ-11 connection standard says nothing about dial tones, pulse (rotary dial) or Touch-Tone (dual-tone multi-frequency) signaling, or how connections are initiated, established, used, and then taken down as part of making a “telephone call” between parties. Other protocols define services at those layers. The “stack” starts with the lowest level, usually the physical interconnect standard, and layers each successively higher-level standard onto those below it. These higher-level standards can go on almost forever; think of how “reverse the charges,” advanced billing features, or many caller ID features need to depend on lower-level services being defined and working properly, and you've got the idea of a protocol stack.

This is an example of using layers of abstraction to build up complex and powerful systems from subsystems or components. Each component is abstracted, reducing it to just what happens at the interface—how you request services of it, provide inputs to it, and get services or outputs from it. What happens behind that curtain is (or should be) none of your concern, as the external service user. (The service builder has to fully specify how the service behaves internally so that it can fulfill what's required of it.) One important design imperative with stacks of protocols is to isolate the impact of changes; changes in physical transmission of signals should not affect the way applications work with their users, nor should adding a new application require a change in that physical media.

A protocol stack is a document—a set of ideas or design standards. Designers and builders implement the protocol stack into the right set of hardware, software, and procedural tasks (done by people or others). These implementations present the features of the protocol stack as services that can be requested by subjects (people or software tasks).

Datagrams and Protocol Data Units

First, let's introduce the concept of a datagram, which is a common term when talking about communications and network protocols. A datagram is the unit of information used by a protocol layer or a function within it. It's the unit of measure of information in each individual transfer. Each layer of the protocol stack takes the datagram it receives from the layers above it and repackages it as necessary to achieve the desired results. Sending a message via flashlights (or an Aldis lamp, for those of the sea services) illustrates the datagram concept:

  • An on/off flash of the light, or a flash of a different duration, is one bit's worth of information; the datagrams at the lamp level are bits.
  • If the message being sent is encoded in Morse code, then that code dictates a sequence of short and long pulses for each datagram that represents a letter, digit, or other symbol.
  • Higher layers in the protocol would then define sequences of handshakes to verify sender and receiver, indicate what kind of data is about to be sent, and specify how to acknowledge or request retransmission. Each of those sequences might have one or more message in it, and each of those messages would be a datagram at that level of the protocol.
  • Finally, the captain of one of those two ships dictates a particular message to be sent to the other ship, and that message, captain-to-captain, is itself a datagram.
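
To make the layering in that signal-lamp example concrete, here is a toy sketch in Python. The Morse table is deliberately abbreviated, and the “lamp” just prints what it would flash; nothing here is meant to model a real signaling standard:

```python
# Message layer hands text down; the letter layer encodes each character
# as Morse; the lamp layer emits one "flash" (one bit) per dot or dash.
MORSE = {"S": "...", "O": "---"}  # abbreviated table, illustration only

def send_message(text: str) -> None:
    for letter in text:                    # datagram at the message layer
        for pulse in MORSE[letter]:        # datagram at the letter layer
            duration = 1 if pulse == "." else 3
            print(f"flash {duration}", end="  ")  # datagram at the lamp layer
        print(f"<- sent {letter!r}")

send_message("SOS")
```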

Note, however, another usage of this word. The User Datagram Protocol (UDP) is an alternative data communications protocol to the Transmission Control Protocol (TCP), and both of these sit at the same level of the TCP/IP stack—its topmost, or Transport, layer. And to add to the terminological confusion, the OSI model (as we'll see in a moment) uses protocol data unit (PDU) as its general term for the unit of data handled at each layer, while reserving datagram for the PDUs of UDP. Be careful not to confuse UDP and PDU!

Table 5.1 may help you avoid some of this confusion by placing the OSI and TCP/IP stacks side by side. We'll examine each layer in greater detail in a few moments.

TABLE 5.1  OSI and TCP/IP side by side

| Types of layers | Typical protocols | OSI layer | OSI protocol data unit name | TCP/IP layer | TCP/IP datagram name |
|---|---|---|---|---|---|
| Host layers | HTTP, HTTPS, SMTP, IMAP, SNMP, POP3, FTP, … | 7. Application | Data | (Outside of TCP/IP model scope) | Data |
| Host layers | Characters, MPEG, SSL/TLS, compression, S/MIME, … | 6. Presentation | Data | (Outside of TCP/IP model scope) | Data |
| Host layers | NetBIOS, SAP, session handshaking connections | 5. Session | Data | (Outside of TCP/IP model scope) | Data |
| Host layers | TCP, UDP | 4. Transport | Segment (UDP: datagram) | Transport | Segment |
| Media layers | IPv4/IPv6 IP address, ICMP, IPSec, ARP, MPLS, … | 3. Network | Packet | Network (or Internetworking) | Packet |
| Media layers | Ethernet, 802.1, PPP, ATM, Fibre Channel, FDDI, MAC address | 2. Data Link | Frame | Data Link | Frame |
| Media layers | Cables, connectors, 10BaseT, 802.11x, ISDN, T1, … | 1. Physical | Symbol | Physical | Bits |

Handshakes

We'll start with a simple but commonplace example that reveals the role of handshaking to control and direct how the Internet handles our data communications needs. A handshake is a sequence of small, simple communications that we send and receive, such as hello and goodbye, ask and reply, or acknowledge or not-acknowledge, which control and carry out the communications we need. Handshakes are defined in the protocols we agree to use. Let's look at a simple file transfer to a server that I want to do via File Transfer Protocol (FTP) to illustrate this:

  1. I ask my laptop to run the file transfer client app.
  2. Now that it's running, my FTP client app asks the OS to connect to the FTP server.
  3. The FTP server accepts my FTP client's connection request.
  4. My FTP client requests to upload a file to a designated folder in the directory tree on that server.
  5. The FTP server accepts the request, and says “start sending” to my FTP client.
  6. My client sends a chunk of data to the server; the server acknowledges receipt, or requests a retransmission if it encounters an error.
  7. My client signals the server that the file has been fully uploaded, and requests the server to mark the received file as closed, updating its directories to reflect this new file.
  8. My client informs me of successfully completing the upload.
  9. With no more files to transfer, I exit the FTP app.
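
Here's what that same exchange can look like in code, as a minimal sketch using Python's standard ftplib client. The host name, credentials, folder, and filename are all hypothetical placeholders:

```python
from ftplib import FTP

with FTP("ftp.example.com") as ftp:           # steps 2-3: connect; server accepts
    ftp.login("user", "password")             # authenticate to the server
    ftp.cwd("/uploads")                       # step 4: choose the target folder
    with open("report.pdf", "rb") as f:       # steps 5-7: request upload, send
        ftp.storbinary("STOR report.pdf", f)  # chunks; server ACKs each block
print("upload complete")                      # step 8: report success; leaving the
                                              # 'with' block ends the session (step 9)
```

It's worth noting that classic FTP sends credentials and file content in the clear, which is one reason so much of this chapter is about adding security onto these protocols.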

It's interesting to note that the Internet was first created to facilitate things like simple file transfers between computer centers; email was created as a higher-level protocol that used FTP to send and receive small files that were the email notes themselves.

To make this work, we need ways of physically and logically connecting end-user computers (or smartphones or smart toasters) to servers that can support those endpoints with functions and data that users want and need. What this all quickly turned into is the kind of infrastructure we have today:

  • End-user devices (much like “endpoints” in our systems) hand off data to the network for transmission, receive data from other users via the network, and monitor the progress of the communications they care about. In most systems, a network interface card (NIC, or chip), acts as the go-between. (We'll look at this in detail later.)
  • An Internet point of presence is a physical place at which a local Internet service provider (ISP) brings a physical connection from the Internet backbone to the user's NIC. Contractually, the user owns and is responsible for maintaining their own equipment and connections to the point of presence, and the ISP owns and maintains from there to the Internet backbone. Typically, a modem or combination modem/router device performs both the physical and logical transformation of what the user's equipment needs in the way of data signaling into what the ISP's side needs to see.
  • The Internet backbone is a mesh of internetworking nodes and high-capacity, long-distance communications circuits that connect them to each other and to the ISPs.

The physical connections handle the electronic (or electro-optical) signaling that the devices themselves need to communicate with each other. The logical connections are how the right pair of endpoints—the user NIC and the server or other endpoint NIC—get connected with each other, rather than with some other device “out there” in the wilds of the Internet. This happens through address resolution and name resolution.

Packets and Encapsulation

Note in that FTP example earlier how the file I uploaded was broken into a series of chunks, or packets, rather than sent in one contiguous block of data. Each packet is sent across the Internet by itself (wrapped in header and trailer information that identifies the sender, recipient, and other important information we'll go into later). Breaking a large file into packets allows smarter trade-offs between actual throughput rate and error rates and recovery strategies. (Rather than resend the entire file because line noise corrupted one or two bytes, we might need to resend just the one corrupted packet.) However, since sending each packet requires a certain amount of handshake overhead to package, address, route, send, receive, unpack, and acknowledge, the smaller the packet size, the less efficient the overall communications system can be.

Sending a file by breaking it up into packets has an interesting consequence: if each packet has a unique serial number as part of its header, as long as the receiving application can put the packets back together in the proper order, we don't need to care what order they are sent in or arrive in. So if the receiver requested a retransmission of packet number 41, it can still receive and process packet 42, or even several more, while waiting for the sender to retransmit it.
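
A few lines of Python make this concrete. Suppose three packets arrive out of order, each tagged with its serial number; the receiver simply sorts by that number before reassembly (the numbers and payloads here are invented for illustration):

```python
# (sequence number, payload) pairs in the order they actually arrived
packets = [(42, b"world"), (41, b"hello "), (43, b"!")]

# Reassemble by sorting on the sequence number carried in each packet.
data = b"".join(chunk for _, chunk in sorted(packets))
print(data)  # b'hello world!'
```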

Right away we see a key feature of packet-based communications systems: we have to add information to each packet in order to tell both the recipient and the next layer in the protocol stack what to do with it! In our FTP example earlier, we start by breaking the file up into fixed-length chunks, or packets, of data—but we've got to wrap each one with data that says where it's from, where it's going, and where it fits in the sequence. That data goes in a header (data preceding the actual segment data itself), and new end-to-end error-correcting checksums are put into a new trailer. This creates a new datagram at this level of the protocol stack. That new, longer datagram is given to the first layer of the protocol stack. That layer probably has to do something to it; that means it will encapsulate the datagram it was given by adding another header and trailer. At the receiver, each layer of the protocol unwraps the datagram it receives from the lower layer (by processing the information in its header and trailer, and then removing them), and passes this shorter datagram up to the next layer. Sometimes, the datagram from a higher layer in a protocol stack will be referred to as the payload for the next layer down. Figure 5.1 shows this in action.


FIGURE 5.1 Wrapping: layer-by-layer encapsulation

The flow of wrapping, as shown in Figure 5.1, illustrates how a higher-layer protocol logically communicates with its opposite number in another system by having to first wrap and pass its datagrams to lower-layer protocols in its own stack. It's not until the Physical layer connections that signals actually move from one system to another. (Note that this even holds true for two virtual machines talking to each other over a software-defined network that connects them, even if they're running on the same bare metal host!) In OSI 7-layer reference model terminology, this means that layer n of the stack takes the service data unit (SDU) it receives from layer n+1, processes and wraps the SDU with its layer-specific header and footer to produce the datagram at its layer, and passes this new datagram as an SDU to the next layer down in the stack.
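
The sketch below shows wrapping and unwrapping in miniature. The header and trailer fields (source, destination, sequence number, and a CRC-32 checksum) are an invented layout, not any real protocol's, but the encapsulate-then-unwrap flow is exactly the pattern described above:

```python
import struct
import zlib

def wrap(payload: bytes, src: int, dst: int, seq: int) -> bytes:
    """Encapsulate: add this layer's header and trailer around the SDU."""
    header = struct.pack("!HHI", src, dst, seq)        # network byte order
    trailer = struct.pack("!I", zlib.crc32(header + payload))
    return header + payload + trailer                  # the new, longer datagram

def unwrap(datagram: bytes) -> bytes:
    """De-encapsulate: check and strip header/trailer, pass payload up."""
    header, payload, trailer = datagram[:8], datagram[8:-4], datagram[-4:]
    (crc,) = struct.unpack("!I", trailer)
    assert crc == zlib.crc32(header + payload), "corrupted; request a resend"
    return payload

datagram = wrap(b"chunk 41 of the file", src=1, dst=2, seq=41)
print(unwrap(datagram))  # b'chunk 41 of the file'
```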

We'll see what these headers look like, layer by layer, in a bit.

Addressing, Routing, and Switching

In plain old telephone systems (POTS), your phone number uniquely identified the pair of wires that came from the telephone company's central office switches to your house. If you moved, you got a new phone number, or the phone company had to physically disconnect your old house's pair of wires from its switch at that number's terminal, and hook up your new house's phone line instead. From the start (thanks in large part to the people from Bell Laboratories and other telephone companies working with the ARPANet team), we knew we needed something more dynamic, adaptable, and easier to use. What they developed was a way to define a logical address (the IP or Internet Protocol address) and a physical address or identity for each NIC in each device (its media access control or MAC address), along with a way to map from one to the other while allowing a device to be in one place today and another place tomorrow. From its earliest ARPANet days until the mid-1990s, the Internet Assigned Numbers Authority (IANA) handled the assignment of IP addresses and address ranges to users and organizations who requested them.

Routing is the process of determining what path or set of paths to use to send a set of data from one endpoint device through the network to another. In POTS, the route of the call was static—once you set up the circuit, it stayed up until the call was completed, unless a malfunction interrupted the call. The Internet, by contrast, does not route calls—it routes individually addressed packets from sender to recipient. If a link or a series of communications nodes in the Internet itself go down, senders and receivers do not notice; subsequent packets will be dynamically rerouted to working connections and nodes. This also allows a node (or a link) to say “no” to some packets as part of load-leveling and traffic management schemes. The Internet (via its protocol stack) handles routing as a distributed, loosely coupled, and dynamic process—every node on the Internet maintains a variety of data that help it decide which of the nodes it's connected to should handle a particular packet that it wants to forward to the ultimate recipient (no matter how many intermediate nodes it must pass through to get there).
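
Real routers implement this forwarding decision with specialized hardware and routing protocols, but the core lookup is easy to sketch. The toy routing table below (with invented prefixes and next-hop names) picks the longest matching prefix for each destination address, one packet at a time:

```python
import ipaddress

routes = {                        # prefix -> next hop (illustrative values)
    "0.0.0.0/0": "isp-gateway",   # default route: "somebody out there"
    "10.0.0.0/8": "core-router",
    "10.1.2.0/24": "branch-router",
}

def next_hop(destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    matches = [p for p in routes if addr in ipaddress.ip_network(p)]
    best = max(matches, key=lambda p: ipaddress.ip_network(p).prefixlen)
    return routes[best]           # the most specific match wins

print(next_hop("10.1.2.77"))  # branch-router
print(next_hop("8.8.8.8"))    # isp-gateway
```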

Switching is the process used by one node to receive data on one of its input ports and choose which output port to send the data to. (If a particular device has only one input and one output, the only switching it can do is to pass the data through or deny it passage.) A simple switch depends on the incoming data stream to explicitly state which path to send the data out on; a router, by contrast, uses routing information and routing algorithms to decide what to tell its built-in switch to properly route each incoming packet.

Another way to find and communicate with someone is to know their name and then somehow look that name up in a directory. By the mid-1980s, the Internet was making extensive use of such naming conventions, creating the Domain Name System (DNS). A domain name consists of sets of characters joined by periods (or “dots”); “bbc.co.uk” illustrates the higher-level domain “.co.uk” for commercial entities in the United Kingdom, and “bbc” is the name itself. Taken together that makes a fully qualified domain name. The DNS consists of a set of servers that resolve domain names into IP addresses, registrars that assign and issue both IP addresses and the domain names associated with them to parties who want them, and the regulatory processes that administer all of that.
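
From an endpoint's point of view, all of that DNS machinery hides behind a single resolver call. Python's standard socket module shows the name-to-address mapping in action (the hostname is illustrative, and the addresses returned depend on your resolver):

```python
import socket

# Ask the system resolver (and, behind it, DNS) for addresses for a name.
for family, _, _, _, sockaddr in socket.getaddrinfo("www.example.com", 443):
    print(family.name, sockaddr)  # e.g., AF_INET ('93.184.216.34', 443)
```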

Network Segmentation

Segmentation is the process of breaking a large network into smaller ones. “The Internet” acts as if it is one gigantic network, but it's not. It's actually many millions of internet segments that come together at many different points to provide seamless service. An internet segment (sometimes called “an internet,” lowercase) is a network of devices that communicate using TCP/IP and thus support the OSI 7-layer reference model. This segmentation can happen at any of the three lower layers of our protocol stacks, as we'll see in a bit. Devices within a network segment can communicate with each other, but which layer the segments connect on, and what kind of device implements that connection, can restrict the outside world to seeing the connection device (such as a router) and not the nodes on the subnet below it.

Segmentation of a large internet into multiple, smaller network segments provides a number of practical benefits, which affect the choice of how to join segments and at which layer of the protocol stack. The switch or router that runs the segment, and its connection with the next higher segment, are two single points of failure for the segment. If the device fails or the cable is damaged, no device on that segment can communicate with the other devices or the outside world. This can also help isolate other segments from failure of routers or switches, cables, or errors (or attacks) that are flooding a segment with traffic.

Subnets are different from network segments. We'll take a deep dive into the fine art of subnetting after we've looked at the overall protocol stack.
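
As a small preview of that deep dive, Python's standard ipaddress module makes the arithmetic of carving a larger address block into smaller ones concrete (the block here is a private-use range chosen purely for illustration):

```python
import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")
for subnet in net.subnets(prefixlen_diff=2):  # carve into four /26 blocks
    print(subnet, "usable hosts:", subnet.num_addresses - 2)
```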

URLs and the Web

In 1990, Tim Berners-Lee, a researcher at CERN in Switzerland, confronted the problem that researchers were having: they could not find and use what they already knew or discovered, because they could not effectively keep track of everything they wrote and where they put it! CERN was drowning in its own data. Berners-Lee wanted to take the much older idea of a hyperlinked or hypertext-based document one step further. Instead of just having links to points within the document, he wanted to have documents be able to point to other documents anywhere on the Internet. This required that several new ingredients be added to the Internet:

  • A unique way of naming a document that included where it could be found on the Internet, which came to be called a locator
  • Ways to embed those unique names into another document, where the document's creator wanted the links to be (rather than just in a list at the end, for example)
  • A means of identifying a computer on the Internet as one that stored such documents and would make them available as a service
  • Directory systems and tools that could collect the addresses or names of those document servers
  • Keyword search capabilities that could identify what documents on a server contained which keywords
  • Applications that an individual user could run that could query multiple servers to see if they had documents the user might want, and then present those documents to the user to view, download, or use in other ways
  • Protocols that could tie all of those moving parts together in sensible, scalable, and maintainable ways

By 1991, new words entered our vernacular: webpage, Hypertext Transfer Protocol (HTTP), Web browser, Web crawler, and URL, to name a few. Today, all of that has become so commonplace, so ubiquitous, that it's easy to overlook just how many powerfully innovative ideas had to come together all at once. Knowing how to use the right uniform resource locators (URLs) became more important than understanding IP addresses. URLs provide us with an unambiguous way to identify a protocol, a server on the network, and a specific asset on that server. Additionally, a URL as a command line can contain values to be passed as variables to a process running on the server. By 1998, the business of managing and regulating both IP addresses and domain names had grown to the point that a new nonprofit, nongovernmental organization was created, the Internet Corporation for Assigned Names and Numbers (ICANN, pronounced “eye-can”).
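
A URL really does carry all of those pieces in one string: the protocol, the server, the asset, and any values passed to a server-side process. Python's standard urllib.parse shows the decomposition (the URL itself is a made-up example):

```python
from urllib.parse import parse_qs, urlparse

parts = urlparse("https://www.example.com:8443/reports/q3?format=pdf&lang=en")
print(parts.scheme)           # 'https' - the protocol
print(parts.netloc)           # 'www.example.com:8443' - the server
print(parts.path)             # '/reports/q3' - the asset on that server
print(parse_qs(parts.query))  # {'format': ['pdf'], 'lang': ['en']} - passed values
```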

The rapid acceptance of the World Wide Web and the HTTP concepts and protocols that empowered it demonstrates a vital idea: the layered, keep-it-simple approach embodied in the TCP/IP protocol stack and the OSI 7-layer model works. Those stacks give us a strong but simple foundation on which we can build virtually any information service we can imagine.

Topologies


The brief introduction (or review) of networking fundamentals we've had thus far brings us to ask an important question: how do we hook all of those network devices and endpoints together? We clearly cannot build one switch with a million ports on it, but we can use the logical design of the Internet protocols to let us build more practical, modular subsystem elements and then connect them in various ways to achieve what we need.

A topology, to network designers and engineers, is the basic logical geometry by which different elements of a network connect together. Topologies consist of nodes and the links that connect them. Experience (and much mathematical study!) gives us some simple, fundamental topologies to use as building blocks for larger systems:

  • Point-to-point is the simplest topology: two nodes, with one link between them. This is sometimes called peer-to-peer if the two nodes have relatively the same set of privileges and responsibilities with respect to each other (that is, neither node is in control of the other). If one node fails, or the connection fails, the network cannot function; whether the other node continues to function normally or has to abnormally terminate processes is strictly up to that node (and its designers and users).
  • Bus topologies or networks connect multiple nodes together, one after the other, in series, as shown in Figure 5.2. The bus provides the infrastructure for sending signals to all of the nodes, and for sending addressing information (sometimes called device select) that allows each node to know when to listen to the data and when to ignore it. Well-formed bus designs should not require each node to process data or control signals in order to pass them on to the next node on the bus. Backplanes are a familiar implementation of this; for example, the industry-standard PCI bus provides a number of slots that can take almost any PCI-compatible device (in any slot). A hot-swap bus has special design features that allow one device to be powered off and removed without requiring the bus, other devices, or the overall system to be shut down. These are extensively used in storage subsystems. Bus systems typically are limited in length, rarely exceeding three meters overall.

FIGURE 5.2 Bus topology

  • Ring networks are a series of point-to-point-to-point connections, with the last node on the chain looped back to connect to the first, as shown in Figure 5.3. As point-to-point connections, each node has to be functioning properly in order to do its job of passing data on to the next node on the ring. This does allow ring systems nodes to provide signal conditioning that can boost the effective length of the overall ring (if each link has a maximum 10 meter length, then 10 nodes could span a total length of 50 meters out and back). Nodes and connections all have to work in order for the ring to function. Rings are designed to provide either a unidirectional or bidirectional flow of control and data.

FIGURE 5.3 Ring network topology

  • Star networks have one central node that is connected to multiple other nodes via point-to-point connections. Unlike a point-to-point network, the node in the center has to provide (at least some) services to control and administer the network. The central node is therefore a server (since it provides services to others on the star network), and the other nodes are all clients of that server. This is shown in Figure 5.4.

FIGURE 5.4 Star (or tree) network topology

  • Mesh networks in general provide multiple point-to-point connections between some or all of the nodes in the mesh, as shown in Figure 5.5. Mesh designs can be uniform (all nodes have point-to-point connections to all other nodes), or contain subsets of nodes with different degrees of interconnection. As a result, mesh designs can have a variety of client-server, server-to-server, or peer-to-peer relationships built into them. One mesh architecture you probably use every day is the mobile phone system, with its cellular design based on a mesh of base stations providing the connectivity needed. Mesh designs are used in datacenters, since they provide multiple paths between multiple CPUs, storage controllers, or Internet-facing communications gateways. Mesh designs are also fundamental to supercomputer designs, for the same reason. Mesh designs tend to be very robust, since normal TCP/IP alternate routing can allow traffic to continue to flow if one or a number of nodes or connections fail; at worst, overall throughput of the mesh and its set of nodes may decrease until repairs can be made.

FIGURE 5.5 Mesh network topology (fully connected)

With these in mind, a typical SOHO (small office/home office) network at a coffee house that provides Wi-Fi for its customers might use a mix of the following topology elements:

  • A simple mesh of two point-to-point connections via ISPs to the Internet to provide a high degree of availability
  • Point-to-point from that mesh to a firewall system
  • Star connections to support three subnets: one for retail systems, one for store administration, and one for customer or staff Wi-Fi access. Each of these would be its own star network.
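
One way to reason about a topology like this is to model its nodes and links as an adjacency list and ask reachability questions. The sketch below uses invented node names matching the coffee-house example; it shows both why the dual-ISP mesh helps availability and why the firewall is a single point of failure for everything behind it:

```python
from collections import deque

links = {  # adjacency list for the SOHO example (names are illustrative)
    "isp-a": ["firewall"], "isp-b": ["firewall"],
    "firewall": ["isp-a", "isp-b", "retail", "admin", "wifi"],
    "retail": ["firewall"], "admin": ["firewall"], "wifi": ["firewall"],
}

def reachable(start: str, goal: str, failed: set) -> bool:
    """Breadth-first search that skips failed nodes."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for peer in links[node]:
            if peer not in seen and peer not in failed:
                seen.add(peer)
                queue.append(peer)
    return False

print(reachable("wifi", "isp-a", failed=set()))         # True
print(reachable("wifi", "isp-b", failed={"isp-a"}))     # True: the mesh pays off
print(reachable("wifi", "isp-b", failed={"firewall"}))  # False: single point of failure
```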

“Best Effort” and Trusting Designs

The fundamental design paradigm of the TCP/IP and OSI 7-layer stacks is that they deliver “best-effort” services. In contract law and systems engineering, a best-efforts basis sets expectations for services being requested and delivered; the server will do what is reasonable and prudent but will not go “beyond the call of duty” to make sure that the service is performed, day or night, rain or shine! There are no guarantees. Nothing guarantees that if your device's firmware does things the “wrong” way, its errors will keep it from connecting, getting traffic sent and received correctly, or performing any other network function. Nothing guarantees that your traffic will go where you want it to and nowhere else, that it will not be seen by anybody else along the way, or that it will not suffer any corruption of content. Yes, each individual packet does have parity and error correction and detection checksums built into it. These may (no guarantees!) cause a piece of hardware along the route to reject the packet as “in error” and request that the sender retransmit it. An Internet node or the NIC in your endpoint might or might not detect conflicts in the way that fields within the packet's wrappers are set; it may or may not be smart enough to ask for a resend, or pass back some kind of error code and a request that the sender try again.

Think about the idea of routing a packet in a best-effort way: the first node that receives the packet will try to figure out which node to forward it on to, so that the packet has a pretty good chance of getting to the recipient in a reasonable amount of time. But this depends on ways of one node asking other nodes whether they know or recognize the address, or know some other node that does.

The protocols do define a number of standardized error codes that relate to the most commonly known errors, such as attempting to send traffic to an address that is unknown and unresolvable. A wealth of information is available about what might cause such errors, how participants might work to resolve them, and what a recommended strategy is to recover from one when it occurs. What this means is that the burden for managing the work that we want to accomplish by means of using the Internet is not the Internet's responsibility. That burden of plan, do, check, and act is allocated to the higher-level functions within application programs, operating systems, and NIC hardware and device drivers that are using these protocols, or the people and business logic that actually invokes those applications in the first place.
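
UDP is the purest expression of this best-effort philosophy, and Python's socket module makes the point in a few lines. The sketch below sends one datagram to a documentation-only test address; there is no handshake, no acknowledgment, and no error raised even if nothing is listening:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP: no connection setup
sock.sendto(b"hello, anyone?", ("192.0.2.10", 9999))     # fire and forget
sock.close()  # no exception here does NOT mean the datagram arrived
```

Any checking, retrying, or giving up is the application's job—exactly the plan, do, check, act burden described above.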

In many respects, the core of TCP/IP is a trusting design. The designers (and the Internet) trust that equipment, services, and people using it will behave properly, follow the rules, and use the protocols in the spirit and manner in which they were written. Internet users and their equipment are expected to cooperate with each other, as each spends a fragment of their time, CPU power, memory, or money to help many other users achieve what they need.

One consequence of this trusting, cooperative, best-efforts nature of our Internet must be faced head-on: security becomes an add-on. We'll see how to add it on, layer by layer, later in this chapter.

Two Protocol Stacks, One Internet

Let's look at two different protocol stacks for computer systems networks. Both are published, public domain standards; both are widely adopted around the world. The “geekiest” of the standards is TCP/IP, the Transmission Control Protocol over Internet Protocol standard (two layers of the stack right there!). Its four layers define how we build up networks from the physical interconnections up to what it calls the Transport layer, where the heavy lifting of turning a file transfer into Internet traffic starts to take place. TCP/IP also defines and provides homes for many of the other protocols that make addressing, routing, naming, and service delivery happen.

By contrast, the OSI 7-layer reference model is perhaps the more “getting-business-done” of the two stacks. It focuses on getting the day-to-day business and organizational tasks done that really are why we wanted to internetwork computers in the first place. This is readily apparent when we start with its topmost, or application, layer. We use application programs to handle personal, business, government, and military activities—those applications certainly need the operating systems that they depend on for services, but no one does their online banking just using Windows 10 or Red Hat Linux alone!

Many network engineers and technicians thoroughly understand the TCP/IP model, since they use it every day, but have little or no understanding of the OSI 7-layer model. They often see it as too abstract or too conceptual to have any real utility in the day-to-day world of network administration or network security. Nothing could be further from the truth! As you'll see, the OSI's top three layers provide powerful ways for you to think about information systems security—beyond just keeping the networks secure. In fact, many of the most troublesome information security threats that SSCPs must deal with occur at the upper layers of the OSI 7-layer reference model—beyond the scope of what TCP/IP concerns itself with. As an SSCP, you need a solid understanding of how TCP/IP works—how its protocols for device and port addressing and mapping, routing, and delivery, and network management all play together. You will also need an equally thorough understanding of the OSI 7-layer model, how it contrasts with TCP/IP, and what happens in its top three layers. Taken together, these two protocol stacks provide the infrastructure of all of our communications and computing systems. Understanding them is the key to understanding why and how networks can be vulnerable—and it provides the clues you need to choose the best ways to secure those networks.

Complementary, Not Competing, Frameworks

Both the TCP/IP protocol stack and the OSI 7-layer reference model grew out of efforts in the 1960s and 70s to continue to evolve and expand both the capabilities of computer networks and their usefulness. While it all started with the ARPANet project in the United States, international business, other governments, and universities worked diligently to develop compatible and complementary network architectures, technologies, and systems. By the early 1970s, commercial, academic, military, and government-sponsored research networks were already using many of these technologies, quite often at handsome profits.

Transmission Control Protocol over Internet Protocol (TCP/IP) was developed during the 1970s, based on original ARPANet protocols and a variety of competing (and in some cases conflicting) systems developed in private industry and in other countries. Through the late 1970s and early 1980s, these ideas were merged together to become the published TCP/IP standard; ARPANet was officially migrated to this standard on January 1, 1983. Since this protocol became known as “the Internet protocol,” that date is as good a date to declare as the “birth of the Internet” as any. TCP/IP is defined as consisting of four basic layers. (We'll see why that “over” is in the name in a moment.)

The decade of the 1970s continued to be one of incredible innovation. It saw significant competition between ideas, standards, and design paradigms in almost every aspect of computing and communications. In trying to dominate their markets, many mainframe computer manufacturers and telephone companies set de facto standards that all but made it impossible (contractually) for any other company to make equipment that could plug into their systems and networks. Internationally, this was closing some markets while opening others. Although the courts were dismantling these near-monopolistic barriers to innovation in the United States, two different international organizations, the International Organization for Standardization (ISO) and the International Telegraph and Telephone Consultative Committee (CCITT), both worked on ways to expand the TCP/IP protocol stack to embrace higher-level functions that business, industry, and government felt were needed. By 1984, this led to the publication of the International Telecommunications Union (ITU, the renamed CCITT) Standard X.200 and ISO Standard 7498.

This new standard had two major components, and here is where some of the confusion among network engineers and IT professionals begins. The first component was the Basic Reference Model, which is an abstract (or conceptual) model of what computer networking is and how it works. This became known as the Open Systems Interconnection Reference Model, sometimes known as the OSI 7-layer model. (Since ISO subsequently developed more reference models in the open systems interconnection family, it's preferable to refer to this one as the OSI 7-layer reference model to avoid confusion.) The other major component was a whole series of highly detailed technical standards.

In many respects, both TCP/IP and the OSI 7-layer reference model largely agree on what happens in the first four layers of their model. But while TCP/IP doesn't address how things get done beyond its top layer, the OSI reference model does. Its three top layers are all dealing with information stored in computers as bits and bytes, representing both the data that needs to be sent over the network and the addressing and control information needed to make that happen. The bottommost layer has to transform computer representations of data and control into the actual signaling needed to transmit and receive across the network. (We'll look at each layer in greater depth in subsequent sections as we examine its potential vulnerabilities.)

Let's use the OSI 7-layer reference model, starting at the physical level, as our roadmap and guide through internetworking. Table 5.2 shows a simplified side-by-side comparison of the OSI and TCP/IP models and illustrates how the OSI model's seven layers fit within a typical organization's use of computer networks. You'll note the topmost layer is “layer 8,” the layer of people, business, purpose, and intent. (Note that there are many such informal “definitions” of the layers above layer 7, some humorous, some useful to think about using.) As we go through these layers, layer by layer, you'll see where TCP/IP differs in its approach, its naming conventions, or just where it and OSI have different points of view. With a good overview of the protocols layer by layer, we'll look in greater detail at topics that SSCPs know more about, or know how to do with great skill and cunning!

TABLE 5.2  OSI 7-layer model and TCP/IP 4-layer model in context

System components | OSI layer | TCP/IP protocols and services (examples) | Key address element | Datagrams are called… | Role in the information architecture
People | (“Layer 8”) |  | Name, building and room, email address, phone number, … | Files, reports, memos, conversations, … | Company data, information assets
Application software + people processes, gateways | 7 – Application | HTTP, email, FTP, … | URL, IP address + port | Upper-layer data | Implement business logic and processes
 | 6 – Presentation | SSL/TLS, MIME, MPEG, compression |  |  |
 | 5 – Session |  |  |  |
Load balancers, gateways | 4 – Transport | TCP, UDP | IP address + port | Segments | Implement connectivity with clients, partners, suppliers, …
Routers, OS software | 3 – Network | IPv4, IPv6, IPSec, ICMP, … | IP address | Packets |
Switches, hubs, routers | 2 – Data Link | 802.1X, PPP, … | MAC address | Frames |
Cables, antenna, … | 1 – Physical | Physical connection |  | Bits |

Layer 1: The Physical Layer

Layer 1, the Physical layer, is very much the same in both TCP/IP and the OSI 7-layer model, and the same standards are used in both. It typically consists of the electrical, electronic, and optical hardware that transforms computer data into signals, moves those signals to other nodes of the network, and transforms received signals back into computer data. Layer 1 is usually embedded in the NIC and provides the physical handshake between one NIC and its connections to the rest of the network. It does this by a variety of services, including the following:

  • Transmission media control, controlling the circuits that drive the radio, fiber-optic, or electrical cable transmitters and receivers. This verifies that the fiber or cable or Wi-Fi system is up and operating and ready to send or receive. In point-to-point wired systems, this is the function that tells the operating system that “a network cable may have come unplugged,” for example. (Note that this can be called media control or medium control; since most NICs and their associated interface circuits probably support only one kind of medium, you might think that medium is the preferred term. Both are used interchangeably.)
  • Collision detection and avoidance manages the transmitter to prevent it from interfering with other simultaneous transmissions by other nodes. (Think of this as waiting until the other people stop talking before you start!)
  • The physical plug, socket, connector, or other mechanical device that is used to connect the NIC to the network transmission medium. The most common form of such interconnection uses a Bell System RJ-45 connector and eight-wire cabling as the transmission medium for electrical signals. The eight wires are twisted together in pairs (for noise cancellation reasons) and can be with or without a layer of metalized Mylar foil to provide further shielding from the electromagnetic noise of power lines, radio signals, or other cabling nearby. Thus, these systems use either UTP (unshielded twisted pair) or STP (shielded twisted pair) to achieve speed, quality, and distance needs.
  • Interface with the Data Link layer, managing the handoff of datagrams between the media control elements and the Data Link layer's functions.

Multiple standards, such as the IEEE 802 series, document the details of the various physical connections and the media used at this layer.

At Layer 1, the datagram is the bit. The details of how different media turn bits (or handfuls of bits) into modulated signals to place onto wires, fibers, radio waves, or light waves are (thankfully!) beyond the scope of what SSCPs need to deal with. That said, it's worth considering that at Layer 1, addresses don't really matter! For wired (or fibered) systems, it's that physical path from one device to the next that gets the bits where they need to go; that receiving device has to receive all of the bits, unwrap them, and use Layer 2 logic to determine if that set of bits was addressed to it.

This also demonstrates a powerful advantage of this layers-of-abstraction model: nearly everything interesting that needs to happen to turn the user's data (our payload) into transmittable, receivable physical signals can happen with absolutely zero knowledge of how that transmission or reception actually happens! This means that changing out 10BaseT physical media for Cat6 Ethernet can give your systems as much as a thousandfold increase in throughput, with no changes needed at the network address, protocol, or applications layers. (At most, very low-level device driver settings might need to be configured via operating systems functions as part of such an upgrade.)
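To make that abstraction concrete, consider the minimal client sketch below (in Python; the host name and port are placeholders, not anything specified in this chapter). Nothing in this application-level code names or depends on the physical medium, so it runs unchanged whether the bits travel over 10BaseT, Cat6, or Wi-Fi:

    # Minimal TCP client sketch. No line of this code knows or cares what
    # Layer 1 looks like; the protocol stack hides it entirely.
    import socket

    with socket.create_connection(("example.com", 80), timeout=5) as sock:
        # Ask for just the response headers; example.com is a placeholder host.
        sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(sock.recv(4096).decode("ascii", errors="replace"))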

It's also worth pointing out that the physical domain defines both the collision domain and the physical segment. A collision domain is the physical or electronic space in which multiple devices are competing for each other's attention; if their signals out-shout each other, some kind of collision detection and avoidance is needed to keep things working properly. For wired (or fiber-connected) networks, all of the nodes connected by the same cable or fiber are in the same collision domain; for wireless connections, all receivers that can detect a specific transmitter are in that transmitter's collision domain. (If you think that suggests that typical Wi-Fi usage means lots of overlapping collision domains, you'd be right!) At the physical level, that connection is also known as a segment. But don't get confused: we segment (chop into logical pieces) a network into logical sub-networks, which we'll call subnets, at either Layer 2 or Layer 3, but not at Layer 1.

Layer 2: The Data Link Layer

Layer 2, the Data Link layer, performs the data transfer from node to node of the network. As with the corresponding layer in TCP/IP, it manages the logical connection between the nodes (over the link provided by Layer 1), provides flow control, and in many cases handles error correction. At this layer, the datagram is known as a frame, and frames consist of the data passed to Layer 2 by the higher layer, plus addressing and control information.

The IEEE 802 series of standards further refine the concept of what Layer 2 in OSI delivers by setting forth two sublayers:

  • The Media Access Control (MAC) sublayer uses the unique MAC addresses of the NICs involved in the connection as part of controlling individual device access to the network and how devices use network services. As a result, it is the MAC sublayer that grants a device permission to transmit its data.
  • The Logical Link Control (LLC) sublayer links the MAC sublayer to higher-level protocols by encapsulating their respective PDUs in additional header/trailer fields. LLC can also provide frame synchronization and additional error correction.

The MAC address is a 48-bit address, typically written (for humans) as six octets: six 8-bit binary numbers, usually shown as two-digit hexadecimal numbers separated by dashes, colons, or no separator at all. For example, 3A-7C-FF-29-01-05 is the same 48-bit address as 3A7CFF290105. Standards dictate that the first 24 bits (the first three hex digit pairs) are the organizational identifier of the NIC's manufacturer, and the remaining 24 bits (the last three hex digit pairs) are a NIC-specific address. The IEEE assigns the organizational identifier, and the manufacturer assigns NIC numbers as it sees fit. Each 24-bit field represents over 16.7 million possibilities, which for a time seemed to be more than enough addresses; not anymore. Part of IPv6 is the adoption of a larger, 64-bit MAC address, and the protocols to allow devices with 48-bit MAC addresses to participate in IPv6 networks successfully.

Note that one of the bits in the first octet (in the organizational identifier) flags whether that MAC address is universally or locally administered. Many NICs have features that allow the local systems administrator to overwrite the manufacturer-provided MAC address with one of their own choosing. This does provide the end user organization with a great capability to manage devices by using their own internal MAC addressing schemes, but it can be misused to allow one NIC to impersonate another one (so-called MAC address spoofing).
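A short sketch (in Python, reusing the arbitrary address from the example above) shows how the organizational identifier, the NIC-specific half, and the locally administered flag all fall out of the bit layout:

    # Split a 48-bit MAC address into its OUI and NIC-specific halves and
    # test the universally/locally administered bit described above.
    def parse_mac(mac: str):
        octets = bytes.fromhex(mac.replace("-", "").replace(":", ""))
        assert len(octets) == 6, "a MAC address is 48 bits (six octets)"
        oui, nic = octets[:3], octets[3:]
        # The 0x02 bit of the first octet: 0 = universally administered
        # (manufacturer-assigned), 1 = locally administered.
        locally_administered = bool(octets[0] & 0x02)
        return oui.hex("-"), nic.hex("-"), locally_administered

    print(parse_mac("3A-7C-FF-29-01-05"))
    # ('3a-7c-ff', '29-01-05', True) -- this example address happens to
    # have the locally administered bit set.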

Let's take a closer look at the structure of a frame, shown in Figure 5.6. As mentioned, the payload is the set of bits given to Layer 2 by Layer 3 (or a layer-spanning protocol) to be sent to another device on the network. Conceptually, each frame consists of:

  • A preamble, which is a 56-bit series of alternating 1s and 0s. This synchronization pattern helps serial data receivers ensure that they are receiving a frame and not a series of noise bits.
  • The Start Frame Delimiter (SFD), which signals to the receiver that the preamble is over and that the real frame data is about to start. Different media require different SFD patterns.
  • The destination MAC address.
  • The source MAC address.
  • The Ether Type field, which indicates either the length of the payload in octets or the protocol type that is encapsulated in the frame's payload.
  • The payload data, of variable length (depending on the Ether Type field).
  • A Frame Check Sequence (FCS), which provides a checksum across the entire frame, to support error detection.

The inter-packet gap is a period of dead space on the media, which helps transmitters and receivers manage the link and helps signify the end of the previous frame and the start of the next. It is not, specifically, a part of either frame, and it can be of variable length.
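As a rough illustration of that layout, the sketch below assembles the MAC-visible fields of a frame; the preamble, SFD, and FCS are normally generated by the NIC hardware itself, so they are omitted here. The addresses are arbitrary, and EtherType 0x0800 marks an IPv4 payload:

    import struct

    def build_frame(dst_mac: bytes, src_mac: bytes, ether_type: int,
                    payload: bytes) -> bytes:
        # Destination MAC, source MAC, then the 16-bit Ether Type field,
        # all in network (big-endian) byte order, followed by the payload.
        return struct.pack("!6s6sH", dst_mac, src_mac, ether_type) + payload

    frame = build_frame(bytes.fromhex("3A7CFF290105"),   # destination (arbitrary)
                        bytes.fromhex("3A7CFF290106"),   # source (arbitrary)
                        0x0800,                          # 0x0800 = IPv4 payload
                        b"example payload")
    print(frame.hex("-"))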

FIGURE 5.6 Data Link layer frame format

Layer 2 devices include bridges, modems, NICs, and switches that don't use IP addresses (thus called Layer 2 switches). Firewalls make their first useful appearance at Layer 2, performing rule-based and behavior-based packet scanning and filtering. Datacenter designs can make effective use of Layer 2 firewalls.

Layer 3: The Network Layer

Layer 3, the Network layer, is defined in the OSI model as the place where variable-length sequences of fixed-length packets (which make up what the user or higher protocols want sent and received) are transmitted or received. Routing and switching happen at Layer 3. Logical paths between two hosts are created; data packets are routed and forwarded to destinations; packet sequencing, congestion control, and error handling occur here. Layer 3 is where we see a lot of the Internet's “best efforts” design thinking at work, or perhaps not at work; it is left to the individual designers who build implementations of the protocols to decide how Layer 3–like functions in their architecture will handle errors at the Network layer and below.

ISO 7498/4 also defines a number of network management and administration functions that (conceptually) reside at Layer 3. These protocols provide greater support to routing, managing multicast groups, address assignment (at the Network layer), and other status information and error handling capabilities. Note that it is the job performed by the payload (the datagrams being carried), and not the protocol that carries or implements it, that makes these functions belong to the Network layer.

The most common device we see at Layer 3 is the router; combination bridge-routers, or brouters, are also in use (bridging together two or more Wi-Fi LAN segments, for example). Layer 3 switches are those that can deal with IP addresses. Firewalls also are a part of the Layer 3 landscape.

At Layer 3, the datagram is the packet. Packets start with a packet header, which contains a number of fields of interest to us; see Figure 5.7. For now, let's focus on the IP version 4 format, which has been in use since the 1970s and thus is almost universally used:

  • Both the source and destination address fields are 32-bit IPv4 addresses.
  • The Identification field, Flags, and Fragment Offset participate in error detection and reassembly of packet fragments.
  • The Time To Live (or TTL) field keeps a packet from floating around the Internet forever. Each router or gateway that processes the packet decrements the TTL field, and if its value hits zero, the packet is discarded rather than passed on. If that happens, the router or gateway is supposed to send an Internet Control Message Protocol (ICMP) packet to the originator with fields set to indicate which packet didn't live long enough to get where it was supposed to go. (The tracert function uses TTL in order to determine what path packets are taking as they go from sender to receiver.)
  • The Protocol field indicates whether the packet is using ICMP, TCP, Exterior Gateway, IPv6, or Interior Gateway Routing Protocol (IGRP).
  • Finally, the data (or payload) portion.
FIGURE 5.7 IPv4 packet format
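A sketch of unpacking the fixed 20-byte IPv4 header may help make those fields concrete (the sample values here are arbitrary, and the checksum is left at zero for simplicity):

    import struct

    def parse_ipv4_header(raw: bytes) -> dict:
        (ver_ihl, _tos, total_len, ident, _flags_frag,
         ttl, proto, _checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
        return {
            "version": ver_ihl >> 4,
            "header_bytes": (ver_ihl & 0x0F) * 4,
            "total_length": total_len,
            "identification": ident,
            "ttl": ttl,
            "protocol": proto,  # e.g., 1 = ICMP, 6 = TCP, 17 = UDP
            "source": ".".join(str(b) for b in src),
            "destination": ".".join(str(b) for b in dst),
        }

    # A hand-built sample header: version 4, 20-byte header, TTL 64, TCP.
    sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 54321, 0, 64, 6, 0,
                         bytes([192, 168, 1, 10]), bytes([203, 0, 113, 5]))
    print(parse_ipv4_header(sample))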

You'll note that we went from MAC addresses at Layer 2, to IP addresses at Layer 3. This requires the use of Address Resolution Protocol (ARP), one of several protocols that span multiple layers. We'll look at those together after we examine Layer 7.

Layer 4: The Transport Layer

Now that we've climbed up to Layer 4, things start to get a bit more complicated. This layer is the home of many protocols that are used to transport data between systems; one such protocol, the Transmission Control Protocol, gave its name (TCP) to the entire protocol stack! Let's first look at what the layer does, and then focus on some of the more important transport protocols.

Layer 4, the Transport layer, is where variable-length data from higher-level protocols or from applications gets broken down into a series of fixed-length packets; it also provides quality of service, greater reliability through additional flow control, and other features. In TCP/IP, Layer 4 is where TCP and UDP work; the OSI reference model goes on to define five different connection-mode transport protocols (named TP0 through TP4), each supporting a variety of capabilities. It's also at Layer 4 that we start to see tunneling protocols come into play.

Transport layer protocols primarily work with ports. Ports are software-defined labels for the connections between two processes, usually ones that are running on two different computers. The source and destination ports, plus the protocol identification and other protocol-related information, are contained in that protocol's header. Each protocol defines what fields are needed in its header and prescribes required and optional actions that receiving nodes should take based on header information, errors in transmission, or other conditions. Ports are bidirectional: a connection is identified by the IP address and port number pair at each end, with a server typically listening on a well-known port while the client end uses an ephemeral port assigned by its operating system. Some protocols may use multiple port numbers simultaneously.

Over time, the use of certain port numbers for certain protocols became standardized. Important ports that SSCPs should recognize when they see them are shown in Table 5.3, which also has a brief description of each protocol.

TABLE 5.3  Common TCP/IP ports and protocols

Protocol | TCP/UDP | Port number | Description
File Transfer Protocol (FTP) | TCP | 20/21 | FTP control is handled on TCP port 21; its data transfer can use TCP port 20 as well as dynamic ports, depending on the specific configuration.
Secure Shell (SSH) | TCP | 22 | Used to manage network devices securely at the command level; a secure alternative to Telnet, which does not support secure connections.
Telnet | TCP | 23 | Teletype-like unsecure command-line interface used to manage network devices.
Simple Mail Transfer Protocol (SMTP) | TCP | 25 | Transfers mail (email) between mail servers, and between end user (client) and mail server.
Domain Name System (DNS) | TCP/UDP | 53 | Resolves domain names into IP addresses for network routing. Hierarchical, using top-level domain servers (.com, .org, etc.) that support lower-tier servers for public name resolution. DNS servers can also be set up in private networks.
Dynamic Host Configuration Protocol (DHCP) | UDP | 67/68 | DHCP is used on networks that do not use static IP address assignment (almost all of them).
Trivial File Transfer Protocol (TFTP) | UDP | 69 | TFTP offers a method of file transfer without the session establishment requirements that FTP has; because it uses UDP instead of TCP, the receiving device must verify complete and correct transfer. TFTP is typically used by devices to upgrade software and firmware.
Hypertext Transfer Protocol (HTTP) | TCP | 80 | HTTP is the main protocol used by Web browsers and is thus used by any client that accesses files located on Web servers.
Post Office Protocol (POP) v3 | TCP | 110 | POP version 3 provides client-server email services, including transfer of complete inbox (or other folder) contents to the client.
Network Time Protocol (NTP) | UDP | 123 | One of the most overlooked protocols, NTP is used to synchronize the devices on the Internet. Most secure services simply will not support devices whose clocks are too far out of sync, for example.
NetBIOS | TCP/UDP | 137/138/139 | NetBIOS (more correctly, NetBIOS over TCP/IP, or NBT) has long been the central protocol used to interconnect Microsoft Windows machines.
Internet Message Access Protocol (IMAP) | TCP | 143 | IMAP is the second of the main protocols used to retrieve mail from a server. While POP has wider support, IMAP supports a wider array of remote mailbox operations that can be helpful to users.
Simple Network Management Protocol (SNMP) | TCP/UDP | 161/162 | SNMP is used by network administrators as a method of network management. SNMP can monitor, configure, and control network devices. SNMP traps can be set to notify a central server when specific actions occur.
Border Gateway Protocol (BGP) | TCP | 179 | BGP is used on the public Internet and by ISPs to maintain very large routing tables and the traffic processing they require, which involve millions of entries to search, manage, and maintain every moment of the day.
Lightweight Directory Access Protocol (LDAP) | TCP/UDP | 389 | LDAP provides a mechanism for accessing and maintaining distributed directory information. LDAP is based on the ITU-T X.500 standard but has been simplified and altered to work over TCP/IP networks.
Hypertext Transfer Protocol over SSL/TLS (HTTPS) | TCP | 443 | HTTPS provides the same services as HTTP but does so over a secure connection provided by either SSL or TLS.
Lightweight Directory Access Protocol over TLS/SSL (LDAPS) | TCP/UDP | 636 | LDAPS provides the same function as LDAP but over a secure connection provided by either SSL or TLS.
FTP over TLS/SSL (RFC 4217) | TCP | 989/990 | FTP over TLS/SSL uses the FTP protocol, which is then secured using either SSL or TLS.
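You don't have to memorize these pairings from scratch: most operating systems carry a services database that maps protocol names to port numbers. The sketch below shows both that mapping and the well-known/ephemeral port split described earlier (the host name is a placeholder, and the availability of particular service names depends on the platform's database):

    import socket

    # Look up well-known ports from the local services database.
    for name in ("ftp", "ssh", "telnet", "smtp", "domain", "http", "https"):
        print(name, socket.getservbyname(name, "tcp"))

    # A client connects to a well-known destination port, while its own
    # source port is an ephemeral one picked by the operating system.
    with socket.create_connection(("example.com", 443), timeout=5) as sock:
        print("local (ephemeral) endpoint: ", sock.getsockname())
        print("remote (well-known) endpoint:", sock.getpeername())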

It's good to note at this point that as we move down the protocol stack, each successive layer adds additional addressing, routing, and control information to the data payload it received from the layer above it. This is done by encapsulating or wrapping its own header around what it's given by the layers of the protocol stack or the application-layer socket call that asks for its service. Thus, the datagram produced at the Transport layer contains the protocol-specific header and the payload data. This is passed to the Network layer, along with the required address information and other fields; the Network layer puts that information into its IPv4 (or IPv6) header, sets the Protocol field accordingly, appends the datagram it just received from the Transport layer, and passes that on to the Data Link layer. (And so on…)
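The toy sketch below mimics that wrapping with human-readable headers so the nesting is easy to see; real headers are binary structures like the ones in the earlier frame and packet sketches, and every name and address here is made up:

    # Each "layer" wraps whatever datagram it is handed with its own header.
    def transport_layer(payload: bytes, src_port: int, dst_port: int) -> bytes:
        return f"TCP[{src_port}->{dst_port}]".encode() + payload

    def network_layer(segment: bytes, src_ip: str, dst_ip: str) -> bytes:
        return f"IP[{src_ip}->{dst_ip}|proto=TCP]".encode() + segment

    def data_link_layer(packet: bytes, src_mac: str, dst_mac: str) -> bytes:
        return f"ETH[{src_mac}->{dst_mac}]".encode() + packet

    frame = data_link_layer(
        network_layer(transport_layer(b"user data", 49152, 443),
                      "192.168.1.10", "203.0.113.5"),
        "3A-7C-FF-29-01-06", "3A-7C-FF-29-01-05")
    print(frame)
    # The payload sits at the center of the nested headers:
    # b'ETH[...]IP[...]TCP[49152->443]user data'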

Most of the protocols that use Layer 4 either use TCP as a stateful, connection-oriented way of transferring data or use UDP, which is stateless and not connection oriented. TCP bundles its data and headers into segments (not to be confused with segments at Layer 1), whereas UDP and some other Transport layer protocols call their bundles datagrams:

  • Stateful communications processes have sender and receiver go through a sequence of steps, and sender and receiver have to keep track of which step the other has initiated, successfully completed, or asked for a retry on. Each of those steps is often called the state of the process at the sender or receiver. Stateful processes require an unambiguous identification of sender and recipient, and some kind of protocols for error detection and requests for retransmission, which a connection provides.
  • Stateless communication processes do not require sender and receiver to know where the other is in the process. This means that the sender does not need a connection, does not need to service retransmission requests, and may not even need to validate who the listeners are. Broadcast traffic is typically both stateless and connectionless.
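In socket terms, the contrast looks like this (a sketch; the hosts and ports are placeholders, and 203.0.113.5 is a reserved documentation address):

    import socket

    # Stateful, connection-oriented: connect() runs TCP's handshake, and
    # both ends track the connection's state from then on.
    tcp = socket.create_connection(("example.com", 80), timeout=5)
    tcp.close()

    # Stateless, connectionless: sendto() needs no prior connection and
    # receives no acknowledgment that anyone heard it.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"hello", ("203.0.113.5", 9999))
    udp.close()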

Layer 4 devices include gateways (which can bridge dissimilar network architectures together, and route traffic between them) and firewalls.

From here on up, the two protocol stacks conceptually diverge. TCP/IP as a standard stops at Layer 4 and allocates to users, applications, and other unspecified higher-order logic the tasks of managing what traffic to transport and how to make business or organizational sense of what's getting transported. The OSI 7-layer reference model continues to add further layers of abstraction, and for one very good reason: each layer adds clarity when taking business processes into the Internet or into the cloud (which you get to through the Internet, of course). That clarity aids the design process and the development of sound operational procedures; it is also a great help when trying to diagnose and debug problems.

We also see that from here on out, almost all functions except perhaps that of the firewall and the gateway are hosted either in operating systems or applications software, which of course is running on servers or endpoint devices.

Layer 5: The Session Layer

Layer 5, the Session layer, is where the overall dialogue or flow of handshakes is controlled in order to support a logically related series of tasks that require data exchange. Sessions typically require initiation, ongoing operation, adjournment, and termination; many require checkpointing to allow for graceful fallback and recovery to earlier points within the session. Think of logging onto your bank's webpages to do some online banking: from the moment you start to log on, you're initiating a session; a session can contain many transactions as steps you seek to perform; finally, you log off (or time out or disconnect) and end the session. Sessions may also need to be full-duplex (simultaneous activity in both directions), half-duplex (activity from one party to the other, a formal turnaround, and then activity the other way), or simplex (activity in one direction only). Making a bank deposit requires half-duplex operation: the bank has to completely process the deposit steps and update your account balance before it can turn the dialogue around and update the display of account information on your endpoint. The OSI model also defines Layer 5 as responsible for gracefully bringing sessions to a close and for providing session checkpoint and recovery capabilities (if any are implemented in a particular session's design).

Newer protocols at the Session layer include Session Description Protocol (SDP) and Session Initiation Protocol (SIP). These and related protocols are extensively used with VOIP (voice over IP) services. Another important protocol at this layer is Real-Time Transport Protocol (RTP). RTP was initially designed to satisfy the demands for smoother delivery of streaming multimedia services and rides over UDP (at the Transport layer). Other important uses are in air traffic control and data management systems, where delivery of flight tracking information must take place in a broadcast or multicast fashion but be in real time—imagine the impact (pardon the pun) if flight tracking updates on an inbound flight consistently come in even as little as a minute late!

Layer 6: The Presentation Layer

Layer 6, the Presentation layer, supports the mapping of data in terms and formats used by applications into terms and formats needed by the lower-level protocols in the stack. The Presentation layer handles protocol-level encryption and decryption of data (protecting data in motion), translates data from representational formats that applications use into formats better suited to protocol use, and can translate the semantics or metadata of application data into terms and formats that can be sent via the Internet.

This layer was created to consolidate both the thinking and design of protocols to handle the wide differences in the ways that 1970s-era systems formatted, displayed, and used data. Different character sets, such as EBCDIC, ASCII, or FIELDATA, used different numbers of bits; they represented the same character, such as an uppercase A, by different sets of bits. Byte sizes were different on different manufacturers' minicomputers and mainframes. The presentation of data to the user, and the interaction with the user, could take many forms: a simple chat, a batch file input and printed output of the results, or a predefined on-screen form with specified fields for data display and edit. Such a form is one example of a data structure that presentation must consider; another would be a list of data items retrieved by a query, such as “all flights from San Diego to Minneapolis on Tuesday morning.”

Sending or receiving such a data structure represents the need to serialize and deserialize data for transmission purposes. To the application program, this table, list, or form may be a series of values stored in an array of memory locations. Serializing requires an algorithm that has to first “walk” the data structure, field by field, row by row; retrieve the data values; and output a list of coordinates (rows and fields) and values. Deserializing uses the same algorithm to take an input list of coordinates and values and build up the data structure that the application needs.
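As a small sketch of that round trip, here is a made-up flight-query result serialized for transmission and rebuilt on the far side. JSON is just one common serialization format; the Presentation layer concept is the same whatever format is used:

    import json

    # A data structure as an application might hold it in memory.
    flights = [
        {"flight": "XX101", "depart": "SAN 07:15", "arrive": "MSP 13:05"},
        {"flight": "XX205", "depart": "SAN 09:40", "arrive": "MSP 15:20"},
    ]

    wire = json.dumps(flights).encode("utf-8")  # serialize: structure -> bytes
    rebuilt = json.loads(wire.decode("utf-8"))  # deserialize: bytes -> structure
    assert rebuilt == flights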

There are several sublayers and protocols that programmers can use to achieve an effective Presentation-layer interface between applications on the one hand and the Session layer and the rest of the protocol stack on the other. HTTP is an excellent example of such a protocol.

NetBIOS (the Network Basic Input/Output System) and Server Message Block (SMB) are also very important to consider at the Presentation layer. NetBIOS is actually an application programming interface (API) rather than a formal protocol per se. From its roots in IBM's initial development of the personal computer, NetBIOS now runs over TCP/IP (as NBT, if you can handle one more acronym!) or any other transport mechanism. Both NetBIOS and SMB allow programs to communicate with each other, whether they are on the same host or on different hosts on a network.

Keep in mind that many of the cross-layer protocols, apps, and older protocols involved with file transfer, email, and network-attached file systems and storage resources (such as the Common Internet File System [CIFS] protocol) all “play through” Layer 6.

Layer 7: The Application Layer

Layer 7, the Application layer, is where most end users and their endpoints interact with and are closest to the Internet, you might say. Applications such as Web browsers, VOIP or video streaming clients, email clients, and games use their internal logic to translate user actions (data input field by field, or selection and action commands click by click) into application-specific sets of data to transfer via the rest of the protocol stack to a designated recipient address. Multiple protocols, such as FTP and HTTP, are in use at the Application layer, yet the logic that must determine what data to pass from user to distant endpoint and back to user resides entirely in the application programs themselves. None of the protocols, by themselves, make those decisions for us.

There are various mnemonics to help remember the seven OSI layers. Two common mnemonics, and their correspondence with the OSI protocol stack, are shown in Figure 5.8. Depending upon your tastes, you can use:

  • “Please Do Not Throw Sausage Pizza Away”
  • “All People Seem to Need Data Processing”
FIGURE 5.8 Easy OSI mnemonics

Look back to Figure 5.1, which demonstrates the OSI reference model in action, in simplified terms, by starting with data a user enters into an application program's data entry screen. The name and phone number entered probably need other information to go with them from this client to the server so that the server knows what to do with these values; the application must pass all of this required information to the Presentation layer, which stuffs it into different fields in a larger datagram structure, encrypting it if required.

Cross-Layer Protocols and Services

But wait…remember that both TCP/IP and the OSI reference model are just that: models, which define and describe networking with varying degrees of specificity and generality. OSI and TCP/IP both must support some important functions that cross layers, and without these, it's not clear whether the Internet would work very well at all! The most important of these are:

  • Dynamic Host Configuration Protocol (DHCP) assigns IPv4 (and later IPv6) addresses to new devices as they join the network. This set of handshakes allows DHCP to accept or reject new devices based on a variety of rules and conditions that administrators can use to restrict a network. DHCP servers allow subscriber devices to lease an IP address for a specific period of time (or indefinitely); as the lease passes the halfway point of its term, the subscribing device requests a renewal.
  • Address Resolution Protocol (ARP) is a discovery protocol, by which a network device determines the MAC address that corresponds to a given IP address by (quite literally) asking other network devices for it. On each device, ARP maintains in its cache a list of IP address and MAC address pairs. Failing to find the address there, ARP broadcasts a request on the local segment so that the device that owns that IP address, or another device whose ARP cache knows the desired pairing, can answer.

    ARP has several variations that are worth knowing a bit about:

    • Reverse ARP (RARP), which lets a machine request its IP address from other machines on the LAN segment. RARP preceded the creation of DHCP, and is considered obsolete by many networking specialists. It is, however, showing up as a component of some modern protocols such as Cisco's Overlay Transport Virtualization (OTV).
    • Inverse ARP (InARP), similar to RARP, is very useful in configuring remote devices.
    • Proxy ARP allows subnets joined via a router to still resolve MAC addresses, by having the router act as proxy.
    • Gratuitous ARP supports advanced networking scenarios in a variety of ways. Properly used, gratuitous ARP messages can detect IP address conflicts and can help update the ARP tables in other machines on the network. Gratuitous ARPs are also sent by NICs and other interfaces as they power up or reset, announcing their own address pairing so that the ARP tables around them stay current.
  • Domain Name System (DNS) works at Layer 4 and Layer 7 by attempting to resolve a domain name (such as isc2.org) into its IP address. The search starts with the requesting device's local DNS cache and then works “up the chain” to find either a device that knows of the requested domain or a domain name server that has that information. (A short resolver sketch follows this list.) Layer 3 itself has no connection to DNS.
  • Network management functions have to cut across every layer of the protocol stacks, providing configuration, inspection, and control functions. These functions provide the services that allow user programs like ipconfig to instantiate, initiate, terminate, or monitor communications devices and activities. Simple Network Management Protocol (SNMP) is quite prevalent in the TCP/IP community; Common Management Information Protocol (CMIP) and its associated Common Management Information Service (CMIS) are more recognized in OSI communities.
  • Cross MAC and PHY (or physical) scheduling is vital when dealing with wireless networks. Since timing of wireless data exchanges can vary considerably (mobile devices are often moving!), being able to schedule packets and frames can help make such networks achieve better throughput and be more energy-efficient. (Mobile customers and their device batteries appreciate that.)
  • Network Address Translation (NAT), sometimes known as Port Address Translation (PAT), IP masquerading, NAT overload, and many-to-one NAT, all provide ways of allowing a routing function to edit a packet to change (translate) one set of IP addresses for another. Originally, this was thought to make it easier to move a device from one part of your network to another without having to change its IP address. As we became more aware of the IPv4 address space being exhausted, NAT became an incredibly popular workaround, a way to sidestep running out of IP addresses. Although it lives at Layer 3, NAT won't work right if it cannot reach into the other layers of the stack (and the traffic) as it needs to.
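Here is what a resolver request looks like from the application's side, as a sketch using Python's standard library; the local cache, the configured DNS servers, and the “up the chain” search all happen behind this one call:

    import socket

    # Resolve a name to every address family and endpoint the local
    # resolver knows about; isc2.org is the example domain from the text.
    for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
            "isc2.org", 443, proto=socket.IPPROTO_TCP):
        print(family.name, sockaddr)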

IP and Security

As stated, the original design of the Internet assumed a trustworthy environment; it also had to cope with a generation of computing equipment that just did not have the processing speed or power, or the memory capacity, to deal with effective security, especially if that involved significant encryption and decryption. Designers believed that other layers of functionality beyond the basic IP stack could address those issues, to meet specific user needs, such as by encrypting the contents of a file before handing it to an application like FTP for transmission over the Internet. Rapid expansion of the Internet into business and academic use, and into international markets, quickly demonstrated that the innocent days of a trusting Internet were over. In the late 1980s and early 1990s, work sponsored by the U.S. National Security Agency, U.S. Naval Research Laboratory, Columbia University, and Bell Labs came together to create Internet Protocol Security, or IPsec as it came to be known.

IPSec provides an open and extensible architecture that consists of a number of protocols and features used to provide greater levels of message confidentiality, integrity, authentication, and nonrepudiation protection:

  • The IP Security Authentication Header (AH) protocol uses a secure hash and secret key to provide connectionless integrity and a degree of IP address authentication.
  • Encapsulating Security Payloads (ESP) by means of encryption supports confidentiality, connectionless integrity, and anti-replay protection, and authenticates the originator of the data (thus providing a degree of nonrepudiation).

Security associations (SAs) bundle together the algorithms and data used in securing the payloads. ISAKMP, the Internet Security Association and Key Management Protocol, for example, provides the structure, framework, and mechanisms for key exchange and authentication. IPSec implementations depend upon authenticated keying materials; since IPSec preceded the development and deployment of PKI, it had to develop its own infrastructure and processes to support users in meeting their key management needs. This can be done via Internet Key Exchange (IKE and IKEv2), Kerberized Internet Negotiation of Keys (KINK, which uses Kerberos services), or an IPSECKEY DNS record exchange.

The mechanics of how to implement and manage IPSec are beyond the scope of the SSCP exam itself; however, SSCPs do need to be aware of IPSec and appreciate its place in the evolution of Internet security.

IPSec was an optional add-in for IPv4 but is a mandatory component of IPv6. IPSec functions at Layer 3 of the protocol stacks, as an internetworking protocol suite; contrast this with TLS, for example, which works at Layer 4 as a transport protocol.

Layers or Planes?

If you stand alongside those protocol stacks and think in more general terms, you'll quickly recognize that every device, every protocol, and every service has a role to play in the three major functions we need networks to achieve: movement of data, control of that data flow, and management of the network itself. If you were to draw out those flows on separate sheets of paper, you'd see how each provides a powerful approach to use when designing the network, improving its performance, resolving problems with it, and protecting it. This gives rise to the three planes that network engineers speak of quite frequently:

  • The data plane is the set of functions, processes, and protocols that move or forward frames and packets from one interface to another.
  • The control plane provides all of the processes, functions, and protocols for switching, routing, address resolution, and related activities.
  • The management plane contains all of the processes, functions, and protocols that administrators use to manage, configure, and control the network.

Hardware designers use these concepts extensively as they translate the protocol stacks into real router, switch, or gateway devices. For example, the movement of data itself ought to be as fast and efficient as possible, either by specifically designed high-speed hardware, firmware, or software. Control functions, which are the heart of all the routing protocols, still need to run pretty efficiently, but this will often be done by using separate circuits and data paths within the hardware. System management functions involve either collecting statistical and performance data or issuing directives to devices, and for system integrity reasons, designers often put these in separate paths in hardware and software as well.

As we saw with the OSI reference model, the concept of separating network activity into data, control, and management planes is both a sound theoretical idea and a tangible design feature in the devices and networks all around us. (Beware of geeks who think that these planes, like the 7-layer model, are just some nice ideas!)

Network Architectures

Except in the most trivial case of a single point-to-point connection, networks usually consist of multiple networks joined together. The Internet (capitalized) is the one unified, globe-spanning network; it consists of multiple internetworking segments running IP that are tied together in various ways. For convenience and for localizing one's reference, network engineers, security professionals, and end users talk about the following network types:

  • A network segment is any set of interconnected network client devices using IP for communications. Segments are joined to other segments via switches or routers.
  • An intranet is an internet segment that is under the operational and physical (or logical) control and administration of an end user organization. Intranets are joined to each other and to the Internet via switches or routers.
  • A local area network (LAN) is an instance of an intranet, which connects client devices together. Typically, these are within the immediate, physical local area of each other (hence the name). Thus, the term usually refers to a network segment, or set of network segments, that provides network services to an area, floor, or other portion of a building or facility.
  • An extranet is the name given to a network segment that joins any number of intranets together. Extranets facilitate the secure sharing of resources between groups of people or organizations, each of which own or administer the intranet that they offer up as part of the extranet. Virtual private networks (VPNs) and software-defined network (SDN) tools are typically used to create and manage extranets.
  • A wide area network (WAN) connects many LANs, WANs, or other networks together across a large geographic area, such as a region or nation. The name has also become somewhat more generalized, sometimes used to mean “anything on the outward-facing side of the router” (as shown by the “WAN” label on a typical router, indicating “plug the cable from the ISP into this jack”).
  • A campus area network (CAN), also known as a company or corporate area network, is a group of LANs managed together, typically by one organization. Individual LANs are often in separate buildings, with a centralized network management and operations center providing Internet access to the CAN. CANs are often built with open source, software-defined network systems.
  • A metropolitan area network (MAN) is a set of LANs joined together across a city or other broad area. MANs can include or support extranets.
  • A personal area network (PAN) is localized to the devices carried on or implanted within a specific individual. Smartphones, fitness trackers, implanted medical devices, and even implanted RFID chips can be elements of a typical PAN. Some of these devices are Internet-capable, others must use some other technology such as Bluetooth, Near Field Communications, an interface cable, or other techniques to get to one of the PAN devices that can support an IP connection to a LAN. This is usually done with the person's smartphone via Wi-Fi or mobile phone data services.

DMZs and Botnets

The network architectures described earlier are ones that are created by using network devices (virtual or real) to define IP segments and join them together into larger collections of segments. Two specific use cases worth looking at are the use of perimeter networks and botnets.

A perimeter network, sometimes called a bastion network, is a network segment that provides an isolation layer between two or more sets of interconnected network segments. These are often used to create a buffer or barrier between LANs that have dissimilar security needs and use hardened bastion servers, firewalls, and other techniques to restrict both inward-flowing and outward-flowing traffic to ensure that security policies are enforced.

This gives rise to the concept of a demilitarized zone (DMZ), which is that perimeter network itself. On the outward-facing side of the DMZ is the rest of the Internet-connected world; behind it are the network segments and systems that need better protection. The network elements that make up an organization's DMZ belong to it and are administered by it, but they are its public-facing assets. Public-facing Web servers, for example, sit in the DMZ and do not require each Web user to have their identity authenticated in order to access their content. Data flows between systems in the DMZ and those within the protected networks behind it must be carefully constructed and managed to prevent the creation or discovery (and subsequent use) of covert paths, which would provide connections into the secure systems that are not detected or prevented by access controls. Outbound data flows should either be in suitably protected form (such as by encryption or via VPN or other means) or be prohibited from crossing the DMZ.

Web servers, for example, have to face the WAN side of the DMZ, but also have to face inward to be able to send and receive trustworthy data and service requests to more secure assets.

A botnet, sometimes called a grid network, refers to any collection of systems that operate together in a coordinated fashion to achieve a common purpose. Typically, a botnet is constructed by using software agents installed on each server, endpoint, or other device, and those agents then participate in a command and control process that rides on top of the network connections used by those devices. Botnets typically have a central command and control node that plans, organizes, and directs the activities of the nodes on the botnet. Botnets are often created by users to combine processing resources together to work on problems too large or too complex for one single server or client endpoint. A typical example would be the SETI (search for extraterrestrial intelligence) project, which is a crowd-sourced science activity. In effect, a botnet is something like a user-created cloud, dynamically created to suit their needs.

A zombie botnet, by contrast, is a botnet created without the knowledge, consent, or cooperation of the users of some or all of the server or endpoint devices that are directed by the zombie botnet controller. It's fair to state that all zombie botnets are malicious in intention, as they involve the theft of services from the systems brought together by the bot herder (the person creating and using the zombie botnet). The zombie botnet serves the herder's purposes, at the expense of the individual systems' owners. But not all botnets are zombie botnets, of course, and thus not all botnets are malicious in intent.

Software-Defined Networks

As the name suggests, software-defined networks (SDNs) use network management and virtualization tools to completely define the network in software. SDNs are most commonly used in cloud systems deployments, where they provide the infrastructure that lets multiple virtual machines communicate with each other in standard TCP/IP or OSI reference model terms. Cloud-hosted SDNs don't have their own real Physical layer, for they depend on the services of the bare metal environment that is hosting them to provide these. That said, the protocol stacks at Layer 1 still have to interact with device drivers that are the “last software port of call,” you might say, before entering the land of physical hardware and electrical signals.

It might be natural at this point to think that all but the smallest and simplest of networks are software defined, since as administrators we use software tools to configure the devices on the network. This is true, but in a somewhat trivial sense. Imagine a small business network with perhaps a dozen servers, including dedicated DNS, DHCP, remote access control, network storage, and print servers. It might have several Wi-Fi access points and use another dozen routers to segment the network and support it across different floors of a building or different buildings in a small office park. Each of these devices is configured first at the physical level (you connect it with cables to other devices); then, you use its built-in firmware functions via a Web browser or terminal link to configure it by setting its control parameters. That's a lot of individual devices to configure! Network management systems can provide integrated ways to define the network and remotely configure many of those devices.

Virtual Private Networks

Virtual private networks (VPNs) were developed initially to provide businesses and organizations a way to bring geographically separate LAN segments together into one larger private network. Prior to using VPN technologies, the company would have to use private communications channels, such as leased high-capacity phone circuits or microwave relays, as the physical communications media and technologies within the Physical layer of this extended private network. (Dial-up connections via modem were also examples of early VPN systems.) In effect, that leased circuit tunneled under the public switched telecommunications network; it was a circuit that stayed connected all the time, rather than one that was established, used, and torn down on a per-call basis.

VPNs tunnel under the Internet using a combination of Layer 2 and Layer 3 services. They provide a secure, encrypted channel between VPN connection “landing points” (not to be confused with endpoints in the laptop, phone, or IoT device sense!). As a Layer 2 service, the VPN receives every frame or packet from higher up in the protocol stack, encrypts it, wraps it in its own routing information, and lets the Internet carry it off to the other end of the tunnel. At the receiving end of the tunnel, the VPN service unwraps the payload, decrypts it, and passes it up the stack. Servers and services at each end of the tunnel have the normal responsibilities of routing payloads to the right elements of that local system, including forwarding them on to LAN or WAN addresses as each packet needs.
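In miniature, the tunnel's wrap-and-unwrap cycle looks like the sketch below. It uses the third-party Python cryptography package's Fernet recipe purely as a stand-in for the real key exchange and encryption protocols listed next, and every address in it is made up (private addresses inside the tunnel, reserved documentation addresses for the landing points):

    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()  # in real VPNs, negotiated by a key exchange protocol
    tunnel = Fernet(key)

    # The inner traffic, private addresses and all, becomes opaque ciphertext...
    inner_frame = b"IP[10.1.1.5->10.2.2.9]payload"
    ciphertext = tunnel.encrypt(inner_frame)

    # ...wrapped in an outer packet that names only the tunnel landing points.
    outer_header = b"IP[198.51.100.1->203.0.113.7]"
    outer_packet = outer_header + ciphertext

    # At the far landing point: strip the outer wrapper, decrypt, pass it on.
    recovered = tunnel.decrypt(outer_packet[len(outer_header):])
    assert recovered == inner_frame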

Most VPN solutions use one or more of the following security protocols:

  • IPSec
  • TLS
  • Datagram Transport Layer Security (DTLS)
  • Microsoft Point-to-Point Encryption, or Microsoft Secure Socket Tunneling Protocol, used with Point-to-Point Tunneling Protocol
  • Secure Shell VPN
  • Multiprotocol Label Switching (MPLS)
  • Other proprietary protocols and services

Mobile device users (and systems administrators who need to support mobile users) are increasingly turning to VPN solutions to provide greater security.

On the one hand, VPNs bring some powerful security advantages home, to business and individual VPN customers alike. From the point in your local systems where the VPN starts tunneling, on to the tunnel's landing point, PKI-driven encryption prevents anyone from knowing what you're trying to accomplish with that data stream. The only traffic analysis they can glean from monitoring your data is that you connect to a VPN landing point.

On the other hand, this transfers your trust to your VPN service provider and the people who own and manage it. You have to be confident that their business model, their security policies (administrative and logical), and their reputation support your CIANA needs. One might rightly be suspicious of a VPN provider with “free forever” offers with no clear up-sell strategy; if they don't have a way of making honest money with what they are doing, due diligence requires you to think twice before trusting them.

Do keep in mind that if your VPN landing point server fails, so does your VPN. Many SOHO VPN clients will allow the user to configure the automatic use of alternate landing sites, but this can still involve service interruptions of tens of seconds.

Wireless Network Technologies

Wireless network systems are the history of the Internet in miniature: first, let's make them easy to use, right out of the shrink-wrap! Then, we'll worry about why they're not secure and whether we should do something about that.

In one respect, it's probably true to say that wireless data communication is first and foremost a Layer 1 or Physical layer set of opportunities, constraints, issues, and potential security vulnerabilities. Multiple technologies, such as Wi-Fi, Bluetooth, NFC, and infrared and visible light LED and photodiode systems, all are important and growing parts of the way organizations use their network infrastructures. (Keep an eye open for Li-Fi as the next technology to break our trains of thought. Li-Fi is the use of high-frequency light pulses from LEDs used in normal room or aircraft cabin illumination systems.)

Note that mobile devices that use cellular phone systems to access your networks present a mixed bag of access and security issues. These devices access your systems via your ISP's connection to the Internet and must then connect via your remote access control capabilities (such as RADIUS). But at the same time, the devices themselves may be connecting via Wi-Fi or other means in their local service area; you may be inheriting the security weaknesses of a distant Wi-Fi cafe or airport hotspot without knowing it.

Regardless of the technologies used, wireless systems are either a part of our networks, or they are not. These devices either use our TCP/IP protocols, starting with the physical layer on up, or use their own link-specific sets of protocols. Broadly speaking, though, no matter what protocol stack or interface (or interfaces, plural!) they are using, the same risk management and mitigation processes should be engaged to protect the organization's information infrastructures.

Key considerations include the following:

  • Access control and identity management, both for the device and the user(s) via that device.
  • Location tracking and management; it might be too risky, for example, to allow an otherwise authorized user to access company systems from a heretofore unknown or not-yet-approved location.
  • Link protection, from the physical connection on up, including appropriate use of secure protocols to protect authentication and payload data.
  • Congestion and traffic management.
  • Software and hardware configuration management and control, for both the mobile device's operating system and any installed applications.

Wireless capabilities accelerate the convergence of communications, computing, control, and collaboration. This convergence breaks down the mental and conceptual barriers that have defined personal roles, tasks, and organizational boundaries in “classical” IT architectures. The dramatic increase in OT systems merging with IT ones, whether that's industrial-scale applications of SCADA, Common Industry Protocol (CIP) for ICS, or IoT in a highly virtual organization, is further challenging our security planning concepts of what “normal” is or should be.

Let's look at a few of these technologies, and then consider their security needs and implications.

Wi-Fi

Wi-Fi, which actually does not mean “wireless fidelity,” is probably the most prevalent and pervasive wireless radio technology currently in use. Let's focus a moment longer on protecting the data link between the endpoint device (such as a user's smartphone, laptop, smartwatch, etc.) and the wireless access point, which manages how, when, and which wireless subscriber devices can connect at Layer 1 and above. (Note that a wireless access point can also be a wireless device itself!) Let's look at wireless security protocols:

  • Wired Equivalent Privacy (WEP) was the first attempt at securing Wi-Fi. As the name suggests, it was a compromise intended to make some degree of security easy to achieve, but it proved to have far too many security flaws and was easily circumvented by attackers. Its encryption was vulnerable to passive attacks, such as traffic analysis. Unauthorized mobile stations could easily use a known plaintext attack or other means to trick the WEP access point into decrypting traffic for them. Perhaps more seriously, it was demonstrated that about a day's worth of intercepted traffic could build a dictionary (or rainbow table) with which real-time automated decryption could be done by the attacker. Avoid its use altogether if you can.
  • Wi-Fi Protected Access (WPA) was an interim replacement while the IEEE 802.11i standard was in development. It used preshared encryption keys (PSKs, sometimes called “WPA Personal”) while providing Temporal Key Integrity Protocol (TKIP, pronounced “tee-kip”) for encryption. WPA Enterprise uses more robust encryption, an authentication server, or PKI certificates in the process.
  • Wi-Fi Protected Access Version 2 (WPA2) took this the next step when IEEE 802.11i was released in 2004. Among other improvements, WPA2 brings Advanced Encryption Standard (AES) algorithms into use. Attackers found ways to break WPA2 as well, partly due to backward-compatibility features built into it; and although firmware fixes were rolled out to remedy the problem, the need for further improvements was clear.
  • Wi-Fi Protected Access 3 (WPA3) was released by the Wi-Fi Alliance in 2018, and it is now the mandatory form of protection for Wi-Fi networks. It no longer supports WPA2 devices, and has eliminated the PSK process for its encryption. In 2019, WPA3's simultaneous authentication of equals (SAE) or Dragonfly handshake was shown by researchers to be vulnerable (to the so-called Dragonblood attack). More firmware updates were provided to the field, which so far seem to have restored faith and confidence in WPA3.

Bluetooth

Bluetooth is a short-range wireless radio interface standard, designed to support wireless mice, keyboards, or other devices, typically within 1 to 10 meters of the host computer they are being used with. Bluetooth is also used to support data synchronization between smartwatches and fitness trackers with smartphones. Bluetooth has its own protocol stack, with one set of protocols for the controller (the time-critical radio elements) and another set for the host. There are 15 protocols altogether. Bluetooth does not operate over Internet Protocol networks.

In contrast with Wi-Fi, Bluetooth has four security modes:

  • Mode 1, Unsecure, bypasses any built-in authentication and encryption (at host or device). This does not prevent other nearby Bluetooth devices from pairing up with a host. This mode is supported only through Bluetooth Version 2.0 plus Enhanced Data Rate (EDR) and should not be used with later versions of Bluetooth.
  • Mode 2, centralized security management, provides some degree of authorization, authentication, and encryption of traffic between the devices.
  • Mode 3, device pairing, looks to the remote device to initiate encryption-based security using a separate secret link (secret to the paired devices). This too is supported only by version 2.0 + EDR systems.
  • Mode 4, key exchange, supports more advanced encryption algorithms, such as elliptic-curve Diffie-Hellman.

Bluetooth is prone to a number of security concerns, such as these:

  • Bluejacking, the sending of unsolicited messages or data over a Bluetooth link to get the attacker's content onto an otherwise trusted device
  • Bluebugging, by which attackers can remotely access a smartphone's unprotected Bluetooth link and use it as an eavesdropping platform, collect data from it, or operate it remotely

  • Bluesnarfing, the theft of information from a wireless device through a Bluetooth connection
  • Car whispering, which uses software to allow hackers to send and receive audio from a Bluetooth-enabled car entertainment system

Given these concerns, it's probably best that your mobile device management solution understand the vulnerabilities inherent in Bluetooth, and ensure that each mobile device you allow onto your networks (or your business premises!) can be secured against exploitations targeted at its Bluetooth link.

Near-Field Communication

Near-field communication (NFC) provides a secure radio-frequency communications channel that works for devices within about 4 cm (1.6 inches) of each other. Designed to meet the needs of contactless, card-less payment and debit authorizations, NFC uses secure on-device data storage and existing radio frequency identification (RFID) standards to carry out data transfers (such as phone-to-phone file sharing) or payment processing transactions.

Multiple standards organizations work on different aspects of NFC and its application to problems within the purview of each body.

NFC is susceptible to man-in-the-middle attacks at the Physical and Data Link layers and to interception via high-gain antennas. Relay attacks, similar to man-in-the-middle, are also possible. NFC as a standard does not include encryption, but like TCP/IP, it allows applications to layer on encrypted protection for data and routing information.

IP Addresses, DHCP, and Subnets

Now that we've got an idea of how the layers fit together conceptually, let's look at some of the details of how IP addressing gets implemented within an organization's network and within the Internet as a whole. First, we'll look at this in IPv4 terms. We'll then highlight the differences that come with IPv6, which is rapidly becoming the go-to addressing standard for larger enterprise systems. Recall that an IPv4 address field is a 32-bit number, represented as four octets (8-bit chunks) written usually as base 10 numbers, which IPv6 increases to a 128-bit field.

Let's start “out there” in the Internet, where we see two kinds of addresses: static and dynamic. Static IP addresses are assigned once to a device and remain unchanged; thus, 8.8.8.8 has been the address of Google's public DNS service since that service launched, and it probably always will be. The advantage of a static IP address for a server or webpage is that virtually every layer of ARP and DNS cache on the Internet will know it; it will be quicker and easier to find. By contrast, a dynamic IP address is assigned each time that device connects to the network. ISPs most often use dynamic assignment of IP addresses to subscriber equipment, since this allows them to manage a pool of addresses better. Your subscriber equipment (your modem, router, PC, or laptop) then needs a DHCP server to assign it an address.

It's this use of DHCP, by the way, that means that almost everybody's SOHO router can use the same IP address on the LAN side, such as 192.168.2.1 or 192.168.1.1. The router connects on one side (the wide area network [WAN]) to the Internet by way of your ISP, and on the other side to the devices on its local network segment. Devices on the LAN segment can see other devices on that segment, but they cannot see “out the WAN side,” you might say, without using network address translation, which we'll look at in a moment.

DHCP Leases: IPv4 and IPv6

The process of assigning a dynamic IP address to a device is known as leasing. DHCP handles the mechanics of this process, but it does so in two different fashions, given the differences between the two versions of IP. Two protocols, DHCPv4 and DHCPv6, provide for the dynamic assignment of an IP address (32 bit or 128 bit, respectively). Both protocols provide some common features:

  • IP to device address binding: DHCPv4 uses the device's MAC address, but DHCPv6 uses a temporary or privacy address instead.
  • Lease expiration: This allows the DHCP server to limit the number of active leases it is managing (and hence the number of active address mappings, from IP to connected device, that it is servicing).
  • Lease recovery: Connections can get dropped, and devices can go through unplanned or deliberate reboots. Lease recovery can also be a useful feature, if the same device still requires the same connection from the DHCP server and the network it serves. Either way, it's convenient to not have to terminate one lease and re-issue another, if the previous lease can be recovered.

DHCPv4 also has to manage address reuse, which is necessary for many organizations with Class C networks, given the small size of the available address pool. Since 1998, the DHCP protocol (now renamed DHCPv4) and its four-step handshake have been the standard for dynamic IP address assignment. This DORA Dance, as it's affectionately known, consists of the following steps (a packet-level sketch follows the list):

  1. DISCOVER: A client device seeking network access broadcasts a DHCPDISCOVER request packet onto its LAN connection. (Most NICs will do this automatically upon being reset, when they recognize they have no assigned IP address.)
  2. OFFER: Any DHCP server receiving the request will, at its option, respond with a DHCPOFFER packet.
  3. REQUEST: The client device chooses which of the offers to accept (which generally is the first one it receives in a given time window) and replies to the selected one with a DHCPREQUEST packet.
  4. ACKNOWLEDGE: The DHCP server responds with its DHCPACK packet, which includes the leased IP address and a time-to-live (countdown timer) field. Having now received its lease, the client can use IP address–based protocols to connect to other network devices. The server can instead send a DHCPNAK packet in certain circumstances, when it chooses to reject a request. One common scenario involves a mobile client device that (naturally) asks for an address lease when at home and then when at work. The client will request the same IP address it was previously assigned, and if that address is not available (if it's on a different subnet, perhaps), the server must decline.
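
To see the first half of the DORA dance on the wire, here's a minimal sketch using the scapy packet-crafting library (assumptions: scapy is installed, you have raw-socket privileges, and you run it only on a lab network you own). It broadcasts a DHCPDISCOVER and prints any DHCPOFFERs received:

```python
# Broadcast a DHCPDISCOVER and print the OFFERs that come back (steps 1-2 of DORA).
from scapy.all import BOOTP, DHCP, Ether, IP, UDP, conf, get_if_hwaddr, srp

iface = conf.iface                               # default interface; adjust as needed
mac = get_if_hwaddr(iface)

discover = (
    Ether(src=mac, dst="ff:ff:ff:ff:ff:ff") /    # Layer 2 broadcast
    IP(src="0.0.0.0", dst="255.255.255.255") /   # client has no IP address yet
    UDP(sport=68, dport=67) /                    # DHCP client and server ports
    BOOTP(chaddr=bytes.fromhex(mac.replace(":", "")), xid=0x12345678) /
    DHCP(options=[("message-type", "discover"), "end"])
)

answered, _ = srp(discover, iface=iface, timeout=3, verbose=False)
for _, offer in answered:
    print("OFFER:", offer[BOOTP].yiaddr, "from server", offer[IP].src)
```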

DHCPv4 was designed in simpler times, and the notion of binding the IP address to a device's MAC address was not seen as a matter of concern. As the IETF grappled with IPv6, they realized that its much larger address space meant that address reuse was no longer necessary, which might give rise to a more or less permanent assignment of an IP address to each device. This might seem simple in theory, but in practice it has privacy and user device localization issues that the IETF would rather not cast into concrete with DHCPv6.

DHCPv6 instead uses what the IETF calls stateless address autoconfiguration (SLAAC). This treats the 128-bit IPv6 address field as consisting of an upper 64 bits for the network prefix, with the lower 64 bits (the interface identifier) being determined by the device or virtual entity itself. Most devices that support IPv6 (which is becoming the majority of devices on the market and the Internet) have the built-in capability of generating this privacy address field automatically, typically on a daily basis. They will then ignore traffic that is addressed to their previous privacy address after a predetermined time period, usually one week. Instead of DORA, DHCPv6's four-step handshake consists of the Solicit, Advertise, Request, and Reply process steps, which accomplish much the same as DORA does but for 128-bit addresses. An additional protocol, the Neighbor Discovery Protocol (NDP), is used to determine what other devices might be on the same network with the client seeking an IP address lease, so that it can then use the duplicate address detection (DAD) process to avoid trying to reuse a pseudorandom temporary interface identifier that is still in use on that network.
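
The address arithmetic behind a privacy address is simple, even though the full SLAAC and privacy-extension protocols (RFCs 4862 and 4941) add router discovery, lifetimes, and DAD on top of it. Here's a small standard-library sketch, using a documentation-range prefix as a stand-in for whatever prefix the local router advertises:

```python
# Build a privacy-style IPv6 address: advertised /64 prefix + random interface ID.
import ipaddress
import secrets

prefix = ipaddress.IPv6Network("2001:db8:abcd:12::/64")  # RFC 3849 documentation prefix
iid = secrets.randbits(64)                               # random 64-bit interface identifier
addr = ipaddress.IPv6Address(int(prefix.network_address) | iid)
print(addr)  # e.g., 2001:db8:abcd:12:9f3a:1c77:502e:8b41
```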

IPv4 Address Classes

IPv4's addressing scheme was developed with classes of addresses in mind. These were originally designed to split the octets so that some represented a node within a network, while the others defined very large, large, or small networks. At the time (the 1970s), this was thought to make it easier for humans to manage IP addresses. Over time, it has proven impractical. Despite this, IPv4 address class nomenclature remains a fixed part of our network landscape, and SSCPs need to be familiar with the defined address classes:

  • Class A addresses used the first octet to define very large networks (at most 128 of them), using 0 as the first bit to signify a Class A address. IBM, for example, might have required all 24 bits' worth of the other octets to assign IP addresses to all of its nodes. Think of Class A addresses as looking like <net>.<node>.<node>.<node>.
  • Class B addresses used two octets for the network identifier and two for the node, or <net>.<net>.<node>.<node>. The first 2 bits of the address would be 10.
  • Class C addresses used the first three octets for the network identifier: <net>.<net>.<net>.<node>, giving smaller organizations networks of at most 256 addresses; the first 3 bits of the first octet were 110.
  • Class D and Class E addresses were reserved for experimental and other purposes.

These address classes are summarized in Table 5.4.

TABLE 5.4  IPv4 address classes

Class  Leading bits  Network Number field (bits)  Node Number field (bits)  Number of networks  Nodes per network  Start address  End address
A      0             8                             24                        128                 16,777,216         0.0.0.0        127.255.255.255
B      10            16                            16                        16,384              65,536             128.0.0.0      191.255.255.255
C      110           24                            8                         2,097,152           256                192.0.0.0      223.255.255.255
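
One quick way to internalize Table 5.4 is to classify addresses by their leading bits. This little Python sketch implements only the classful logic of the table (ignoring the special cases discussed next):

```python
# Classify an IPv4 address per the historical class scheme of Table 5.4.
import ipaddress

def ipv4_class(addr: str) -> str:
    first_octet = int(ipaddress.IPv4Address(addr)) >> 24
    if first_octet < 128:
        return "A"   # leading bit  0
    if first_octet < 192:
        return "B"   # leading bits 10
    if first_octet < 224:
        return "C"   # leading bits 110
    if first_octet < 240:
        return "D"   # multicast
    return "E"       # experimental

print(ipv4_class("10.1.2.3"))     # A
print(ipv4_class("172.16.0.1"))   # B
print(ipv4_class("192.168.1.1"))  # C
```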

There are, as you might expect, some special cases to keep in mind:

  • 127.0.0.1 is commonly known as the loopback address, which apps can use for testing the local IP protocol stack. Packets addressed to the local loopback are sent only from one part of the stack to another (“looped back” on the stack), rather than out onto the Physical layer of the network. Note that the entire range of addresses starting with 127 is reserved for this purpose, so you could use any of them.
  • 169.254.0.0 (through 169.254.255.255) is the link local address range, which a device uses to auto-assign itself an IP address when no DHCP server responds. In many cases, a system using a link local address is telling you that the DHCP server failed to respond to it, for some reason.

In Windows systems this is known as Automatic Private IP Addressing (APIPA), because Windows generates the address when a DHCP server does not respond to requests; regardless of what you call it, it's good to recognize this address range when trying to diagnose why you've got no Internet connection.

The node address of 255 is reserved for broadcast use. Broadcast messages go to all nodes on the specified network; thus, sending a message to 192.168.2.255 sends it to all nodes on the 192.168.2 network, and sending it to 192.168.255.255 sends it to a lot more nodes! Broadcast messages are blocked by routers from traveling out onto their WAN side. By contrast, multicasting provides ways to send messages to selected sets of nodes beyond a router, using the address range of 224.0.0.0 to 239.255.255.255. Unicasting is what happens when we do not use 255 as part of the node address field—the message goes only to the specific address. Although the SSCP exam won't ask about the details of setting up and managing broadcasts and multicasts, you should be aware of what these terms mean and recognize the address ranges involved.
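
Here's what a broadcast looks like at the socket level—a minimal sketch in which the subnet matches the 192.168.2 example above, and the port number and payload are arbitrary choices:

```python
# Send one UDP datagram to every node on the 192.168.2 subnet.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)  # must opt in to broadcast
s.sendto(b"hello, everyone", ("192.168.2.255", 9999))
s.close()
```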

Subnetting in IPv4

Subnetting seems to confuse people easily, but in real life, we deal with sets and subsets of things all the time. We rent an apartment, and it has a street address, but the building is further broken down into individual sub-addresses known as apartment numbers. This makes postal mail delivery, emergency services, and just day-to-day navigation by the residents easier. Telephone area codes primarily divide a country into geographic regions, and the next few digits of a phone number (the city code or exchange) divide the area code's map further. This, too, is a convenience feature, primarily for the designers and operators of early phone networks and switches. (Phone number portability is rapidly erasing this correspondence of phone number to location.)

Subnetting gives network designers and administrators ways to logically group a set of devices together in ways that make sense to the organization. Suppose your company's main Class B IP address is 163.241, meaning you've got 16 bits' worth of node addresses to assign. If you use them all as one flat group, you have a single subgroup of node addresses, 0.1 through 255.254 (remember those reserved network and broadcast addresses!). Alternatively:

  • Using 2 bits of the node field as a subnet ID gives you four subgroups (two usable, under the legacy rule that excluded the all-zeros and all-ones subnet IDs).
  • Using a full octet gives you 256 subgroups (254 usable under that same rule).
  • And so on.

Designing our company's network to support subgroups requires that we know three things: our address class, the number of subgroups we want, and the number of nodes in each subgroup. This lets us start to create our subnet masks. A subnet mask, written in IP address format, shows which bit positions are allocated to the network and subnet identifiers (the 1 bits, counted from the left) and which remain for the node number within a subnet (the 0 bits). For example, a mask of 255.255.255.0 on a Class B address says that the last 8 bits are used for the node numbers within each of 256 possible subnets. Another subnet mask might be 255.255.255.128, indicating two subnets on a Class C address, with up to 126 usable nodes on each subnet. (Subnets do not have to be defined on byte or octet boundaries, after all.)

Subnets use the full range of values available for the given number of node bits (minus 2, for the all-zeros and all-ones addresses). Thus, if we require 11 nodes on each subnet, we still need to use 4 bits for the node portion of the address, giving us address 0 (reserved), node addresses 1 through 11, and 15 for all-bits-on (broadcast); addresses 12 through 14 are therefore unused.

This did get cumbersome after a while, and in 1993, Classless Inter-Domain Routing (CIDR) was introduced to help simplify both the notation and the calculation of subnets. CIDR appends the number of network address bits to the main IP address. For example, 192.168.1.168/24 shows that 24 bits are assigned for the network address, and the remaining 8 bits are therefore available for the node-within-subnet address. (Caution: don't get those backward!) Table 5.5 shows some examples to illustrate.

TABLE 5.5  Address classes and CIDR

Class  Network bits  Node bits  Subnet mask      CIDR notation
A      9             23         255.128.0.0      /9
B      17            15         255.255.128.0    /17
C      28            4          255.255.255.240  /28
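
Subnet and CIDR arithmetic like that in Table 5.5 is easy to check with Python's standard ipaddress module; this sketch interprets a /24 and then splits it into the two /25 subnets that a 255.255.255.128 mask implies:

```python
# Interpret CIDR notation and enumerate subnets.
import ipaddress

net = ipaddress.IPv4Network("192.168.1.0/24")
print(net.netmask)             # 255.255.255.0
print(net.num_addresses - 2)   # 254 usable node addresses

for sub in net.subnets(prefixlen_diff=1):        # split into two /25s
    print(sub, "usable nodes:", sub.num_addresses - 2)
```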

Unless you're designing the network, most of what you need to do with subnets is recognize them when you see them and interpret both the subnet masks and the CIDR notation, if present, to help you figure things out. CIDR counts bits starting with the leftmost bit of the IP address; it counts left to right. Whatever is left after the CIDR prefix is the number of bits available for assigning addresses to nodes on the subnet (minus 2).

Before we can look at subnetting in IPv6, we first have to deal with the key changes to the Internet that the new version 6 is bringing in.

Running Out of Addresses?

By the early 1990s, it was clear that the IP address system then in use would not be able to keep up with the anticipated explosive growth in the numbers of devices attempting to connect to the Internet. At that point, Version 4 of the protocol (or IPv4 as it's known) used a 32-bit address field, represented in the familiar four-octet address notation (such as 192.168.2.11). That could only handle about 4.3 billion unique addresses; by 2012, we already had 8 billion devices connected to the Internet, and had invented additional mechanisms such as NAT to help cope. According to the IETF, 2011 was the year we started to see address pool exhaustion become reality; one by one, four of the five Regional Internet Registries (RIRs) exhausted their allocation of address blocks not reserved for IPv6 transition between April 2011 and September 2015. Although individual ISPs continue to recycle IP addresses no longer used by subscribers, the bottom of the bucket has been reached. Moving to IPv6 is becoming imperative. IPv4 also had a number of other faults that needed to be resolved. Let's see what the road to that future looks like.

IPv4 vs. IPv6: Important Differences and Options

Over the years we've used it, IPv4's design has shown a number of shortcomings. It did not have security built in; its address space was limited; and even with workarounds like NAT, we still don't have enough addresses to handle the explosive demand for IoT devices. (Another whole class of Internet users are robots—smart software agents, with or without hardware that lets them interact with the physical world. Robots are using the Internet to learn from each other's experiences in accomplishing different tasks.)

IPv6 brings a number of much-needed improvements to our network infrastructures:

  • Dramatic increase in the size of the IP address field: using 64-bit fields for each of the network and node addresses provides over 18 quintillion (billions of billions) nodes on each of over 18 quintillion networks.
  • More efficient routing, since ISPs and backbone service providers can use hierarchical arrangements of routing tables, while reducing if not eliminating fragmentation by better use of information about maximum transmission unit size.
  • More efficient packet processing by eliminating the IP-level checksum (which proved redundant, given the error checking in most Transport layer protocols).
  • Directed data flows, using multicast rather than broadcast flows. This can make broad distribution of streaming multimedia (sports events, movies, etc.) much more efficient.
  • Simplified network configuration, using new autoconfigure capabilities.
  • Simplified end-to-end connectivity at the IP layer by eliminating NAT. This can make services such as VOIP and quality of service more capable.
  • Security is greatly enhanced, which may allow for greater use of ICMP (since most firewalls block IPv4 ICMP traffic as a security precaution). IPSec as defined in IPv4 becomes a mandatory part of IPv6 as a result.

This giant leap of changes from IPv4 to IPv6 stands to make IPv6 the clear winner, over time, and is comparable to the leap from analog video on VHS to digital video. To send a VHS tape over the Internet, you must first convert its analog audio, video, chroma, and synchronization information into bits, and package (encode) those bits into a file using any of a wide range of digital video encoders such as MP4. The resulting MP4 file can then transit the Internet.

IPv6 was published in draft in 1996 and became an official Internet standard in 2017. The problem is that IPv6 is not backward compatible with IPv4; you cannot just flow IPv4 packets onto a purely IPv6 network and expect anything useful to happen. Everything about IPv6 packages the user data differently and flows it differently, requiring different implementations of the basic layers of the TCP/IP protocol stack. Figure 5.9 shows how these differences affect both the size and structure of the IP Network layer header.

FIGURE 5.9 Changes to the packet header from IPv4 to IPv6

For organizations setting up brand-new network infrastructures, there's a lot to be gained by going directly to an IPv6 implementation. Such systems may still have to deal with legacy devices that operate only in IPv4, such as the personally owned equipment of “bring your own device” (BYOD) users. Organizations trying to transition their existing IPv4 networks to IPv6 may find it worth the effort to use a variety of “dual-rail” approaches to effectively run both IPv4 and IPv6 at the same time on the same systems:

  • Dual stack, in which your network hardware and management systems run both protocols simultaneously, over the same Physical layer (a socket-level sketch follows this list).
  • Tunnel, by encapsulating one protocol's packets within the other's structure. Usually, this is done by encapsulating IPv6 packets inside IPv4 packets.
  • NAT-PT, or network address translation–protocol translation, but this seems best done with Application layer gateways.
  • Dual-stack Application layer gateways, supported by almost all major operating systems and equipment vendors, provide a somewhat smoother transition from IPv4 to IPv6.

Note, too, that device hardware addressing grows from EUI-48 to EUI-64 (48-bit to 64-bit identifiers) as part of this transition.
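
As a taste of the dual-stack approach, here's a minimal sketch of a listener that serves both protocols from a single IPv6 socket; IPv4 clients show up as IPv4-mapped IPv6 addresses. (The port is an arbitrary choice, and some operating systems restrict clearing IPV6_V6ONLY.)

```python
# One IPv6 socket that also accepts IPv4 clients (dual-stack).
import socket

srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)  # allow IPv4-mapped addresses
srv.bind(("::", 8080))
srv.listen(5)
conn, peer = srv.accept()
print("connection from", peer[0])  # an IPv4 peer appears as, e.g., ::ffff:192.0.2.7
```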

With each passing month, SSCPs will need to know more about IPv6 and the changes it is heralding for personal and organizational Internet use. This is our future!

CIANA Layer by Layer

We've come a long way thus far in showing you how Internet protocols work, which should give you both the concepts and some of the details you'll need to rise to the real challenge of this chapter. As an SSCP, after all, you are not here to learn how to design, build, and administer networks—you're here to learn how to keep networks safe, secure, and reliable!

As we look at vulnerabilities and possible exploits at each layer, keep in mind the concept of the attack surface. This is the layer of functionality and features, usually in software, that an attacker has to interact with, defeating or disrupting its normal operation as part of a reconnaissance or penetration attempt. This is why so many attacks that involve lower layers of the OSI or TCP/IP stacks actually start with attacks on applications, because apps can often provide the entry path the attacker needs to exploit.

For all layers, it is imperative that your organization have a well-documented and well-controlled information technology baseline, so that it knows what boxes, software, systems, connections, and services it has or uses, down to the specifics about make, model, and version! This is your starting point to find the Common Vulnerabilities and Exposures (CVE) data about all of those systems elements.

It's time now to put our white hats firmly back on, grab our vulnerability modeling and assessment notes from Chapter 4, and see how the OSI 7-layer reference model can also be our roadmap from the physical realities of our networks up through the Application Layer—and beyond!

CIANA at Layer 1: Physical

In all technologies we have in use today, data transmission at its root has to use a physical medium that carries the datagrams from Point A to Point B. Despite what Marshall McLuhan said, when it comes to data transmission, the medium is not the message. (McLuhan was probably speaking about messages at Layer 7…) And if you can do something in the physical world, something else can interfere with it, block it, disrupt or distort it.

Or…somebody else can snoop your message traffic, at the physical level, as part of their target reconnaissance, characterization, and profiling efforts.

Vulnerabilities

In Chapter 8, you'll work with a broader class of physical systems, their vulnerabilities, and some high-payoff countermeasures. That said, let's take a closer look at the Physical layer from the perspective of reliable and secure data transmission and receipt. We need to consider two kinds of physical transmission: conduction and radiation.

  • Electrical wires, fiber optics, even water pipes provide physical channels through which electrons, photons, or pulses of water (or air) can travel. Modems turn those flows into streams of bits (1s and 0s, and in some cases synchronization patterns or S-tones).
  • Radiated signals in most data communications are either radio waves (such as Wi-Fi or microwave) or light (using lasers, flashlights, etc.). Radiated signals travel through air, the vacuum of space, and, to varying degrees, solid objects.

Conducted and radiated signals are easy prey to a few other problems:

  • Spoofing happens when another transmitter acts in ways to get a receiver to mistake it as the anticipated sender. This can happen accidentally, such as when the RFI (radio frequency interference) from a lightning strike is misinterpreted by an electronic device as some kind of command or data input. More often, spoofing is deliberate.
  • Large electrical motors, and electric power systems, can generate electromagnetic interference (EMI); this tends to be very low frequency but can still disrupt some Layer 1 activities.
  • Interception happens when a third party is able to covertly receive and decode the signals being sent, without interrupting the flow from sender to receiver.
  • Jamming occurs when a stronger signal (generated deliberately, accidentally, or naturally) drowns out the signal from the transmitter.

Finally, consider the physical vulnerabilities of the Layer 1 equipment itself—the NIC, the computer it's in, the router and modem, and the cabling and fiber optic elements that make Layer 1 possible. Even the free space that Wi-Fi or LiFi signals (LEDs used as part of medium-data-rate communications systems) travel through is part of the system! The walls of a building or vehicle can obstruct or obscure radiated signals, and every electrical system in the area can generate interference. Even other electrical power customers in the same local grid service area can cause power quality problems that make modems, routers, switches, or even laptops and desktops suffer a variety of momentary interruptions.

All of these kinds of Layer 1 service disruptions can be intermittent, even bursty, in nature, or they can last for minutes, hours, or even days.

The Exploiter's Tool Kit

For hostile (deliberate) threat actors, the common attack tools at Layer 1 start with physical access to your systems:

  • Cable taps (passive or with active repeaters)
  • Cables plugged into unused jacks on your switches, routers, or modems
  • Tampering with your local electrical power supply system

Wi-Fi reconnaissance can easily be conducted from a smartphone app, and it can reveal exploitable weaknesses in your systems at Layer 1 and above. It can aid an attacker in tuning their own Wi-Fi attack equipment to the right channel and pointing it at the right spots in your Wi-Fi coverage pattern to find potential attack vectors.

Countermeasure Options

Without getting far too technical (for an SSCP or for the exam), the choice and engineering of the medium itself should provide some degree of protection against some sources of interference, disruption, or interception. Signal cables can be contained in rigid conduit, buried in the ground or embedded in concrete walls; this reduces the effect of RFI while also reducing the chance of the cable being cut or tapped into. Radio communications systems can be designed to use frequency bands, encoding techniques, and other measures that reduce accidental or deliberate interference or disruption. Placing Layer 1 (and other) communications systems elements within physically secured, environmentally stabilized spaces should always be part of your risk mitigation thinking.

This also is part of placing your physical infrastructure under effective configuration management and change control.

Power conditioning equipment can also alleviate many hard-to-identify problems. Not every electronic device behaves well when its AC power comes with bursts of noise, or with voltage drops or spikes that aren't severe enough to cause a shutdown (or a blown surge suppressor). Some consumer or SOHO routers, and some cable or fiber modems provided by ISPs to end users, can suffer from such problems. Overheating can also cause such equipment to perform erratically.

Note that most IPS and IDS products and approaches don't have any real way to reach down into Layer 1 to detect an intrusion. What you're left with is the old-fashioned approach of inspection and audit of the physical systems against a controlled, well-documented baseline.

Residual Risk

In general terms, the untreated Layer 1 risks end up being passed on to Layer 2 and above in the protocol stacks, either as interruptions of service, datagram errors, faulty address and control information, or increased retry rates leading to decreased throughput. Monitoring and analysis of monitoring data may help you identify an incipient problem, especially if you're getting a lot of red flags from higher layers in the protocol stack.

Perhaps the worst residual risk at Layer 1 is that you won't detect trespass at this level. Internet-empowered systems can lull us into complacency; they can let us stop caring about where a particular Cat 5 or Cat 6 cable actually goes, because we're too worried about authorized users doing the wrong thing or unauthorized users hacking into our systems or our apps. True, the vast majority of attacks happen remotely and involve no physical access to your Layer 1 systems or activities.

How would you make sure that you're not the exception to that rule?

CIANA at Layer 2: Data Link

Attackers at this level have somehow found their way past your safeguards at the Physical layer. Perhaps they've recognized the manufacturer's default broadcast SSID of your wireless router, used that to find common vulnerabilities and exploits information, and are now attacking it with one or more of those exploits to see if they can spoof their way into your intranet. Note how some of the attack surfaces involve layer-spanning protocols like ARP or DHCP, so we'll address them here first.

Vulnerabilities and Assessment

A number of known vulnerabilities in Layer 2 systems elements can lead to a variety of attack patterns, such as:

  • MAC address–related attacks, such as MAC spoofing (easily done from the command line) and CAM (content-addressable memory) table overflows
  • DHCP lease-based denial of service attack (also called IP pool starvation attack)
  • ARP attacks, in which the attacker sends falsified IP/MAC pairs to bind a bogus IP address to a known MAC, or vice versa
  • VLAN attacks: VLAN hopping via falsified (spoofed) VLAN IDs in packets
  • Denial of service by looping packets, as a spanning tree protocol (STP) attack
  • Reconnaissance attacks against Data Link layer discovery protocols
  • SSID spoofing as part of man-in-the-middle attacks

These may lead to denial or disruption of service or degraded service (if your network systems have to spend a lot of time and resources detecting such attacks and preventing them). They may also provide an avenue for the attacker to further penetrate your systems and achieve a Layer 3 access. Attacks at this layer can also enable an attacker to reach out through your network's nodes and attack other systems.

Countermeasure Options

A variety of steps can be taken to help disrupt the kill chain, either by disrupting the attacker's reconnaissance efforts or the intrusion attempts themselves:

  • Secure your network against external sniffers via encryption.
  • Use SSH instead of unsecure remote login, remote shell, etc.
  • Ensure maximum use of SSL/TLS.
  • Use secured versions of email protocols, such as S/MIME or PGP.
  • Use network switching techniques, such as dynamic ARP inspection or rate limiting of ARP packets (a simple host-based flavor of ARP inspection is sketched after this list).
  • Control when networks are operating in promiscuous mode.
  • Use allowed listing of known, trusted MAC addresses.
  • Use blocked listing of suspected hostile MAC addresses.
  • Use honeynets to spot potential DNS snooping.
  • Do latency checks, which may reveal that a potential or suspect attacker is in fact monitoring your network.
  • Monitor what processes and users are actually using network monitoring tools, such as Netmon, on your systems; when in doubt, one of those might be serving an intruder!
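
As a crude, host-based flavor of the ARP inspection idea above—a sketch, assuming scapy is installed and you have packet-capture privileges—this watcher passively records IP-to-MAC bindings from ARP replies and flags any that change, a classic symptom of ARP spoofing:

```python
# Passively watch ARP replies and alert when an IP's MAC binding changes.
from scapy.all import ARP, sniff

seen = {}  # IP address -> MAC address bindings observed so far

def check(pkt):
    if pkt.haslayer(ARP) and pkt[ARP].op == 2:  # op 2 = "is-at" (a reply)
        ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
        if ip in seen and seen[ip] != mac:
            print(f"ALERT: {ip} moved from {seen[ip]} to {mac}")
        seen[ip] = mac

sniff(filter="arp", prn=check, store=False)
```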

Residual Risk

Probably the most worrisome residual risk of an unresolved Layer 2 vulnerability is that an intruder has now found a way to gain Layer 3 access or beyond on your network.

CIANA at Layer 3: Network

One of the things to keep in mind about IP is that it is a connectionless and therefore stateless protocol. By itself, it does not provide any kind of authentication. Spoofing IP packets, launching denial of service attacks, and other attacks have become child's play for script kiddies worldwide. ICMP, the other major protocol at this layer, is also fairly easy to use for gathering reconnaissance information or launching attacks.

Attacks at any layer of the protocol stacks can be either hit-and-run or very persistent. The hit-and-run attacker may need to inject only a few bad packets to achieve their desired results. This can make them very hard to detect. The persistent threat requires more continuous action be taken to accomplish the attack.

Vulnerabilities and Assessment

Typical attacks seen at this level, which exploit known common vulnerabilities or just the nature of IP networks, can include:

  • IP spoofing.
  • Routing (RIP) attacks.
  • ICMP attacks, including Smurf attacks, in which the attacker spoofs the victim's IP address as the source of ICMP echo requests sent to broadcast addresses, so that the flood of replies swamps the victim.
  • Ping flood.
  • Ping of Death attack (ICMP datagram exceeding maximum size: if the system is vulnerable to this, it will crash); most modern OSs are no longer vulnerable.
  • Teardrop attack (inserts false offset information into fragmented packets, causing empty or overlapping spots during reassembly and destabilizing the receiving system or app).
  • Packet sniffing reconnaissance.

Countermeasure Options

First on your list of countermeasure strategies should be to implement IPSec if you've not already done so for your IPv4 networks. Whether you deploy IPSec in tunnel mode or transport mode (or both) should be driven by your organization's impact assessment and CIANA needs. Other options to consider include these:

  • Securing ICMP
  • Securing routers and routing protocols with packet filtering (and the ACLs this requires)
  • Providing ACL protection against address spoofing

Residual Risk

For the most part, strong protection via router ACLs and firewall rules, combined with a solid IPSec implementation, should leave you pretty secure at this layer. You'll need to do a fair bit of ongoing traffic analysis yourself, combined with monitoring and analysis of the event logs from this layer of your defense, to make sure.

The other thing to keep in mind is that attacks at higher levels of the protocol stack could wend their way down to surreptitious manipulation, misuse, or outright disruption of your Layer 3 systems.

CIANA at Layer 4: Transport

Layer 4 is where packet sniffers, protocol analyzers, and network mapping tools pay big dividends for the black hats. For the white hats, the same tools—and the skill and cunning needed to understand and exploit what those tools can reveal—are essential in vulnerability assessment, systems characterization and fingerprinting, active defense, and incident detection and response. Although it's beyond the scope of the SSCP exam or this book to make you a protocol wizard, it's not beyond the scope of the SSCP's ongoing duties to take on, understand, and master what happens at the Transport layer.

Let's take a closer look.

Vulnerabilities and Assessment

How much of this applies to your site or organization?

  • SYN flood (can defend with SYN cookies)
  • Injection attacks (guessing/forcing reset of sequence numbers to jump your packet in ahead of a legitimate one); also called TCP hijacking
  • Opt-Ack attack (attacker convinces target to send quickly, in essence a self-inflicted DoS)
  • TLS attacks (these tend to be attacks on compression, cryptographic implementations, etc.)
  • Bypass of proper certificate use for mobile apps
  • TCP port scans, host sweeps, or other network mapping as part of reconnaissance (a minimal connect-scan sketch follows this list)
  • OS and application fingerprinting, as part of reconnaissance
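
Port scanning is trivially easy, which is exactly why you should assume it's being run against you. This minimal connect-scan sketch (the target address and port list are placeholders—scan only hosts you're authorized to test) shows the technique at its simplest:

```python
# A simple TCP connect scan of a few well-known ports.
import socket

target = "192.0.2.10"  # TEST-NET placeholder; substitute an authorized target
for port in (22, 80, 443, 3389):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        if s.connect_ex((target, port)) == 0:  # 0 means the connection succeeded
            print(f"port {port} is open")
```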

Countermeasure Options

Most of your countermeasure options at Layer 4 involve better identity management and access control, along with improved traffic inspection and filtering. Start by considering the following:

  • TCP intercept and filtering (routers, firewalls)
  • DoS prevention services (such as Cloudflare, Prolexic, and many others)
  • Blocked listing of attackers' IP addresses
  • Allowed listing of known, trusted IP addresses
  • Better use of SSL/TLS and SSH
  • Fingerprint scrubbing techniques

Residual Risk

One vulnerability that may remain, after taking all of the countermeasures that you can, is that your traffic itself is still open to being monitored and subjected to traffic analysis. Traffic analysis looks for patterns in sender and recipient address information, protocols or packet types, volumes and timing, and just plain coincidences. Even if your data payloads are well encrypted, someone willing to put the time and effort into capturing and analyzing your traffic may find something worthwhile.

CIANA at Layer 5: Session

More and more, we are seeing attacks that try to take advantage of session-level complexities. As defensive awareness and responses have grown, so has the complexity of session hijacking and related Session layer attacks. Many of the steps involved in a session hijack can generate other issues, such as ACK storms, in which both the spoofed and attacking hosts send ACKs with correct sequence numbers and other information in the packet headers; this might require an attacker to take further steps to silence the storm so that it's not detectable as a symptom of a possible intrusion.

Vulnerabilities and Assessment

How much of this applies to your site or organization?

  • Session hijacking.
  • Man-in-the-middle (MITM).
  • ARP poisoning.
  • DNS redirection, either by spoofing (alteration of DNS records returned to a user node), or DNS local cache poisoning (insertion of illegitimate values).
  • Local system hosts file corruption or poisoning.
  • Blind hijacking (attacker injects commands into the communications stream but cannot see results, such as error messages or system response directly).
  • Man-in-the-browser attacks, which are similar to MITM but via a Trojan horse that manipulates calls to/from stack and browser. Browser helper objects, extensions, API hooking, and Ajax worms can inadvertently facilitate these types of attacks.
  • Session sniffing to gain a legitimate session ID and then spoof it.
  • SSH downgrade attack.

SSCPs need to be very concerned about two different but related DNS security concerns. Fundamentally, users need to be able to trust that their organization's use of DNS services achieves trustworthy, reliable results: that requested URLs and URIs connect to the proper resources, for example, and that DNS system responses to the users' endpoints are not spoofed, corrupted, or being used to send malware or other harmful payloads to the user's endpoint. Security measures must also mitigate the risk of abuse of the DNS infrastructure itself by sophisticated attackers, such as advanced persistent threats (APTs), as their own private command, control, and communications infrastructure.

Two related sets of countermeasures can be used to alleviate these concerns. The first is the use of DNS security extensions (DNSSEC), while the second involves more intensive DNS service filtering via firewalls and other security tools. It's worth noting that DNSSEC is not a “top-level domains” issue, nor something that only needs to be done by the owner-operators of the Internet's backbone services (and the DNS as a system). Implementing effective DNSSEC does require action across the entire Internet community.

Countermeasure Options

As with the Transport layer, most of the countermeasures available to you at the Session layer require some substantial sleuthing around in your system. Problems with inconsistent applications or systems behavior, such as not being able to consistently connect to websites or hosts you frequently use, might be caused by errors in your local hosts file (containing your ARP and DNS cache). Finding and fixing those errors is one thing; investigating whether they were the result of user error, applications or systems errors, or deliberate enemy action is quite another set of investigative tasks to take on!

Also, remember that your threat modeling should have divided the world into those networks you can trust, and those that you cannot. Many of your DoS prevention strategies therefore need to focus on that outside, hostile world—or, rather, on its (hopefully) limited connection points with your trusted networks.

Countermeasures to consider include the following:

  • Replace weak password authentication protocols such as PAP, CHAP, and NT LAN Manager (NTLM), which are often enabled as a default to support backward compatibility, with much stronger authentication protocols.
  • Migrate to strong systems for identity management and access control.
  • Use PKI as part of your identity management, access control, and authentication systems.
  • Verify correct settings of DNS servers on your network and disable known attack methods, such as allowing recursive DNS queries from external hosts.
  • Use tools such as SNORT at the Session layer as part of an active monitoring and alarm system.
  • Implement and use more robust IDSs or IPSs.

Residual Risk

As you lock down your Session layer defenses, you may find situations where some sessions and the systems that support them need a further layer of defense (or just a greater level of assurance that you've done all that can be done). This may dictate setting up proxies as an additional boundary layer between your internal systems and potential attackers.

CIANA at Layer 6: Presentation

Perhaps the most well-known Presentation layer attacks have been those that exploit vulnerabilities in NetBIOS and SMB; given the near dominance of the marketplace by Microsoft-based systems, this should not be a surprise.

More importantly, the cross-layer protocols, and many older apps and protocols such as SNMP and FTP, all work through or with Layer 6 functionality.

Vulnerabilities and Assessment

Vulnerabilities at this layer can be grouped broadly into two big categories: attacks on encryption or authentication, and attacks on the apps and control logic that support Presentation layer activities. These include:

  • Attacks on encryption used, or on weak protection schemes
  • Attacks on Kerberos or other access control at this layer
  • Attacks on known NetBIOS and SMB vulnerabilities

Countermeasure Options

Building on the countermeasures you've taken at Layer 5, you'll need to look at the specifics of how you're using protocols and apps at this layer. Consider replacing insecure apps, such as FTP or email, with more secure versions.

Residual Risk

Much of what you can't address at Layer 6 or below will flow naturally up to Layer 7, so let's just press on!

CIANA at Layer 7: Application

It's just incredible when we consider how many application programs are in use today! Unfortunately, the number of application-based or Application layer attacks grows every day as well. Chapter 9 addresses many of the ways you'll need to help your organization secure its applications and the data they use from attack, but let's take a moment to consider two specific cases a bit further:

  • Voice, POTS, and VOIP: Plain old telephone service and voice-over IP all share a common security issue: how do you provide the “full CIANA” of protection to what people say to each other, regardless of the channel or the technology they use?
  • Collaboration systems: LinkedIn, Facebook Workplace, Microsoft Teams, and even VOIP systems like Skype provide many ways in which people can organize workflows, collaborate on developing information (such as books or software), and have conversations with each other. Each of these was designed with the goal of empowering users to build and evolve their own patterns of collaboration with each other.

These are just two such combinations of ubiquitous technologies and the almost uncontrollable need that people have to talk with each other, whether in the course of accomplishing the organization's mission and goals or not. When we add in any possible use of a Web browser… Pandora's box is well and truly open for business, you might say.

Vulnerabilities and Assessment

Many of these attacks are often part of a protracted series of intrusions taken by more sophisticated attackers. Such advanced persistent threats may spend months, even a year or more, in their efforts to crack open and exploit the systems of a target business or organization in ways that will meet the attacker's needs. As a result, constant vigilance may be your best strategy. Keep your eyes and IPS/IDS alert and on the lookout for the following:

  • SQL or other injection attacks (illustrated in the sketch after this list)
  • Cross-site scripting (XSS)
  • Remote code execution (RCE)
  • Format string vulnerabilities
  • Username enumeration
  • HTTP floods
  • HTTP server resource pool exhaustion (Slowloris, for example)
  • Low-and-slow attacks
  • Get/post floods
  • DoS/DDoS attacks on known server vulnerabilities
  • NTP amplification
  • App-layer DoS/DDoS
  • Device, app, or user hijacking
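
To make the first item on that list concrete, here's a self-contained sketch (sqlite3 stands in for any database; the table and values are made up) showing why string-built queries invite SQL injection and parameterized queries defuse it:

```python
# String-built SQL versus parameterized SQL.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

user_input = "' OR '1'='1"  # attacker-controlled value

# Vulnerable: the attacker's quotes rewrite the query's logic.
rows = db.execute(f"SELECT * FROM users WHERE name = '{user_input}'").fetchall()
print("string-built query returned:", rows)   # leaks every row

# Safe: the driver binds the value; it cannot alter the SQL structure.
rows = db.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print("parameterized query returned:", rows)  # returns nothing
```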

Countermeasure Options

It's easy to fall into a self-imposed logic trap and see application security as separate and distinct from network security. These two parts of your organization's information security team have to work closely together to be able to spot, and possibly control, vulnerabilities and attacks. It will take a concerted effort to do the following:

  • Monitor website visitor behavior.
  • Block known bad bots.
  • Challenge suspicious/unrecognized entities with a cross-platform JavaScript tester such as jstest (at http://jstest.jcoglan.com); for cookies, use privacy-verifying cookie test Web tools, such as https://www.cookiebot.com/en/gdpr-cookies. Add challenges such as CAPTCHAs to determine if the entity is a human or a robot trying to be one.
  • Use two-factor/multifactor authentication.
  • Use Application layer IDS and IPS.
  • Provide more effective user training and education focused on attentiveness to unusual systems or applications behavior.
  • Establish strong data quality programs and procedures (see Chapter 9).

Residual Risk

Most of what you've dealt with in Layers 1 through 7 depends on having trustworthy users, administrators, and software and systems suppliers and maintainers. Trusting, helpful people, willing to go the extra mile to solve a problem, are perhaps more important to a modern organization than their network infrastructure and IT systems are. But these same people are prone to manipulation by attackers. You'll see how to address this in greater depth when we get to Chapter 11.

Securing Networks as Systems

Looking at the layers of a network infrastructure—by means of TCP/IP's four layers, or the OSI 7-layer reference model's seven layers—provides many opportunities to recognize vulnerabilities, select and deploy countermeasures, and monitor their ongoing operation. It's just as important to take seven giant steps back and remember that to the rest of the organization, that infrastructure is a system in and of itself. So how does the SSCP refocus on networks as systems, and plan for and achieve the degree of security for them that the organization needs?

Let's think back to Chapters 3 and 4, and their use of risk management frameworks. One key message those chapters conveyed, and that frameworks like NIST's embody, is the need to take a cohesive, integrated, end-to-end and top-to-bottom approach. That integrated approach needs to apply across the systems, equipment, places, faces, and timeframes that your organization needs to accomplish its mission.

Timeframes are perhaps most critical to consider as we look at systems security. Other chapters have looked at the planning, preparation, and deployment phases; Chapter 10, “Incident Response and Recovery,” will look at incident response, which in effect is dealing with things after an event of interest has mushroomed into something worse.

What about the now?

Network Security Devices and Services

Thinking back to the security control functions described in Chapter 4, it's easy to see that securing an organization's networks requires a number of critical functions (in fact, keeping networks secure will require that entire set of control functions). Keeping networks secure can be broken down into the following broad sets of processes:

  • Intrusion detection and prevention
  • Access control
  • Data loss prevention
  • Traffic control, including enforcing restrictions on movement of data or user service requests both internally and externally
  • Incident response, including containment and recovery
  • Monitoring network activity and analyzing it for anomalies and possible indicators of compromise (IoCs), out-of-limits conditions, or other precursors of future problems

Access control, including network access control, will be covered in the next chapter; incident response is the subject of Chapter 10, and Chapter 11 will look at preparing for the loss of network services and recovering from them. Let's look at the others more closely.

Intrusion Detection and Prevention

Intrusion detection and prevention is generally performed by a combination of host-based and network-based software. Network-based intrusion detection and prevention systems (NIDS and NIPS, respectively) tend to be applications running on hardware devices such as firewalls or other specially hardened servers. They use a variety of scanning techniques to monitor network traffic flowing past them; prevention systems block unauthorized or suspect traffic from proceeding past, while detection ones merely raise an alarm about it. Host-based intrusion detection systems (HIDS and HIPS) run on endpoints, servers, and other devices, but do the same for network traffic trying to enter that device.

Intrusion detection and prevention systems have tended to be packaged with firewalls, again either in separate hardware or as software loaded onto devices. Windows-based systems (clients or servers) have been using Windows Firewall, for example, as a HIPS (or HIDS, depending upon how security policies are configured on the device) for years. Some routers also perform NIPS and NIDS functions; both firewalls and routers have roles to play in network access control, which we'll cover in Chapter 6.

Firewalls have gone through a rapid evolution from their limited first-generation models on up through the fifth or next-generation firewalls (NGFWs) in today's markets. Firewall functionality is also available as part of managed security services. As this evolution has continued, firewalls began to incorporate more of the functions originally performed by anti-malware software and systems, which was a natural outgrowth of using rules, lists, models, or heuristics to determine what sort of entities, activities, data, or files to allow past a control point or block. Firewalls, in combination with routers and network managers, can also be used to implement screening systems that prevent connections from being established by devices (or the software entities running on them) that cannot be verified as having all the required updates for software, firmware, and anti-malware definitions installed and active on them. (You'll learn more about this in Chapter 6.)

Traffic Control and Data Loss Prevention

Most network use cases quickly discover the need to balance the use of network throughput or bandwidth by different types or classes of traffic. Sometimes, this is necessary to prevent users from consuming greater bandwidth and responsiveness than they are paying for; in other cases, it is to ensure that higher-priority traffic can flow with minimal interruption or degradation. The years 2020 and 2021 demonstrated the value of such balancing as hundreds of millions of users learned how to improve Zoom, Teams, or other collaborative work platforms' utility and connectivity, while throttling back the bandwidth allocated to cloud backup services, games, or other media streaming services. Various quality of service (QoS) and other traffic management features, within apps, within the client's OS and network software, and within routers and firewalls, support this.

Larger enterprises also became much more aware of the need to limit the movement of sensitive data, particularly when it tried to leave the control span of the organization and its networks. Data loss prevention (DLP), also known as data leakage protection (or other combinations of those words), uses a variety of techniques to determine the legitimacy of movements of data, both laterally (to and from servers and clients within the enterprise's networks) and externally (what network engineers would call a northbound movement of data). Even the southbound movements of data, internal to the enterprise's networks, may be attempts by attackers to mask a lateral movement of data.

In the trivial case, the attacker is attempting to move a complete data set as a copy of a single file. This enables DLP systems to inspect the entirety of that data in motion and to use rules, patterns, heuristics, watermarks (or steganographic markers), or other techniques to determine if the file in question is at risk of being exfiltrated (suffering an unauthorized removal from the organization's control). Very quickly, attackers recognized the need to fragment data, encrypt its fragments, and then move those fragments in a scattershot fashion, all with the intent of hiding from the DLP system. (The attacker is in essence trying to construct their own TOR system within their target's environment, something that masks what's being sent, by whom or by which process IDs, from where, to where.)
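
As a toy illustration of the pattern-based inspection at the heart of many DLP tools—real products add checksum validation (such as the Luhn test), context analysis, and document fingerprinting—consider this sketch that flags outbound text shaped like U.S. Social Security or payment card numbers:

```python
# Flag outbound text that matches simple sensitive-data patterns.
import re

patterns = {
    "ssn":  re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # e.g., 123-45-6789
    "card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),  # 16 digits, optional separators
}

def flag_sensitive(text: str) -> list:
    return [label for label, rx in patterns.items() if rx.search(text)]

print(flag_sensitive("invoice for card 4111 1111 1111 1111"))  # ['card']
print(flag_sensitive("meeting at 10am"))                       # []
```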

Solving the DLP problem can be quite complex, and thus far, there is no one silver-bullet single-point solution. Many different techniques applied throughout the information architecture are called for. Given the incredibly lucrative target that many corporate and government data sets present to attackers, this will be a security “hot topic” for years to come.

Wireless Network Access and Security

Most organizations—and many individual private users—are allowing a bewildering array of wireless devices to access their network infrastructures to use services and resources for a variety of functions. The initial concepts of wireless security, largely implemented in the Wi-Fi routers being used, can no longer cope with both service contention and security issues that arise in many of these environments.

Wireless (or unbound) networks are built by providing access points (APs)—devices that embody the Layer 1 wireless connection technology on one side and the wired (bound) physical connection technology on the other, with a mix of Layer 2 and Layer 3 connection and authentication capabilities in between. This allows the access point (often built into a router) to act as a bridge between the wireless part of your network and the wired parts. DHCP, for example, is often provided as a built-in service in many such routers. Access points may also embody a variety of firewall features to provide a limited set of security capabilities.

The 802.11 standard provides a default open system authentication mode, which requires only a simple request–acknowledge handshake to establish a connection between the requesting station (another name for a Wi-Fi capable device of any kind) and the access point device; device or station authentication can then be performed. The standard also defined shared key authentication as a way of using previously established WEP encryption keys, as we saw previously.

Wireless networks can work in one of two modes:

  • Ad hoc mode provides simple peer-to-peer connections directly between wireless devices, without a formally designated access point. As more devices connect, the star topology that grows around the device anchoring those connections can become difficult to manage and harder to keep secure.
  • Infrastructure mode formally designates the wired-connected peer as the access point. The AP acts as a base station for the localized set of Wi-Fi requesting stations (or devices) that it supports.

CIANA+PS and Wireless

Adding an access point to your system does, first and foremost, open a new hole in your threat surface: it is a point at which friendly users and hostile ones alike can attempt to enter your systems, gain access to resources, introduce data or executable code, and take other actions. Closing that hole requires properly configuring and hardening the access point device and the services it provides, as well as enforcing strong access controls on every device that attempts to connect to and through it.

Rogue access points are a very common concern. These may simply be wireless devices operated by authorized users that are misconfigured, uncontrolled, or already taken over by an attacker via malware or other tactics. Once connected to your access points (and your networks), any wireless device could attempt to impersonate a legitimate access point and stage a machine-in-the-middle attack on an unsuspecting user device.
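
As a rough illustration of how a monitoring tool might spot an unknown access point, here is a minimal Python sketch using the scapy packet library (an assumption, as are the interface name wlan0mon and the allow-list of BSSIDs); it requires a wireless interface already placed in 802.11 monitor mode and appropriate privileges:

    from scapy.all import sniff, Dot11, Dot11Beacon, Dot11Elt

    KNOWN_BSSIDS = {"aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"}  # your managed APs
    seen = set()

    def check_beacon(pkt):
        # Beacon frames advertise an AP's SSID; addr2 carries its BSSID.
        if pkt.haslayer(Dot11Beacon):
            bssid = pkt[Dot11].addr2
            if bssid not in KNOWN_BSSIDS and bssid not in seen:
                seen.add(bssid)
                ssid = pkt[Dot11Elt].info.decode(errors="ignore")
                print(f"possible rogue AP: SSID={ssid!r} BSSID={bssid}")

    sniff(iface="wlan0mon", prn=check_beacon, store=False)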

Service interruption and degradation can also be caused by access points and devices that are not part of your network but are close enough to your APs and legitimate user devices that your APs (or your neighbors') spend their resources attempting to filter or deny connection requests; RF channels can become more heavily loaded as the base stations (APs) compete with each other, and service quality can suffer. (An attacker may deliberately do much the same thing, as a way of gaining technical intelligence about your Wi-Fi access point management capabilities and techniques.)

Wireless Network Monitoring

Countering deliberate and accidental attempts to intrude into your wireless networks will often require that your existing network defenses be extended with wireless intrusion detection and prevention systems. Intrusion detection and prevention, of course, begins with monitoring. Wireless intrusion detection and prevention does, however, require some different monitoring, detection, and protection strategies and techniques:

  • Monitoring needs to extend to the Layer 1 and Layer 2 traffic between the access point and the devices connecting with it. This type of monitoring can be any of the following:
    • Integrated monitoring, done by your existing access points, if they have the capabilities for it
    • Overlay monitoring, in which a separate set of wireless sensors (known as air monitors) are used
    • Hybrid monitoring, which employs a mix of dedicated air monitors and the existing APs
  • RF or wireless surveillance expands your monitoring to look at the RF environment (or whatever wireless technologies are being used) as a way of understanding all of the transmitter and receiver activity that may be affecting your access points, user devices, and the network as a whole. Laptop-based (or smartphone-based) Wi-Fi signal strength mapping, for example, is often used to find both hot spots and dead spots in a service area, as well as to identify the presence of other access points. These devices support surreptitious and unattended surveillance and monitoring of the RF side of your network, as well as enabling you to conduct ethical reconnaissance and technical fingerprinting of your own systems and the user devices that may be attempting to connect to them.

Different monitoring strategies may be required across the organization. Integrated monitoring does impose additional load on the APs and on the network fabric, which may not be acceptable in areas that already see high traffic loads. Other areas may be physically challenging to install and operate air monitors in; even in hybrid monitoring configurations, an additional server (and perhaps out-of-band network interfaces) may be needed to manage the air monitors and collect data from them.

Data from the monitoring system should then be integrated and collated with other security information to provide security analysts with the total picture. Most Wi-Fi monitoring solutions will integrate with existing SIEM or security analytics capabilities, making this part of the job relatively straightforward.
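
One common integration path is simply to emit monitoring alerts as syslog messages that the SIEM already knows how to ingest. The following minimal Python sketch assumes a syslog-speaking collector at a placeholder hostname and port:

    import logging
    from logging.handlers import SysLogHandler

    logger = logging.getLogger("wids")
    logger.setLevel(logging.INFO)
    # Placeholder collector address; UDP syslog is the handler's default.
    logger.addHandler(SysLogHandler(address=("siem.example.internal", 514)))

    def alert(event_type, **fields):
        # Key=value pairs keep events easy for SIEM parsers to collate.
        detail = " ".join(f"{k}={v}" for k, v in fields.items())
        logger.warning("wids event=%s %s", event_type, detail)

    alert("rogue_ap_detected", bssid="aa:bb:cc:dd:ee:ff", ssid="FreeWiFi", rssi=-48)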

Wireless Intrusion Detection and Prevention Systems

Wireless intrusion detection systems (WIDSs) and wireless intrusion prevention systems (WIPSs) work to provide improved security over the wireless network they're part of. WIDS and WIPS technologies and products have continued to evolve, much as wired NIDS and NIPS have, and the wireless versions operate in similar fashion to their wired cousins in many ways. One key difference is that as WIDS and WIPS systems grow from integrated to overlay or hybrid monitoring, these systems will usually include a dedicated server that sits inline with the network segment containing the access points. These servers offer a wide range of capabilities, including the following:

  • Device management, for the WIPS and WIDS devices, additional sensors, the wireless LAN's access points, client devices, and perhaps other devices that might be part of the WLAN.
  • Attack discovery capabilities can vary widely across the different products and services in the WIDS and WIPS marketplace. Not all vendors are very forthcoming about what attack types their systems can detect or what their false positive and false negative rates might be. At a minimum, you probably need to ensure that systems being considered can detect rogue access points, denial-of-service attacks, MITM attacks, authentication bypass, and encryption cracking attempts.
  • Logical WLAN mapping, to provide enumeration of existing WLAN elements.
  • Data for security compliance reporting continues to demand more of the security systems we select and use. Some WIPS and WIDS systems provide powerful data capture and reporting capabilities; others are perhaps more basic.
  • Forensic data capture and reporting needs to be sufficient to meet your organization's needs for investigative follow-up on any kind of security incident. Again, your organization's particular compliance regimes will dictate this.
  • Defensive techniques used by WIPS systems also vary considerably as you compare WIPS products and services; they also change as vendors attempt to respond to the market's needs.
  • Performance, particularly in terms of the scale of WLANs that the overall system can manage, monitor, and secure successfully.

The combination of all of these factors does, of course, affect the total cost of ownership and operations that organizations see with different alternatives.

Monitoring and Analysis for Network Security

The good news is that virtually every network and systems device, software application, interface, and activity can be configured to generate signals that can reveal who is using it, how, and (to some extent) for what purposes. Until recently, this was also viewed as a bit of bad news: it takes communications bandwidth, processor time, and storage to generate, send, receive, organize, and store all of those signals, and even more compute power to analyze them to determine whether an attack has happened. A whole generation of security professionals, software engineers, and administrators worked to develop various data triage processes to keep from drowning in all of that data. Since much of this data was kept in log files on clients, servers, and network devices, this came to be known as the log management problem. But with the arrival of smarter, more affordable data analytics capabilities, and with the continual decrease in storage and compute costs, gathering more data and doing smarter analysis of it became more cost-effective. And as ransomware attacks, and their related data breaches, became more crippling to modern organizations (private and public alike), the cost-benefit balance tipped in favor of doing a better job of detecting the attack.

Performing this set of due-diligence tasks is, at least in outline, a relatively straightforward process, as shown here (a brief detection sketch follows the list):

  • Identify candidate IoCs and other conditions that must be detected and characterized, with high confidence.
  • Determine the various indicators, signals, and supporting data that relate to those IoCs and other alarm-worthy conditions.
  • Identify the elements in your IT and OT systems inventory that produce those indicators or that are likely to be on a precursor chain of events that could lead to them.
  • Identify instrumentation needed, such as logs, software agents, or even special-purpose security monitoring devices, to gather this data.
  • Determine the placement and operating conditions for these instruments.
  • Provide the right mix of in-band and out-of-band communications capabilities to bring that data from where it's being generated to where it will be stored, managed, and analyzed.
  • Select, install, and use a set of security information analysis and management tools to meet both your real-time and retrospective analysis, detection, and warning needs.
  • Turn on the agents, logging functions, and collection system, and start paying attention to it.
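
To make the detection step of this process concrete, here is a minimal Python sketch that scans an authentication log for a burst of failed logins from one source, a simple candidate indicator of compromise (IoC). The log path, line format, and threshold are illustrative assumptions:

    import re
    from collections import Counter

    FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")
    THRESHOLD = 10   # failures per scan of the log considered alarm-worthy

    def scan_auth_log(path="/var/log/auth.log"):
        """Count failed-login lines per source IP; return the heavy hitters."""
        counts = Counter()
        with open(path, errors="ignore") as log:
            for line in log:
                match = FAILED.search(line)
                if match:
                    counts[match.group(1)] += 1
        return {ip: n for ip, n in counts.items() if n >= THRESHOLD}

    for ip, n in scan_auth_log().items():
        print(f"IoC candidate: {n} failed logins from {ip}")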

This raises the question of where these different functions should be placed or hosted on your enterprise's networks. At its simplest, this is a choice between:

  • Placing the data capture and control functions inline, directly between the likely entry path for the threat and the asset or resource being protected
  • Placing the data capture and control functions centrally, such as on a main pathway within the architecture

In some respects, this is similar to whether an intrusion detection and prevention system is host based or network based: the network-based version can see everything that is trying to flow past it and take actions to prevent that traffic from continuing if required. Centrally located, host-based services can protect that host (they are, of course, in line with it, to a large extent), but they may have restricted visibility into or control over traffic elsewhere in the network. Much of the answer depends on the nature of your existing network architecture and its use of techniques such as network segmentation to keep different security domains, each operating with specific security classification and categorization restrictions, separate from each other. The types of access control approaches being used are also part of this puzzle. Certainly, the shift to managed security services and more powerful enterprise-wide security information and event management capabilities (such as SIEM products and services) has changed the price/performance thinking. This means that having sensors where the data is generated (or the suspect traffic is most visible), with centralized or semi-distributed collection, collation, management, and analysis capabilities, now makes better sense than it once did.

We'll look at this more closely in Chapter 12, after these other major topics have been explored in the intervening chapters.

A SOC Is Not a NOC

Your organization or business may already have a network operations center (NOC); this could be either a physically separate facility or a work area within the IT support team's workspaces. NOCs perform valuable roles in maintaining the day-to-day operation of the network infrastructure; in conjunction with the IT support help desk, they investigate problems that users report, and respond to service requests to install new systems, configure network access for new users, or ensure updates to servers and server-based applications get done correctly. You might say that the NOC focuses on getting the network to work, keeping it working, and modifying and maintaining it to meet changing organizational needs.

The security operations center (SOC) has an entirely different focus. The SOC focuses on deterring, preventing, detecting, and responding to network security events. The SOC provides real-time command and control of all network-related monitoring activities, and it can use its device and systems management tools to drill down further into device, subsystem, server, or other data as part of its efforts to recognize, characterize, and contain an incident. It integrates all network-security related activities and information so as to make informed, timely, and effective decisions that ensure ongoing systems reliability, availability, and security. The SOC keeps organizational management and leadership apprised of developing and ongoing information security incidents and can notify local law enforcement or other emergency responders as required. Let's look more closely at this important set of tasks we're chartering our SOC to perform:

  • Real-time command and control: The SOC has to be able to “reach out and touch” any element of the organization's network infrastructure, be that element part of the people, hardware, or software parts of the infrastructure. Within the span of assigned information security duties, the SOC has to be able to tell people, hardware, and software to take specific actions, or to stop taking certain actions; to report additional information; or to execute preplanned contingency actions.
  • Management tools: Systems ranging from people-facing communications tools such as phones, pagers, and email, through ICMP-based status checks, on up to integrated security information and event management systems (SIEMs), are the heavy lifters of the SOC. They provide the SOC with the means to gather information, request additional information, ask for a set of diagnostic steps to be performed, or invoke analysis tools to review data already on hand at the SOC. Management tools should provide a real-time status—a state and health display of each element of the network infrastructure. (A brief status-poll sketch follows this list.)
  • Recognize, characterize, and contain: These are the most urgent and time-critical tasks that a SOC must perform. (Once contained, disaster recovery or business continuity efforts will probably take command of the incident and direct company assets and people in the recovery tasks.)
  • Integrated: The SOC has to bring everything together so that the SOC team and their systems have the best, most complete awareness possible of the organization's information infrastructure.
  • Keep management informed: Organizational policy and procedure should clearly spell out what decisions the SOC team can make in real time and which need to have senior leadership or management participate in or direct the decision. Leadership and management must also be kept informed, since they may have to engage with other organizational units, external stakeholders, or legal authorities in order to fulfill due diligence and reporting responsibilities.
  • Notify and request support from local emergency responders: The SOC's first priority of course is safety of life, and in some cases, an information security event may have the potential of involving risk to lives and property on site or nearby.
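
As a tiny taste of the ICMP-style status checks mentioned above, the following Python sketch polls a few managed elements and reports up/down status; the hostnames are placeholders, and the ping flags shown are the Linux variants:

    import subprocess

    DEVICES = ["core-router-1", "dmz-firewall", "wlan-controller"]  # placeholders

    def is_up(host):
        # One ICMP echo request with a two-second timeout (Linux flags).
        result = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                                capture_output=True)
        return result.returncode == 0

    for host in DEVICES:
        print(f"{host}: {'UP' if is_up(host) else 'DOWN -- investigate'}")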

From this brief look at the functions of a SOC, you can see that security operations has its own unique patterns of work—its own workflows—that SOC team members need to perform on a regular and as-needed basis. These are similar to what the network operations team would use, but have a number of points where they must differ. As security functions need to be made more accountable, transparent, and auditable, these differences in NOC vs. SOC activities can become more pronounced. It's important to note that a separate, dedicated, fully staffed, and fully equipped SOC can be difficult, expensive, and time-consuming to set up and get operating; it will continue to be a nontrivial cost to the organization. The organization should build a very strong business case to set up such a separate SOC (or ISOC, information security operations center, to distinguish it from a physical or overall security operations center). Such a business case may be called for to protect highly sensitive data, or if law, government regulation, or industry rules dictate it. If that is the case, one hopes that the business impact analysis (BIA) provides supporting analysis and recommendations!

Smaller organizations quite often combine the functions of NOC and SOC into the same (smaller) set of people, workspaces, systems, and tools. There is nothing wrong with such an approach—but again, the business case, supported by the BIA, needs to make the case to support this decision.

Tools for the SOC and the NOC

It doesn't take a hard-nosed budget analyst to realize that many of the tools the NOC needs to configure, manage, and maintain the network can also address the SOC's needs to recognize, characterize, and contain a possible intrusion. These tools span the range of physical, logical, and administrative controls. For example:

  • Administrative network management starts with understanding the organization's needs, translating that into design, and then managing the build-out of the network itself. Network design tools, including network simulation and modeling suites, can help designers focus on data, control, or management issues separately; view specific network usage scenarios; or evaluate proposed changes, all without having to disturb the current operational network infrastructure.
  • Physical controls can include the placement of security devices, such as firewalls, proxies or gateways, or the segmentation of the network into more manageable subnetworks that are easier to defend. Physical design of the network can also be a powerful ingredient in isolating and containing the damage from an intruder, an accident, or an act of nature. Don't forget to ensure that these physical devices are also physically protected from the range of threats indicated by your vulnerability analysis.
  • Logical network management translates the administrative and physical design characteristics into the actual software and data configuration that brings the network to life.

Combinations of these three control (and management) strategies can also support both the SOC and the NOC:

  • Traffic management and load management systems, which can be hardware, software, or both, provide valuable insight about normal and abnormal network usage. This can help in determining whether congestion is caused by design flaws, legitimate changes in operational patterns of usage, component or subsystem failures, or hostile action (a brief baseline-check sketch follows this list).
  • Network-based security devices, such as NIDSs and NIPSs, as well as network management systems and tools, help enforce network management policy decisions or generate warnings or alarms for out-of-limits or suspicious activity, and they can participate in incident characterization, containment, and recovery.
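
The following minimal Python sketch illustrates the kind of "normal versus abnormal" judgment such systems automate: flag a traffic sample that sits well above a rolling baseline. The sample values and the three-sigma threshold are illustrative assumptions:

    from statistics import mean, stdev

    baseline_mbps = [40, 42, 38, 45, 41, 39, 43, 44, 40, 42]   # recent history
    sample_mbps = 97                                           # current reading

    mu, sigma = mean(baseline_mbps), stdev(baseline_mbps)
    if sigma and (sample_mbps - mu) / sigma > 3:
        print(f"out-of-limits: {sample_mbps} Mbps vs. baseline {mu:.1f}±{sigma:.1f}")
        # ...and raise an alarm for the NOC/SOC to characterize: component
        # failure, legitimate usage shift, or hostile action such as a DDoS
        # or an exfiltration in progress.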

Integrating Network and Security Management

Chapter 3, “Integrated Information Risk Management,” stressed the need for integrated command and control of your company's information systems security efforts; we see this in the definition of the SOC as well. So what is the secret sauce, the key ingredient that brings all of these very different concerns, issues, talents, capabilities, functions, hardware, software, data, and physical systems together and integrates them?

System vendors are quick to offer products that claim to provide “integrated” solutions. Some of these systems, especially in the security information and event management (SIEM) marketplace, go a long way toward bringing together the many elements of a geographically dispersed, complex network infrastructure. In many cases, such SIEM platforms require significant effort to tailor to your organization's existing networks and security policies. As your team gains experience using them, you may see a vicious circle of learning take place: you learn more about security issues and problems, which takes even more effort to get your systems configured to respond to what you've just learned, which surfaces more residual issues, which…

You'll also have the chance for a virtuous circle of learning, in which experience teaches you stronger, more efficient approaches to meet your constantly evolving CIANA needs. SIEM as an approach, management philosophy, and as a set of software and data tools can help in this regard.

The key ingredient remains the people plane, the set of information security and network technology people that your organization has hired, trained, and invested in to make NOC-like and SOC-like functions serve and protect the needs of the organization.

Summary

Since the Internet has become the de facto standard for e-commerce, e-business, and e-government, it should be no surprise that as SSCPs, we need to understand and appreciate what makes the Internet work and what keeps it working reliably and securely. By using the OSI 7-layer reference model as our roadmap, we've reaffirmed our understanding of the protocol stacks that are the theory and the practice of the Internet. We've ground lots of those details under our fingernails as we've dug into how those protocols work to move data, control that data flow, and manage the networks, all at the same time. This foundation paves our way to Chapter 6, where we'll dive deep into identity management and access control.

We've seen how three basic conceptual models—the TCP/IP protocol stack, the OSI 7-layer reference model, and the idea of the data, control, and management planes—are both powerful tools for thinking about networks and real, physical design features that make most of the products and systems we build our networks with actually work. In doing so, we've also had a round-up review of many of the classical and current threat vectors or attacks that intruders often use against every layer of our network-based business or organization and its mission.

We have not delved deep into specific protocols, nor into the details of how those protocols can be hacked and corrupted as part of an attack. But we've laid the foundations you can use to continue to learn those next layers down as you take on more of the role of a network defender. But that, as we say, is a course beyond the scope of this book or the SSCP exam itself, so we'll have to leave it for another day.

Exam Essentials

  • Explain the relationship between the TCP/IP protocol and the OSI 7-layer reference model.  Both the TCP/IP protocol, established by the Internet Engineering Task Force, and the OSI reference model, developed by the International Organization for Standardization (ISO), lay out the fundamental concepts for networking and the details of how it all comes together. Both use a layers-of-abstractions approach, and to a large degree, their first four layers (Physical, Data Link, Network, and Transport) are nearly identical. TCP/IP stops there; the OSI reference model goes on to define the Session, Presentation, and Application layers. Each layer establishes a set of services, delivered by other protocols, which perform functions that logically relate to that layer—however, a number of important capabilities must be cross-layer in design to work effectively. TCP/IP is often thought of as the designer's and builder's choice for hardware and network systems, as a bottom-up set of standards (from Physical on up to Transport). The OSI reference model provides a more cohesive framework for analyzing and designing the total information flow that gets user-needed purposes implemented and carried out. SSCPs need to be fluent in both.

    Explain why IPv6 is not directly compatible with IPv4.  Users of IPv4 encountered a growing number of problems as the Internet saw a many-fold increase in number of attached devices, users, and uses. First was IPv4's limited address space, which needed the somewhat cumbersome use of Network Address Translation (NAT) as a workaround. The lack of built-in security capabilities was making far too many systems far too vulnerable to attack. IPv4 also lacked built-in quality of service features. IPv6 resolves these and a number of other issues, but it is essentially a completely different network. The two versions' packet structures are simply not compatible with each other—you need to provide a gateway-like function to translate IPv4 packet streams into IPv6 ones, and vice versa. Using both systems requires one of several alternative approaches: tunneling, “dual-stack” simultaneous use, address and packet translation, or Application layer gateways. As of 2018, many large systems operators run both in parallel, employ tunneling approaches (to package one protocol inside the other, packet by packet), or look to Application layer gateways as part of their transition strategy.

    Compare and contrast the basic network topologies.  A network topology is the shape or pattern of the way nodes on the network are connected with each other. The basic topologies are point-to-point, bus, ring, star, and mesh; larger networks, including the world-spanning Internet, are simply repeated combinations of these smaller elements. A bus connects a series of devices or nodes in a line and lets each node choose whether or not it will read or write traffic to the bus. A ring connects a set of nodes in a loop, with each node receiving a packet and either passing it on to the other side of the ring or keeping it if it's addressed to the node. Meshes provide multiple bidirectional connections between most or all nodes in the network. Each topology's characteristics offer advantages and risks to the network users of that topology, such as whether a node or link failure causes the entire network to be inoperable, or whether one node must take on management functions for the others in its topology. Mesh systems, for example, can support load leveling and alternate routing of traffic across the mesh; star networks do load leveling, but not alternate routing. Rings and point-to-point cannot operate if all nodes and connections aren't functioning properly; bus systems can tolerate the failure of one or more nodes but not of the backplane or system of interconnections. Note that the beauty of TCP/IP and the OSI 7-layer reference model as layers of abstraction is that they enable us to use these topologies at any layer, or even across multiple layers, as we design systems or investigate issues with their operation and performance.

    Explain the different network roles of peer, client, and server.  Each node on a network interacts with other nodes on the network, and in doing so they provide services to each other. All such interactions are governed by or facilitated by the use of handshake protocols. If two interconnected nodes have essentially equal roles in those handshakes—one node does not control the other or have more control over the conversation—then each node is a peer, or equal, of the other. Simple peer-to-peer service provision models are used for file, printer, or other device sharing, and they are quite common. When the service being provided requires more control and management, or the enforcement of greater security measures (such as identity authentication or access control), then the relationship is more appropriately a client-server relationship. Here, the requesting client node has to make a request to the server node (the one providing the requested services); the server has to recognize the request, permit it to proceed, perform the service, and then manage the termination of the service request. Note that even in simple file or print sharing, the sharing may be peer-to-peer, but the actual use of the shared resource almost always involves a service running on the node that possesses that file or printer, which carries out the sharing of the file or the printing of the requesting node's data.

    Explain how IPv4 addressing and subnetting works.  An IPv4 address is a 32-bit number, which is defined as four 8-bit portions, or octets. These addresses in human-readable form look like 192.168.2.11, with the four octets expressed as their base 10 values (or as two hexadecimal digits), separated by dots. In the packet headers, each IP address (for sender and recipient) occupies one 32-bit field. The address is defined to consist of two parts: the network address and the address of a node on that network. Large organizations (such as Google) might need tens of thousands of node addresses on their network; small organizations might only need a few. This has given rise to address classes: Class A uses the first octet for organization and the other three for node. Class B uses two octets each for organization and node. Class C uses three octets for organization and the fourth for node on the Internet; Classes D and E are reserved for special purposes. Subnetting allows an organization's network designers to break a network into segments by logically grouping addresses: the first group of devices in one subnet, the next group in another, and so on. This effectively breaks the node portion of the address into a subnet portion and a node-on-the-subnet portion. A subnet mask is a 32-bit number in four-octet IP address format, with 0s in the rightmost bit positions that indicate bits used to assign node numbers: 255.255.255.240 shows the last 4 bits are available, supporting 16 node addresses on each subnet. But since all networks reserve address 0 and “all bits on” for special purposes, that's really only 14 node addresses available on this subnet. Classless Inter-Domain Routing (CIDR) simplifies the subnetting process and the way we write it: that same subnet would be written as 192.168.2.0/28, where the /28 shows that 28 bits of the total address specify the network and subnet portion, which is equivalent to a mask of 255.255.255.240. (A short worked example follows.)
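
    You can check that arithmetic with Python's standard ipaddress module, as this short worked example shows:

        import ipaddress

        net = ipaddress.ip_network("192.168.2.0/28")
        print(net.netmask)               # 255.255.255.240 -- the /28 mask
        print(net.num_addresses)         # 16 addresses in the block...
        print(len(list(net.hosts())))    # ...but only 14 usable node addresses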

    Explain the differences between IPv4 and IPv6 approaches to subnetting.  IPv4's use of a 32-bit address field meant that you had to borrow bits from the address itself to designate a node on a subnet. IPv6 uses a much larger address field of 128 bits, which for global unicast packets is broken into a 48-bit network (global routing prefix) field, 16 bits for the subnet number, and 64 bits for the node address on that network segment. No more borrowing bits!

    Explain the role of port numbers in Internet use.  Using software-defined port numbers (from 0 to 65535) gives protocol designers additional control over the routing of service requests: the IP packets are routed by the network between sender and recipient, but adding a port number to a Transport layer or higher payload header ensures that the receiving system knows which set of services to connect (route) that payload to. Standardized port number assignments make application design simpler; thus, port 25 for email, port 80 for HTTP, and so on. Ports can be and often are remapped by the protocol stacks for security and performance reasons; sender and recipient need to ensure that any such mapping is consistent, or connections to services cannot take place. (A minimal connection sketch follows.)
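
    As a minimal illustration of a port steering a connection to a service, this Python sketch opens a TCP connection to port 80 (HTTP) on the reserved example hostname www.example.com and reads the server's status line:

        import socket

        with socket.create_connection(("www.example.com", 80), timeout=5) as s:
            s.sendall(b"HEAD / HTTP/1.1\r\nHost: www.example.com\r\n\r\n")
            print(s.recv(200).decode(errors="ignore"))   # the server's status line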

    Describe the man-in-the-middle attack, its impacts, and applicable countermeasures.  In general terms, the man-in-the-middle (MITM) attack can happen when a third party places themselves between the two nodes and either inserts their own false traffic or modifies traffic being exchanged between the two nodes, in order to fool one or both nodes into mistaking the third party for the other (legitimate) node. This can lead to falsified data entering company communications and files, the unauthorized disclosure of confidential information, or disruption of services and business processes. Protection at every layer of the protocol stack can reduce or eliminate the exposure to MITM attacks. Strong Wi-Fi encryption, well-configured and enforced identity management and access control, and use of secure protocols as much as possible are all important parts of a countermeasure strategy.

    Describe cache poisoning and applicable countermeasures.  Every node in the network maintains a local memory or cache of address information (MAC addresses, IP addresses, URLs, etc.) to speed up communications—it takes far less time and effort to look it up in a local cache than it does to re-ask other nodes on the network to re-resolve an address, for example. Cache poisoning attacks attempt to replace legitimate information in a device cache with information that could redirect traffic to an attacker, or fool other elements of the system into mistaking an attacker for an otherwise legitimate node. This sets the system up for a man-in-the-middle attack, for example. Two favorite targets of attackers are ARP and DNS caches. A wide variety of countermeasure techniques and software tools are available; in essence, they boil down to protecting and controlling the server and using allowed listing and blocked listing techniques, but these tend not to be well suited for networks undergoing rapid growth or change.

    Explain the need for IPSec, and briefly describe its key components.  The original design of the Internet assumed that nodes connecting to the net were trustworthy; any security provisions had to be provided by user-level processes or procedures. For the 1960s, this was reasonable; by the 1980s, this was no longer acceptable. Multiple approaches, such as access control and encryption techniques, were being developed, but these did not lead to a comprehensive Internet security solution. By the early 1990s, IPSec was created to provide an open and extensible architecture that consists of a number of protocols and features used to provide greater levels of message confidentiality, integrity, authentication, and nonrepudiation protection. It does this first by creating security associations, which are sets of protocols, services, and data that provide encryption key management and distribution services. Then, using the IP Security Authentication Header (AH), it establishes secure, connectionless integrity. The Encapsulating Security Payloads (ESP) protocol uses these to provide confidentiality, connectionless integrity, and anti-replay protection, and authenticates the originator of the data (thus providing a degree of nonrepudiation).

    Explain how physical placement of security devices affects overall network information security.  Physical device placement of security components determines the way network traffic at Layer 1 can be scanned, filtered, blocked, modified, or allowed to pass unchanged. It also directly affects what traffic can be monitored by the security system as a whole. For wired and fiber connections, devices can be placed inline—that is, on the connection from a secured to a non-secured environment. All traffic therefore flows through the security device. Placement of the device in a central segment of the network (or anywhere else) not only limits its direct ability to inspect and control traffic as it attempts to flow through, but may also limit how well it can handle or inspect traffic for various subnets in your overall LAN. This is similar to host-based versus LAN-based antimalware protection. Actual placement decisions need to be made based on security requirements, risk tolerance, affordability, and operability considerations.

    Describe the key security challenges with wireless systems and control strategies to use to limit their risk.  Wireless data communication currently comes in three basic sets of capabilities: Wi-Fi, Bluetooth, and near-field communication (NFC). All share some common vulnerabilities. First, a wireless device of any type must make a connection to some type of access point, and then be granted access to your network, before it can affect your own system's security. Second, they can be vulnerable to spoofing attacks in which a hostile wireless device acts as a man-in-the-middle to create a fake access point or directly attacks other users' wireless devices. Third, the wireless device itself is very vulnerable to loss or theft, allowing attackers to exploit everything stored on the device. Mobile device management (MDM) solutions can help in many of these regards, as can effective use of identity management and access control to restrict access to authorized users and devices only.

    Explain the use of the concept of data, control, and management planes in network security.  All networks exist to move data from node to node; this requires a control function to handle routing, error recovery, and so forth, as well as an overall network management function that monitors the status, state, and health of network devices and the system as a whole. Management functions can direct devices in the network to change their operational characteristics, isolate them from some or all of the network, or take other maintenance actions on them. These three sets of functions can easily be visualized as three map overlays, which you can place over the diagram of the network devices themselves. Each plane (or overlay) provides a way to focus design, operation, troubleshooting, incident detection, containment, and recovery in ways best suited to the task at hand. This is not just a logical set of ideas—physical devices on our networks, and the software and firmware that run them, are built with this concept in mind.

    Describe the role that network traffic shaping and load balancing can play in information security.  Traffic shaping and load balancing systems attempt to look at network traffic (and the connections it wants to make to systems resources) and avoid overloading one set of links or resources while leaving others unused or under-utilized. They may use static parameters, preset by systems administrators, or dynamically compute the parameters they need to accomplish their tasks. Traffic shaping is primarily a bandwidth management approach, allocating more bandwidth for higher-priority traffic. Load balancing tries to spread workloads across multiple servers. This trending and current monitoring information could be useful in detecting anomalous system usage, such as a distributed denial-of-service attack or a data exfiltration taking place. It may also provide a statistical basis for what is “normal” and what is “abnormal” loading on the system, as another indication of a potential security event of interest in the making. Such systems can generate alarms for out-of-limits conditions, which may also be useful indicators of something going wrong.

    Explain the two different security concerns regarding DNS and the countermeasures to deploy to mitigate their risk.   First, the DNS itself as an infrastructure can be abused by attackers, who can use it to create, in effect, their own command and control architecture with which they can direct subsequent attack activities on a wide variety of target systems. This transforms a trustworthy infrastructure into one of increasing risk to users. Mitigating this risk requires more widespread implementation of DNS Security Extensions (DNSSEC) by Internet service providers (ISPs), the operators of the Internet backbone and DNS services, and by end user organizations alike. Second, attackers can misuse DNS capabilities to misdirect user queries (via spoofing and other techniques), which can result in the download of malware or other payloads for the attacker to use. User organizations can mitigate this risk with a combination of approaches, including more effective filtering by firewalls, such as increased deep inspection of DNS-related traffic (into and out of the organization), more effective blocked/allowed list management, and other techniques.

    Explain the relationship between data loss prevention and network security.   Data loss prevention (DLP) seeks to identify suspicious movements of data within the organization's infrastructure, both laterally (east-west) and across its outer perimeter (northbound into the Internet, southbound into the organization or into deeper security domains within the infrastructure). Such movements may be attempts by attackers to take high-value data sets, fragment them, encrypt them, and then exfiltrate them for later exploitation. From a network security perspective, this requires all the techniques of intrusion detection and prevention, access control, traffic control, and network and systems monitoring and analysis. In the worst case, the sophisticated DLP attack is comparable to building a TOR-like anonymizing virtual network within the target enterprise's infrastructure, masking both the sources and the destinations of the data, the data itself (via encryption), and the routing of the data to its ultimate destination.

    Explain what a zombie botnet is, how to prevent your systems from becoming part of one, and how to prevent being attacked by one.   A zombie botnet is a collection of computers that have had malware payloads installed that allow each individual computer to function as part of a large, remotely controlled collective system. (The name suggests that neither the system's owner nor its operating system and applications are aware that the system can be enslaved by its remote controller.) Zombie botnets typically do not harm the individual zombie systems themselves, which are used as part of a massively parallel cycle-stealing computation, as a DDoS attack platform, or as part of a distributed, large-scale target reconnaissance effort. Reasonable and prudent measures to prevent your systems from becoming part of a zombie botnet include stronger access control, prevention of unauthorized downloading and installation of software, and using effective, up-to-date antimalware or antivirus systems.

    Explain what a DMZ is and its role in systems security.   From a network security perspective, the demilitarized zone (DMZ) is that subset of organizational systems that are not within the protected or bastion systems perimeter. Systems or servers within the DMZ are thus exposed to larger, untrusted networks, typically the entire Internet. Public-facing Web servers, for example, sit in the DMZ and do not require each Web user to have their identity authenticated in order to access their content. Data flows between systems in the DMZ and those within the protected bastion must be carefully constructed and managed to prevent covert paths (connections into the secure systems that are not detected or prevented by access controls) or the exfiltration of data that should not go out into the DMZ and beyond.

Review Questions

  1. When comparing the TCP/IP and OSI 7-layer reference model as sets of protocols, which statement is most correct?
    1. Network hardware and systems are actually built on TCP/IP, whereas the OSI reference model provides only concepts and theories.
    2. TCP/IP provides only concepts and theories, whereas network hardware and systems are actually built using the OSI reference model.
    3. Both sets of protocols provide theories and concepts, but real hardware is built around the data, control, and management planes.
    4. Hardware and systems are built using both models, and both models are vital to threat assessment and network security.
  2. Is IPv6 backward compatible with IPv4?
    1. No, because the differences in addressing, packet header structure, and other features would not allow an IPv4 packet to successfully travel on an IPv6 network.
    2. No, because IPv4 packets cannot meet the new security considerations built into IPv6.
    3. Yes, because IPv6 has services built into the protocol stacks to convert IPv4 packets into IPv6-compatible structures.
    4. Yes, because the transport and routing protocols are the same.
  3. Which basic network topology best describes the Internet?
    1. Star
    2. Mesh
    3. Ring
    4. Bus
  4. Which relationship between nodes provides the greatest degree of control over service delivery?
    1. VPN tunnel
    2. Peer-to-peer
    3. Client-server
    4. Peer-to-server
  5. Which statement about subnetting is correct?
    1. Subnetting applies only to IPv4 networks, unless you are using Classless Inter-Domain Routing (CIDR).
    2. Both IPv4 and IPv6 provide for subnetting, but the much larger IPv6 address field makes this a lot simpler to design and manage.
    3. Subnetting in IPv4 involves the CIDR protocol, which runs at Layer 3; in IPv6, this protocol, and hence subnetting, is not used.
    4. Because the subnet mask field is so much larger in IPv6, it is easier to subnet in this newer protocol stack than in IPv4.
  6. Which of the following transmission media presents the greatest security challenges for a network administrator?
    1. Twisted-pair wiring
    2. Fiber optic
    3. Radio frequency wireless
    4. Light waves, either infrared or visible, but not in a fiber
  7. Which statement (or statements) about ports and the Internet is/are not correct? (Choose all that apply.)
    1. Using port numbers as part of addressing and routing was necessary during the early days of the Internet, largely because of the small size of the address field, but IPv6 makes most port usage obsolete.
    2. Standard ports are defined for a number of protocols, and these ports allow sender and receiver to establish connectivity for specific services.
    3. Standardized port assignments cannot be changed or things won't work right, but they can be mapped to other port numbers by the protocol stacks on the sender's and recipient's systems.
    4. Many modern devices, such as those using Android, cannot support ports, and so apps have to be redesigned to use alternate service connection strategies.
  8. Which of the following statements about man-in-the-middle (MITM) attacks is most correct?
    1. Session stealing attacks are not MITM attacks.
    2. MITM attacks can occur at any layer and against connectionless or connection-oriented protocols.
    3. This basic attack strategy can be used at any layer of the protocols where there is connection-oriented, stateful communication between nodes.
  9. Which statement about cache poisoning is most correct?
    1. The cache on a user's local machine is immune from being poisoned by an attacker.
    2. Privately maintained DNS servers are the most lucrative targets of attackers, and thus the best strategy is to use commercial DNS service providers with proven security and reliability records.
    3. Almost every device on the network, from a smartphone or laptop on up, has address and DNS cache on it; these can be poisoned in a variety of ways, exposing the user and the network to various attacks.
    4. Cache poisoning can be prevented by encrypting the cache.
  10. What happens to datagrams as they are passed through the protocol stack from the Data Link layer to the Transport layer?
    1. They get shorter as the headers and footers are removed as the datagrams move from one layer to the next.
    2. They get longer as more header and footer information is wrapped around the datagram.
    3. They get converted from character or graphic information and formatting into byte formats.
    4. If an encryption protocol is being used, they get encrypted.
  11. At which layer of the OSI protocol stack does IPSec function?
    1. Layer 2
    2. Layer 3
    3. Layer 4
    4. Layer 5
  12. You're trying to diagnose why a system is not connecting to the Internet. You've been able to find out that your system's IP address is 169.254.0.0. Which of the following statements correctly suggests the next best step?
    1. It sounds like you've got a corrupted local DNS cache, which you should flush and then reset the connection.
    2. Try connecting via another browser.
    3. Check the DHCP server on your LAN to see if it's functioning correctly.
    4. Check to see if any router and modem between your system and your ISP are functioning correctly; you may need to do a hardware (cold) reset of them.
  13. Your IT team has a limited budget for intrusion detection and prevention systems and wants to start with a central server and a small number of remote IDS/IPS devices. Your team lead asks you where you think the remote devices should go. Which answer would you suggest?
    1. Place them in the datacenter on the key access paths to its switch fabric.
    2. Place them on the links between your ISP's point of presence and your internal systems.
    3. Identify the links between high-risk internal systems (such as software development) and mission-critical systems (such as customer order processing, manufacturing control, or finance), and put them on the links between those systems.
    4. The central server is a good start, and you can save even more money by skipping the remote devices for right now.
  14. Which measures would you recommend be used to reduce the security risks of allowing Wi-Fi, Bluetooth, and NFC devices to be used to access your company's networks and information systems? (Choose all that apply.)
    1. MDM systems
    2. Effective access control and identity management, including device-level control
    3. Because the Physical layer is wireless, there is no need to protect anything at this layer.
    4. Allowed listing of authorized devices
  15. You've been asked to investigate a possible intrusion on your company's networks. Which set of protocols or design concepts would you find most valuable, and why? Choose the most correct statement.
    1. Start with the TCP/IP protocol stack; you don't need anything else.
    2. The OSI 7-layer reference model may help you understand the nature of the intrusion to a layer or set of layers; next, you can use the TCP/IP protocol to help investigate the details with a protocol analyzer.
    3. The data, control, and management planes aren't going to be useful to you now; they're only a high-level design concept.
    4. You'll most likely need TCP/IP, the OSI 7-layer reference model, and the data, control, and management diagrams and information about your company's networks to fully understand and contain this incident.
  16. What can traffic shaping, traffic management, or load balancing systems do to help identify or solve information security problems? (Choose all that apply.)
    1. Nothing, since they work autonomously to accomplish their assigned functions.
    2. Log data they generate and keep during operation may provide some useful insight after an incident, but nothing in real time would be helpful.
    3. Such tools usually can generate alarms on out-of-limits conditions, which may be indicative of a system or component failure or an attack or intrusion in progress.
    4. Given sufficient historical data, such systems may help network administrators see that greater-than-normal systems usage is occurring, which may be worthy of closer attention or investigation.
  17. What is the risk of leaving the default settings on the access control lists in routers or firewalls?
    1. Since the defaults tend to allow any device, any protocol, any port, any time, you risk leaving yourself wide open to any attacker or reconnaissance probes. Thus, the risk is very great.
    2. The default settings tend to have everything locked down tightly until the network administrator deliberately opens up apps, time periods, or ports to access and use. Thus, the risk is very low.
    3. Although the default settings leave everything wide open, the normal access control and identity management you have in place on systems, servers, and other resources is all that you need; the risk is very low.
    4. As long as you've changed the administrator login ID and password on the device, you have nothing to worry about.
  18. Which of the following is the best form of Wi-Fi security to use today?
    1. WEP
    2. WPA
    3. WPA TKIP
    4. WPA2
  19. Your team chief is worried about all of those Bluetooth devices being used at the office; she's heard they are not very secure and could be putting the company's information and systems at great risk. How might you respond?
    1. Even with a maximum range of 10 meters (30 feet), you shouldn't have to worry about eavesdroppers or hackers out in the parking lot. Look to how you control visitor access instead.
    2. Bluetooth devices don't have a lot of bandwidth, so it's very unlikely that they present a data exfiltration or an intrusion threat.
    3. The biggest threat you might face is that Bluetooth on most of your staff's smartphones is probably not secure; talk with your MDM service provider and see if they can help reduce that exposure.
    4. You're right, chief! Bluephishing is fast becoming a social engineering threat, and you need to figure out a strategy to deal with it.
  20. Which of the following statements about a NOC and a SOC is correct? (Choose all that apply.)
    1. Both perform essentially the same functions.
    2. With the increased emphasis on security, senior managers and stakeholders may feel that not having a security operations center is not taking the risks seriously enough.
    3. The focus of a NOC is different than that of a SOC.
    4. It's usually a mistake to try to overload the NOC with the security functions the SOC has to take on.