How do we build trust and confidence into the globe-spanning communications that our businesses, our fortunes, and our very lives depend on? Whether by in-person conversation, videoconferencing, or the World Wide Web, people and businesses communicate. Communications, as we saw in earlier chapters, involves exchanging ideas to achieve a common pool of understanding—it is not just about data or information. Effective communication requires three basic ingredients: a system of symbols and protocols, a medium or a channel in which those protocols exchange symbols on behalf of senders and receivers, and trust. Not that we always trust every communications process 100%, nor do we need to!
We also have to grapple with the convergence of communications and computing technologies. People, their devices, and their ways of doing business no longer accept old-fashioned boundaries that used to exist between voice, video, TXT and SMS, data, or a myriad of other computer-enabled information services. This convergence transforms what we trust when we communicate and how we achieve that trust. As SSCPs, we need to know how to gauge the trustworthiness of a particular communications system, keep it operating at the required level of trust, and improve that trustworthiness if that's what our stakeholders need. Let's look in more detail at how communications security can be achieved and, based on that, get into the details of securing the network-based elements of our communications systems.
To do this, we'll need to grow the CIA triad of earlier chapters—confidentiality, integrity, and availability—into a more comprehensive framework that adds four key ideas to our stack of security needs. This is just one way you'll start thinking in terms of protocol stacks—as system descriptors, as roadmaps for diagnosing problems, and as models of the threat and risk landscape.
It's useful to reflect a bit on the not-too-distant history of telecommunications, computing, and information security. Don't panic—we don't have to go all the way back to the invention of radio or the telegraph! Think back, though, to the times right after World War II and what the communications and information systems of that world were like. Competing private companies with competing technical approaches, and very different business models, often confounded users' needs to bring local communications systems into harmony with ones in another city, in another country, or on a distant continent. Radio and telephones didn't connect very well; mobile two-way radios and their landside systems were complex, temperamental, and expensive to operate. Computers didn't talk with each other, except via parcel post or courier delivery of magnetic tapes or boxes of punched cards. Mail was not electronic.
By the 1960s, however, many different factors were pushing each of the different communications technologies to somehow come together in ways that would provide greater capabilities, more flexibility, and growth potential, and at lower total cost of ownership. Communications satellites started to carry hundreds of voice-grade analog channels, or perhaps two or three broadcast-quality television signals. At the same time, military and commercial users needed better ways to secure the contents of messages, and even secure or obscure their routing (to defeat traffic analysis attacks). The computer industry centered on huge mainframe computers, which might cost a million dollars or more—and which sat idle many times each day, and especially over holiday weekends! Mobile communications users wanted two-way voice communication that didn't require suitcase-sized transceivers that filled the trunk of their cars.
Without going too far into the technical, economic, or political, what transformed all of these separate and distinct communications media into one world-spanning Web and Internet? In 1969, in close cooperation with these (and other) industries and academia, the U.S. Department of Defense Advanced Research Projects Agency started its ARPANet project. By some accounts, the scope of what it tried to achieve was audacious in the extreme. The result of ARPANet is all around us today, in the form of the Internet, cell phone technology, VOIP, streaming video, and everything we take for granted over the Web and the Internet. And so much more.
One simple idea illustrates the breadth and depth of this change. Before ARPANet, we all thought of communications in terms of calls we placed. We set up a circuit or a channel, had our conversation, then took the circuit down so that some other callers could use parts of it in their circuits. ARPANet's packet-based communications caused us all to forget about the channel, forget about the circuit, and focus on the messages themselves. (You'll see that this had both good and bad consequences for information security later in this chapter.)
One of the things we take for granted is the convergence of all of these technologies, and so many more, into what seems to us to be a seamless, cohesive, purposeful, reliable, and sometimes even secure communications infrastructure. The word convergence is used to sum up the technical, business, economic, political, social, and perceptual changes that brought so many different private businesses, public organizations, and international standards into a community of form, function, feature, and intent. What we sometimes ignore, to our peril, is how that convergence has drastically changed the ways in which SSCPs need to think about communications security, computing security, and information assurance.
Emblematic of this change might be Chester Gould's cartoon character Dick Tracy and his wristwatch two-way radio, first introduced to American readers in 1946. It's credited with inspiring the invention of the smartphone, and perhaps even the smartwatches that are all but taken for granted today. What Gould's character didn't explore for us were the information security needs of a police force whose detectives had such devices—nor the physical, logical, and administrative techniques they'd need to use to keep their communications safe, secure, confidential, and reliable.
To keep those and any other communications trustworthy, think about some key ingredients that we find in any communications system or process:
For example, a letter or holiday greeting might be printed or written on paper or a card, which is placed in an envelope and mailed to the recipient via a national or international postal system. Purpose, the communicating parties, the protocols, the content, and the medium all have to work together to convey “happy holidays,” “come home soon,” or “send lawyers, guns, and money” if the message is to get from sender to receiver with its meaning intact.
At the end of the day (or at the end of the call), both senders and receivers have two critical decisions to make: how much of what was communicated was trustworthy, and what if anything should they do as a result of that communication? The explicit content of what was exchanged has a bearing on these decisions, of course, but so does all of the subtext associated with the conversation. Subtext is about context: about “reading between the lines,” drawing inferences (or suggesting them) regarding what was not said by either party.
The risk that subtext can get it wrong is great! The “Hot Line” illustrates this potential for disaster. During the Cold War, the “Hot Line” communications system connected the U.S. national command authority and their counterparts in the Soviet Union. This system was created to reduce the risk of accidental misunderstandings that could lead to nuclear war between the two superpowers. Both parties insisted that this be a plain text teletype circuit, with messages simultaneously sent in English and Russian, to prevent either side from trying to read too much into the voice or mannerisms of translators and speakers at either end. People and organizations need to worry about getting the subtext wrong or missing it altogether. So far, as an SSCP, you won't have to worry about how to “secure the subtext.”
Communications security is about data in motion—as it is going to and from the endpoints and the other elements or nodes of our systems, such as servers. It's not about data at rest or data in use, per se. Chapter 8, “Hardware and Systems Security,” and Chapter 9, “Applications, Data, and Cloud Security,” will show you how to enhance the security of data at rest and in use, whether inside the system or at its endpoints. Chapter 11, “Business Continuity via Information Security and People Power,” will also look at how we keep the people layer of our systems communicating in effective, safe, secure, and reliable ways, both in their roles as users and managers of their company's IT infrastructures, but also as people performing their broader roles within the company or organization and its place in the market and in society at large.
Chapter 2, “Information Security Fundamentals,” introduced the concepts of confidentiality, integrity, and availability as the three main attributes or elements of information security and assurance. We also saw that before we can implement plans and programs to achieve that triad, we have to identify what information must be protected from disclosure (kept confidential), its meaning kept intact and correct (ensure its integrity), and that it's where we need it, when we need it (that is, the information is available). As we dig further into what information security entails, we'll have to add four additional and very important attributes to our CIA triad: nonrepudiation, authentication, privacy, and safety.
To repudiate something means to attempt to deny an action that you've performed or something you said. You can also attempt to deny that you ever received a particular message, or claim that you didn't see or notice an action someone else performed. In most cases, we repudiate our own actions or the actions of others so as to attempt to deny responsibility for them. “They didn't have my informed consent,” we might claim; “I never got that letter,” or “I didn't see the traffic light turn yellow.” Thus, nonrepudiation is the characteristic of a communications system that prevents a user from claiming that they never sent or never received a particular message. This communications system characteristic sets limits on what senders or receivers can do by restricting or preventing any attempt by either party to repudiate a message, its content, or its meaning.
Authentication, in this context, also pertains to senders and receivers. Authentication (or authenticity) is the verification that the sender or receiver is who they claim to be, and then the further validation that they have been granted permission to use that communications system. Authentication might also go further by validating that a particular sender has been granted the privilege of communicating with a particular recipient, regarding the content or intent of the message itself. These privileges—use of the system, and connection with a particular party—can also be defined with further restrictions, as we'll see later in Chapter 6, “Identity and Access Control.” Authentication as a process has one more “A” associated with it, and that is accountability. This requires that the system keep records of who attempts to access the system, who was authenticated to use it, and what communications or exchanges of messages they had with whom.
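A small sketch may help make message authentication concrete. The snippet below uses Python's standard `hmac` library to tag a message with a keyed hash so the receiver can verify it came from a holder of the shared key; the key and message here are invented for illustration. Note the caveat: because both parties hold the same key, an HMAC tag alone does not provide nonrepudiation (either party could have produced it); true nonrepudiation requires asymmetric digital signatures.

```python
import hmac
import hashlib

# Hypothetical shared key, agreed out of band by sender and receiver.
SHARED_KEY = b"example-shared-secret"

def tag_message(message: bytes) -> bytes:
    """Sender computes an authentication tag over the message."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify_message(message: bytes, tag: bytes) -> bool:
    """Receiver recomputes the tag; a mismatch means the message was
    altered, or was not sent by a holder of the shared key."""
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg = b"transfer 100 credits to account 42"
tag = tag_message(msg)
assert verify_message(msg, tag)                       # genuine message verifies
assert not verify_message(b"transfer 999 credits", tag)  # tampering is detected
```

The constant-time comparison (`hmac.compare_digest`) matters in practice: naive byte-by-byte comparison can leak timing information to an attacker probing forged tags.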
Adding safety to our security needs mnemonic reminds us that whether it is via operational technologies or not, far more modern IT systems can directly place people or property at risk of damage, injury, or death than many of us realize. We've already seen one death attributed to a ransomware attack (which crippled a hospital in Germany in 2020) and attempts to contaminate drinking water supplies in several countries. Cybercrime has also driven the need for greater awareness of how to protect the privacy of individuals. Initially, this was seen as protecting the ways in which personally identifying information (PII) was gathered, stored, used, and shared; increasingly, the need to protect a person's location and the data about what they are doing on the Internet is growing in urgency and importance. (Some analysts refer to this as “the death of the third-party cookie,” signifying the sea change in online tracking and advertising systems that we saw begin in 2021.)
Thus CIANA+PS: confidentiality, integrity, availability, nonrepudiation, authentication, privacy, and safety. As we'll see in this chapter, networks and their protocols provide significant support to the first five characteristics; safety and privacy are (so far) largely left to the care of applications, programs, and organizational practices that run on top of the Internet's protocol stack. As a result, throughout this chapter, we'll primarily refer to network security needs via CIANA and bring privacy and safety in where it makes sense.
Recall from earlier chapters that CIANA+PS crystallizes our understanding of what information needs what kinds of protection. Most businesses and organizations find that it takes several different but related thought processes to bring this all together in ways that their IT staff and information security team can appreciate and carry out. Several key sets of ideas directly relate to, or help set, the information classification guidelines that should drive the implementation of information risk reduction efforts:
The net result should be that the organization combines those four viewpoints into a cohesive and effective information risk management plan, which provides the foundation for “all things CIANA+PS” that the information security team needs to carry out. This drives the ways that SSCPs and others on that information security team conduct vulnerability assessments, choose mitigation techniques and controls, configure and operate them, and monitor them for effectiveness.
With that integrated picture of information security needs, it's time to do some threat modeling of our communications systems and processes. Chapter 4, “Operationalizing Risk Mitigation,” introduced the concepts of threat modeling and the use of boundaries or threat surfaces to segregate parts of our systems from each other and from the outside world. Let's take a quick review of the basics:
Note that this subject-object access can be bidirectional; there are security concerns in both reading and writing across a security boundary or threat surface. We'll save the theory and practice of that for Chapter 6.
The threat surface frames the problem from the defensive perspective: what do I need to protect and defend from attack? By contrast, threat modeling also defines the attack surface as the set of entities, information assets, features, or elements that are the focus of reconnaissance, intrusion, manipulation, and misuse, as part of an attack on an information system. Typically, attack surfaces are at the level of vendor-developed systems or applications; thus, Microsoft Office Pro 2021 is one attack surface, while Microsoft Office 365 Home is another. Other attack surfaces can be specific operating systems, or the hardware and firmware packages that are our network hardware elements. Even a network intrusion detection system (NIDS) can be an attack surface!
Applying these concepts to the total set of organizational communications processes and systems could be a daunting task for an SSCP. Let's peel that onion a layer at a time, though, by separating it into two major domains: that which runs on the internal computer networks and systems, and that which is really people-to-people in nature. We'll work with the people-to-people more closely in Chapter 11.
For now, let's combine this concept of threat modeling with the most commonly used sets of protocols, or protocol stacks, that we use in tying our computers, communications, and endpoints together.
As an SSCP, you'll need to focus your thinking about networks and security to one particular kind of networks—the ones that link together most of the computers and communications systems that businesses, governments, and people use. This is “the Internet,” capitalized as a proper name. It's almost everywhere; almost everybody uses it, somehow, in their day-to-day work or leisure pursuits. It is what the World Wide Web (also a proper noun) runs on. It's where we create most of the value of e-commerce, and where most of the information security threats expose people and business to loss or damage. This section will introduce the basic concepts of the Internet and its protocols; then, layer by layer, we'll look at more of their innermost secrets, their common vulnerabilities, and some potential countermeasures you might need to use. The OSI 7-layer reference model will be our framework and guide along the way, as it reveals some critical ideas about vulnerabilities and countermeasures you'll need to appreciate.
Communications and network systems designers talk about protocol stacks as the layers or nested sets of different protocols that work together to define and deliver a set of services to users. An individual protocol or layer defines the specific characteristics, the form, features, and functions that make up that protocol or layer. For example, almost since the first telephone services were made available to the public, the Bell Telephone Company in the U.S. defined a set of connection standards for basic voice-grade telephone service; today, one such standard is the RJ-11 physical and electrical connector for four-wire telephone services. The RJ-11 connection standard says nothing about dial tones, pulse (rotary dial) or Touch-Tone dual-tone multi-frequency (DTMF) signaling, or how connections are initiated, established, used, and then taken down as part of making a “telephone call” between parties. Other protocols define services at those layers. The “stack” starts with the lowest level, usually the physical interconnect standard, and layers each successively higher-level standard onto those below it. These higher-level standards can go on almost forever; think of how “reverse the charges,” advanced billing features, or many caller ID features need to depend on lower-level services being defined and working properly, and you've got the idea of a protocol stack.
This is an example of using layers of abstraction to build up complex and powerful systems from subsystems or components. Each component is abstracted, reducing it to just what happens at the interface—how you request services of it, provide inputs to it, and get services or outputs from it. What happens behind that curtain is (or should be) none of your concern, as the external service user. (The service builder has to fully specify how the service behaves internally so that it can fulfill what's required of it.) One important design imperative with stacks of protocols is to isolate the impact of changes; changes in physical transmission of signals should not affect the way applications work with their users, nor should adding a new application require a change in that physical media.
A protocol stack is a document—a set of ideas or design standards. Designers and builders implement the protocol stack into the right set of hardware, software, and procedural tasks (done by people or by other systems). These implementations present the features of the protocol stack as services that can be requested by subjects (people or software tasks).
First, let's introduce the concept of a datagram, which is a common term when talking about communications and network protocols. A datagram is the unit of information used by a protocol layer or a function within it. It's the unit of measure of information in each individual transfer. Each layer of the protocol stack takes the datagram it receives from the layers above it and repackages it as necessary to achieve the desired results. Sending a message via flashlights (or an Aldiss lamp, for those of the sea services) illustrates the datagram concept:
Note, however, another usage of this word. The User Datagram Protocol (UDP) is an alternative data communications protocol to the Transmission Control Protocol (TCP), and both of these operate at the same level of the TCP/IP stack, the Transport layer. And to add to the terminological confusion, the OSI model (as we'll see in a moment) uses protocol data unit (PDU) to refer to the unit of measure of the data sent in a single protocol unit, and reserves datagram for UDP's unit of transfer. Be careful not to confuse UDP and PDU!
Table 5.1 may help you avoid some of this confusion by placing the OSI and TCP/IP stacks side by side. We'll examine each layer in greater detail in a few moments.
TABLE 5.1 OSI and TCP/IP side by side
| Types of layers | Typical protocols | OSI layer | OSI protocol data unit name | TCP/IP layer | TCP/IP datagram name |
|---|---|---|---|---|---|
| Host layers | HTTP, HTTPS, SMTP, IMAP, SNMP, POP3, FTP, … | 7. Application | Data | (Outside of TCP/IP model scope) | Data |
| Host layers | Characters, MPEG, SSL/TLS, compression, S/MIME, … | 6. Presentation | Data | (Outside of TCP/IP model scope) | Data |
| Host layers | NetBIOS, SAP, session handshaking connections | 5. Session | Data | (Outside of TCP/IP model scope) | Data |
| Host layers | TCP, UDP | 4. Transport | Segment (UDP: datagram) | Transport | Segment |
| Media layers | IPv4 / IPv6 IP address, ICMP, IPSec, ARP, MPLS, … | 3. Network | Packet | Network (or Internetworking) | Packet |
| Media layers | Ethernet, 802.1, PPP, ATM, Fibre Channel, FDDI, MAC address | 2. Link | Frame | Data Link | Frame |
| Media layers | Cables, connectors, 10BaseT, 802.11x, ISDN, T1, … | 1. Physical | Symbol | Physical | Bits |
We'll start with a simple but commonplace example that reveals the role of handshaking to control and direct how the Internet handles our data communications needs. A handshake is a sequence of small, simple communications that we send and receive, such as hello and goodbye, ask and reply, or acknowledge or not-acknowledge, which control and carry out the communications we need. Handshakes are defined in the protocols we agree to use. Let's look at a simple file transfer to a server that I want to do via File Transfer Protocol (FTP) to illustrate this:
It's interesting to note that the Internet was first created to facilitate things like simple file transfers between computer centers; email was created as a higher-level protocol that used FTP to send and receive small files that were the email notes themselves.
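The ask-and-reply rhythm of a handshake can be sketched as a toy exchange. The commands and three-digit reply codes below loosely imitate FTP's style (a real client, such as Python's `ftplib`, handles far more cases); the username, filename, and replies are all invented for illustration.

```python
def server_reply(command: str) -> str:
    """A hypothetical server's canned replies to a minimal command sequence."""
    replies = {
        "USER alice": "331 Password required",
        "PASS secret": "230 Login successful",
        "STOR report.txt": "150 Opening data connection",
        "QUIT": "221 Goodbye",
    }
    return replies.get(command, "502 Command not recognized")

def upload(commands):
    """Walk the handshake, stopping if the server signals an error (4xx/5xx)."""
    transcript = []
    for cmd in commands:
        reply = server_reply(cmd)
        transcript.append((cmd, reply))
        if reply[0] in "45":          # error reply codes abort the session
            break
    return transcript

log = upload(["USER alice", "PASS secret", "STOR report.txt", "QUIT"])
assert log[-1][1].startswith("221")   # the session closed cleanly
```

The point is the pattern, not the particulars: every step is a small request matched to a small reply, and either party can use a reply code to steer or abort the exchange.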
To make this work, we need ways of physically and logically connecting end-user computers (or smartphones or smart toasters) to servers that can support those endpoints with functions and data that users want and need. What this all quickly turned into is the kind of infrastructure we have today:
The physical connections handle the electronic (or electro-optical) signaling that the devices themselves need to communicate with each other. The logical connections are how the right pair of endpoints—the user NIC and the server or other endpoint NIC—get connected with each other, rather than with some other device “out there” in the wilds of the Internet. This happens through address resolution and name resolution.
Note in that FTP example earlier how the file I uploaded was broken into a series of chunks, or packets, rather than sent in one contiguous block of data. Each packet is sent across the Internet by itself (wrapped in header and trailer information that identifies the sender, recipient, and other important information we'll go into later). Breaking a large file into packets allows smarter trade-offs between actual throughput rate and error rates and recovery strategies. (Rather than resend the entire file because line noise corrupted one or two bytes, we might need to resend just the one corrupted packet.) However, since sending each packet requires a certain amount of handshake overhead to package, address, route, send, receive, unpack, and acknowledge, the smaller the packet size, the less efficient the overall communications system can be.
Sending a file by breaking it up into packets has an interesting consequence: if each packet has a unique serial number as part of its header, as long as the receiving application can put the packets back together in the proper order, we don't need to care what order they are sent in or arrive in. So if the receiver requested a retransmission of packet number 41, it can still receive and process packet 42, or even several more, while waiting for the sender to retransmit it.
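That sequence-number idea is simple enough to sketch in a few lines of Python. This is a minimal illustration, not a real transport protocol: it splits a byte string into fixed-size chunks tagged with sequence numbers, shuffles them to simulate out-of-order arrival, and reassembles them.

```python
import random

def packetize(data: bytes, size: int):
    """Break data into fixed-size chunks, each tagged with a sequence number."""
    return [(seq, data[i:i + size])
            for seq, i in enumerate(range(0, len(data), size))]

def reassemble(packets):
    """Sort arrivals by sequence number and rejoin the payloads."""
    return b"".join(chunk for _, chunk in sorted(packets))

original = b"A large file, broken into packets for transmission."
packets = packetize(original, size=8)

random.shuffle(packets)               # packets may arrive in any order
assert reassemble(packets) == original
```

Because each chunk carries its own sequence number, the receiver never needs to care about arrival order, which is exactly what makes selective retransmission of a single corrupted packet possible.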
Right away we see a key feature of packet-based communications systems: we have to add information to each packet in order to tell both the recipient and the next layer in the protocol stack what to do with it! In our FTP example earlier, we start by breaking the file up into fixed-length chunks, or packets, of data—but we've got to wrap them with data that says where it's from, where it's going, and the packet sequence number. That data goes in a header (data preceding the actual segment data itself), and new end-to-end error correcting checksums are put into a new trailer. This creates a new datagram at this level of the protocol stack. That new, longer datagram is given to the first layer of the protocol stack. That layer probably has to do something to it; that means it will encapsulate the datagram it was given by adding another header and trailer. At the receiver, each layer of the protocol unwraps the datagram it receives from the lower layer (by processing the information in its header and trailer, and then removing them), and passes this shorter datagram up to the next layer. Sometimes, the datagram from a higher layer in a protocol stack will be referred to as the payload for the next layer down. Figure 5.1 shows this in action.
The flow of wrapping, as shown in Figure 5.1, illustrates how a higher-layer protocol logically communicates with its opposite number in another system by having to first wrap and pass its datagrams to lower-layer protocols in its own stack. It's not until the Physical layer connections that signals actually move from one system to another. (Note that this even holds true for two virtual machines talking to each other over a software-defined network that connects them, even if they're running on the same bare metal host!) In OSI 7-layer reference model terminology, this means that layer n of the stack takes the service data unit (SDU) it receives from layer n+1, processes and wraps the SDU with its layer-specific header and trailer to produce the datagram at its layer, and passes this new datagram as an SDU to the next layer down in the stack.
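A short sketch can make this wrap-and-unwrap flow tangible. The layer names, header, and trailer strings below are purely illustrative stand-ins; real headers are binary structures, which we'll look at layer by layer.

```python
# Each layer wraps the SDU it is handed with its own header and trailer
# on the way down the sender's stack, and strips them on the way up the
# receiver's stack.
LAYERS = ["transport", "network", "link"]

def send(payload: str) -> str:
    datagram = payload
    for layer in LAYERS:                      # innermost wrap first
        datagram = f"[{layer}-hdr]{datagram}[{layer}-trl]"
    return datagram                           # what crosses the wire

def receive(datagram: str) -> str:
    for layer in reversed(LAYERS):            # outermost wrap stripped first
        assert datagram.startswith(f"[{layer}-hdr]")
        datagram = datagram[len(f"[{layer}-hdr]"):-len(f"[{layer}-trl]")]
    return datagram

wire = send("hello")
assert wire.startswith("[link-hdr]")          # the link layer wraps outermost
assert receive(wire) == "hello"               # each layer sees only its peer's wrap
```

Notice the symmetry: each layer on the receiving side processes and removes exactly the header and trailer its peer layer added, which is what lets a layer behave as if it were talking directly to its opposite number.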
We'll see what these headers look like, layer by layer, in a bit.
In plain old telephone systems (POTS), your phone number uniquely identified the pair of wires that came from the telephone company's central office switches to your house. If you moved, you got a new phone number, or the phone company had to physically disconnect your old house's pair of wires from its switch at that number's terminal, and hook up your new house's phone line instead. From the start (thanks in large part to the people from Bell Laboratories and other telephone companies working with the ARPANet team), we knew we needed something more dynamic, adaptable, and easier to use. What they developed was a logical address for each device (its IP or Internet Protocol address), a physical address or identity for each NIC in each device (its media access control or MAC address), and a way to map from one to the other while allowing a device to be in one place today and another place tomorrow. From its earliest ARPANet days until the mid-1990s, the Internet Assigned Numbers Authority (IANA) handled the assignment of IP addresses and address ranges to users and organizations who requested them.
Routing is the process of determining what path or set of paths to use to send a set of data from one endpoint device through the network to another. In POTS, the route of the call was static—once you set up the circuit, it stayed up until the call was completed, unless a malfunction interrupted the call. The Internet, by contrast, does not route calls—it routes individually addressed packets from sender to recipient. If a link or a series of communications nodes in the Internet itself go down, senders and receivers do not notice; subsequent packets will be dynamically rerouted to working connections and nodes. This also allows a node (or a link) to say “no” to some packets as part of load-leveling and traffic management schemes. The Internet (via its protocol stack) handles routing as a distributed, loosely coupled, and dynamic process—every node on the Internet maintains a variety of data that help it decide which of the nodes it's connected to should handle a particular packet that it wants to forward to the ultimate recipient (no matter how many intermediate nodes it must pass through to get there).
Switching is the process used by one node to receive data on one of its input ports and choose which output port to send the data to. (If a particular device has only one input and one output, the only switching it can do is to pass the data through or deny it passage.) A simple switch depends on the incoming data stream to explicitly state which path to send the data out on; a router, by contrast, uses routing information and routing algorithms to decide what to tell its built-in switch to properly route each incoming packet.
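The per-packet forwarding decision described above can be sketched with Python's standard `ipaddress` module. The routing table below is hypothetical; the key idea it demonstrates is longest-prefix matching, in which the most specific matching route wins.

```python
import ipaddress

# A hypothetical routing table: destination prefix -> next-hop node.
ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"):  "router-A",
    ipaddress.ip_network("10.1.0.0/16"): "router-B",
    ipaddress.ip_network("0.0.0.0/0"):   "default-gateway",
}

def next_hop(dst: str) -> str:
    """Choose the most specific (longest-prefix) route matching dst."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]

assert next_hop("10.1.2.3") == "router-B"          # /16 beats /8
assert next_hop("10.200.0.1") == "router-A"        # only the /8 matches
assert next_hop("192.0.2.9") == "default-gateway"  # falls through to default
```

Every node on the Internet is making a decision of roughly this shape, on its own local table, for every packet it forwards; the tables themselves are kept up to date by routing protocols running between the nodes.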
Another way to find and communicate with someone is to know their name and then somehow look that name up in a directory. By the mid-1980s, the Internet was making extensive use of such naming conventions, creating the Domain Name System (DNS). A domain name consists of sets of characters joined by periods (or “dots”); “bbc.co.uk” illustrates the higher-level domain “.co.uk” for commercial entities in the United Kingdom, and “bbc” is the name itself. Taken together that makes a fully qualified domain name. The DNS consists of a set of servers that resolve domain names into IP addresses, registrars that assign and issue both IP addresses and the domain names associated with them to parties who want them, and the regulatory processes that administer all of that.
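A toy sketch can show the delegation idea at the heart of DNS: resolution walks from a root server down through referrals, one label at a time, until an address is found. All of the zone data, server names, and the final address below are invented for illustration; real DNS involves caching, many record types, and thousands of cooperating servers.

```python
# Each "server" knows either a referral to another server or a final answer.
ZONES = {
    ".":         {"uk": "uk-ns"},
    "uk-ns":     {"co.uk": "co-uk-ns"},
    "co-uk-ns":  {"bbc.co.uk": "151.101.0.81"},
}

def resolve(fqdn: str) -> str:
    """Follow referrals from the root until a leaf (an IP address) is found."""
    server = "."
    labels = fqdn.split(".")
    for i in range(len(labels) - 1, -1, -1):   # uk, then co.uk, then bbc.co.uk
        name = ".".join(labels[i:])
        if name in ZONES[server]:
            answer = ZONES[server][name]
            if answer not in ZONES:            # a leaf: the address itself
                return answer
            server = answer                    # a referral: ask the next server
    raise KeyError(f"cannot resolve {fqdn}")

assert resolve("bbc.co.uk") == "151.101.0.81"
```

The right-to-left walk mirrors how a fully qualified domain name is structured: the most general part (“uk”) is resolved first, and each referral narrows the search to a more specific zone.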
Segmentation is the process of breaking a large network into smaller ones. “The Internet” acts as if it is one gigantic network, but it's not. It's actually many millions of internet segments that come together at many different points to provide seamless service. An internet segment (sometimes called “an internet,” lowercase) is a network of devices that communicate using TCP/IP and thus support the OSI 7-layer reference model. This segmentation can happen at any of the three lower layers of our protocol stacks, as we'll see in a bit. Devices within a network segment can communicate with each other, but which layer the segments connect on, and what kind of device implements that connection, can restrict the outside world to seeing the connection device (such as a router) and not the nodes on the subnet below it.
Segmentation of a large internet into multiple, smaller network segments provides a number of practical benefits, which affect the choice of how to join segments and at which layer of the protocol stack. The switch or router that runs the segment, and its connection with the next higher segment, are two single points of failure for the segment. If the device fails or the cable is damaged, no device on that segment can communicate with the other devices or the outside world. This can also help isolate other segments from failure of routers or switches, cables, or errors (or attacks) that are flooding a segment with traffic.
Subnets are different from network segments. We'll take a deep dive into the fine art of subnetting after we've looked at the overall protocol stack.
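As a quick preview of that deep dive, Python's standard `ipaddress` module can do the subnetting arithmetic for us. The network shown is the usual private-range example, not any particular real network.

```python
import ipaddress

# Split one /24 network into four /26 subnets (prefixlen_diff=2 adds
# two bits to the network prefix, giving 2**2 = 4 subnets).
net = ipaddress.ip_network("192.168.1.0/24")
subnets = list(net.subnets(prefixlen_diff=2))

assert len(subnets) == 4
assert str(subnets[0]) == "192.168.1.0/26"
assert str(subnets[-1]) == "192.168.1.192/26"
assert subnets[0].num_addresses == 64   # 62 usable hosts plus network/broadcast
```

Each added prefix bit doubles the number of subnets while halving the addresses available in each, which is the basic trade-off we'll work through when we get to subnet design.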
In 1990, Tim Berners-Lee, a researcher at CERN in Switzerland, confronted the problem that researchers were having: they could not find and use what they already knew or discovered, because they could not effectively keep track of everything they wrote and where they put it! CERN was drowning in its own data. Berners-Lee wanted to take the much older idea of a hyperlinked or hypertext-based document one step further. Instead of just having links to points within the document, he wanted to have documents be able to point to other documents anywhere on the Internet. This required that several new ingredients be added to the Internet:
By 1991, new words entered our vernacular: webpage, Hypertext Transfer Protocol (HTTP), Web browser, Web crawler, and URL, to name a few. Today, all of that has become so commonplace, so ubiquitous, that it's easy to overlook just how many powerfully innovative ideas had to come together all at once. Knowing how to use uniform resource locators (URLs) became more important than understanding IP addresses. URLs provide us with an unambiguous way to identify a protocol, a server on the network, and a specific asset on that server. Additionally, a URL as a command line can contain values to be passed as variables to a process running on the server. By 1998, the task of allocating and regulating both IP addresses and domain names had grown to the point that a new nonprofit, nongovernmental organization was created, the Internet Corporation for Assigned Names and Numbers (ICANN, pronounced “eye-can”).
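Python's standard `urllib` makes those pieces of a URL easy to see; the URL below is a hypothetical example, not one from the text:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical URL: a protocol, a server, an asset, and values passed to a server process
url = "https://www.example.com/catalog/search?item=router&max=10"
parts = urlparse(url)

print(parts.scheme)            # the protocol: 'https'
print(parts.netloc)            # the server on the network: 'www.example.com'
print(parts.path)              # the specific asset on that server: '/catalog/search'
print(parse_qs(parts.query))   # variables for the server-side process
```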
The rapid acceptance of the World Wide Web and the HTTP concepts and protocols that empowered it demonstrates a vital idea: the layered, keep-it-simple approach embodied in the TCP/IP protocol stack and the OSI 7-layer model works. Those stacks give us a strong but simple foundation on which we can build virtually any information service we can imagine.
The brief introduction (or review) of networking fundamentals we've had thus far brings us to ask an important question: how do we hook all of those network devices and endpoints together? We clearly cannot build one switch with a million ports on it, but we can use the logical design of the Internet protocols to let us build more practical, modular subsystem elements and then connect them in various ways to achieve what we need.
A topology, to network designers and engineers, is the basic logical geometry by which different elements of a network connect together. Topologies consist of nodes and the links that connect them. Experience (and much mathematical study!) gives us some simple, fundamental topologies to use as building blocks for larger systems:
With these in mind, a typical SOHO (small office/home office) network at a coffee house that provides Wi-Fi for its customers might use a mix of the following topology elements:
The fundamental design paradigm of the TCP/IP and OSI 7-layer stacks is that they deliver “best-effort” services. In contract law and systems engineering, a best-efforts basis sets expectations for services being requested and delivered: the server will do what is reasonable and prudent but will not go “beyond the call of duty” to make sure that the service is performed, day or night, rain or shine! There are no guarantees. Nothing asserts that if your device's firmware does things the “wrong” way, its errors will keep it from connecting, getting traffic sent and received correctly, or doing any other network function. Nothing guarantees that your traffic will go where you want it to and nowhere else, that it will not be seen by anybody else along the way, or that it will not suffer any corruption of content. Yes, each individual packet does have parity and error detection and correction checksums built into it. These may (no guarantees!) cause a piece of hardware along the route to reject the packet as “in error” and request that the sender retransmit it. An Internet node or the NIC in your endpoint might or might not detect conflicts in the way that fields within the packet's wrappers are set; it may or may not be smart enough to ask for a resend, or to pass back some kind of error code and a request that the sender try again.
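The per-packet error detection mentioned above is typically the one's-complement checksum defined in RFC 1071 and used in IPv4, TCP, and UDP headers. A minimal sketch (the sample header bytes are arbitrary, chosen only for illustration):

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return ~total & 0xFFFF                         # one's complement of the sum

header = bytes.fromhex("450000541c4640004001")     # arbitrary sample header bytes
csum = internet_checksum(header)

# A receiver recomputes the checksum over header-plus-checksum;
# a result of zero means "no error detected."
print(internet_checksum(header + csum.to_bytes(2, "big")))   # 0
```

Note that this catches many, but not all, transmission errors, which is exactly the best-effort spirit: detection is likely, never guaranteed.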
Think about the idea of routing a packet in a best-effort way: the first node that receives the packet will try to figure out which node to forward it on to, so that the packet has a pretty good chance of getting to the recipient in a reasonable amount of time. But this depends on one node asking other nodes whether they know or recognize the address, or know some other node that does.
The protocols do define a number of standardized error codes that relate to the most commonly known errors, such as attempting to send traffic to an address that is unknown and unresolvable. A wealth of information is available about what might cause such errors, how participants might work to resolve them, and what strategies are recommended for recovering from one when it occurs. What this means is that the burden of managing the work that we want to accomplish by means of the Internet is not the Internet's responsibility. That burden of plan, do, check, and act is allocated to the higher-level functions within application programs, operating systems, and NIC hardware and device drivers that use these protocols, or to the people and business logic that actually invoke those applications in the first place.
In many respects, the core of TCP/IP is a trusting design. The designers (and the Internet) trust that equipment, services, and people using it will behave properly, follow the rules, and use the protocols in the spirit and manner in which they were written. Internet users and their equipment are expected to cooperate with each other, as each spends a fragment of their time, CPU power, memory, or money to help many other users achieve what they need.
One consequence of this trusting, cooperative, best-efforts nature of our Internet must be faced head on: security becomes an add-on. We'll see how to add it on, layer by layer, later in this chapter.
Let's look at two different protocol stacks for computer systems networks. Both are published, public domain standards; both are widely adopted around the world. The “geekiest” of the standards is TCP/IP, the Transmission Control Protocol over Internet Protocol standard (two layers of the stack right there!). Its four layers define how we build up networks from the physical interconnections up to what it calls the Transport layer, where the heavy lifting of turning a file transfer into Internet traffic starts to take place. TCP/IP also defines and provides homes for many of the other protocols that make addressing, routing, naming, and service delivery happen.
By contrast, the OSI 7-layer reference model is perhaps the more “getting-business-done” of the two stacks. It focuses on getting the day-to-day business and organizational tasks done that really are why we wanted to internetwork computers in the first place. This is readily apparent when we start with its topmost, or application, layer. We use application programs to handle personal, business, government, and military activities—those applications certainly need the operating systems that they depend on for services, but no one does their online banking just using Windows 10 or Red Hat Linux alone!
Many network engineers and technicians thoroughly understand the TCP/IP model, since they use it every day, but have little or no understanding of the OSI 7-layer model. They often see it as too abstract or too conceptual to have any real utility in the day-to-day world of network administration or network security. Nothing could be further from the truth! As you'll see, the OSI's top three layers provide powerful ways for you to think about information systems security—beyond just keeping the networks secure. In fact, many of the most troublesome information security threats that SSCPs must deal with occur at the upper layers of the OSI 7-layer reference model—beyond the scope of what TCP/IP concerns itself with. As an SSCP, you need a solid understanding of how TCP/IP works—how its protocols for device and port addressing and mapping, routing, delivery, and network management all play together. You will also need an equally thorough understanding of the OSI 7-layer model, how it contrasts with TCP/IP, and what happens in its top three layers. Taken together, these two protocol stacks provide the infrastructure of all of our communications and computing systems. Understanding them is the key to understanding why and how networks can be vulnerable—and provides the clues you need to choose the best ways to secure those networks.
Both the TCP/IP protocol stack and the OSI 7-layer reference model grew out of efforts in the 1960s and 70s to continue to evolve and expand both the capabilities of computer networks and their usefulness. While it all started with the ARPANet project in the United States, international business, other governments, and universities worked diligently to develop compatible and complementary network architectures, technologies, and systems. By the early 1970s, commercial, academic, military, and government-sponsored research networks were already using many of these technologies, quite often at handsome profits.
Transmission Control Protocol over Internet Protocol (TCP/IP) was developed during the 1970s, based on original ARPANet protocols and a variety of competing (and in some cases conflicting) systems developed in private industry and in other countries. From 1978 to 1982, these ideas were merged together to become the published TCP/IP standard; ARPANet officially migrated to this standard on January 1, 1983. Since this protocol became known as “the Internet protocol,” that date is as good a date as any to declare as the “birth of the Internet.” TCP/IP is defined as consisting of four basic layers. (We'll see why that “over” is in the name in a moment.)
The decade of the 1970s continued to be one of incredible innovation. It saw significant competition between ideas, standards, and design paradigms in almost every aspect of computing and communications. In trying to dominate their markets, many mainframe computer manufacturers and telephone companies set de facto standards that all but made it impossible (contractually) for any other company to make equipment that could plug into their systems and networks. Internationally, this was closing some markets while opening others. Although the courts were dismantling these near-monopolistic barriers to innovation in the United States, two different international organizations, the International Organization for Standardization (ISO) and the International Telegraph and Telephone Consultative Committee (CCITT), both worked on ways to expand the TCP/IP protocol stack to embrace higher-level functions that business, industry, and government felt were needed. By 1984, this led to the publication of the International Telecommunication Union (ITU, the renamed CCITT) Standard X.200 and ISO Standard 7498.
This new standard had two major components, and here is where some of the confusion among network engineers and IT professionals begins. The first component was the Basic Reference Model, which is an abstract (or conceptual) model of what computer networking is and how it works. This became known as the Open Systems Interconnection Reference Model, sometimes known as the OSI 7-layer model. (Since ISO subsequently developed more reference models in the open systems interconnection family, it's preferable to refer to this one as the OSI 7-layer reference model to avoid confusion.) The other major component was a whole series of highly detailed technical standards.
In many respects, both TCP/IP and the OSI 7-layer reference model largely agree on what happens in the first four layers of their model. But while TCP/IP doesn't address how things get done beyond its top layer, the OSI reference model does. Its three top layers are all dealing with information stored in computers as bits and bytes, representing both the data that needs to be sent over the network and the addressing and control information needed to make that happen. The bottommost layer has to transform computer representations of data and control into the actual signaling needed to transmit and receive across the network. (We'll look at each layer in greater depth in subsequent sections as we examine its potential vulnerabilities.)
Let's use the OSI 7-layer reference model, starting at the physical level, as our roadmap and guide through internetworking. Table 5.2 shows a simplified side-by-side comparison of the OSI and TCP/IP models and illustrates how the OSI model's seven layers fit within a typical organization's use of computer networks. You'll note the topmost layer is “layer 8,” the layer of people, business, purpose, and intent. (Note that there are many such informal “definitions” of the layers above layer 7, some humorous, some useful to think about using.) As we go through these layers, layer by layer, you'll see where TCP/IP differs in its approach, its naming conventions, or just where it and OSI have different points of view. With a good overview of the protocols layer by layer, we'll look in greater detail at topics that SSCPs know more about, or know how to do with great skill and cunning!
TABLE 5.2 OSI 7-layer model and TCP/IP 4-layer model in context
System components | OSI layer | TCP/IP, protocols and services (examples) | Key address element | Datagrams are called… | Role in the information architecture |
People | Name, building and room, email address, phone number, … | Files, reports, memos, conversations, … | Company data, information assets | ||
Application software + people processes, gateways | 7 – Application | HTTP, email, FTP, … | URL, IP address + port | Upper-layer data | Implement business logic and processes |
6 – Presentation | SSL/TLS, MIME, MPEG, compression | | | | |
5 – Session | |||||
Load balancers, gateways | 4 – Transport | TCP, UDP | IP address + port | Segments | Implement connectivity with clients, partners, suppliers, … |
Routers, OS software | 3 – Network | IPv4, IPv6, IPSec, ICMP, … | IP address + port | Packets | |
Switches, hubs, routers | 2 – Data Link | 802.1X, PPP, … | MAC address | Frames | |
Cables, antennas, … | 1 – Physical | | Physical connection | Bits |
Layer 1, the Physical layer, is very much the same in both TCP/IP and the OSI 7-layer model. The same standards are used in both. It typically consists of hardware devices and electrical devices that transform computer data into signals, move the signals to other nodes, and transform received signals back into computer data. Layer 1 is usually embedded in the NIC and provides the physical handshake between one NIC and its connections to the rest of the network. It does this by a variety of services, including the following:
Multiple standards, such as the IEEE 802 series, document the details of the various physical connections and the media used at this layer.
At Layer 1, the datagram is the bit. The details of how different media turn bits (or handfuls of bits) into modulated signals to place onto wires, fibers, radio waves, or light waves are (thankfully!) beyond the scope of what SSCPs need to deal with. That said, it's worth considering that at Layer 1, addresses don't really matter! For wired (or fibered) systems, it's that physical path from one device to the next that gets the bits where they need to go; that receiving device has to receive all of the bits, unwrap them, and use Layer 2 logic to determine if that set of bits was addressed to it.
This also demonstrates a powerful advantage of this layers-of-abstraction model: nearly everything interesting that needs to happen to turn the user's data (our payload) into transmittable, receivable physical signals can happen with absolutely zero knowledge of how that transmission or reception actually happens! This means that swapping out 10BaseT physical media for Cat6 Ethernet gives your systems as much as a thousandfold increase in throughput, with no changes needed at the network address, protocol, or application layers. (At most, very low-level device driver settings might need to be configured via operating systems functions, as part of such an upgrade.)
It's also worth pointing out that the physical domain defines both the collision domain and the physical segment. A collision domain is the physical or electronic space in which multiple devices are competing for each other's attention; if their signals out-shout each other, some kind of collision detection and avoidance is needed to keep things working properly. For wired (or fiber-connected) networks, all of the nodes connected by the same cable or fiber are in the same collision domain; for wireless connections, all receivers that can detect a specific transmitter are in that transmitter's collision domain. (If you think that suggests that typical Wi-Fi usage means lots of overlapping collision domains, you'd be right!) At the physical level, that connection is also known as a segment. But don't get confused: we segment (chop into logical pieces) a network into logical sub-networks, which we'll call subnets, at either Layer 2 or Layer 3, but not at Layer 1.
Layer 2, the Data Link layer, performs the data transfer from node to node of the network. As with the Link layer in TCP/IP, it manages the logical connection between the nodes (over the link provided by Layer 1), provides flow control, and handles error correction in many cases. At this layer, the datagram is known as a frame, and frames consist of the data passed to Layer 2 by the higher layer, plus addressing and control information.
The IEEE 802 series of standards further refine the concept of what Layer 2 in OSI delivers by setting forth two sublayers:
The MAC address is a 48-bit address, typically written (for humans) as six octets—six 8-bit binary numbers, usually written as two-digit hexadecimal numbers separated by dashes, colons, or no separator at all. For example, 3A-7C-FF-29-01-05 is the same 48-bit address as 3A7CFF290105. Standards dictate that the first 24 bits (first three hex digit pairs) are the organizational identifier of the NIC's manufacturer, and the last 24 bits (remaining three hex digit pairs) are a NIC-specific address. The IEEE assigns the organizational identifier, and the manufacturer assigns NIC numbers as it sees fit. Each 24-bit field represents over 16.7 million possibilities, which for a time seemed to be more than enough addresses; not anymore. Part of IPv6 is the adoption of a larger, 64-bit MAC address, and the protocols to allow devices with 48-bit MAC addresses to participate in IPv6 networks successfully.
Note that one of the bits in the first octet (in the organizational identifier) flags whether that MAC address is universally or locally administered. Many NICs have features that allow the local systems administrator to overwrite the manufacturer-provided MAC address with one of their own choosing. This does provide the end user organization with a great capability to manage devices by using their own internal MAC addressing schemes, but it can be misused to allow one NIC to impersonate another one (so-called MAC address spoofing).
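A short sketch can pull these fields apart; the bit masks pick out the two flag bits in the first octet, and the sample address is the one from the text:

```python
def parse_mac(mac: str) -> dict:
    # Accept "3A-7C-FF-29-01-05", "3A:7C:FF:29:01:05", or "3A7CFF290105".
    digits = mac.replace("-", "").replace(":", "")
    octets = [int(digits[i:i + 2], 16) for i in range(0, 12, 2)]
    return {
        "organizational_id": "-".join(f"{o:02X}" for o in octets[:3]),  # manufacturer
        "nic_specific": "-".join(f"{o:02X}" for o in octets[3:]),
        "locally_administered": bool(octets[0] & 0b00000010),  # U/L flag bit
        "group_address": bool(octets[0] & 0b00000001),         # unicast/multicast bit
    }

info = parse_mac("3A-7C-FF-29-01-05")
print(info["organizational_id"])       # 3A-7C-FF
print(info["locally_administered"])    # True: 0x3A happens to have the U/L bit set
```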
Let's take a closer look at the structure of a frame, shown in Figure 5.6. As mentioned, the payload is the set of bits given to Layer 2 by Layer 3 (or a layer-spanning protocol) to be sent to another device on the network. Conceptually, each frame consists of:
The inter-packet gap is a period of dead space on the media, which helps transmitters and receivers manage the link and helps signify the end of the previous frame and the start of the next. It is not, specifically, a part of either frame, and it can be of variable length.
Layer 2 devices include bridges, modems, NICs, and switches that don't use IP addresses (thus called Layer 2 switches). Firewalls make their first useful appearance at Layer 2, performing rule-based and behavior-based packet scanning and filtering. Datacenter designs can make effective use of Layer 2 firewalls.
Layer 3, the Network layer, is defined in the OSI model as the place where the variable-length data sequences that the user or higher protocols want sent and received are broken into packets and transmitted (or received). Routing and switching happens at Layer 3. Logical paths between two hosts are created; data packets are routed and forwarded to destinations; packet sequencing, congestion control, and error handling occur here. Layer 3 is where we see a lot of the Internet's “best efforts” design thinking at work, or perhaps, not at work; it is left to individual designers who build implementations of the protocols to decide how Layer 3–like functions in their architecture will handle errors at the Network layer and below.
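At its heart, a best-effort routing decision is a longest-prefix match against a forwarding table. A toy sketch, with invented table entries and next-hop names:

```python
import ipaddress

# An invented forwarding table: destination prefix -> next hop
routes = {
    ipaddress.ip_network("10.0.0.0/8"):  "core-router",
    ipaddress.ip_network("10.1.2.0/24"): "branch-router",
    ipaddress.ip_network("0.0.0.0/0"):   "default-gateway",
}

def next_hop(destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    # Best-effort forwarding: choose the most specific (longest) matching prefix.
    best = max((net for net in routes if addr in net), key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("10.1.2.7"))   # branch-router (the /24 beats the /8)
print(next_hop("10.9.9.9"))   # core-router
print(next_hop("8.8.8.8"))    # default-gateway (nothing more specific matched)
```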
ISO 7498/4 also defines a number of network management and administration functions that (conceptually) reside at Layer 3. These protocols provide greater support to routing, managing multicast groups, address assignment (at the Network layer), and other status information and error handling capabilities. Note that it is the payload—the datagrams being carried by the protocols—that makes these functions belong to the Network layer, not the protocol that carries or implements them.
The most common device we see at Layer 3 is the router; combination bridge-routers, or brouters, are also in use (bridging together two or more Wi-Fi LAN segments, for example). Layer 3 switches are those that can deal with IP addresses. Firewalls also are a part of the Layer 3 landscape.
At Layer 3, the datagram is the packet. Packets start with a packet header, which contains a number of fields of interest to us; see Figure 5.7. For now, let's focus on the IP version 4 format, which has been in use since the early 1980s and thus is almost universally supported:
You'll note that we went from MAC addresses at Layer 2, to IP addresses at Layer 3. This requires the use of Address Resolution Protocol (ARP), one of several protocols that span multiple layers. We'll look at those together after we examine Layer 7.
Now that we've climbed up to Layer 4, things start to get a bit more complicated. This layer is the home to many protocols that are used to transport data between systems; one such protocol, the Transport Control Protocol, gave its name (TCP) to the entire protocol stack! Let's first look at what the layer does, and then focus on some of the more important transport protocols.
Layer 4, the Transport layer, is where variable-length data from higher-level protocols or from applications gets broken down into a series of smaller packets; it also provides quality of service, greater reliability through additional flow control, and other features. In TCP/IP, Layer 4 is where TCP and UDP work; the OSI reference model goes on to define five different connection-mode transport protocols (named TP0 through TP4), each supporting a variety of capabilities. It's also at Layer 4 that we start to see tunneling protocols come into play.
Transport layer protocols primarily work with ports. Ports are software-defined labels for the connections between two processes, usually ones that are running on two different computers. The source and destination port, plus the protocol identification and other protocol-related information, are contained in that protocol's header. Each protocol defines what fields are needed in its header and prescribes required and optional actions that receiving nodes should take based on header information, errors in transmission, or other conditions. Ports are typically bidirectional: a connection is identified by its pair of source and destination ports, and that same pair carries traffic in both directions, although the two ends usually use different port numbers (the server a well-known port, the client an ephemeral one). Some protocols may use multiple port numbers simultaneously.
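A loopback sketch with Python's standard socket module shows ports in action; the operating system assigns the ephemeral port numbers, so the exact values will vary from run to run:

```python
import socket

# A UDP "server" socket bound to an OS-assigned ephemeral port on loopback.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))              # port 0: let the OS pick a free port
server_port = server.getsockname()[1]

# The "client" socket gets its own (different) source port the same way.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.bind(("127.0.0.1", 0))
client_port = client.getsockname()[1]

client.sendto(b"hello", ("127.0.0.1", server_port))
data, (peer_addr, peer_port) = server.recvfrom(1024)

# The received datagram carries the sender's source port, so the
# server knows exactly which process to reply to.
print(data, peer_port == client_port)
client.close()
server.close()
```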
Over time, the use of certain port numbers for certain protocols became standardized. Important ports that SSCPs should recognize when they see them are shown in Table 5.3, which also has a brief description of each protocol.
TABLE 5.3 Common TCP/IP ports and protocols
Protocol | TCP/UDP | Port number | Description |
File Transfer Protocol (FTP) | TCP | 20/21 | FTP control is handled on TCP port 21, and its data transfer can use TCP port 20 as well as dynamic ports, depending on the specific configuration. |
Secure Shell (SSH) | TCP | 22 | Used to manage network devices securely at the command level; secure alternative to Telnet, which does not support secure connections. |
Telnet | TCP | 23 | Teletype-like unsecure command-line interface used to manage network devices. |
Simple Mail Transfer Protocol (SMTP) | TCP | 25 | Transfers mail (email) between mail servers, and between end user (client) and mail server. |
Domain Name System (DNS) | TCP/UDP | 53 | Resolves domain names into IP addresses for network routing. Hierarchical, using top-level domain servers (.com, .org, etc.) that support lower-tier servers for public name resolution. DNS servers can also be set up in private networks. |
Dynamic Host Configuration Protocol (DHCP) | UDP | 67/68 | DHCP is used on networks that do not use static IP address assignment (almost all of them). |
Trivial File Transfer Protocol (TFTP) | UDP | 69 | TFTP offers a method of file transfer without the session establishment requirements that FTP has; using UDP instead of TCP, the receiving device must verify complete and correct transfer. TFTP is typically used by devices to upgrade software and firmware. |
Hypertext Transfer Protocol (HTTP) | TCP | 80 | HTTP is the main protocol that is used by Web browsers and is thus used by any client that uses files located on these servers. |
Post Office Protocol (POP) v3 | TCP | 110 | POP version 3 provides client–server email services, including transfer of complete inbox (or other folder) contents to the client. |
Network Time Protocol (NTP) | UDP | 123 | One of the most overlooked protocols is NTP. NTP is used to synchronize the clocks of devices across the Internet. Most secure services simply will not support devices whose clocks are too far out of sync, for example. |
NetBIOS | TCP/UDP | 137/138/139 | NetBIOS (more correctly, NETBIOS over TCP/IP, or NBT) has long been the central protocol used to interconnect Microsoft Windows machines. |
Internet Message Access Protocol (IMAP) | TCP | 143 | IMAP version 4 is the second of the main protocols used to retrieve mail from a server. While POP has wider support, IMAP supports a wider array of remote mailbox operations that can be helpful to users. |
Simple Network Management Protocol (SNMP) | TCP/UDP | 161/162 | SNMP is used by network administrators as a method of network management. SNMP can monitor, configure, and control network devices. SNMP traps can be set to notify a central server when specific actions are occurring. |
Border Gateway Protocol (BGP) | TCP | 179 | BGP is used on the public Internet and by ISPs to maintain very large routing tables and traffic processing, which involve millions of entries to search, manage, and maintain every moment of the day. |
Lightweight Directory Access Protocol (LDAP) | TCP/UDP | 389 | LDAP provides a mechanism of accessing and maintaining distributed directory information. LDAP is based on the ITU-T X.500 standard but has been simplified and altered to work over TCP/IP networks. |
Hypertext Transfer Protocol over SSL/TLS (HTTPS) | TCP | 443 | HTTPS is used in conjunction with HTTP to provide the same services but does it using a secure connection that is provided by either SSL or TLS. |
Lightweight Directory Access Protocol over TLS/SSL (LDAPS) | TCP/UDP | 636 | LDAPS provides the same function as LDAP but over a secure connection that is provided by either SSL or TLS. |
FTP over TLS/SSL (RFC 4217) | TCP | 989/990 | FTP over TLS/SSL uses the FTP protocol, which is then secured using either SSL or TLS. |
It's good to note at this point that as we move down the protocol stack, each successive layer adds additional addressing, routing, and control information to the data payload it received from the layer above it. This is done by encapsulating or wrapping its own header around what it's given by the layers of the protocol stack or the application-layer socket call that asks for its service. Thus, the datagram produced at the Transport layer contains the protocol-specific header and the payload data. This is passed to the Network layer, along with the required address information and other fields; the Network layer puts that information into its IPv4 (or IPv6) header, sets the Protocol field accordingly, appends the datagram it just received from the Transport layer, and passes that on to the Data Link layer. (And so on…)
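This wrapping is easy to see in a sketch. The header layouts below are deliberately simplified stand-ins, not the real TCP and IPv4 formats, and the addresses and ports are invented:

```python
import struct

payload = b"GET / HTTP/1.1\r\n\r\n"   # application-layer data (18 bytes)

def add_transport_header(data: bytes, src_port: int, dst_port: int) -> bytes:
    # Simplified 6-byte header: source port, destination port, length.
    return struct.pack("!HHH", src_port, dst_port, len(data)) + data

def add_network_header(data: bytes, src_ip: bytes, dst_ip: bytes, proto: int) -> bytes:
    # Simplified 9-byte header: source address, destination address, protocol number.
    return struct.pack("!4s4sB", src_ip, dst_ip, proto) + data

segment = add_transport_header(payload, 49152, 80)    # Transport layer wraps the payload...
packet = add_network_header(segment,                  # ...then the Network layer wraps that.
                            bytes([192, 168, 1, 10]),
                            bytes([203, 0, 113, 5]),
                            6)                        # 6 = TCP's assigned protocol number

# Each layer's header encapsulates everything handed down from the layer above.
print(len(payload), len(segment), len(packet))   # 18 24 33
```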
Most of the protocols that use Layer 4 either use TCP as a stateful, connection-oriented way of transferring data or use UDP, which is stateless and not connection oriented. TCP bundles its data and headers into segments (not to be confused with segments at Layer 1), whereas UDP and some other Transport layer protocols call their bundles datagrams:
Layer 4 devices include gateways (which can bridge dissimilar network architectures together, and route traffic between them) and firewalls.
From here on up, the two protocol stacks conceptually diverge. TCP/IP as a standard stops at Layer 4 and allocates to users, applications, and other unspecified higher-order logic the tasks of managing what traffic to transport and how to make business or organizational sense of what's getting transported. The OSI 7-layer reference model continues to add further layers of abstraction, and for one very good reason: because each layer adds clarity when taking business processes into the Internet or into the cloud (which you get to through the Internet, of course). That clarity aids the design process and the development of sound operational procedures; it is also a great help when trying to diagnose and debug problems.
We also see that from here on out, almost all functions except perhaps that of the firewall and the gateway are hosted either in operating systems or applications software, which of course is running on servers or endpoint devices.
Layer 5, the Session layer, is where the overall dialogue or flow of handshakes is controlled in order to support a logically related series of tasks that require data exchange. Sessions typically require initiation, ongoing operation, adjournment, and termination; many require checkpointing to allow for graceful fallback and recovery to earlier points within the session. Think of logging onto your bank's webpages to do some online banking: from the moment you start to log on, you're initiating a session; a session can contain many transactions as steps you seek to perform; finally, you log off (or time out or disconnect) and end the session. Sessions may also need to be full-duplex (simultaneous activity in both directions), half-duplex (activity from one party to the other, a formal turnaround, and then activity in the other direction), or simplex (activity in one direction only). Making a bank deposit requires half-duplex operation: the bank has to completely process the deposit steps and update your account balance before it can turn the dialogue around and update the display of account information on your endpoint. The OSI model also defines Layer 5 as responsible for gracefully bringing sessions to a close and for providing session checkpoint and recovery capabilities (if any are implemented in a particular session's design).
Newer protocols at the Session layer include Session Description Protocol (SDP) and Session Initiation Protocol (SIP). These and related protocols are extensively used with VOIP (voice over IP) services. Another important protocol at this layer is Real-Time Transport Protocol (RTP). RTP was initially designed to satisfy the demands for smoother delivery of streaming multimedia services and rides over UDP (at the Transport layer). Other important uses are in air traffic control and data management systems, where delivery of flight tracking information must take place in a broadcast or multicast fashion but be in real time—imagine the impact (pardon the pun) if flight tracking updates on an inbound flight consistently come in even as little as a minute late!
Layer 6, the Presentation layer, supports the mapping of data in terms and formats used by applications into terms and formats needed by the lower-level protocols in the stack. The Presentation layer handles protocol-level encryption and decryption of data (protecting data in motion), translates data from representational formats that applications use into formats better suited to protocol use, and can interpret semantic information or metadata about applications data into terms and formats that can be sent via the Internet.
This layer was created to consolidate both the thinking and design of protocols to handle the wide differences in the ways that 1970s-era systems formatted, displayed, and used data. Different character sets, such as EBCDIC, ASCII, or FIELDATA, used different numbers of bits; they represented the same character, such as an uppercase A, by different sets of bits. Byte sizes were different on different manufacturers' minicomputers and mainframes. The presentation of data to the user, and the interaction with the user, could take many forms: a simple chat, a batch file input and printed output of the results, or a predefined on-screen form with specified fields for data display and edit. Such a form is one example of a data structure that presentation must consider; others would be a list of data items retrieved by a query, such as “all flights from San Diego to Minneapolis on Tuesday morning.”
Sending or receiving such a data structure represents the need to serialize and deserialize data for transmission purposes. To the application program, this table, list, or form may be a series of values stored in an array of memory locations. Serializing requires an algorithm that has to first “walk” the data structure, field by field, row by row; retrieve the data values; and output a list of coordinates (rows and fields) and values. Deserializing uses the same algorithm to take an input list of coordinates and values and build up the data structure that the application needs.
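This walk can be sketched in a few lines of Python. The function names and the flight table below are illustrative, not part of any Presentation-layer standard:

```python
# Serialize a row/field data structure into a flat list of
# (row, field, value) triples, then rebuild it on the receiving side.
# A toy illustration of the field-by-field, row-by-row walk described above.

def serialize(table):
    """Walk the structure row by row, field by field, emitting coordinates and values."""
    out = []
    for r, row in enumerate(table):
        for f, value in enumerate(row):
            out.append((r, f, value))
    return out

def deserialize(triples):
    """Use the same coordinate scheme to rebuild the structure the application needs."""
    rows = 1 + max(r for r, _, _ in triples)
    cols = 1 + max(f for _, f, _ in triples)
    table = [[None] * cols for _ in range(rows)]
    for r, f, value in triples:
        table[r][f] = value
    return table

# A query result like "all flights from San Diego to Minneapolis":
flights = [["DL1234", "SAN", "MSP"], ["SY0302", "SAN", "MSP"]]
assert deserialize(serialize(flights)) == flights
```

Real Presentation-layer serialization schemes (ASN.1, for example, or JSON in modern practice) do the same job with standardized encodings rather than ad hoc triples.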
There are several sublayers and protocols that programmers can use to achieve an effective Presentation-layer interface between applications on the one hand and the Session layer and the rest of the protocol stack on the other. HTTP is an excellent example of such a protocol.
NetBIOS (the Network Basic Input/Output System) and Server Message Block (SMB) are also very important to consider at the Presentation layer. NetBIOS is actually an application programming interface (API) rather than a formal protocol per se. From its roots in IBM's initial development of the personal computer, NetBIOS now runs over TCP/IP (or NBT, if you can handle one more acronym!) or any other transport mechanism. Both NetBIOS and SMB allow programs to communicate with each other, whether they are on the same host or different hosts on a network.
Keep in mind that many of the cross-layer protocols, apps, and older protocols involved with file transfer, email, and network-attached file systems and storage resources (such as the Common Internet File System [CIFS] protocol) all “play through” Layer 6.
Layer 7, the Application layer, is where most end users and their endpoints interact with and are closest to the Internet, you might say. Applications such as Web browsers, VOIP or video streaming clients, email clients, and games use their internal logic to translate user actions—data input field by field, or selection and action commands click by click—into application-specific sets of data to transfer via the rest of the protocol stack to a designated recipient address. Multiple protocols, such as FTP and HTTP, are in use at the Application layer, yet the logic that must determine what data to pass from user to distant endpoint and back to user all resides in the application programs themselves. None of the protocols, by themselves, make those decisions for us.
There are various mnemonics to help remember the seven OSI layers. Two common mnemonics, and their correspondence with the OSI protocol stack, are shown in Figure 5.8. Depending upon your tastes, you can use:
Look back to Figure 5.1, which demonstrates the OSI reference model in action, in simplified terms, by starting with data a user enters into an application program's data entry screen. The name and phone number entered probably need other information to go with them from this client to the server so that the server knows what to do with these values; the application must pass all of this required information to the Presentation layer, which stuffs it into different fields in a larger datagram structure, encrypting it if required.
But wait…remember, both TCP/IP and the OSI reference model are models, models that define and describe in varying degrees of specificity and generality. OSI and TCP/IP both must support some important functions that cross layers, and without these, it's not clear if the Internet would work very well at all! The most important of these are:
ARP has several variations that are worth knowing a bit about:
As stated, the original design of the Internet assumed a trustworthy environment; it also had to cope with a generation of computing equipment that just did not have the processing speed or power, or the memory capacity, to deal with effective security, especially if that involved significant encryption and decryption. Designers believed that other layers of functionality beyond the basic IP stack could address those issues, to meet specific user needs, such as by encrypting the contents of a file before handing it to an application like FTP for transmission over the Internet. Rapid expansion of the Internet into business and academic use, and into international markets, quickly demonstrated that the innocent days of a trusting Internet were over. In the late 1980s and early 1990s, work sponsored by the U.S. National Security Agency, U.S. Naval Research Laboratory, Columbia University, and Bell Labs came together to create Internet Protocol Security, or IPsec as it came to be known.
IPSec provides an open and extensible architecture consisting of a number of protocols and features that deliver greater levels of message confidentiality, integrity, authentication, and nonrepudiation protection:
Security associations (or SAs) bundle together the algorithms and data used in securing the payloads. ISAKMP, the Internet Security Association and Key Management Protocol, for example, provided the structure, framework, and mechanisms for key exchange and authentication. IPSec implementations depend upon authenticated keying materials. Since IPSec preceded the development and deployment of PKI, it had to develop its own infrastructure and processes to support users in meeting their key management needs. This could be either via Internet Key Exchange (IKE and IKEv2), the Kerberized Internet Negotiation of Keys (KINK, using Kerberos services), or using an IPSECKEY DNS record exchange.
The mechanics of how to implement and manage IPSec are beyond the scope of the SSCP exam itself; however, SSCPs do need to be aware of IPSec and appreciate its place in the evolution of Internet security.
IPSec was an optional add-in for IPv4 but is a mandatory component of IPv6. IPSec functions at Layer 3 of the protocol stacks, as an internetworking protocol suite; contrast this with TLS, for example, which works at Layer 4 as a transport protocol.
If you stand alongside of those protocol stacks and think in more general terms, you'll quickly recognize that every device, every protocol, and every service has a role to play in the three major functions we need networks to achieve: movement of data, control of that data flow, and management of the network itself. If you were to draw out those flows on separate sheets of paper, you'd see how each provides a powerful approach to use when designing the network, improving its performance, resolving problems with the network, and protecting it. This gives rise to the three planes that network engineers speak of quite frequently:
Hardware designers use these concepts extensively as they translate the protocol stacks into real router, switch, or gateway devices. For example, the movement of data itself ought to be as fast and efficient as possible, either by specifically designed high-speed hardware, firmware, or software. Control functions, which are the heart of all the routing protocols, still need to run pretty efficiently, but this will often be done by using separate circuits and data paths within the hardware. System management functions involve either collecting statistical and performance data or issuing directives to devices, and for system integrity reasons, designers often put these in separate paths in hardware and software as well.
As we saw with the OSI reference model, the concept of separating network activity into data, control, and management planes is both a sound theoretical idea and a tangible design feature in the devices and networks all around us. (Beware of geeks who think that these planes, like the 7-layer model, are just some nice ideas!)
Except in the most trivial case of a single point-to-point connection, networks usually consist of multiple networks joined together. The Internet (capitalized) is the one unified, globe-spanning network; it consists of multiple internetworking segments running IP that are tied together in various ways. For convenience and for localizing one's reference, network engineers, security professionals, and end users talk about the following network types:
The network architectures described earlier are ones that are created by using network devices (virtual or real) to define IP segments and join them together into larger collections of segments. Two specific use cases worth looking at are the use of perimeter networks and botnets.
A perimeter network, sometimes called a bastion network, is a network segment that provides an isolation layer between two or more sets of interconnected network segments. These are often used to create a buffer or barrier between LANs that have dissimilar security needs and use hardened bastion servers, firewalls, and other techniques to restrict both inward-flowing and outward-flowing traffic to ensure that security policies are enforced.
This gives rise to the concept of a demilitarized zone (DMZ), which is that perimeter network. Outside of the DMZ is the rest of the Internet-connected world; behind the DMZ are the network segments and systems that need better protection. The network elements that make up an organization's DMZ belong to it and are administered by it, but they are public-facing assets. Public-facing Web servers, for example, sit within the DMZ and do not require each Web user to have their identity authenticated in order to access their content. Data flows between systems in the DMZ and those within the protected interior behind it must be carefully constructed and managed to prevent the creation or discovery (and subsequent use) of covert paths, which would provide connections into the secure systems that are not detected or prevented by access controls. Outbound data flows should either be in suitably protected form (such as by encryption or via VPN or other means) or be prohibited from crossing the DMZ.
Web servers, for example, have to face the WAN side of the DMZ, but also have to face inward to be able to send and receive trustworthy data and service requests to more secure assets.
A botnet, sometimes called a grid network, refers to any collection of systems that operate together in a coordinated fashion to achieve a common purpose. Typically, a botnet is constructed by using software agents installed on each server, endpoint, or other device, and those agents then participate in a command and control process that rides on top of the network connections used by those devices. Botnets typically have a central command and control node that plans, organizes, and directs the activities of the nodes on the botnet. Botnets are often created by users to combine processing resources together to work on problems too large or too complex for one single server or client endpoint. A typical example would be the SETI (search for extraterrestrial intelligence) project, which is a crowd-sourced science activity. In effect, a botnet is something like a user-created cloud, dynamically created to suit their needs.
A zombie botnet, by contrast, is a botnet created without the knowledge, consent, or cooperation of the users of some or all of the server or endpoint devices that are directed by the zombie botnet controller. It's fair to state that all zombie botnets are malicious in intention, as they involve the theft of services from the systems brought together by the bot herder (the person creating and using the zombie botnet). The zombie botnet serves the herder's purposes, at the expense of the individual systems' owners. But not all botnets are zombie botnets, of course, and thus not all botnets are malicious in intent.
As the name suggests, software-defined networks (SDNs) use network management and virtualization tools to completely define the network in software. SDNs are most commonly used in cloud systems deployments, where they provide the infrastructure that lets multiple virtual machines communicate with each other in standard TCP/IP or OSI reference model terms. Cloud-hosted SDNs don't have their own real Physical layer, for they depend on the services of the bare metal environment that is hosting them to provide these. That said, the protocol stacks at Layer 1 still have to interact with device drivers that are the “last software port of call,” you might say, before entering the land of physical hardware and electrical signals.
It might be natural at this point to think that all but the smallest and simplest of networks are software defined, since as administrators we use software tools to configure the devices on the network. This is true, but in a somewhat trivial sense. Imagine a small business network with perhaps a dozen servers, including dedicated DNS, DHCP, remote access control, network storage, and print servers. It might have several Wi-Fi access points and use another dozen routers to segment the network and support it across different floors of a building or different buildings in a small office park. Each of these devices is configured first at the physical level (you connect it with cables to other devices); then, you use its built-in firmware functions via a Web browser or terminal link to configure it by setting its control parameters. That's a lot of individual devices to configure! Network management systems can provide integrated ways to define the network and remotely configure many of those devices.
Virtual private networks (VPNs) were developed initially to provide businesses and organizations a way to bring geographically separate LAN segments together into one larger private network. Prior to using VPN technologies, the company would have to use private communications channels, such as leased high-capacity phone circuits or microwave relays, as the physical communications media and technologies within the Physical layer of this extended private network. (Dial-up connections via modem were also examples of early VPN systems.) In effect, that leased circuit tunneled under the public switched telecommunications network; it was a circuit that stayed connected all the time, rather than one that was established, used, and torn down on a per-call basis.
VPNs tunnel under the Internet using a combination of Layer 2 and Layer 3 services. They provide a secure, encrypted channel between VPN connection “landing points” (not to be confused with endpoints in the laptop, phone, or IoT device sense!). As a Layer 2 service, the VPN receives every frame or packet from higher up in the protocol stack, encrypts it, wraps it in its own routing information, and lets the Internet carry it off to the other end of the tunnel. At the receiving end of the tunnel, the VPN service unwraps the payload, decrypts it, and passes it up the stack. Servers and services at each end of the tunnel have the normal responsibilities of routing payloads to the right elements of that local system, including forwarding them on to LAN or WAN addresses as each packet needs.
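The wrap-and-unwrap sequence can be sketched as a toy model. The header layout below is invented for illustration, and the XOR "cipher" merely stands in for the real encryption a VPN protocol such as IPSec would apply:

```python
import struct

KEY = 0x5A  # placeholder shared key; real VPNs negotiate keys, e.g., via PKI

def toy_encrypt(data: bytes) -> bytes:
    # Stand-in for real encryption -- XOR is NOT secure, illustration only
    return bytes(b ^ KEY for b in data)

def wrap(inner_packet: bytes, tunnel_dst: bytes) -> bytes:
    """Encrypt the inner packet and prepend an outer header naming the far landing point."""
    payload = toy_encrypt(inner_packet)
    header = struct.pack("!4sH", tunnel_dst, len(payload))  # dst address + length
    return header + payload

def unwrap(outer_packet: bytes) -> bytes:
    """At the receiving landing point: strip the header, decrypt, pass up the stack."""
    _dst, length = struct.unpack("!4sH", outer_packet[:6])
    return toy_encrypt(outer_packet[6:6 + length])  # XOR is its own inverse

inner = b"...an entire IP packet, headers and all..."
tunneled = wrap(inner, b"\xc0\xa8\x02\x01")  # landing point 192.168.2.1
assert unwrap(tunneled) == inner
```

Note that the intermediate routers carrying `tunneled` see only the outer header; everything about the inner packet, including its own source and destination, rides along encrypted.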
Most VPN solutions use one or more of the following security protocols:
Mobile device users (and systems administrators who need to support mobile users) are increasingly turning to VPN solutions to provide greater security.
On the one hand, VPNs bring some powerful security advantages home to business and individual VPN customers alike. From the point in your local systems where the VPN starts tunneling, on to the tunnel's landing point, PKI-driven encryption prevents anybody from knowing what you're trying to accomplish with that data stream. The only traffic analysis they can glean from monitoring your data is that you connect to a VPN landing point.
On the other hand, this transfers your trust to your VPN service provider and the people who own and manage it. You have to be confident that their business model, their security policies (administrative and logical), and their reputation support your CIANA needs. One might rightly be suspicious of a VPN provider with “free forever” offers with no clear up-sell strategy; if they don't have a way of making honest money with what they are doing, due diligence requires you to think twice before trusting them.
Do keep in mind that if your VPN landing point server fails, so does your VPN. Many SOHO VPN clients will allow the user to configure the automatic use of alternate landing sites, but this can still involve service interruptions of tens of seconds.
Wireless network systems are the history of the Internet in miniature: first, let's make them easy to use, right out of the shrink-wrap! Then, we'll worry about why they're not secure and whether we should do something about that.
In one respect, it's probably true to say that wireless data communication is first and foremost a Layer 1 or Physical layer set of opportunities, constraints, issues, and potential security vulnerabilities. Multiple technologies, such as Wi-Fi, Bluetooth, NFC, and infrared and visible light LED and photodiode systems, all are important and growing parts of the way organizations use their network infrastructures. (Keep an eye open for Li-Fi as the next technology to break our trains of thought. Li-Fi is the use of high-frequency light pulses from LEDs used in normal room or aircraft cabin illumination systems.)
Note that mobile devices that use cellular phone systems to access your networks present a mixed bag of access and security issues. These access your systems via your ISP's connection to the Internet, and must then connect up via your remote access control capabilities (such as RADIUS). But at the same time, the devices themselves may be connecting via Wi-Fi or other means in their local service area; you may be inheriting the security weaknesses of a distant Wi-Fi cafe or airport hotspot without knowing it.
Regardless of the technologies used, wireless systems are either a part of our networks, or they are not. These devices either use our TCP/IP protocols, starting with the physical layer on up, or use their own link-specific sets of protocols. Broadly speaking, though, no matter what protocol stack or interface (or interfaces, plural!) they are using, the same risk management and mitigation processes should be engaged to protect the organization's information infrastructures.
Key considerations include the following:
Wireless capabilities accelerate the convergence of communications, computing, control, and collaboration. This convergence breaks down the mental and conceptual barriers that have defined personal roles, tasks, and organizational boundaries in “classical” IT architectures. The dramatic increase in OT systems merging in with IT ones—whether that's industrial-scale applications of SCADA, Common Industrial Protocol (CIP) for ICS, or IoT in a highly virtual organization—is further challenging our security planning concepts of what “normal” is or should be.
Let's look at a few of these technologies, and then consider their security needs and implications.
Wi-Fi, which actually does not mean “wireless fidelity,” is probably the most prevalent and pervasive wireless radio technology currently in use. Let's focus a moment longer on protecting the data link between the endpoint device (such as a user's smartphone, laptop, smartwatch, etc.) and the wireless access point, which manages how, when, and which wireless subscriber devices can connect at Layer 1 and above. (Note that a wireless access point can also be a wireless device itself!) Let's look at wireless security protocols:
Bluetooth is a short-range wireless radio interface standard, designed to support wireless mice, keyboards, or other devices, typically within 1 to 10 meters of the host computer they are being used with. Bluetooth is also used to support data synchronization between smartwatches and fitness trackers with smartphones. Bluetooth has its own protocol stack, with one set of protocols for the controller (the time-critical radio elements) and another set for the host. There are 15 protocols altogether. Bluetooth does not operate over Internet Protocol networks.
In contrast with Wi-Fi, Bluetooth has four security modes:
Bluetooth is prone to a number of security concerns, such as these:
Bluetooth link and use it as an eavesdropping platform, collect data from it, or operate it remotely
Given these concerns, it's probably best that your mobile device management solution understand the vulnerabilities inherent in Bluetooth, and ensure that each mobile device you allow onto your networks (or your business premises!) can be secured against exploitations targeted at its Bluetooth link.
Near-field communication (NFC) provides a secure radio-frequency communications channel that works for devices within about 4 cm (1.6 inches) of each other. Designed to meet the needs of contactless, card-less payment and debit authorizations, NFC uses secure on-device data storage and existing radio frequency identification (RFID) standards to carry out data transfers (such as phone-to-phone file sharing) or payment processing transactions.
Multiple standards organizations work on different aspects of NFC and its application to problems within the purview of each body.
NFC is susceptible to man-in-the-middle attacks at the Physical and Data Link layers and is also susceptible to high-gain antenna interception. Relay attacks, similar to man-in-the-middle, are also possible. NFC as a standard does not include encryption, but like TCP/IP, it allows applications to layer on encrypted protection for data and routing information.
Now that we've got an idea of how the layers fit together conceptually, let's look at some of the details of how IP addressing gets implemented within an organization's network and within the Internet as a whole. First, we'll look at this in IPv4 terms. We'll then highlight the differences that come with IPv6, which is rapidly becoming the go-to addressing standard for larger enterprise systems. Recall that an IPv4 address field is a 32-bit number, represented as four octets (8-bit chunks) written usually as base 10 numbers, which IPv6 increases to a 128-bit field.
Let's start “out there” in the Internet, where we see two kinds of addresses: static and dynamic. Static IP addresses are assigned once to a device, and they remain unchanged; thus, 8.8.8.8 has been the address of Google's public DNS service since it launched, and it probably always will be. The advantage of a static IP address for a server or webpage is that virtually every layer of ARP and DNS cache on the Internet will know it; it will be quicker and easier to find. By contrast, a dynamic IP address is assigned each time that device connects to the network. ISPs most often use dynamic assignment of IP addresses to subscriber equipment, since this allows them to manage a pool of addresses better. Your subscriber equipment (your modem, router, PC, or laptop) then needs a DHCP server to assign it an address.
It's this use of DHCP, by the way, that means that almost everybody's SOHO router can use the same IP address on the LAN side, such as 192.168.2.1 or 192.168.1.1. The router connects on one side (the wide area network [WAN]) to the Internet by way of your ISP, and on the other side to the devices on its local network segment. Devices on the LAN segment can see other devices on that segment, but they cannot see “out the WAN side,” you might say, without using network address translation, which we'll look at in a moment.
The process of assigning a dynamic IP address to a device is known as leasing. DHCP handles the mechanics of this process, but it must do so in two quite different fashions, given the differences between the two versions of IP. Two different protocols, DHCPv4 and DHCPv6, provide for the dynamic assignment of an IP address (32-bit or 128-bit, respectively) in these situations. Both protocols provide some common features:
DHCPv4 also has to manage address reuse, which for many organizations with Class C networks is necessary given the small size of their available address pools. As a result, since 1998 the DHCP protocol (now renamed DHCPv4) and its handshake have been the standard for dynamic IP address assignment. This DORA Dance, as it's affectionately known, consists of the following steps:
DHCPv4 was designed in simpler times, and the notion of binding the IP address to a device's MAC address was not seen as a matter of concern. As the IETF grappled with IPv6, they realized that its much larger address space meant that address reuse was no longer necessary, which might give rise to a more or less permanent assignment of an IP address to each device. This might seem simple in theory, but in practice, it has privacy and user device localization issues that the IETF would rather not cast into concrete with DHCPv6.
DHCPv6 instead uses what the IETF called the stateless address autoconfiguration (SLAAC) process. This defines the 128-bit IPv6 address field as consisting of an upper 80 bits for the network number, with the lower 48 bits being determined by the device or virtual entity itself. Most devices that support IPv6 (which is becoming the majority of devices on the market and the Internet) have the built-in capability of generating this privacy address field automatically, typically on a daily basis. They will then ignore traffic that is addressed to their previous privacy address after a predetermined time period, usually one week. Instead of DORA, DHCPv6's four-step handshake consists of the Solicit, Advertise, Request, and Reply process steps, which accomplish much the same as DORA does but for 128-bit addresses. An additional protocol, the Neighbor Discovery Protocol (NDP), is used to determine what other devices might be on the same network with the client seeking an IP address lease, so that it can then use the duplicate address detection (DAD) process to avoid trying to reuse a pseudorandom 48-bit temporary address portion that is still in use on that network.
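Under the 80/48 split just described, generating a temporary (privacy) address can be sketched with Python's standard library; the prefix and function name here are illustrative assumptions, not part of the DHCPv6 specification:

```python
import ipaddress
import secrets

def temporary_address(prefix: str) -> ipaddress.IPv6Address:
    """Combine the network prefix with a freshly generated pseudorandom low-order portion."""
    net = ipaddress.IPv6Network(prefix)
    suffix = secrets.randbits(48)  # the device-determined 48 bits
    return net[suffix]             # prefix bits + random host portion

# A documentation prefix carved as an 80-bit network number:
addr = temporary_address("2001:db8:1:2:3::/80")
assert addr in ipaddress.IPv6Network("2001:db8:1:2:3::/80")
```

A device would then run duplicate address detection on the candidate before using it, regenerating the suffix if a neighbor already holds that address.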
IPv4's addressing scheme was developed with classes of addresses in mind. These were originally designed to split the octets so that some represented a node within a network, while the others were used to define very large, large, and small networks. At the time (the 1970s), this was thought to make it easier for humans to manage IP addresses. Over time, this has proven impractical. Despite this, IPv4 address class nomenclature remains a fixed part of our network landscape, and SSCPs need to be familiar with the defined address classes:
These address classes are summarized in Table 5.4.
TABLE 5.4 IPv4 address classes
| Class | Leading bits | Size of Network Number field | Size of Node Number field | Number of networks | Number of nodes per network | Start address | End address |
| --- | --- | --- | --- | --- | --- | --- | --- |
| A | 0 | 8 | 24 | 128 | 16,777,216 | 0.0.0.0 | 127.255.255.255 |
| B | 10 | 16 | 16 | 16,384 | 65,536 | 128.0.0.0 | 191.255.255.255 |
| C | 110 | 24 | 8 | 2,097,152 | 256 | 192.0.0.0 | 223.255.255.255 |
There are, as you might expect, some special cases to keep in mind:
In Windows systems this is known as an Automatic Private IP Addressing (APIPA) address because it is generated by Windows when a DHCP server does not respond to requests; regardless of what you call it, it's good to recognize this address (in the 169.254.0.0/16 link-local range) when trying to diagnose why you've got no Internet connection.
The node address of 255 is reserved for broadcast use. Broadcast messages go to all nodes on the specified network; thus, sending a message to 192.168.2.255 sends it to all nodes on the 192.168.2 network, and sending it to 192.168.255.255 sends it to a lot more nodes! Broadcast messages are blocked by routers from traveling out onto their WAN side. By contrast, multicasting can provide ways to allow a router to send messages to other nodes beyond a router, using the address range of 224.0.0.0 to 239.255.255.255. Unicasting is what happens when we do not use 255 as part of the node address field—the message goes only to the specific address. Although the SSCP exam won't ask about the details of setting up and managing broadcasts and multicasts, you should be aware of what these terms mean and recognize the address ranges involved.
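Python's ipaddress module can confirm these ranges; here's a quick sanity check, with the addresses chosen purely for illustration:

```python
import ipaddress

net = ipaddress.IPv4Network("192.168.2.0/24")

# The all-ones node address is the directed broadcast for that network
assert net.broadcast_address == ipaddress.IPv4Address("192.168.2.255")

# Multicast spans 224.0.0.0 through 239.255.255.255
assert ipaddress.IPv4Address("224.0.0.0").is_multicast
assert ipaddress.IPv4Address("239.255.255.255").is_multicast
assert not ipaddress.IPv4Address("240.0.0.0").is_multicast

# An ordinary unicast node address is neither
host = ipaddress.IPv4Address("192.168.2.11")
assert not host.is_multicast and host != net.broadcast_address
```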
Subnetting seems to confuse people easily, but in real life, we deal with sets and subsets of things all the time. We rent an apartment, and it has a street address, but the building is further broken down into individual sub-addresses known as the apartment number. This makes postal mail delivery, emergency services, and just day-to-day navigation by the residents easier. Telephone area codes primarily divide a country into geographic regions, and the next few digits of a phone number (the city code or exchange) divide the area code's map further. This, too, is a convenience feature, but first for the designers and operators of early phone networks and switches. (Phone number portability is rapidly erasing this correspondence of phone number to location.)
Subnetting allows network designers and administrators ways to logically group a set of devices together in ways that make sense to the organization. Suppose your company's main Class B IP address is 163.241, meaning you've got 16 bits' worth of node addresses to assign. If you use them all, you have one subgroup, with node addresses from 0.1 to 255.254 (remember that broadcast address!). Conversely:
Designing our company's network to support subgroups requires we know three things: our address class, the number of subgroups we want, and the number of nodes in each subgroup. This lets us start to create our subnet masks. A subnet mask, written in IP address format, shows which bit positions (starting from the right or least significant bit) are allocated to the node number within a subnet. For example, a mask of 255.255.255.0 says that the last 8 bits are used for the node numbers within each of 254 possible subnets (if this were a Class B address). Another subnet mask might be 255.255.255.128, indicating two subnets on a Class C address, with up to 126 usable nodes on each subnet. (Subnets do not have to be defined on byte or octet boundaries, after all.)
Subnets are defined using the full range of values available for the given number of bits (minus 2 for the reserved all-zeros and all-ones addresses). Thus, if we require 11 nodes on each subnet, we still need to use 4 bits for the node portion of the address, giving us address 0 for the subnet itself, node addresses 1 through 11, and 15 for the all-bits-on broadcast address; three addresses are therefore unused.
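The sizing rule above can be computed directly. This small helper is an illustration, not a standard library function:

```python
import math

def node_bits_needed(nodes: int) -> int:
    """Smallest number of bits covering the required nodes plus the two
    reserved addresses (all-zeros for the subnet, all-ones for broadcast)."""
    return math.ceil(math.log2(nodes + 2))

assert node_bits_needed(11) == 4   # 11 + 2 = 13 addresses fit in 2**4 = 16
assert node_bits_needed(14) == 4   # a 4-bit field completely used
assert node_bits_needed(15) == 5   # one more node forces a fifth bit
```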
This did get cumbersome after a while, and in 1993, Classless Inter-Domain Routing (CIDR) was introduced to help simplify both the notation and the calculation of subnets. CIDR appends the number of subnet address bits to the main IP address. For example, 192.168.1.168/24 shows that 24 bits are assigned for the network address, and the remaining 8 bits are therefore available for the node-within-subnet address. (Caution: don't get those backward!) Table 5.5 shows some examples to illustrate.
TABLE 5.5 Address classes and CIDR
| Class | Number of network bits | Number of node bits | Subnet mask | CIDR notation |
| --- | --- | --- | --- | --- |
| A | 9 | 23 | 255.128.0.0 | /9 |
| B | 17 | 15 | 255.255.128.0 | /17 |
| C | 28 | 4 | 255.255.255.240 | /28 |
Unless you're designing the network, most of what you need to do with subnets is recognize them when you see them and interpret both the subnet masks and the CIDR notation, if present, to help you figure things out. CIDR counts bits starting with the leftmost bit of the IP address; it counts left to right. Whatever bits remain after the CIDR prefix are the bits available to assign addresses to nodes on the subnet (minus 2, again, for the network and broadcast addresses).
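If you'd rather let software do the interpreting, Python's standard ipaddress module handles the mask and CIDR bookkeeping for you. A quick sketch, using the 192.168.1.168/24 example:

```python
import ipaddress

# strict=False lets us pass a host address along with its prefix
# and still recover the network it belongs to.
net = ipaddress.ip_network("192.168.1.168/24", strict=False)
print(net.network_address)    # 192.168.1.0
print(net.netmask)            # 255.255.255.0
print(net.num_addresses - 2)  # 254 usable node addresses
print(net.broadcast_address)  # 192.168.1.255
```

The same call interprets any CIDR prefix, byte-aligned or not, so it's a handy sanity check when reading unfamiliar subnet plans.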
Before we can look at subnetting in IPv6, we first have to deal with the key changes to the Internet that the new version 6 is bringing in.
By the early 1990s, it was clear that the IP address system then in use would not be able to keep up with the anticipated explosive growth in the number of devices attempting to connect to the Internet. At that point, version 4 of the protocol (IPv4, as it's known) used a 32-bit address field, represented in the familiar four-octet address notation (such as 192.168.2.11). That could handle only about 4.3 billion unique addresses; by 2012, we already had 8 billion devices connected to the Internet and had invented workarounds such as network address translation (NAT) to help cope. According to the IETF, 2011 was the year address pool exhaustion started to become reality; one by one, four of the five Regional Internet Registries (RIRs) exhausted their allocations of address blocks not reserved for IPv6 transition between April 2011 and September 2015. Although individual ISPs continue to recycle IP addresses no longer used by subscribers, the bottom of the bucket has been reached. Moving to IPv6 is becoming imperative. IPv4 also had a number of other faults that needed to be resolved. Let's see what the road to that future looks like.
Over the years we've used it, we've seen that the design of IPv4 has a number of shortcomings. It did not have security built in; its address space was limited; and even with workarounds like NAT, we still don't have enough addresses to handle the explosive demand for IoT devices. (Another whole class of Internet users are robots: smart software agents, with or without hardware that lets them interact with the physical world. Robots are using the Internet to learn from each other's experiences in accomplishing different tasks.)
IPv6 brings a number of much-needed improvements to our network infrastructures:
This giant leap from IPv4 to IPv6 is comparable to the leap from analog video on VHS to digital video, and it stands to make IPv6 the clear winner over time. To send a VHS tape over the Internet, you must first convert its analog audio, video, chroma, and synchronization information into bits, then package (encode) those bits into a file using any of a wide range of digital video encoders such as MP4. The resulting MP4 file can then transit the Internet.
IPv6 was published in draft in 1996 and became an official Internet standard in 2017. The problem is that IPv6 is not backward compatible with IPv4; you cannot just flow IPv4 packets onto a purely IPv6 network and expect anything useful to happen. Everything about IPv6 packages the user data differently and flows it differently, requiring different implementations of the basic layers of the TCP/IP protocol stack. Figure 5.9 shows how these differences affect both the size and structure of the IP Network layer header.
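You can get a feel for both the scale difference and the incompatibility with a quick sketch using Python's standard ipaddress module. (The addresses below are illustrative documentation examples.)

```python
import ipaddress

# IPv4's 32-bit address space vs. IPv6's 128-bit address space
print(2**32)     # 4294967296 -- roughly 4.3 billion
print(2**128)    # roughly 3.4 * 10**38

v4 = ipaddress.ip_address("192.168.2.11")
v6 = ipaddress.ip_address("2001:db8::1")
print(v4.version, v6.version)   # 4 6

# An IPv4 address can be *mapped* into IPv6 notation for transition
# schemes, but the two packet formats themselves are not interoperable.
mapped = ipaddress.IPv6Address("::ffff:192.168.2.11")
print(mapped.ipv4_mapped)       # 192.168.2.11
```

That mapping trick is one of the "dual-rail" transition aids; it lets software represent legacy addresses, but it does not make IPv4 packets flow on an IPv6-only network.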
For organizations setting up brand-new network infrastructures, there's a lot to be gained by going directly to an IPv6 implementation. Such systems may still have to deal with legacy devices that operate only in IPv4, such as “bring your own devices” users. Organizations trying to transition their existing IPv4 networks to IPv6 may find it worth the effort to use a variety of “dual-rail” approaches to effectively run both IPv4 and IPv6 at the same time on the same systems:
With each passing month, SSCPs will need to know more about IPv6 and the changes it is heralding for personal and organizational Internet use. This is our future!
We've come a long way thus far in showing you how Internet protocols work, which should give you both the concepts and some of the details you'll need to rise to the real challenge of this chapter. As an SSCP, after all, you are not here to learn how to design, build, and administer networks—you're here to learn how to keep networks safe, secure, and reliable!
As we look at vulnerabilities and possible exploits at each layer, keep in mind the concept of the attack surface. This is the layer of functionality and features, usually in software, that an attacker has to interact with while defeating or disrupting its normal operation as part of a reconnaissance or penetration attempt. This is why so many attacks involving the lower layers of the OSI or TCP/IP stacks actually start with attacks on applications: apps can often provide the entry path the attacker needs.
For all layers, it is imperative that your organization have a well-documented and well-controlled information technology baseline, so that it knows what boxes, software, systems, connections, and services it has or uses, down to the specifics about make, model, and version! This is your starting point to find the Common Vulnerabilities and Exposures (CVE) data about all of those systems elements.
It's time now to put our white hats firmly back on, grab our vulnerability modeling and assessment notes from Chapter 4, and see how the OSI 7-layer reference model can also be our roadmap from the physical realities of our networks up through the Application Layer—and beyond!
In all the technologies we have in use today, data transmission at its root has to use a physical medium that carries the datagrams from Point A to Point B. Despite what Marshall McLuhan said, when it comes to data transmission, the medium is not the message. (McLuhan was probably speaking about messages at Layer 7…) And if you can do something in the physical world, something else can interfere with it, block it, disrupt it, or distort it.
Or…somebody else can snoop your message traffic, at the physical level, as part of their target reconnaissance, characterization, and profiling efforts.
In Chapter 8, you'll work with a broader class of physical systems, their vulnerabilities, and some high-payoff countermeasures. That said, let's take a closer look at the Physical layer from the perspective of reliable and secure data transmission and receipt. We need to consider two kinds of physical transmission: conduction and radiation.
Conducted and radiated signals are easy prey to a few other problems:
Finally, consider the physical vulnerabilities of the Layer 1 equipment itself: the NIC, the computer it's in, the router and modem, and the cabling and fiber optic elements that make Layer 1 possible. Even the free space that Wi-Fi or LiFi signals (LEDs used as part of medium-data-rate communications systems) travel through is part of the system! The walls of a building or vehicle can obstruct or obscure radiated signals, and every electrical system in the area can generate interference. Even other electrical power customers in the same local grid service area can cause power quality problems that make modems, routers, switches, or even laptops and desktops suffer a variety of momentary interruptions.
All of these kinds of service disruptions at Layer 1 can be intermittent, even bursty in nature, or they can last for minutes, hours, or even days.
For hostile (deliberate) threat actors, the common attack tools at Layer 1 start with physical access to your systems:
Wi-Fi reconnaissance can be conducted easily from a smartphone app, and it can reveal exploitable weaknesses in your systems at Layer 1 and above. It can aid an attacker in tuning their own Wi-Fi attack equipment to the right channel and pointing it at the right spots in your Wi-Fi coverage patterns to find potential attack vectors.
Without getting far too technical (for an SSCP or for the exam), the basics are that the medium itself should provide some degree of protection against likely sources of interference, disruption, or interception. Signal cables can be contained in rigid pipes, which can be buried in the ground or embedded in concrete walls; this reduces the effects of RFI while also reducing the chance of the cable being cut or tapped into. Radio communications systems can be designed to use frequency bands, encoding techniques, and other measures that reduce accidental or deliberate interference or disruption. Placing Layer 1 (and other) communications systems elements within physically secured, environmentally stabilized spaces should always be part of your risk mitigation thinking.
This also is part of placing your physical infrastructure under effective configuration management and change control.
Power conditioning equipment can also alleviate many hard-to-identify problems. Not every electronic device behaves well when its AC power comes with bursts of noise, or with voltage drops or spikes that aren't severe enough to cause a shutdown (or a blown surge suppressor). Some consumer or SOHO routers, and some cable or fiber modems provided by ISPs to end users, can suffer from such problems. Overheating can also cause such equipment to perform erratically.
Note that most IPS and IDS products and approaches don't have any real way to reach down into Layer 1 to detect an intrusion. What you're left with is the old-fashioned approach of inspection and audit of the physical systems against a controlled, well-documented baseline.
In general terms, the untreated Layer 1 risks end up being passed on to Layer 2 and above in the protocol stacks, either as interruptions of service, datagram errors, faulty address and control information, or increased retry rates leading to decreased throughput. Monitoring and analysis of monitoring data may help you identify an incipient problem, especially if you're getting a lot of red flags from higher layers in the protocol stack.
Perhaps the worst residual risk at Layer 1 is that you won't detect trespass at this level. Internet-empowered systems can lull us into complacency; they can let us stop caring about where a particular Cat 5 or Cat 6 cable actually goes, because we're too worried about authorized users doing the wrong thing or unauthorized users hacking into our systems or our apps. True, the vast majority of attacks happen remotely and involve no physical access to your Layer 1 systems or activities.
How would you make sure that you're not the exception to that rule?
Attackers at this level have somehow found their way past your logical safeguards on the Physical layer. Perhaps they've recognized the manufacturer's default broadcast SSID of your wireless router, used that to look up common vulnerabilities and exploits information, and are now attacking the router with one or more of those exploits to see if they can spoof their way into your network. Note that some of the attack surfaces involve layer-spanning protocols like ARP or DHCP, so we'll address them here first.
A number of known vulnerabilities in Layer 2 systems elements can lead to a variety of attack patterns, such as:
These may lead to denial or disruption of service or degraded service (if your network systems have to spend a lot of time and resources detecting such attacks and preventing them). They may also provide an avenue for the attacker to further penetrate your systems and achieve a Layer 3 access. Attacks at this layer can also enable an attacker to reach out through your network's nodes and attack other systems.
A variety of steps can be taken to help disrupt the kill chain, either by disrupting the attacker's reconnaissance efforts or the intrusion attempts themselves:
Probably the most worrisome residual risk of an unresolved Layer 2 vulnerability is that an intruder has now found a way to gain Layer 3 access or beyond on your network.
One of the things to keep in mind about IP is that it is a connectionless, and therefore stateless, protocol. By itself, it does not provide any kind of authentication. Spoofing IP packets, launching denial-of-service attacks, and other attacks have quite literally become child's play for script kiddies worldwide. ICMP, the other major protocol at this layer, is also pretty easy to use to gather reconnaissance information or to launch attacks.
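To see just how little IP does to authenticate a sender, consider this sketch: it builds a minimal IPv4 header with a forged source address and computes the standard RFC 791 header checksum. Nothing in the process requires the sender to prove who they are; the checksum only guards against accidental corruption in transit. The addresses and field values here are illustrative only.

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """Standard IPv4 ones'-complement header checksum (RFC 791)."""
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) + header[i + 1]
    while total >> 16:                       # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# A minimal 20-byte IPv4 header claiming a *forged* source address --
# nothing in the header authenticates that claim.
forged_src = bytes([10, 0, 0, 99])
dst = bytes([192, 168, 1, 1])
header = struct.pack("!BBHHHBBH4s4s",
                     0x45, 0, 20,    # version/IHL, DSCP/ECN, total length
                     0x1234, 0,      # identification, flags/fragment offset
                     64, 6, 0,       # TTL, protocol (6 = TCP), checksum = 0
                     forged_src, dst)
checksum = ipv4_checksum(header)
# A receiver can verify only that the header survived transit intact;
# the checksum says nothing about who really sent the packet.
print(hex(checksum))
```

This is exactly why IPSec's authentication services, covered later in this chapter's countermeasures, had to be bolted on from outside the basic protocol.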
Attacks at any layer of the protocol stacks can be either hit-and-run or very persistent. The hit-and-run attacker may need to inject only a few bad packets to achieve their desired results. This can make them very hard to detect. The persistent threat requires more continuous action be taken to accomplish the attack.
Typical attacks seen at this level, which exploit known common vulnerabilities or just the nature of IP networks, can include:
First on your list of countermeasure strategies should be to implement IPSec if you've not already done so for your IPv4 networks. Whether you deploy IPSec in tunnel mode or transport mode (or both) should be driven by your organization's impact assessment and CIANA needs. Other options to consider include these:
For the most part, strong protection via router ACLs and firewall rules, combined with a solid IPSec implementation, should leave you pretty secure at this layer. You'll need to do a fair bit of ongoing traffic analysis yourself, combined with monitoring and analysis of the event logs from this layer of your defense, to make sure.
The other thing to keep in mind is that attacks at higher levels of the protocol stack could wend their way down to surreptitious manipulation, misuse, or outright disruption of your Layer 3 systems.
Layer 4 is where packet sniffers, protocol analyzers, and network mapping tools pay big dividends for the black hats. For the white hats, the same tools—and the skill and cunning needed to understand and exploit what those tools can reveal—are essential in vulnerability assessment, systems characterization and fingerprinting, active defense, and incident detection and response. Although it's beyond the scope of the SSCP exam or this book to make you a protocol wizard, it's not beyond the scope of the SSCP's ongoing duties to take on, understand, and master what happens at the Transport layer.
Let's take a closer look.
How much of this applies to your site or organization?
Most of your countermeasure options at Layer 4 involve better identity management and access control, along with improved traffic inspection and filtering. Start by considering the following:
One vulnerability that may remain, after taking all of the countermeasures that you can, is that your traffic itself is still open to being monitored and subjected to traffic analysis. Traffic analysis looks for patterns in sender and recipient address information, protocols or packet types, volumes and timing, and just plain coincidences. Even if your data payloads are well encrypted, someone willing to put the time and effort into capturing and analyzing your traffic may find something worthwhile.
More and more, we are seeing attacks that try to take advantage of session-level complexities. As defensive awareness and responses have grown, so has the complexity of session hijacking and related Session layer attacks. Many of the steps involved in a session hijack can generate other issues, such as ACK storms, in which both the spoofed host and the attacking host send ACKs with correct sequence numbers and other information in the packet headers; this may require the attacker to take further steps to silence the storm so that it isn't detectable as a symptom of a possible intrusion.
How much of this applies to your site or organization?
SSCPs need to be very concerned about two different but related DNS security issues. First, users need to be able to trust that their organization's use of DNS services achieves trustworthy, reliable results: requested URLs and URIs connect to the proper resources, for example, and DNS responses to users' endpoints are not spoofed, corrupted, or used to deliver malware or other harmful payloads. Second, security measures must mitigate the risk that sophisticated attackers, such as advanced persistent threats (APTs), abuse the DNS infrastructure itself as their own private command, control, and communications infrastructure.
Two related sets of countermeasures can be used to alleviate these concerns. The first is the use of DNS security extensions (DNSSEC), while the second involves more intensive DNS service filtering via firewalls and other security tools. It's worth noting that DNSSEC is not a “top-level domains” issue, nor something that only needs to be done by the owner-operators of the Internet's backbone services (and the DNS as a system). Implementing effective DNSSEC does require action across the entire Internet community.
As with the Transport layer, most of the countermeasures available to you at the Session layer require some substantial sleuthing around in your system. Problems with inconsistent application or system behavior, such as not being able to consistently connect to websites or hosts you frequently use, might be caused by errors in your local hosts file or in your device's ARP and DNS caches. Finding and fixing those errors is one thing; investigating whether they were the result of user error, application or system errors, or deliberate enemy action is quite another set of investigative tasks to take on!
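One such sleuthing task can be sketched in a few lines: scanning hosts-file-style entries for well-known names that have been redirected away from their expected addresses, a classic symptom of local DNS tampering. The domain names, sample entries, and function here are illustrative assumptions, not a production tool.

```python
LOOPBACKS = {"127.0.0.1", "0.0.0.0", "::1"}

def suspicious_entries(hosts_text, watched_domains):
    """Flag hosts-file lines that map a watched domain to a
    non-loopback address (possible local DNS tampering)."""
    flagged = []
    for line in hosts_text.splitlines():
        line = line.split("#")[0].strip()   # drop comments and padding
        if not line:
            continue
        addr, *names = line.split()
        for name in names:
            if name in watched_domains and addr not in LOOPBACKS:
                flagged.append((name, addr))
    return flagged

sample = """
127.0.0.1   localhost
203.0.113.7 login.example-bank.com   # tampered entry?
"""
print(suspicious_entries(sample, {"login.example-bank.com"}))
# [('login.example-bank.com', '203.0.113.7')]
```

Finding such an entry answers the "what" question; the "who and why" investigation the paragraph above describes still has to follow.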
Also, remember that your threat modeling should have divided the world into those networks you can trust, and those that you cannot. Many of your DoS prevention strategies therefore need to focus on that outside, hostile world—or, rather, on its (hopefully) limited connection points with your trusted networks.
Countermeasures to consider include the following:
As you lock down your Session layer defenses, you may find situations where some sessions and the systems that support them need a further layer of defense (or just a greater level of assurance that you've done all that can be done). This may dictate setting up proxies as an additional boundary layer between your internal systems and potential attackers.
Perhaps the most well-known Presentation layer attacks have been those that exploit vulnerabilities in NetBIOS and SMB; given the near dominance of the marketplace by Microsoft-based systems, this should not be a surprise.
More importantly, the cross-layer protocols, and many older apps and protocols such as SNMP, FTP, and such, all work through or with Layer 6 functionality.
Vulnerabilities at this layer can be grouped broadly into two big categories: attacks on encryption or authentication, and attacks on the apps and control logic that support Presentation layer activities. These include:
Building on the countermeasures you've taken at Layer 5, you'll need to look at the specifics of how you're using protocols and apps at this layer. Consider replacing insecure apps, such as FTP or email, with more secure versions.
Much of what you can't address at Layer 6 or below will flow naturally up to Layer 7, so let's just press on!
It's just incredible when we consider how many application programs are in use today! Unfortunately, the number of application-based or Application layer attacks grows every day as well. Chapter 9 addresses many of the ways you'll need to help your organization secure its applications and the data they use from attack, but let's take a moment to consider two specific cases a bit further:
These are just two such combinations of ubiquitous technologies and the almost uncontrollable need that people have to talk with each other, whether in the course of accomplishing the organization's mission and goals or not. When we add in any possible use of a Web browser… Pandora's box is well and truly open for business, you might say.
Many of these attacks are often part of a protracted series of intrusions taken by more sophisticated attackers. Such advanced persistent threats may spend months, even a year or more, in their efforts to crack open and exploit the systems of a target business or organization in ways that will meet the attacker's needs. As a result, constant vigilance may be your best strategy. Keep your eyes and IPS/IDS alert and on the lookout for the following:
It's difficult to avoid falling into a self-imposed logic trap and see applications security separate and distinct from network security. These two parts of your organization's information security team have to work closely together to be able to spot, and possibly control, vulnerabilities and attacks. It will take a concerted effort to do the following:
http://jstest.jcoglan.com); for cookies, use privacy-verifying cookie test Web tools, such as https://www.cookiebot.com/en/gdpr-cookies. Add challenges such as CAPTCHAs to determine if the entity is a human or a robot trying to be one.

Most of what you've dealt with in Layers 1 through 7 depends on having trustworthy users, administrators, and software and systems suppliers and maintainers. Trusting, helpful people, willing to go the extra mile to solve a problem, are perhaps more important to a modern organization than their network infrastructure and IT systems are. But these same people are prone to manipulation by attackers. You'll see how to address this in greater depth when we get to Chapter 11.
Looking at the layers of a network infrastructure—by means of TCP/IP's four layers, or the OSI 7-layer reference model's seven layers—provides many opportunities to recognize vulnerabilities, select and deploy countermeasures, and monitor their ongoing operation. It's just as important to take seven giant steps back and remember that to the rest of the organization, that infrastructure is a system in and of itself. So how does the SSCP refocus on networks as systems, and plan for and achieve the degree of security for them that the organization needs?
Let's think back to Chapters 3 and 4, and their use of risk management frameworks. One key message those chapters conveyed, and that frameworks like NIST's embody, is the need to take a cohesive, integrated, end-to-end and top-to-bottom approach. That integrated approach needs to apply across the systems, equipment, places, faces, and timeframes that your organization needs to accomplish its mission.
Timeframes are perhaps most critical to consider as we look at systems security. Other chapters have looked at the planning, preparation, and deployment phases; Chapter 10, “Incident Response and Recovery,” will look at incident response, which in effect is dealing with things after an event of interest has mushroomed into something worse.
What about the now?
Thinking back to the security control functions described in Chapter 4, it's easy to see that securing an organization's networks requires a number of critical functions (in fact, keeping networks secure will require that entire set of control functions). Keeping networks secure can be broken down into the following broad sets of processes:
Access control, including network access control, will be covered in the next chapter; incident response is the subject of Chapter 10, and Chapter 11 will look at preparing for the loss of network services and recovering from them. Let's look at the others more closely.
Intrusion detection and prevention is generally performed by a combination of host-based and network-based software. Network-based intrusion detection and prevention systems (NIDS and NIPS, respectively) tend to be applications running on hardware devices such as firewalls or other specially hardened servers. They use a variety of scanning techniques to monitor network traffic flowing past them; prevention systems block unauthorized or suspect traffic from proceeding past, while detection ones merely raise an alarm about it. Host-based intrusion detection systems (HIDS and HIPS) run on endpoints, servers, and other devices, but do the same for network traffic trying to enter that device.
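The detect-versus-prevent distinction can be illustrated with a deliberately tiny signature-matching sketch. Real NIDS/NIPS engines are vastly more sophisticated (stateful inspection, anomaly models, and more), and the signatures and traffic below are made up for illustration.

```python
# Hypothetical known-bad payload signatures
SIGNATURES = [b"' OR 1=1", b"../../etc/passwd"]

def inspect(packet: bytes) -> bool:
    """Return True if the payload matches a known-bad signature."""
    return any(sig in packet for sig in SIGNATURES)

def ids_monitor(packets):
    """Detection: all traffic passes, but matches raise alarms."""
    alerts = [p for p in packets if inspect(p)]
    return packets, alerts

def ips_filter(packets):
    """Prevention: suspect traffic is blocked outright."""
    return [p for p in packets if not inspect(p)]

traffic = [b"GET /index.html", b"GET /../../etc/passwd"]
_, alerts = ids_monitor(traffic)
print(len(alerts), len(ips_filter(traffic)))   # 1 1
```

The same inspection logic drives both behaviors; the policy decision (alert versus drop) is what separates an IDS from an IPS, whether host-based or network-based.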
Intrusion detection and prevention systems have tended to be packaged with firewalls, again either in separate hardware or as software loaded onto devices. Windows-based systems (clients or servers) have been using Windows Firewall, for example, as a HIPS (or HIDS, depending upon how security policies are configured on the device) for years. Some routers also perform NIPS and NIDS functions; both firewalls and routers have roles to play in network access control, which we'll cover in Chapter 6.
Firewalls have gone through a rapid evolution from their limited first-generation models on up through the fifth or next-generation firewalls (NGFWs) in today's markets. Firewall functionality is also available as part of managed security services. As this evolution has continued, firewalls began to incorporate more of the functions originally performed by anti-malware software and systems, which was a natural outgrowth of using rules, lists, models, or heuristics to determine what sort of entities, activities, data, or files to allow past a control point or block. Firewalls, in combination with routers and network managers, can also be used to implement screening systems that prevent connections from being established by devices (or the software entities running on them) that cannot be verified as having all the required updates for software, firmware, and anti-malware definitions installed and active on them. (You'll learn more about this in Chapter 6.)
Most network use cases quickly discover the need to balance the use of network throughput or bandwidth by different types or classes of traffic. Sometimes, this is necessary to prevent users from consuming greater bandwidth and responsiveness than they are paying for; in other cases, it is to ensure that higher-priority traffic can flow with minimal interruption or degradation. The years 2020 and 2021 demonstrated the value of such balancing as hundreds of millions of users learned how to improve Zoom, Teams, or other collaborative work platforms' utility and connectivity, while throttling back the bandwidth allocated to cloud backup services, games, or other media streaming services. Various quality of service (QoS) and other traffic management features, within apps, within the client's OS and network software, and within routers and firewalls, support this.
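One common mechanism underneath such traffic management is the token bucket. This sketch is a simplified, single-flow illustration of the idea, not any particular product's implementation; the rates and sizes are arbitrary.

```python
class TokenBucket:
    """Minimal token-bucket rate limiter: `rate` tokens (bytes) are
    added per second up to `capacity`; a packet of n bytes is passed
    only if n tokens are available."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, 0.0

    def allow(self, nbytes, now):
        # Refill tokens for the elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True          # within the allotted bandwidth
        return False             # throttled: queue or drop instead

# A low-priority flow limited to 1,000 bytes/sec, 1,500-byte burst
bucket = TokenBucket(rate=1000, capacity=1500)
print(bucket.allow(1500, now=0.0))   # True  -- initial burst allowed
print(bucket.allow(600,  now=0.1))   # False -- only ~100 tokens back
print(bucket.allow(600,  now=1.0))   # True  -- tokens have refilled
```

Giving the videoconference a big bucket and the backup job a small one is, in essence, what the QoS knobs in routers and operating systems are configuring.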
Larger enterprises also became much more aware of the need to limit the movement of sensitive data, particularly when it tried to leave the control span of the organization and its networks. Data loss prevention (DLP), also known as data leakage protection (or other combinations of those words), uses a variety of techniques to determine the legitimacy of movements of data, both laterally (to and from servers and clients within the enterprise's networks) and externally (what network engineers would call a northbound movement of data). Even the southbound movements of data, internal to the enterprise's networks, may be attempts by attackers to mask a lateral movement of data.
In the trivial case, the attacker is attempting to move a complete data set as a copy of a single file. This enables DLP systems to inspect the entirety of that data in motion and to use rules, patterns, heuristics, watermarks (or steganographic markers), or other techniques to determine if the file in question is at risk of being exfiltrated (suffering an unauthorized removal from the organization's control). Very quickly, attackers recognized the need to fragment data, encrypt its fragments, and then move those fragments in a scattershot fashion, all with the intent of hiding from the DLP system. (The attacker is in essence trying to construct their own TOR system within their target's environment, something that masks what's being sent, by whom or by which process IDs, from where, to where.)
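The rule-and-pattern side of that inspection can be sketched simply. The patterns and sample payloads below are illustrative only, and, as the paragraph above notes, fragmentation and encryption can defeat exactly this kind of naive matching.

```python
import re

# Hypothetical DLP pattern rules for sensitive-looking content
RULES = {
    "us-ssn":    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card-like": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def scan_outbound(payload: str):
    """Return the names of any rules the outbound payload trips."""
    return sorted(name for name, rx in RULES.items() if rx.search(payload))

print(scan_outbound("quarterly report attached"))         # []
print(scan_outbound("employee 123-45-6789 relocation"))   # ['us-ssn']
```

Production DLP systems layer heuristics, watermark detection, and context (who is sending what, to where) on top of pattern rules like these.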
Solving the DLP problem can be quite complex, and thus far, there is no one silver-bullet single-point solution. Many different techniques applied throughout the information architecture are called for. Given the incredibly lucrative target that many corporate and government data sets present to attackers, this will be a security “hot topic” for years to come.
Most organizations—and many individual private users—are allowing a bewildering array of wireless devices to access their network infrastructures to use services and resources for a variety of functions. The initial concepts of wireless security, largely implemented in the Wi-Fi routers being used, can no longer cope with both service contention and security issues that arise in many of these environments.
Wireless (or unbound) networks are built by providing access points (APs), devices that embody the Layer 1 wireless connection technology on one side, the wired (bound) physical connection technology on the other side, with a mix of Layer 2 and Layer 3 connection and authentication capabilities in between. This allows the router to act as a bridge between the wireless part of your network and the wired parts. DHCP, for example, is often provided as a built-in service in many routers. Access points may also embody a variety of firewall features to provide a limited set of security features.
The 802.11 standard provides a default open system authentication mode, which requires only a simple request–acknowledge handshake to establish a connection between the requesting station (another name for a Wi-Fi capable device of any kind) and the access point device; device or station authentication can then be performed. The standard also defined shared key authentication as a way of using previously established WEP encryption keys, as we saw previously.
Access points can work in one of two modes:
Adding an access point to your system does first and foremost create a hole in your threat surface; it is a point at which friendly users and hostile ones can attempt to enter your systems, gain access to resources, introduce data or executable code, and take other actions. Securing that hole in your threat surface requires properly configuring and hardening the access point device and the services it provides, as well as enforcing strong access controls on all devices that attempt to connect to and through it.
Rogue access points are a very common concern. These may simply be wireless devices operated by authorized users that are misconfigured, uncontrolled, or already taken over by an attacker via malware or other tactics. Once connected to your access points (and your networks), any wireless device could attempt to impersonate a legitimate access point and stage a machine-in-the-middle attack on an unsuspecting user device.
Service interruption and degradation can also be caused by access points and devices that are not part of your network, but are close enough to your APs and legitimate user devices that your APs (or your neighbors) spend their resources attempting to filter or deny connection requests; RF channels can become more heavily loaded, as the base stations (APs) compete with each other, and service quality can suffer. (An attacker may do much the same thing, as a way of gaining technical intelligence about your Wi-Fi access point management capabilities and techniques.)
Deliberate and accidental attempts to intrude into your wireless networks will often require that your existing network defenses include wireless intrusion detection and prevention systems. Intrusion detection and prevention, of course, begins with monitoring. Wireless intrusion detection and protection does, however, require some different monitoring, detection, and protection strategies and techniques:
Different monitoring strategies may be required across the organization. Integrated monitoring does impose additional load on the APs and on the network fabric, which may not be acceptable in areas that already see high traffic loads. Other areas may be physically challenging to install and operate air monitors in; even in hybrid monitoring configurations, an additional server (and perhaps out-of-band network interfaces) may be needed to manage the air monitors and collect data from them.
Data from the monitoring system should then be integrated and collated with other security information to provide security analysts with the total picture. Most Wi-Fi monitoring solutions will integrate with existing SIEM or security analytics capabilities, making this part of the job relatively straightforward.
Wireless intrusion detection systems (WIDSs) and wireless intrusion prevention systems (WIPSs) work to provide improved security over the wireless network they're part of. WIDS and WIPS technologies and products have continued to evolve, much as wired NIDS and NIPS have, and the wireless versions operate in similar fashion to their wired cousins in many ways. One key difference is that as WIDS and WIPS systems grow from integrated to overlay or hybrid monitoring, these systems will usually include a dedicated server that sits inline with the network segment containing the access points. These servers offer a wide range of capabilities, including the following:
The combination of all of these factors does, of course, affect the total cost of ownership and operations that organizations see with different alternatives.
The good news is that virtually every network and systems device, software application, interface, and activity can be configured to generate signals that can reveal who is using it, how, and (to some extent) for what purposes. Until recently, this was also viewed as a bit of bad news: it takes communications bandwidth, processor time, and storage to generate, send, receive, organize, and store all of those signals, and even more compute power to analyze them to determine whether an attack has happened. A whole generation of security professionals, software engineers, and administrators worked to develop various data triage processes to keep from drowning in all of that data. Since much of this data was kept in log files on clients, servers, and network devices, this came to be known as the log management problem. But with the arrival of smarter, more affordable data analytics capabilities, and with the continual decrease in storage and compute costs, gathering more data and doing smarter analysis of it became more cost-effective. And as ransomware attacks, and their related data breaches, became more crippling to modern organizations (private and public alike), the cost-benefit balance tipped in favor of doing a better job of detecting attacks.
Performing this set of due diligence tasks is, in some sense, a relatively straightforward process, as shown here:
This raises the question of where these different functions should be placed or hosted on your enterprise's networks. At its simplest, this is a choice between:
In some respects, this is similar to the choice between a host-based and a network-based intrusion detection and prevention system: the network-based version can see everything attempting to flow past it and can take actions to stop that traffic if required. Host-based services can protect the host they run on (they are, to a large extent, in line with it), but they may have restricted visibility into, or control over, traffic elsewhere in the network. Much of the answer depends on the nature of your existing network architecture and its use of techniques such as network segmentation to keep different security domains, each operating with specific security classification and categorization restrictions, separate from each other. The types of access control approaches in use are also part of this puzzle. Certainly, the shift to managed security services and to more powerful enterprise-wide security information and event management capabilities (such as SIEM products and services) has changed the price/performance thinking. Having sensors where the data is generated (or where the suspect traffic is most visible), with centralized or semi-distributed collection, collation, management, and analysis capabilities, now makes better sense than it once did.
We'll look at this more closely in Chapter 12, after these other major topics have been explored in the intervening chapters.
Your organization or business may already have a network operations center (NOC); this could be either a physically separate facility or a work area within the IT support team's workspaces. NOCs perform valuable roles in maintaining the day-to-day operation of the network infrastructure; in conjunction with the IT support help desk, they investigate problems that users report, and respond to service requests to install new systems, configure network access for new users, or ensure updates to servers and server-based applications get done correctly. You might say that the NOC focuses on getting the network to work, keeping it working, and modifying and maintaining it to meet changing organizational needs.
The security operations center (SOC) has an entirely different focus. The SOC focuses on deterring, preventing, detecting, and responding to network security events. It provides real-time command and control of all network-related monitoring activities, and it can use its device and systems management tools to drill down into device, subsystem, server, or other data as part of its efforts to recognize, characterize, and contain an incident. It integrates all network security-related activities and information so as to make informed, timely, and effective decisions that ensure ongoing system reliability, availability, and security. The SOC keeps organizational management and leadership apprised of developing and ongoing information security incidents and can notify local law enforcement or other emergency responders as required. Let's look more closely at this important set of tasks we're chartering our SOC to perform:
From this brief look at the functions of a SOC, you can see that security operations has its own unique patterns of work—its own workflows—that SOC team members need to perform on a regular and as-needed basis. These are similar to what the network operations team would use, but have a number of points where they must differ. As security functions need to be made more accountable, transparent, and auditable, these differences in NOC vs. SOC activities can become more pronounced. It's important to note that a separate, dedicated, fully staffed, and fully equipped SOC can be difficult, expensive, and time-consuming to set up and get operating; it will continue to be a nontrivial cost to the organization. The organization should build a very strong business case to set up such a separate SOC (or ISOC, information security operations center, to distinguish it from a physical or overall security operations center). Such a business case may be called for to protect highly sensitive data, or if law, government regulation, or industry rules dictate it. If that is the case, one hopes that the business impact analysis (BIA) provides supporting analysis and recommendations!
Smaller organizations quite often combine the functions of NOC and SOC into the same (smaller) set of people, workspaces, systems, and tools. There is nothing wrong with such an approach—but again, the business case, supported by the BIA, needs to support this decision.
It doesn't take a hard-nosed budget analyst to realize that many of the tools the NOC needs to configure, manage, and maintain the network can also address the SOC's needs to recognize, characterize, and contain a possible intrusion. These tools span the range of physical, logical, and administrative controls. For example:
Combinations of these three control (and management) strategies can also support both the SOC and the NOC:
Chapter 3, “Integrated Information Risk Management,” stressed the need for integrated command and control of your company's information systems security efforts; we see this in the definition of the SOC as well. So what is the secret sauce, the key ingredient that brings all of these very different concerns, issues, talents, capabilities, functions, hardware, software, data, and physical systems together and integrates them?
System vendors are quick to offer products that claim to provide “integrated” solutions. Some of these systems, especially in the security information and event management (SIEM) marketplace, go a long way toward bringing together the many elements of a geographically dispersed, complex network infrastructure. In many cases, though, such SIEM products are platforms that require significant effort to tailor to your organization's existing networks and security policies. As your team gains experience using them, you'll see a vicious circle of learning take place: you learn more about security issues and problems, but it takes even more effort to get your systems configured to respond to what you've just learned, which causes more residual issues, which…
You'll also have the chance for a virtuous circle of learning, in which experience teaches you stronger, more efficient approaches to meet your constantly evolving CIANA needs. SIEM as an approach, management philosophy, and as a set of software and data tools can help in this regard.
The key ingredient remains the people plane, the set of information security and network technology people that your organization has hired, trained, and invested in to make NOC-like and SOC-like functions serve and protect the needs of the organization.
Since the Internet has become the de facto standard for e-commerce, e-business, and e-government, it should be no surprise that as SSCPs, we need to understand and appreciate what makes the Internet work and what keeps it working reliably and securely. By using the OSI 7-layer reference model as our roadmap, we've reaffirmed our understanding of the protocol stacks that are the theory and the practice of the Internet. We've gotten lots of those details under our fingernails as we've dug into how those protocols work to move data, control that data flow, and manage the networks, all at the same time. This foundation paves our way to Chapter 6, where we'll dive deep into identity management and access control.
We've seen how three basic conceptual models—the TCP/IP protocol stack, the OSI 7-layer reference model, and the idea of the data, control, and management planes—are both powerful tools for thinking about networks and real, physical design features that make most of the products and systems we build our networks with actually work. In doing so, we've also had a round-up review of many of the classical and current threat vectors and attacks that intruders often use against every layer of our network-based business or organization and its mission.
We have not delved deep into specific protocols, nor into the details of how those protocols can be hacked and corrupted as part of an attack. But we've laid the foundations you can use to continue to learn those next layers down as you take on more of the role of a network defender. But that, as we say, is a course beyond the scope of this book or the SSCP exam itself, so we'll have to leave it for another day.
Explain why IPv6 is not directly compatible with IPv4. Users of IPv4 encountered a growing number of problems as the Internet saw a many-fold increase in the number of attached devices, users, and uses. First was IPv4's limited address space, which required the somewhat cumbersome use of Network Address Translation (NAT) as a workaround. The lack of built-in security capabilities was making far too many systems far too vulnerable to attack. IPv4 also lacked built-in quality of service features. IPv6 resolves these and a number of other issues, but it is essentially a completely different network: the two versions' packet structures are simply not compatible with each other, so you need to provide a gateway-like function to translate IPv4 packet streams into IPv6 ones, and vice versa. Using both systems requires one of several alternative approaches: tunneling, “dual-stack” simultaneous use, address and packet translation, or Application layer gateways. As of 2018, many large systems operators run both in parallel, employ tunneling approaches (to package one protocol inside the other, packet by packet), or look to Application layer gateways as part of their transition strategy.
Compare and contrast the basic network topologies. A network topology is the shape or pattern of the way nodes on the network are connected with each other. The basic topologies are point-to-point, bus, ring, star, and mesh; larger networks, including the world-spanning Internet, are simply repeated combinations of these smaller elements. A bus connects a series of devices or nodes in a line and lets each node choose whether or not it will read or write traffic to the bus. A ring connects a set of nodes in a loop, with each node receiving a packet and either passing it on to the other side of the ring or keeping it if it's addressed to the node. Meshes provide multiple bidirectional connections between most or all nodes in the network. Each topology's characteristics offer advantages and risks to the network users of that topology, such as whether a node or link failure causes the entire network to be inoperable, or whether one node must take on management functions for the others in its topology. Mesh systems, for example, can support load leveling and alternate routing of traffic across the mesh; star networks do load leveling, but not alternate routing. Ring and point-to-point networks cannot operate unless all nodes and connections are functioning properly; bus systems can tolerate the failure of one or more nodes, but not of the backplane or system of interconnections. Note that the beauty of TCP/IP and the OSI 7-layer reference model as layers of abstraction enables us to use these topologies at any layer, or even across multiple layers, as we design systems or investigate issues with their operation and performance.
Explain the different network roles of peer, client, and server. Each node on a network interacts with other nodes on the network, and in doing so they provide services to each other. All such interactions are governed by or facilitated by the use of handshake protocols. If two interconnected nodes have essentially equal roles in those handshakes—one node does not control the other or have more control over the conversation—then each node is a peer, or equal, of the other. Simple peer-to-peer service provision models are used for file, printer, or other device sharing, and they are quite common. When the service being provided requires more control and management, or the enforcement of greater security measures (such as identity authentication or access control), then the relationship is more appropriately a client-server relationship. Here, the requesting client node has to make a request to the server node (the one providing the requested services); the server has to recognize the request, permit it to proceed, perform the service, and then manage the termination of the service request. Note that even in simple file or print sharing, the sharing may be peer-to-peer, but the actual use of the shared resource almost always involves a service running on the node that possesses that file or printer, which carries out the sharing of the file or the printing of the requesting node's data.
Explain how IPv4 addressing and subnetting works. An IPv4 address is a 32-bit number, defined as four 8-bit portions, or octets. In human-readable form these addresses look like 192.168.2.11, with the four octets expressed as their base 10 values (or each octet as two hexadecimal digits), separated by dots. In the packet headers, each IP address (for sender and recipient) occupies one 32-bit field. The address is defined to consist of two parts: the network address and the address of a node on that network. Large organizations (such as Google) might need tens of thousands of node addresses on their network; small organizations might only need a few. This has given rise to address classes: Class A uses the first octet for organization and the other three for node. Class B uses two octets each for organization and node. Class C uses three octets for organization and the fourth for node; Classes D and E are reserved for special purposes. Subnetting allows an organization's network designers to break a network into segments by logically grouping addresses: the first four devices in one group, the next four in another, and so on. This effectively breaks the node portion of the address into a subnet portion and a node-on-the-subnet portion. A subnet mask is a 32-bit number in four-octet IP address format, with 0s in the rightmost bit positions indicating the bits used to assign node numbers: 255.255.255.240 shows that the last 4 bits are available, supporting 16 node addresses. But since all networks reserve the all-zeros and all-ones node addresses for special purposes, that's really only 14 node addresses available on this subnet. Classless Inter-Domain Routing (CIDR) simplifies the subnetting process and the way we write it: instead of spelling out the mask, we append the prefix length to the address, as in 192.168.2.0/28, showing that the first 28 bits of the address specify the network and subnet.
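The arithmetic above can be checked with Python's standard `ipaddress` module; this is a minimal sketch, using the illustrative 192.168.2.0/28 network:

```python
import ipaddress

# A /28 mask (255.255.255.240) leaves the last 4 bits as node (host) bits.
net = ipaddress.ip_network("192.168.2.0/28")

print(net.netmask)             # 255.255.255.240
print(net.num_addresses)       # 16 addresses in the block...
print(len(list(net.hosts())))  # ...but only 14 usable, since the all-zeros
                               # and all-ones node addresses are reserved
```

The `hosts()` method applies exactly the rule described above: it excludes the network address (node bits all zero) and the broadcast address (node bits all ones).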
Explain the differences between IPv4 and IPv6 approaches to subnetting. IPv4's use of a 32-bit address field meant that you had to assign bits from the address itself to designate a node on a subnet. IPv6 uses a much larger address field of 128 bits, which for unicast packets is broken into a 48-bit host or network field, 16 bits for subnet number, and 64 bits for the node address on that network segment. No more borrowing bits!
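The same module illustrates the IPv6 case; the /48 site prefix shown here is a hypothetical example drawn from the 2001:db8::/32 documentation range:

```python
import ipaddress

# Hypothetical /48 site prefix (2001:db8::/32 is reserved for documentation).
site = ipaddress.ip_network("2001:db8:aaaa::/48")

# The 16-bit subnet field yields 2**16 possible /64 segments, each with
# a full 64-bit node (interface) address space -- no borrowed bits.
first_segment = next(site.subnets(new_prefix=64))
print(first_segment)     # 2001:db8:aaaa::/64
print(2 ** (64 - 48))    # 65536 subnets available within the site
```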
Explain the role of port numbers in Internet use. Using software-defined port numbers (from 0 to 65535) allows protocol designers to add additional control over routing service requests: the IP packets are routed by the network between sender and recipient, but adding a port number to a Transport layer or higher payload header ensures that the receiving system knows which set of services to connect (route) that payload to. Standardized port number assignments make application design simpler; thus, port 25 for email, port 80 for HTTP, and so on. Ports can be and often are remapped by the protocol stacks for security and performance reasons; sender and recipient need to ensure that any such mapping is consistent, or connections to services cannot take place.
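The receiving system's port-based routing of payloads to services can be sketched as a simple dispatch table. This is an illustrative model only, with invented handler names; real protocol stacks perform this demultiplexing inside the kernel:

```python
# Illustrative sketch of port-based demultiplexing on a receiving host.
# Handler names are invented for this example.

def handle_smtp(payload: bytes) -> str:
    return "mail: " + payload.decode()

def handle_http(payload: bytes) -> str:
    return "web: " + payload.decode()

# Standardized port assignments let senders and receivers agree in advance.
SERVICES = {25: handle_smtp, 80: handle_http}

def demultiplex(dest_port: int, payload: bytes) -> str:
    handler = SERVICES.get(dest_port)
    if handler is None:
        return "no service listening; connection refused"
    return handler(payload)

print(demultiplex(80, b"GET /"))   # routed to the web service
print(demultiplex(4444, b"..."))   # no listener on that port
```

Note how remapping a port (say, moving the web service to another key in `SERVICES`) only works if the sender uses the same mapping; otherwise the connection fails, just as the text describes.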
Describe the man-in-the-middle attack, its impacts, and applicable countermeasures. In general terms, the man-in-the-middle (MITM) attack can happen when a third party can place themselves between the two nodes and either insert their own false traffic or modify traffic being exchanged between the two nodes, in order to fool one or both nodes into mistaking the third party for the other (legitimate) node. This can lead to falsified data entering company communications and files, the unauthorized disclosure of confidential information, or disruption of services and business processes. Protection at every layer of the protocol stack can reduce or eliminate the exposure to MITM attacks. Strong Wi-Fi encryption, well-configured and enforced identity management and access control, and use of secure protocols as much as possible are all important parts of a countermeasure strategy.
Describe cache poisoning and applicable countermeasures. Every node in the network maintains a local memory or cache of address information (MAC addresses, IP addresses, URLs, etc.) to speed up communications—it takes far less time and effort to look it up in a local cache than it does to re-ask other nodes on the network to re-resolve an address, for example. Cache poisoning attacks attempt to replace legitimate information in a device cache with information that could redirect traffic to an attacker, or fool other elements of the system into mistaking an attacker for an otherwise legitimate node. This sets the system up for a man-in-the-middle attack, for example. Two favorite targets of attackers are ARP and DNS caches. A wide variety of countermeasure techniques and software tools are available; in essence, they boil down to protecting and controlling the server and using allowed listing and blocked listing techniques, but these tend not to be well suited for networks undergoing rapid growth or change.
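The essence of the countermeasure (accept cache updates only from validated sources) can be sketched as follows. The resolver address and names here are invented, and real DNS resolvers validate DNSSEC signatures rather than simply trusting a source address:

```python
# Illustrative sketch: a name cache that accepts updates only from an
# allowed-list source. Real resolvers validate DNSSEC signatures instead
# of trusting a source address, which is trivially spoofable.

ALLOWED_RESOLVERS = {"10.0.0.53"}      # hypothetical trusted resolver
cache: dict[str, str] = {}

def cache_update(name: str, addr: str, source: str) -> bool:
    if source not in ALLOWED_RESOLVERS:
        return False                   # reject: possible poisoning attempt
    cache[name] = addr
    return True

cache_update("example.com", "93.184.216.34", "10.0.0.53")  # accepted
cache_update("example.com", "6.6.6.6", "203.0.113.9")      # rejected
print(cache["example.com"])            # still the legitimate address
```

This also shows the maintenance burden the text mentions: every legitimate new resolver must be added to the allowed list, which is why such techniques fit poorly with networks undergoing rapid growth or change.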
Explain the need for IPSec, and briefly describe its key components. The original design of the Internet assumed that nodes connecting to the net were trustworthy; any security provisions had to be provided by user-level processes or procedures. For the 1960s, this was reasonable; by the 1980s, this was no longer acceptable. Multiple approaches, such as access control and encryption techniques, were being developed, but these did not lead to a comprehensive Internet security solution. By the early 1990s, IPSec was created to provide an open and extensible architecture that consists of a number of protocols and features used to provide greater levels of message confidentiality, integrity, authentication, and nonrepudiation protection. It does this first by creating security associations, which are sets of protocols, services, and data that provide encryption key management and distribution services. Then, using the IP Security Authentication Header (AH), it establishes secure, connectionless integrity. The Encapsulating Security Payload (ESP) protocol uses these to provide confidentiality, connectionless integrity, and anti-replay protection, and authenticates the originator of the data (thus providing a degree of nonrepudiation).
Explain how physical placement of security devices affects overall network information security. Physical placement of security components determines the way network traffic at Layer 1 can be scanned, filtered, blocked, modified, or allowed to pass unchanged. It also directly affects what traffic can be monitored by the security system as a whole. For wired and fiber connections, devices can be placed inline—that is, on the connection from a secured to a non-secured environment. All traffic therefore flows through the security device. Placing the device instead in a central segment of the network (or anywhere else not inline) not only limits its direct ability to inspect and control traffic as it attempts to flow through, but may also limit how well it can handle or inspect traffic for the various subnets in your overall LAN. This is similar to host-based versus LAN-based antimalware protection. Actual placement decisions need to be made based on security requirements, risk tolerance, affordability, and operability considerations.
Describe the key security challenges with wireless systems and control strategies to use to limit their risk. Wireless data communication currently comes in three basic sets of capabilities: Wi-Fi, Bluetooth, and near-field communication (NFC). All share some common vulnerabilities. First, wireless devices of any type must make a connection to some type of access point, and then be granted access to your network, before they can affect your own system's security. Second, they can be vulnerable to spoofing attacks in which a hostile wireless device can act as a man-in-the-middle to create a fake access point or directly attack other users' wireless devices. Third, the wireless device itself is very vulnerable to loss or theft, allowing attackers to exploit everything stored on the device. Mobile device management (MDM) solutions can help in many of these regards, as can effective use of identity management and access control to restrict access to authorized users and devices only.
Explain the use of the concept of data, control, and management planes in network security. All networks exist to move data from node to node; this requires a control function to handle routing, error recovery, and so forth, as well as an overall network management function that monitors the status, state, and health of network devices and the system as a whole. Management functions can direct devices in the network to change their operational characteristics, isolate them from some or all of the network, or take other maintenance actions on them. These three sets of functions can easily be visualized as three map overlays, which you can place over the diagram of the network devices themselves. Each plane (or overlay) provides a way to focus design, operation, troubleshooting, incident detection, containment, and recovery in ways best suited to the task at hand. This is not just a logical set of ideas—physical devices on our networks, and the software and firmware that run them, are built with this concept in mind.
Describe the role that network traffic shaping and load balancing can play in information security. Traffic shaping and load balancing systems look at network traffic (and the connections it wants to make to systems resources) and attempt to avoid overloading one set of links or resources while leaving others unused or under-utilized. They may use static parameters, preset by systems administrators, or dynamically compute the parameters they need to accomplish their tasks. Traffic shaping is primarily a bandwidth management approach, allocating more bandwidth for higher-priority traffic. Load balancing tries to spread workloads across multiple servers. The trending and current monitoring information these systems gather could be useful in detecting anomalous system usage, such as a distributed denial-of-service attack or a data exfiltration taking place. It may also provide a statistical baseline of what is “normal” and what is “abnormal” loading on the system, as another indication of a potential security event of interest in the making. Such systems can generate alarms for out-of-limits conditions, which may also be useful indicators of something going wrong.
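The core idea of load balancing can be sketched in a few lines; the server names are invented, and real balancers also weigh server health and current load rather than rotating blindly:

```python
import itertools

# Minimal round-robin load balancing sketch (server names are invented).
# Real load balancers also account for server health and current load.

servers = ["web-1", "web-2", "web-3"]
next_server = itertools.cycle(servers).__next__

# Six incoming requests spread evenly across the three servers.
assignments = [next_server() for _ in range(6)]
print(assignments)
```

Counting assignments per server over time is exactly the kind of trending data the text describes: a sudden skew toward one server, or a spike in total volume, is a candidate indicator of a security event of interest.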
Explain the two different security concerns regarding DNS and the countermeasures to deploy to mitigate their risk. First, the DNS itself as an infrastructure can be abused by attackers, who can use it to create in effect their own command and control architecture with which they can direct subsequent attack activities on a wide variety of target systems. This transforms a trustworthy infrastructure into one of increasing risk to users. Mitigating this risk requires more widespread implementation of DNS Security Extensions (DNSSEC) by Internet service providers (ISPs), the operators of the Internet backbone and DNS services, and end user organizations alike. Second, attackers can misuse DNS capabilities to misdirect user queries (via spoofing and other techniques), which can result in the download of malware or other payloads for the attacker to use. User organizations can mitigate this risk with a combination of approaches, including more effective filtering by firewalls, such as increased deep inspection of DNS-related traffic (into and out of the organization), more effective blocked/allowed list management, and other techniques.
Explain the relationship between data loss prevention and network security. Data loss prevention (DLP) seeks to identify suspicious movements of data within the organization's infrastructure, both laterally (east-west) and across its outer perimeter (northbound into the Internet, southbound into the organization or into deeper security domains within the infrastructure). Such movements may be attempts by attackers to take high-value data sets, fragment them, encrypt them, and then exfiltrate them for later exploitation. From a network security perspective, this requires all the techniques of intrusion detection and prevention, access control, traffic control, and network and systems monitoring and analysis. In the worst case, a sophisticated data exfiltration attack is comparable to building a TOR-like anonymizing virtual network within the target enterprise's infrastructure, masking the sources and the destinations of the data, the data itself (via encryption), and the routing of the data to its ultimate destination.
Explain what a zombie botnet is, how to prevent your systems from becoming part of one, and how to prevent being attacked by one. A zombie botnet is a collection of computers that have had malware payloads installed that allow each individual computer to function as part of a large, remotely controlled collective system. (The name suggests that the owner of the system and the system's operating system and applications don't know that the system is capable of being enslaved by its remote controller.) Zombie botnets typically do not harm the individual zombie systems themselves, which are then used either as part of a massively parallel cycle-stealing computation, as a DDoS attack, or as part of a distributed, large-scale target reconnaissance effort. Reasonable and prudent measures to prevent your systems from becoming part of a zombie botnet include stronger access control, prevention of unauthorized downloading and installation of software, and using effective, up-to-date antimalware or antivirus systems.
Explain what a DMZ is and its role in systems security. From a network security perspective, the demilitarized zone (DMZ) is that subset of organizational systems that are not within the protected or bastion systems perimeter. Systems or servers within the DMZ are thus exposed to larger, untrusted networks, typically the entire Internet. Public-facing Web servers, for example, are placed in the DMZ and do not require each Web user to have their identity authenticated in order to access their content. Data flows between systems in the DMZ and those within the protected bastion must be carefully constructed and managed to prevent covert paths (connections into the secure systems that are not detected or prevented by access controls) or the exfiltration of data that should not go out into the DMZ and beyond.