Chapter 11

Implementing Quality of Service on a Wireless Network

This chapter covers the following topics:

An Overview of Wireless QoS Principles: This section begins by examining some of the differences between traditional wired QoS and wireless, and it looks at how the unique challenges of wireless QoS led to the development of the 802.11e and WMM standards.

Implementing QoS Policies on the Wireless Controller: This section introduces you to how QoS is implemented on both the AireOS and IOS-XE controllers. This section discusses QoS profiles, how DSCP-to-802.11e User Priority (UP) mapping occurs, and how other QoS functions are implemented in the wireless controller.

Implementing QoS for Wireless Clients: QoS in the upstream direction begins on the client and determines how well real-time applications, such as voice and video, operate. This section examines how QoS marking is accomplished on various operating systems and how these markings are preserved through the wireless infrastructure.

Implementing Application Visibility and Control: This section examines how AVC can be implemented in the controller to identify applications within the packet to provide better QoS and security controls. This includes an overview of the Fastlane AutoQoS macro that helps to quickly deploy AVC and other QoS services in a best-practices manner.

This chapter covers the following ENWLSI exam topics:

Quality of service (QoS) is one of the most important cornerstones of any network deployment. Without QoS, applications will not function predictably and the user experience will suffer. There is nothing worse than being on a video conference and having your session interrupted by pixelation caused by bandwidth limitations, packet loss, or just poor network performance. Networks by their very nature are built to be oversubscribed—while it makes good engineering sense to design a network like this, the result is that inevitably a network link somewhere along the line will become congested and cause undesirable application performance issues.

While some applications are not latency sensitive (such as email), others are extremely sensitive to even minimal amounts of jitter and latency. Many companies have come to rely on real-time applications such as WebEx, Telepresence, and others to conduct their internal and external meetings. A network without QoS will suffer noticeably poor performance in times of congestion, especially for these real-time applications.

You are probably familiar with the saying that a chain is only as strong as its weakest link. If one of those links is weaker than the others, it is the most likely place the chain will break if put under strain. QoS is exactly like this—to be truly effective, it needs to be implemented end to end, at every hop along the way. If there is even one node that either isn’t configured correctly for QoS or doesn’t support it, that is your weak point in the chain, and application performance will be impacted. Wireless LAN by its very nature is stochastic and unpredictable, making it one of the most challenging places in the network to implement QoS—meaning without proper handling, it can easily become the weakest link in the chain.

This chapter begins with an overview of wireless QoS fundamentals and how the wireless standards have developed to bring QoS to this challenging medium. Next, you learn how to implement QoS on both the AireOS and IOS-XE wireless LAN controllers. Following that, we examine QoS from the client’s perspective and how QoS can be implemented to protect critical applications. Finally, you are introduced to AVC on the wireless controller and learn how Fastlane can be used to improve overall QoS capabilities of the wireless infrastructure.

“Do I Know This Already?” Quiz

The “Do I Know This Already?” quiz allows you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 11-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix D, “Answers to the ‘Do I Know This Already?’ Quizzes and Review Questions.”

Table 11-1 “Do I Know This Already?” Section-to-Question Mapping

Foundation Topics Section

Questions

An Overview of Wireless QoS Principles

1–4

Implementing QoS Policies on the Wireless Controller

5

Implementing QoS for Wireless Clients

6

Implementing Application Visibility and Control

7

  1. What is a unique characteristic of CSMA/CA?

    1. The AP uses a point coordination function to instruct the stations when to send.

    2. CSMA/CA is able to detect collisions after transmission.

    3. CSMA/CA avoids collisions by first asking permission of the AP to send.

    4. Every frame must be acknowledged by the receiving station.

  2. How many Access Categories are defined by EDCA?

    1. Two

    2. Four

    3. Eight

    4. Unlimited

  3. Which EDCA metric defines how long a station may continue transmitting?

    1. AIFSN

    2. CWmin

    3. CWmax

    4. TXOP

    5. TSpec

  4. What is the primary role of the QoS profile in AireOS?

    1. Sets the DSCP trust boundary

    2. Allows customization of the EDCA parameters for QoS handling

    3. Sets the UP-to-DSCP and marking scheme

    4. Sets a maximum allowable DSCP value that can be used on the CAPWAP header and downstream UP value

  5. To restrict traffic to a QoS level that uses DSCP and UP values of 0, what profile should be chosen?

    1. Platinum

    2. Gold

    3. Silver

    4. Bronze

  6. What methods are available to control QoS on a wireless client? (Choose all that apply.)

    1. Microsoft Group Policy

    2. Apple Configurator

    3. Meraki MDM

    4. DNA-Center

  7. Which of the following is not a feature of AVC?

    1. Remarking of DSCP

    2. Weighted Tail Drop

    3. Rate Limiting

    4. Traffic Drop

Foundation Topics

An Overview of Wireless QoS Principles

Wireless networks operate in a far less predictable manner than their wired counterparts, making reliable transport of latency-sensitive applications that much more challenging. Many longtime networking engineers will remember working with hubs—these were very basic half-duplex Ethernet devices where all the ports were part of the same collision domain, meaning no two stations could transmit at the same time without causing a collision. If one station were to transmit at the same time as another, a collision would occur, and both stations would need to back off for a random period of time in which they would try their transmissions again. In a hub environment, it was next to impossible to implement QoS of any kind—the environment was just too unpredictable and packet loss caused by collisions meant that real-time applications could never work reliably.

Today, hubs are next to extinct and have long since been replaced by Layer 2 switches that operate in full-duplex mode and use Content Addressable Memory (CAM) tables to forward frames only to the intended port, ensuring collisions do not occur. You may find it surprising that today’s Wi-Fi (up to 802.11ac Wave 2) essentially acts like a hub environment. Similar to hubs, Wi-Fi is a half-duplex medium, and only one station can transmit at a time without causing a collision (that is, each AP operates in its own collision domain). In other words, if an AP can be compared to a hub, and considering QoS was nearly impossible with hubs, how can QoS possibly be implemented at all in a wireless network?

The primary role of QoS in wired networks is to manage congestion; however, in wireless networks, the role of QoS is broader and much more difficult. In a wireless network, the main objective is to manage and limit the number of collisions for high-priority applications, thereby improving the overall quality of experience (QoE) for end users.

In the early days of Wi-Fi, primarily 802.11a/b/g, there was no QoS mechanism whatsoever. Over the years, however, the 802.11 QoS toolset slowly matured, with continual progress being made by the IEEE 802.11e Working Group. In 2016, the WLAN QoS enhancements originally proposed by IEEE 802.11e were rolled into the broader 802.11-2016 standard, which now contains the definitive standard for wireless QoS. However, it is important to keep in mind that while the IEEE sets the 802.11 standards, it is not responsible for ensuring equipment vendors are compatible with them. To address this, the Wi-Fi Alliance created a wireless QoS interoperability certification (based on the 802.11e enhancements defined in the 802.11-2016 standard) called Wi-Fi Multimedia (WMM).

In Wi-Fi networks, every station associated to a particular AP must share the medium with all the other stations, and only one station may transmit at a given time—including the AP itself. The result is that each station must contend with all the other stations for airtime. WLANs are not half-duplex by choice. Wireless is by definition a multiple-access, broadcast medium, meaning that if more than one station transmits at a given time, the two signals interfere with each other and the receiver will not be able to decipher what was transmitted.

This situation is quite familiar to many people in business environments. Have you ever been on a conference call when two people try to talk at the same time? Although our brains have the ability to interpret even the subtlest of sounds, it is almost impossible to untangle more than one sound (or person speaking) at a time. The brain’s limbic system is responsible for sorting out what we really want to listen to versus all other background sounds, but if more than one person speaks to us at the same time, all we hear is noise.

Now compare this with how a wireless AP communicates. Since the RF spectrum used by an AP and all its associated stations is shared, there is a physical limitation that only one station can transmit at a given time on a given channel without causing interference.

Note

The method of shared channel access described here primarily relates to 802.11a/b/g/n/ac. IEEE 802.11ax (Wi-Fi 6) introduces a new method of channel access that is much more structured and controlled by a scheduling and resource allocation mechanism in the AP. With 802.11ax, each client device is given a subset of the available channel called a Resource Unit (RU), meaning the approach to QoS is somewhat different. 802.11ax is described in more detail in Appendix A, “802.11ax.”

QoS mechanisms in wired networks are chiefly responsible for managing which packet, according to its class, is transmitted next, especially during times of congestion. In a Wi-Fi network, the job of QoS is far more complicated. Because the wireless medium is both shared and half-duplex, the QoS mechanism must manage priority access to the RF channel for all end stations in an organized and predictable way.

The following section examines the fundamentals of 802.11 media access, which will lead to a discussion of how it has been adapted to support QoS. Although the first incarnation of 802.11 media access had no ability to support QoS, a good grasp of how it works will enable you to understand how the modifications introduced by 802.11e have made WLAN QoS a reality.

The Distributed Coordination Function

Media access in the early days of 802.11 (802.11a/b/g) was governed by a process called the Distributed Coordination Function (DCF). Although much has changed since the early days of DCF, it still remains a foundational topic for modern 802.11 media access operation. DCF, and its successor, the Enhanced Distributed Channel Access (EDCA), can be thought of as “the rules of the road” for how a station gains access to the medium to transmit a frame.

Wi-Fi networks are completely egalitarian, meaning that all stations have equal access to the medium. In fact, even the AP has the exact same level of priority access to the medium as client stations do. If no controls were imposed on media access and all stations could transmit at will, collisions would be uncontrollable—in fact, as more clients are associated to the AP and try to transmit, the probability of collisions dramatically increases. In turn, after each collision, as stations attempt to retransmit, the situation snowballs, causing even more collisions until the situation degrades to the point where the network is all but paralyzed.

A similar problem was encountered in the early days of wired Ethernet hubs. To address this situation, a system called Carrier Sense Multiple Access with Collision Detection (CSMA/CD) was developed. CSMA/CD is a set of transmission and retransmission rules where all stations that wish to transmit must first wait until the medium is idle before transmitting a frame—essentially a “listen before you transmit” model. Once a station confirms there are no other stations currently transmitting and the medium is clear, it will transmit. After transmission, the station continues to listen in case another station happened to transmit at the exact same time, causing a collision. If there is a collision, both sending stations must wait a random backoff period before resending the frame, and hopefully the next time they transmit there will not be a collision. If there is, they again wait for another random backoff period, but this time they exponentially increase their random backoff window.

Similar to hubs, wireless networks are also collision domains; however, they have unique challenges that cannot be solved simply by applying CSMA/CD. In a wired hub, a transmitting station can detect collisions by continually listening to the wire for an energy signal that would indicate simultaneous transmission. However, wireless stations that are broadcasting into the air do not have this ability—unlike hubs, there is no possible way to detect a collision in the air simply by listening to the wireless medium. Further, wireless clients often experience the “hidden node” problem, where two transmitting clients cannot see each other when listening to see if the medium is idle (because they are either too far away from each other or an obstruction sits between them). This often results in both stations attempting to send at the same time as soon as the medium is free, thereby causing a collision.

In an effort to alleviate the collision problem in wireless networks, CSMA/CD was modified into Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA). There are many similarities between CSMA/CD and CSMA/CA, including the “listen before you talk” method. The main difference comes in what happens after a transmission—instead of just listening to see if a collision occurred, CSMA/CA requires every frame to be acknowledged by the receiver. Thus, instead of listening to the medium to see if a collision occurred (which is not reliable in a wireless medium), the transmitting client will wait until an acknowledgment (ACK) is received from the receiving station to know that the transmission was successful before it moves to the next frame. If an ACK is not received, the station will know it must retransmit. It will then wait a random backoff period and then try to retransmit until it finally receives an ACK.

It is important to note that CSMA/CA can never fully guarantee that a collision won’t occur; rather, it reduces the probability that a collision will occur by trying to avoid a future collision. CSMA/CA is something like stopping your car at a four-way stop. Although you might try very hard to avoid a collision by looking both ways carefully before driving into the intersection, you can never fully guarantee what other drivers will do. If you decide to step on the gas, there is always a slight possibility that another driver might do the same thing at the same time, meaning the possibility of a collision is always present. The same goes for wireless stations that operate using CSMA/CA—even though stations listen before sending, they can never fully guarantee a collision won’t occur after the frame is transmitted.

DCF heavily leverages CSMA/CA for media access. As mentioned earlier, CSMA/CA provides a framework of “listen before you talk” for wireless stations. When a wireless station wants to transmit a frame, the first thing it does is wait a predetermined amount of time called the DCF Interframe Space (DIFS) timer—a period of 34 microseconds in 802.11. Once the DIFS period has expired and if the medium is still clear, the station transmits the frame. Note that all stations must wait the mandatory DIFS period before sending their frame—it is like a level set for all stations that want to transmit. If they all just started transmitting as soon as they had a frame in the queue, collisions would be unavoidable. However, by waiting the DIFS period, it gives a chance for stations to confirm that the media is indeed clear for transmission.

The DIFS is actually two timers in one. The first part is a period of 16 microseconds called the Short Interframe Space (SIFS), which begins as soon as a station finishes transmitting. The SIFS is a short period that confirms the station is indeed finished transmitting. Once the SIFS has concluded, a further 18 microseconds of waiting follows (composed of two slot times of 9 microseconds each). Remember, in CSMA/CA, all frames must be acknowledged by the other side. This last part of the DIFS timer allows the receiver to send its ACK to confirm the transmission was successful. This period finishes the DIFS interval.

Once the DIFS is complete, the station generates a random number called the Contention Window (CW). The CW is a slot time value that must be counted down to zero, at which point, if the medium is still free, the station begins to transmit. The initial CW value is a random number chosen between 0 and 15 slot times (where each slot time is 9 microseconds); this upper limit of 15 is called the CWmin. Once the CW timer counts to zero, the station begins to transmit its frame. Figure 11-1 illustrates this process.
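The timing arithmetic described above can be sketched in a few lines of Python. This is a simplified model using the 802.11a/g constants from the text; the constant and function names are illustrative, not part of any standard API:

```python
import random

SIFS_US = 16                      # Short Interframe Space, in microseconds
SLOT_US = 9                       # one slot time, in microseconds
DIFS_US = SIFS_US + 2 * SLOT_US   # 16 + 18 = 34 microseconds
CWMIN = 15                        # initial Contention Window upper bound, in slot times

def first_attempt_wait_us(rng=random):
    """Total idle time before a station's first transmission attempt:
    the mandatory DIFS plus a random backoff of 0..CWmin slot times."""
    backoff_slots = rng.randint(0, CWMIN)
    return DIFS_US + backoff_slots * SLOT_US
```

Running `first_attempt_wait_us()` repeatedly yields values between 34 and 169 microseconds, always in 9-microsecond steps above the DIFS.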


Figure 11-1 Media Access with the DCF Process

What about situations where a transmission is not acknowledged by the receiving station? In this case, either a collision occurred or interference was encountered. The client must try again, but with a modification to its CW. On the second attempt, the CWmin is doubled to 0–31 slot times (meaning a random countdown number is chosen between 0 and 31). If this is still not successful, the CW is doubled again to 0–63 slot times, and so on and so forth. This process continues until the CW is increased to 0–1,023 slot times, a value called the CWmax. This doesn’t mean that the CW will be 1,023; rather, it simply increases the range of possible random numbers that may be chosen. Figure 11-2 illustrates this process.


Figure 11-2 The Contention Window Exponential Increase from CWmin to CWmax

You might be wondering how long this algorithm will continue if a station does not receive an ACK from the receiving station. In 802.11, there is no predefined limit to the number of retries that a station may attempt; however, in a Cisco AP, the limit is 64 retries before the frame is dropped.
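The exponential growth of the contention window, together with the Cisco retry cap just mentioned, might be sketched like this (a simplified model; the function name is mine):

```python
CWMIN = 15          # first-attempt contention window upper bound (slot times)
CWMAX = 1023        # ceiling reached after repeated failures (slot times)
MAX_RETRIES = 64    # Cisco AP retry limit noted in the text

def cw_upper_bound(attempt):
    """Contention window upper bound for a given attempt (0 = first try).
    The range doubles on each retry: 15, 31, 63, 127, ... capped at 1023."""
    if attempt >= MAX_RETRIES:
        raise ValueError("frame dropped after retry limit")
    return min((CWMIN + 1) * (2 ** attempt) - 1, CWMAX)
```

Note that the cap is reached quickly: by the seventh attempt (attempt number 6), the window has already grown to its 1,023-slot maximum, and it stays there for all remaining retries.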

Putting this all together, Figure 11-3 illustrates the overall DCF decision process for a station that is attempting to transmit a frame onto the wireless medium.


Figure 11-3 The DCF Algorithm Block Diagram

Based on the DCF algorithm described in Figure 11-3, consider an example of how this might apply when multiple stations are attempting to transmit frames at the same time and encounter contention. In the following example illustrated in Figure 11-4, there are five stations associated to the same AP. Stations A, B, C, D, and E are all trying to send frames at approximately the same time.


Figure 11-4 An Example of the DCF Algorithm in Action

To begin, Station A is already transmitting a frame. Stations B, C, and D all show up and want to transmit, but since Station A is in the midst of a transmission, they must all defer until the channel is clear (they know this by listening to the medium). Once Station A finishes its transmission, Stations B, C, and D detect that the medium is clear. First, they all wait the mandatory DIFS period. Once the DIFS period has expired, the remaining three stations generate a random number between 0 and CWmin. In this example, Station B generates the smallest random CW value. Once Station B counts down to zero, it immediately transmits its frame (assuming the medium is still clear).

As soon as Station B begins transmitting, Stations C and D hear the transmission and immediately pause their CW countdown timers. These stations must now defer/wait until Station B is finished before they can resume. Notice that during Station B’s transmission, Station E suddenly shows up and wants to transmit as well, so now there are again three stations contending for access to the medium. Once Station B finishes, Stations C and D resume their countdown. Because Station D has the smallest CW timer, it reaches zero first and begins transmission while Stations C and E defer, and so the algorithm continues until everyone has had the opportunity to transmit their frames.
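The countdown-and-pause behavior in this example can be modeled with a small sketch. This is a toy model of DCF contention: ties, which would be collisions in a real network, are ignored for clarity, and all names are illustrative:

```python
def contention_round(backoffs):
    """One round of simplified DCF contention. Each waiting station holds
    a backoff counter in slot times; the station with the smallest counter
    transmits first, and the others pause, preserving their remaining
    slots for the next round (as Stations C and D do in the example)."""
    winner = min(backoffs, key=backoffs.get)
    elapsed = backoffs[winner]
    remaining = {s: t - elapsed for s, t in backoffs.items() if s != winner}
    return winner, remaining

# Stations B, C, and D after drawing random CW values (example numbers)
winner, remaining = contention_round({"B": 3, "C": 7, "D": 12})
# winner is "B"; C and D resume later with 4 and 9 slots left, respectively
```

The key detail this captures is that paused stations do not redraw their backoff; they resume from where they left off, which keeps contention fair over successive rounds.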

Retrofitting DCF—Enhanced Distributed Channel Access (EDCA)

While DCF does a pretty good job at managing contention and media access, it has an obvious flaw—there is no differentiation of service for higher- or lower-priority applications. In short, there is no QoS in DCF. With the emergence of 802.11e and WMM, a media access algorithm that supports QoS was introduced, called the Enhanced Distributed Channel Access (EDCA) algorithm. While EDCA builds on the foundations of DCF, it introduced five major enhancements that allow Wi-Fi networks to support QoS:

  • The establishment of four Access Categories (ACs), which are analogous to queues in a wired switch and allow differentiated service handling.

  • Instead of a single DIFS for all traffic, 802.11e/WMM introduces different interframe spacing values for each AC, allowing more aggressive media access for high-priority traffic. This spacing timer is called the Arbitrated Interframe Space Number (AIFSN).

  • Different contention window values for each AC (that is, different CWmin and CWmax values for each AC).

  • Transmission Opportunity (TXOP) values for each AC.

  • Call Admission Control (TSpec).

These five key enhancements of EDCA are discussed in the following sections.

Access Categories

EDCA is similar to DCF in many ways, especially in the way that it leverages CSMA/CA for media access. However, it diverges in its use of Access Categories and the wait timers that help deal with contention. 802.11e EDCA and WMM specify four different ACs (from highest priority to lowest):

  • Voice (AC_VO)

  • Video (AC_VI)

  • Best Effort (AC_BE)

  • Background (AC_BK)

Unlike wired switches and routers that may have different numbers of transmit queues depending on the type of interface, link speed, manufacturer, and device type, Wi-Fi devices by convention have only four Access Categories. In fact, to meet WMM compliance, Wi-Fi devices must have these four ACs—no more and no less.

In order to distinguish different classes of service, the 802.11e frame header incorporates a 3-bit field known as the 802.11e User Priority (UP), which offers eight possible values (0–7); however, although eight UP values are possible, only four ACs are available for use. The 802.11e UP field is similar to the 802.1p CoS field used on wired 802.1Q Ethernet trunks, but in this case UP is used only between wireless stations.

Note

The UP value was only introduced as part of 802.11e, meaning there is no UP value in the original 802.11a/b/g standards.

Figure 11-5 illustrates the WMM AC-to-UP mapping on a Wi-Fi interface.


Figure 11-5 Access Categories and Their UP Mappings

At this point, you may be wondering how a frame is marked with the correct UP value and thus put into the correct AC when it is being transmitted. This happens through a mapping of the IP packet’s DSCP to the 802.11e UP value. In other words, as an IP packet enters the controller in the downstream direction (or arrives at an AP in the upstream direction), the controller examines the DSCP value and maps it to a corresponding UP value, which is then written into the 802.11 header. Table 11-2 summarizes the mappings of DSCP to WMM UP to AC.


Table 11-2 Mapping of DSCP to WMM UP Value to Access Category

Traffic Type

DSCP

WMM UP Value

Access Category Mapping

Voice

46 (ef)

6

Voice (AC_VO)

Interactive Video

34 (af41)

5

Video (AC_VI)

Call Signaling

24 (cs3)

3

Best Effort (AC_BE)

Transactional / Interactive Data

18 (af21)

3

Best Effort (AC_BE)

Bulk Data

10 (af11)

2

Background (AC_BK)

Best Effort

0 (be)

0

Best Effort (AC_BE)

Outside of the specific mapping values shown in Table 11-2, other DSCP-to-UP mappings are derived by taking the three most significant bits (MSB) of the DSCP field and using them to generate a UP value. For example, a DSCP value of 40 in binary is 101000. The three most significant bits are 101, which translates to a decimal value of 5. The 802.11e UP value of 5 would then be mapped to the Video AC (AC_VI). The mapping of DSCP to UP values is standardized by RFC 8325, which aligns with the implementation used by Cisco wireless LAN controllers.
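The MSB derivation described above amounts to a 3-bit right shift. Here is a minimal sketch using the standard WMM UP-to-AC mapping, with the caveat that the specific values in Table 11-2 take precedence over this default derivation on real controllers; the function names are mine:

```python
# Standard WMM mapping of 802.11e UP values to Access Categories
UP_TO_AC = {1: "AC_BK", 2: "AC_BK", 0: "AC_BE", 3: "AC_BE",
            4: "AC_VI", 5: "AC_VI", 6: "AC_VO", 7: "AC_VO"}

def dscp_to_up(dscp):
    """Default derivation: the three most significant bits of the
    6-bit DSCP field become the 802.11e UP value."""
    return (dscp >> 3) & 0b111

def dscp_to_ac(dscp):
    """Map a DSCP value to its Access Category via the derived UP."""
    return UP_TO_AC[dscp_to_up(dscp)]

# DSCP 40 (binary 101000) -> UP 5 -> Video (AC_VI), as in the worked example
```

This also makes clear why eight UP values collapse into only four ACs: pairs of UP values share an AC in the WMM model.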

As data is processed by each radio interface, the DSCP value is mapped to the corresponding UP value, which in turn assigns the frame to the correct Access Category. Once the frame is assigned to the appropriate AC, the Wi-Fi radio begins to transmit the frame according to the relative priority of the AC.

Note

One aspect of the 802.11e/WMM AC model shown in Table 11-2 worth noting is that voice is mapped to a UP value of 6, rather than the value of 5 commonly used for 802.1p CoS, showing that the two marking systems do not exactly align.

Another aspect to be aware of is how your end-to-end QoS design will fit with the wireless network. Due to the complexities and different classes of applications used in modern networks, many companies have adopted QoS designs that utilize more than four classes, making it a challenge to fit a campus QoS strategy into the four available 802.11 Access Categories. Today, it is not uncommon to see companies use 8-class or even 12-class QoS models. Each networking device must in some way adopt these different QoS classes and provide differential treatment to each class of traffic.

If your network uses more than four QoS classes, how can you adapt such a model to wireless networks? You might be thinking that four is a very small number of QoS classes, but remember that this limitation is something of an artifact from the 802.11e standard that was introduced in 2005. In the early 2000s, use of 4-class QoS models was fairly common. However, today the demands of modern applications have pushed this much higher. The result is that no matter how many QoS classes are in use in your network, in a wireless network this design is reduced to a 4-class model, as all QoS markings will be mapped to one of the four available wireless access categories.

Figure 11-6 illustrates a simple mapping scheme of a typical 8-class system to a wireless network.


Figure 11-6 Mapping an 8-Class QoS System to the Four Access Categories in a Wireless LAN

As can be seen in Figure 11-6, an enterprise QoS class structure must be mapped to the four available ACs, regardless of how many classes are used in the enterprise model. This is done through the DSCP-to-UP mapping scheme. For example, if your enterprise uses two separate QoS classes for video—one for broadcast video and another for interactive video—these could both be mapped into the Video AC (AC_VI). Since UP values of 4 and 5 are mapped to this AC, it is important to ensure that only DSCP values that map to these 802.11e UP values are chosen for these application classes.

Also, while most wired networks employ a priority queuing system for voice (and sometimes video) and rely on Class-Based Weighted Fair Queuing (CBWFQ) for everything else, in wireless networks there is no such thing as a priority queue. All Wi-Fi ACs are handled by the rules of EDCA.

Arbitrated Interframe Space Number (AIFSN)

One of the key limitations of DCF is that the DIFS value is the same for all traffic types, regardless of how latency-sensitive the data is. To address this, EDCA introduces different interframe spacing periods for each Access Category, called the Arbitrated Interframe Space Number (AIFSN). The intention of assigning different interframe spacing values to each AC is that the higher-priority ACs are assigned a shorter initial wait period compared to the lower-priority ACs, thus giving high-priority traffic a much better probability of being transmitted first and reducing its probability of contention and retries (thus reducing latency and jitter). The AIFSN values defined in EDCA are shown in Table 11-3 (measured in slot times).

Table 11-3 AIFSNs per Access Category

AC Priority Queue

AIFSN Slot Times

Voice (AC_VO)

2

Video (AC_VI)

2

Best Effort (AC_BE)

3

Background (AC_BK)

7

By way of comparison, the DIFS used by DCF uses an interframe space of two slot times (the same value used by AC_VO in EDCA). Clearly, with the voice and video ACs having a much shorter AIFSN, you would expect latency-sensitive data to spend much less time in the contention algorithm and on average be sent before lower-priority traffic. While assigning differential AIFSNs to each AC goes a long way toward improving QoS, 802.11 is still a contention-based medium and collisions can occur. What varying AIFSNs accomplish is an improvement in the probability of higher-priority traffic being serviced first by giving it a statistical advantage over lower-priority traffic; however, AIFSNs do not guarantee that voice and other high-priority traffic will always be sent first, like you would expect from a strict priority queue on a wired switch.
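In EDCA, the per-AC initial wait is computed as AIFS = SIFS + AIFSN × slot time. A quick sketch using the values from Table 11-3 (802.11a/g timing constants assumed; names are mine):

```python
SIFS_US = 16   # Short Interframe Space, in microseconds
SLOT_US = 9    # one slot time, in microseconds
AIFSN = {"AC_VO": 2, "AC_VI": 2, "AC_BE": 3, "AC_BK": 7}  # Table 11-3

def aifs_us(ac):
    """Arbitrated interframe space for an Access Category, in microseconds.
    With AIFSN = 2, AC_VO and AC_VI wait the same 34 us as the legacy DIFS,
    while AC_BK must wait considerably longer before it may even contend."""
    return SIFS_US + AIFSN[ac] * SLOT_US
```

This makes the comparison in the text concrete: voice and video wait 34 microseconds (matching DCF's DIFS), best effort waits 43, and background traffic waits 79 before its contention countdown can even begin.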

Contention Window Enhancements

In legacy DCF environments, once the DIFS expires, each station backs off for a random contention window period. If collisions occur, the CW is doubled up to a value of CWmax. Similar to using a common DIFS for all traffic types, DCF in general gives no preferential CW treatment to higher-priority traffic, meaning all traffic types have the same statistical probability of dealing with contention.

As with the AIFSN, EDCA introduces different CW ranges for each AC, helping the higher-priority ACs compete more aggressively to transmit their frames in the presence of contention. This is particularly important for latency-sensitive traffic such as voice and video, which suffer if they have to wait the longer CWmax intervals. Different CWmin and CWmax values are assigned to each AC, providing a statistical advantage to the higher-priority ACs. The EDCA CW values are listed in Table 11-4.

Table 11-4 EDCA Contention Window Times for Each Access Category

 

CWmin (slot times)

CWmax (slot times)

Legacy DCF CW Values

(for comparison)

15

1,023

Voice (AC_VO)

3

7

Video (AC_VI)

7

15

Best Effort (AC_BE)

15

1,023

Background (AC_BK)

15

1,023

Note from Table 11-4 how AC_VO only backs off between 0 and 3 (CWmin) slot times and a maximum of 0 to 7 (CWmax) slot times. Note also how AC_BE and AC_BK are given the same CW times as DCF. Of course, since the CW is randomly generated, there is still a small probability that the lower-priority ACs could back off for a shorter period than a higher-priority AC; however, statistically speaking, the higher-priority ACs will experience much less contention than the lower-priority ones.
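This statistical (rather than absolute) advantage can be illustrated with a quick simulation: draw CWmin-range backoffs for a voice and a best-effort station many times and count which reaches zero first. This is a toy model; ties are awarded to voice here, which slightly overstates its edge:

```python
import random

CWMIN = {"AC_VO": 3, "AC_BE": 15}  # first-attempt CW bounds from Table 11-4

def contention_winner(rng):
    """Return the AC whose random backoff counter expires first.
    On a tie, min() returns the first key inserted, i.e. AC_VO."""
    draws = {ac: rng.randint(0, cw) for ac, cw in CWMIN.items()}
    return min(draws, key=draws.get)

rng = random.Random(42)
wins = {"AC_VO": 0, "AC_BE": 0}
for _ in range(10_000):
    wins[contention_winner(rng)] += 1
# AC_VO wins the large majority of rounds, but AC_BE still wins some
```

The outcome matches the text: voice wins roughly nine rounds out of ten, yet best effort does occasionally go first, which is exactly why EDCA cannot be treated as a strict priority queue.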

In summary, the AIFSN and CW timers work together to greatly improve the overall handling of high-priority traffic and its ability to successfully transmit, even in the presence of contention.

Transmission Opportunity (TXOP)

A fourth enhancement of 802.11e/WMM is a contention-free access period during which a station may access the medium, called the Transmission Opportunity (TXOP). The TXOP is a set period of time when a wireless station may continue to send as many frames as possible without having to contend with other stations. Winning the EDCA contention algorithm in a busy WLAN is something like winning a contest—but imagine if a transmitting station were to win the right to send only one single frame before having to compete all over again for its next frame. Obviously, this would severely impact latency-sensitive traffic and would be largely ineffective. Conversely, if the transmitting station were given unlimited access and could continually send frames after winning the contention algorithm, it could starve out other stations.

With EDCA’s TXOP enhancement, each AC has a set time limit during which it can continually transmit frames uninterrupted. Once the TXOP limit expires, it must give up access to the medium and contend once again for its next chance to transmit. Table 11-5 summarizes the TXOP values for each AC.

Table 11-5 EDCA Transmission Opportunity (TXOP) Values for Each Access Category

EDCA / WMM AC         TXOP (µs)   TXOP (Units)
Voice (AC_VO)         2,080       65
Video (AC_VI)         4,096       128
Best Effort (AC_BE)   2,528       79
Background (AC_BK)    2,528       79

Notice from Table 11-5 that AC_VO has a shorter TXOP value than any other AC. In fact, AC_VI’s TXOP is double that of AC_VO, despite having a lower priority. The explanation lies in how the TXOP is used. Recall that voice traffic generally consumes only a small amount of bandwidth, requiring much less airtime. For example, as shown in Figure 11-7, only one voice packet is sent every 20 ms. Compare that to a 4K video stream, where each video frame contains 8.3 million pixels, most of which are changing 30 times per second (much faster than voice). The sheer volume of data needed by video requires a larger TXOP to maintain an acceptable performance level. As for AC_BE and AC_BK, giving them a larger TXOP than voice actually helps voice traffic by reducing the amount of time lower-priority traffic spends contending for the medium.

Images

Figure 11-7 Comparing Voice and Video Traffic Volumes for TXOP Use
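The two TXOP columns in Table 11-5 are related by the 802.11e TXOP Limit field, which is expressed in units of 32 microseconds. A quick check (illustrative Python; the function name is an assumption):

```python
# The 802.11e TXOP Limit field is carried in units of 32 microseconds,
# which is how the "TXOP (Units)" column of Table 11-5 converts to µs.
TXOP_UNIT_US = 32

def txop_us(units):
    return units * TXOP_UNIT_US

assert txop_us(65) == 2080    # Voice (AC_VO)
assert txop_us(128) == 4096   # Video (AC_VI)
assert txop_us(79) == 2528    # Best Effort and Background
```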

802.11 Transmission Specification (TSpec)

The last major QoS enhancement introduced by 802.11e/WMM is a method of Call Admission Control (CAC) called Traffic Specification (TSpec). TSpec allows real-time applications, such as voice calls, to be prioritized by reserving bandwidth on the AP before it begins transmission. To use this feature, TSpec must be configured on the AP and optionally on the client stations.

When running TSpec, a client station signals its traffic requirements (data rate, power save mode, frame size, and so on) to the AP using an ADD Traffic Stream (ADDTS) message. If the AP is running TSpec for that AC, it will respond with either an acceptance or a rejection of the request. If the request is accepted, the AP will reserve the requested bandwidth for the client and the call may be made. If the AP is not able to accommodate the request, it will reply that the ADDTS request has been declined, and the client may either try to transmit anyway without reserved bandwidth or attempt to roam to another AP.
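The ADDTS accept/decline decision amounts to simple bandwidth bookkeeping on the AP. The following Python sketch is a conceptual illustration only; the class and method names are invented, and a real AP also weighs frame size, PHY rate, power-save mode, and existing airtime load.

```python
# Conceptual sketch of TSpec admission control on an AP (names are
# illustrative, not any vendor API).
class ApAdmissionControl:
    def __init__(self, capacity_kbps):
        self.capacity_kbps = capacity_kbps  # bandwidth reservable for this AC
        self.reserved_kbps = 0

    def addts_request(self, requested_kbps):
        """Accept and reserve bandwidth if the request fits; otherwise
        decline. A declined client may transmit without a reservation
        or attempt to roam to another AP."""
        if self.reserved_kbps + requested_kbps <= self.capacity_kbps:
            self.reserved_kbps += requested_kbps
            return True
        return False
```

For example, with 200 kbps of reservable voice capacity, two 100-kbps calls would be admitted and a third request declined.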

Figure 11-8 illustrates the function of TSpec.

Images

Figure 11-8 The Function of TSpec

As 802.11 networks have become faster and support wider channels, TSpec has become less critical for real-time applications. In general, the greatest improvements to QoS are seen through the EDCA timers, particularly the AIFSN, CW, and TXOP timers.

Implementing QoS Policies on the Wireless Controller

Before the wireless controller can be configured for QoS, it is important to understand how the mapping of DSCP to UP works on the CAPWAP tunnel, and vice versa. The next section begins with an explanation of how QoS markings are managed, both in the downstream direction (from controller to client) and in the upstream direction (from client to controller). Then you will see how QoS is implemented on the AireOS and IOS-XE controllers.

QoS Mapping and Marking Schemes Between the Client and Controller

In order to maintain proper QoS handling of IP packets, QoS markings must be preserved end-to-end. To accomplish this, a consistent system of mapping DSCP and UP values is required for both the original IP packet and the CAPWAP packet that tunnels the wireless traffic.

In the downstream direction (packets entering the controller from the wired network, which are passed into the CAPWAP tunnel down to the AP and are finally transmitted to the client), there are two QoS remarking steps involved:

Step 1. An Ethernet frame is received over an 802.1q trunk at the controller from its upstream switch. The controller examines the DSCP marking on the IP packet and transcribes this to the DSCP field on the header of the CAPWAP packet. Although the 802.1q trunk will likely carry an 802.1p CoS value, this is generally ignored because DSCP is the preferred method of QoS trust.

Note that in most cases the inner DSCP and the CAPWAP DSCP will be the same; however, some exceptions to this rule exist, which will be discussed later in this chapter.

Step 2. When the AP receives an incoming CAPWAP packet, the DSCP value on the CAPWAP tunnel is examined and mapped to an 802.11e UP value. This is then transmitted over the air from the corresponding AC. The mapping of DSCP to UP in this case is based on the mapping table shown in Table 11-2. Figure 11-9 illustrates the downstream QoS remarking and mapping process.

Images

Figure 11-9 QoS Mapping and Remarking in the Downstream Direction

An important subtlety in this scheme is where the final downstream mapping of DSCP to UP is derived from. At the AP, the CAPWAP’s DSCP value is used rather than the inner packet. In most cases, this is not an issue because the two DSCP values are exactly the same. However, there are important cases where the DSCP values on the CAPWAP header and the inner IP packet are in fact different, meaning the UP value that is selected will give a different QoS handling than what is marked on the inner packet.
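The downstream behavior at the AP can be summarized in a short sketch. Only the DSCP-to-UP rows this chapter explicitly confirms are shown (46 to 6, 34 to 5, and 0 to 0); the controller's full mapping table (Table 11-2) covers the remaining values, and the function name here is illustrative.

```python
# Downstream at the AP: the CAPWAP header's DSCP (not the inner packet's)
# selects the 802.11e UP value. Partial mapping only; Table 11-2 is complete.
DSCP_TO_UP = {
    46: 6,   # EF (voice)  -> AC_VO
    34: 5,   # AF41        -> AC_VI
    0:  0,   # best effort -> AC_BE
}

def capwap_dscp_to_up(capwap_dscp):
    # Unlisted values fall back to best effort in this sketch.
    return DSCP_TO_UP.get(capwap_dscp, 0)
```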

In the upstream direction, the process is much the same but in reverse. The two steps in the upstream direction are as follows:

Step 1. A client transmits a frame with the UP field marked on the 802.11 header, as well as the DSCP value on the IP packet header. When the frame arrives at the AP, it has the choice (defined by the administrator) to either inherently trust the DSCP value of the inner packet and map this to the CAPWAP header or to map the Layer 2 11e UP value to a DSCP value based on Table 11-2 and then map this to the CAPWAP header. The inner DSCP marking is preserved and does not change.

Step 2. After the CAPWAP packet is decapsulated at the controller, the original IP packet is sent to the upstream switch over an 802.1q trunk. Again, even if an 802.1p CoS value is present on the trunk, it is ignored in favor of the DSCP value. Figure 11-10 illustrates how the QoS markings are handled in the upstream direction.

Images

Figure 11-10 QoS Mapping and Remarking in the Upstream Direction

As discussed previously, the administrator has the choice to either trust DSCP or UP at the access point, but the recommended approach is to trust DSCP in the upstream direction.

Note

If the client does not support WMM, there will be no 802.11e UP value marked into the frame (since 802.11e UP is only supported by WMM). In this case, the AP is forced to apply a default QoS setting to the traffic. This is described in more detail later in this chapter.

From the underlying IP transport network’s perspective, the CAPWAP tunnel is simply a flow of IP packets that need to be handled with the appropriate level of QoS based on the DSCP marking in the IP header.

Handling QoS Marking in the WLAN

The AireOS controller implements QoS in four profiles, known as “precious metal” profiles. These profiles are something of a historical artifact in AireOS, with a loose mapping to the four WMM Access Categories; however, today there is no real correlation between the QoS profiles and the WMM ACs. The QoS profiles provide a method to tweak the QoS handling in a templated way and then have this mapped to a WLAN.

The fundamental purpose of the QoS profiles is to set a maximum DSCP value (a ceiling) on the CAPWAP tunnel and, in turn, on the downstream 11e UP value. As discussed earlier, the inner IP packet’s DSCP value is normally mapped directly to the DSCP value of the CAPWAP header; however, the precious metal profile can override this mapping function by capping the DSCP on the CAPWAP header at a maximum value. For example, if the Platinum profile is implemented but a packet with a DSCP value of 56 enters the controller in the downstream direction, the CAPWAP DSCP value will be capped at 46 while the inner IP packet’s DSCP value will remain at 56. Table 11-6 summarizes the maximum DSCP value of each QoS profile.

Table 11-6 The Four QoS Profiles in AireOS Controllers

QoS Profile Name   Maximum DSCP Ceiling   Use Case
Platinum           46                     Most commonly used. Recommended for most enterprise deployments.
Gold               34                     Limited use.
Silver             0                      Hotspots/guest users.
Bronze             10                     Limited use.

In past literature, it was recommended to implement a separate WLAN for each class of service used in the wireless network and then apply a different QoS profile to each of these WLANs. For example, a common recommendation was to deploy wireless IP phones on a dedicated voice SSID and apply the Platinum profile. Clearly, this is no longer a practical way to implement a wireless network: mobile phones, laptops, and nearly everything else are Wi-Fi capable, and most of these devices can be used for voice and video communications. Because these device types must coexist on a single WLAN, dedicating a separate WLAN to voice is no longer practical. In light of this, it is generally recommended to use the Platinum profile for all enterprise WLANs, meaning the other profiles are rarely used. In cases where you may want to limit the CAPWAP DSCP value, such as on a guest or hotspot network, the Silver or Bronze profile may be used.

So how do these QoS profiles actually work? In the downstream direction, QoS is handled by mapping the incoming packet’s DSCP value to the CAPWAP DSCP (as illustrated in Figure 11-9). At the AP, the DSCP is mapped to the corresponding 802.11e UP value and the frame is placed in the correct egress AC. As packets enter the controller, the DSCP markings are compared with the QoS profile applied to the WLAN. If the DSCP value exceeds the QoS profile’s maximum allowable DSCP value, it will be downgraded to the maximum allowed value for that profile. For example, if a WLAN has the Gold profile implemented, the maximum DSCP allowed on the CAPWAP header is 34 (af41). If a packet enters the controller from the wired side with a DSCP of 46 (ef), the controller will downgrade the DSCP to 34 on the CAPWAP tunnel, and the packet will now be treated as a video packet (af41 is generally used for video packets in an IP network according to RFC 4594).

It is important to note that the mappings in the AP and the controller never impact the inner IP packet’s DSCP value—the profile only limits the DSCP value on the CAPWAP header. If a packet enters the controller from the wired side with a DSCP lower than the default maximum of the QoS profile on that WLAN, then the original packet’s DSCP value is simply transcribed to the CAPWAP header and is in turn used to map to the 802.11e UP value at the AP.
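The ceiling behavior of the precious metal profiles reduces to a min() operation on the CAPWAP header, while the inner packet is left untouched. A minimal sketch (the function name is an assumption):

```python
# Each profile caps only the CAPWAP header's DSCP; the inner IP packet's
# marking is never rewritten by the profile.
PROFILE_CEILING = {"Platinum": 46, "Gold": 34, "Silver": 0, "Bronze": 10}

def downstream_capwap_dscp(inner_dscp, profile):
    # The CAPWAP DSCP is the inner value, capped at the profile's ceiling.
    return min(inner_dscp, PROFILE_CEILING[profile])
```

With the Gold profile, an inner DSCP of 46 yields a CAPWAP DSCP of 34, while an inner DSCP of 24 passes through unchanged.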

In the upstream direction, the AP maps the QoS fields in a very similar way to how the controller does it, but in reverse. Since the 802.11e UP values are set by the client, the AP can either (1) compare the incoming 11e UP value with the maximum allowed UP value for the QoS profile on that WLAN and then map it to the corresponding DSCP on the CAPWAP header or (2) simply copy the inner packet’s DSCP value directly to the CAPWAP header (the preferred model). In either case, the DSCP value written to the CAPWAP header will never exceed the maximum of the QoS profile.

By controlling the CAPWAP DSCP, the QoS profile also indirectly sets a maximum allowable 802.11e UP value for each WLAN. For example, if the profile is set to Gold, then by enforcing a maximum DSCP value of 34, a maximum UP value of 5 results at the AP. In the upstream direction, if a frame with a UP value of 6 is received by the AP, it will map the UP value to a DSCP of 34 on the upstream CAPWAP packet, essentially downgrading the QoS handling of that packet across the IP transport network. However, if the AP receives any lower UP values (1–5), these will simply be mapped to the corresponding DSCP values shown in Table 11-2. Although this doesn’t really constitute a trust model, it does allow the AP and controller to establish a ceiling on the QoS levels that are accepted per WLAN.

Figure 11-11 illustrates the example of a controller where the Gold profile has been applied to the WLAN. Note what happens when voice packets enter the controller or AP marked with DSCP 46 (EF), either in the upstream or downstream direction.

Images

Figure 11-11 The Effect of Applying the Gold Profile to a WLAN

Consider another example where the QoS policy can be used to mark down traffic to best effort. This might be the case with a guest wireless network where all traffic should be remarked to DSCP 0. Although you cannot control the DSCP and 802.11e UP markings that originate from the client device, the AP can use the Silver QoS profile, which has a maximum DSCP value of 0, to enforce a ceiling of 0 on the CAPWAP header as well as on the downstream UP value used on the WLAN.

It is important to note that the mappings shown in Table 11-2 are not customizable and the QoS profile ceilings are hardcoded. This means that certain situations may arise where the AP maps the UP value to a DSCP that is not aligned with the QoS policy in the campus network, thus affecting the handling of CAPWAP packets as they are transported across the IP backbone.

Implementing QoS on the AireOS Controller

To implement the QoS policies, navigate to Wireless > QoS > QoS Profiles. Figure 11-12 illustrates how the four “precious metal” QoS profiles are presented in the AireOS controller.

Images
Images

Figure 11-12 The QoS Profile Configuration Menu in AireOS

As noted in Figure 11-12, the description of each profile can be ignored, as this is nothing more than a historical artifact. The best-practice recommendation is to use the Platinum profile for enterprise deployments. By clicking each profile, you can configure certain aspects of the profile. This is shown in Figure 11-13. In this menu, the following QoS capabilities can be configured:

Images

Figure 11-13 Configuring the QoS Profile

  • Per-User Bandwidth Contracts: A bandwidth rate limiter/policer that is applied to each user (useful for hotspot or guest networks).

  • Per-SSID Bandwidth Contracts: A bandwidth rate limiter for the whole SSID (this is rarely used).

  • WLAN QoS Parameters:

    • Maximum Priority: Sets the upper limit of the DSCP value on the CAPWAP header:

      Voice profile max DSCP = 46

      Video profile max DSCP = 34

      Best Effort profile max DSCP = 0

      Background profile max DSCP = 10

    • Unicast Default Priority: Sets the default DSCP value that will be used on the CAPWAP header if non-WMM clients are present (802.11a/b/g). Because non-WMM clients don’t mark a UP value, this default is used instead. It is recommended to set this to besteffort (DSCP 0).

    • Multicast Default Priority: The default DSCP value used for multicast packets.

  • Wired QoS Protocol: Defines the default QoS values to be used on the 802.1q trunk connecting the controller to the upstream L2 switch. This feature is rarely used as almost all L2 switches use the DSCP trust method, making trust of the 802.1p QoS field unnecessary.

Note

Although design recommendations around per-user and per-SSID bandwidth controls vary depending on the network, these controls should be considered in places where there is a high density of users (such as university campus networks) or where the AP’s wired connection is bandwidth limited, such as at a remote site.

Once the QoS profile has been configured, the final step is to apply the profile to the WLAN. Figure 11-14 illustrates how the Platinum QoS policy is applied to wlan-enterprise.

Images

Figure 11-14 Applying the Platinum Profile to a WLAN

Implementing QoS on the IOS-XE Controller

The IOS-XE controller offers similar QoS functionality to the AireOS controller, but it also inherits many of the well-known native QoS capabilities of IOS and IOS-XE. For example, the IOS-XE controller supports a multilevel hierarchical QoS model, starting at the physical port level and continuing through the AP radio and SSID levels down to the individual client (see Figure 11-15).

Images

Figure 11-15 Hierarchical QoS Policies in the IOS-XE Controller

To configure a QoS policy in the IOS-XE controller, follow these steps:

Step 1. Navigate to Configuration > Services > QoS and click Add. Here you will be presented with a menu similar to Figure 11-16. In this menu there is an option to add class-maps that will define the QoS behavior for the policy. The underlying CLI syntax follows the same Modular QoS CLI (MQC) used in other IOS and IOS-XE devices that use class-maps and policy maps, including a default class, which is used when the other class-maps are not matched (shown in Figure 11-16).

Images

Figure 11-16 Configuration of a QoS Policy in the IOS-XE Controller

By clicking Add to create a class-map, you can define matching criteria that trigger the behavior of the policy and apply an action. For example, the class-map can be configured to match on either AVC (Application Visibility and Control) or User Defined criteria. If User Defined is selected, a matching criterion, such as an ACL or the incoming DSCP, can be selected. The action taken can be either to remark or to drop the packet.

You may also have noticed that the QoS configuration in Figure 11-17 allows for the configuration of AutoQoS. The AutoQoS feature is essentially a macro that generates the underlying configuration for different types of QoS profiles. The AutoQoS feature has four template options, as follows:

Images

Figure 11-17 Defining the QoS Policy

  • Enterprise (adds classes with AVC criteria for common enterprise applications)

  • Fastlane (implements the latest EDCA parameters)

  • Guest (sets DSCP to a default value of 0 for all packets)

  • Voice (classifies and sets DSCP values strictly for voice and video)

Each template implements QoS slightly differently, supporting best practices for DSCP-to-UP mappings, DSCP trust, and prioritization of common business applications through the use of AVC.

Note

AireOS supports an AutoQoS macro called Fastlane. This will be discussed later in the chapter.

Step 2. Once the policy has been created, add the new QoS policy to the correct policy profile and select the direction of traffic you would like it to be applied in (ingress or egress or both), as shown in Figure 11-18.

Images

Figure 11-18 Selecting a Policy Profile for the QoS Policy That Was Just Created

Step 3. In the final step, the policy needs to be added to the correct WLANs and APs. Figure 11-19 shows an example of adding the default-policy-profile to the correct WLAN.

Images

Figure 11-19 Adding the Policy to the Correct WLAN

In addition to creating QoS policies as described, it is also possible to implement a DSCP ceiling on the CAPWAP header, similar to using the four precious metal profiles in AireOS (Platinum/Gold/Silver/Bronze), as shown in Figure 11-20.

Images

Figure 11-20 Customizing the QoS Policy with a DSCP Ceiling

Implementing QoS for Wireless Clients

If QoS is to be compared to a chain, which is only as strong as its weakest link, then wireless QoS from the client to the AP can easily be considered one of the weakest links in the chain. All wireless stations, from the AP to the client, must obey the same rules of EDCA. Each station must use the same access category schemes and follow the same AIFSN, CWmin, CWmax, and TXOP timers for a given WLAN. Media access only works if everybody obeys the same rules for contention. To ensure this happens, the EDCA parameters that clients will use are announced by the AP in wireless management frames. When a client station receives the AP’s instructions for the EDCA parameters, it must use them. In fact, if the client ignores the EDCA instruction set from the AP, the client cannot be considered Wi-Fi compliant. The following sections examine implementation considerations for the wireless client.

Implementing Client QoS Marking Schemes

For QoS in the upstream direction (from the client to the AP) to be successful, not only must the EDCA parameters be correctly observed, but the client must also ensure that application traffic is properly classified and marked and that the DSCP is correctly mapped to the 802.11e UP value so each frame is sent from the right access category. If a client is not set up to correctly mark the DSCP of its packets, or if the mapping of DSCP to UP is not correct, QoS in the upstream direction will not work as expected, or it may not work at all.

In most corporate wireless environments the DSCP marking of packets from an application is orchestrated centrally to prevent a client from incorrectly (or maliciously) misconfiguring the DSCP values and disrupting QoS functions on the network. For example, imagine the impact to a network if a client marked all BitTorrent traffic as high priority, such as DSCP ef (46). Not only would this compete with other voice traffic, it could potentially cause massive disruptions to backbone switches and routers.

In a Microsoft environment, DSCP values are generally controlled by central Group Policy. By default, Windows policy has a default DSCP value of 0; however, on a per-application basis, this can be overridden as desired by Group Policy. For example, a Group Policy Object can be defined to mark all voice traffic originating from webex.exe as DSCP 46. Figure 11-21 illustrates a centralized Group Policy scheme to mark MS Lync traffic.

Images

Figure 11-21 Using Microsoft Group Policy to Mark DSCP Values on Client Traffic

For other operating systems, such as macOS, iOS, and Android, the DSCP markings are natively set by the application and are implicitly trusted by the operating system (as opposed to remarking all traffic to DSCP 0). While this is user-friendly, it also carries the risk that certain applications may mark traffic in ways that do not comply with the QoS design or class structure used in an organization. For example, if an application were to transmit a high volume of streaming media packets marked with the highest level of QoS, it could interfere with other applications that use the same DSCP values and are trusted by the organization.

For these situations, an approach that is similar to Microsoft Group Policy can be used to administer the applications on end users’ devices. Two such examples are Apple Configurator and the Meraki MDM (Mobile Device Manager). A tool like Meraki MDM allows an administrator to control not only which applications are used on a device but also what DSCP marking those devices may use. Either the MDM can trust the DSCP values natively used by these applications (since the applications are already assumed to be trusted by corporate IT) or it can remark the DSCP to some other value. All other applications will have their DSCP values remarked to zero. Figure 11-22 illustrates the QoS configuration menu in the Meraki MDM for a collection of applications.

Images

Figure 11-22 Configuring Client-Side QoS with the Meraki MDM

Mapping DSCP to UP in the Client

Marking the correct DSCP value on a client is the one aspect that you can control; however, wireless QoS is ultimately decided by which WMM access category the frame is transmitted from, meaning the 802.11e UP value marked into the frame must accurately reflect the DSCP. Generally speaking, you would expect the DSCP-to-UP mapping to be consistent across all clients and operating systems, as it is defined by RFC 8325. However, this is not always the case. One example is Microsoft Windows, where the mapping of DSCP to UP does not follow the mapping standard (unlike Apple and Android).

In the case of Windows, the UP value is derived from the three most significant bits (MSB) of the DSCP value. For example, if DSCP 46 (ef) is marked into a voice packet, the binary value of the DSCP is 101110. Taking the three MSB of this DSCP value results in a binary value of 101. In decimal form this is 5, which becomes the UP value on the 802.11 frame. A UP value of 5 translates to AC_VI rather than AC_VO, meaning the voice packet will need to contend for access as if it were video, not voice. This mapping of DSCP to UP is hardcoded into Windows and cannot be changed. Figure 11-23 illustrates a wireless sniffer trace of this effect.

Images

Figure 11-23 Comparing the DSCP and UP Markings in a Windows OS
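The Windows behavior just described is a simple bit operation: taking the three most significant bits of a 6-bit DSCP value is a right shift by 3. A quick illustration (the function name is an assumption):

```python
# Windows derives the 802.11e UP value from the three most significant
# bits of the DSCP field, i.e., a right shift by three bits.
def windows_dscp_to_up(dscp):
    return dscp >> 3

assert windows_dscp_to_up(46) == 5  # EF 101110 -> 101 -> UP 5 (AC_VI, not AC_VO)
```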

When the frame arrives at the AP, the inner packet should be prioritized as voice, not video. If the controller were to trust the incoming UP value and translate this to a corresponding DSCP value on the CAPWAP header, it would translate UP 5 → DSCP 34, which is incorrect for voice. To overcome this, it is important to configure the AP to trust DSCP in the upstream direction rather than UP. This configuration option is illustrated in Figure 11-24 on an AireOS controller.

Images

Figure 11-24 Trusting Upstream DSCP from the Client

Implementing Application Visibility and Control

Application Visibility and Control (AVC) is a technology that involves multiple components, including Network-Based Application Recognition Version 2 (NBAR2), Flexible NetFlow (FNF), and management tools that provide powerful application visibility and control capabilities based on stateful deep packet inspection (DPI).

With the Cisco AVC solution available on wireless controllers from AireOS 7.4 onward, it is possible to identify applications inside the packet and to have a measure of control over them. Types of control include the following:

  • Marking of DSCP

  • Rate-limiting/policing traffic in the upstream or downstream direction

  • Dropping certain traffic types

Using the AVC engine on the controller, it is possible to identify over a thousand applications. The number of applications that can be identified is constantly growing as new signatures become available, and these can be added to or updated on the controller independently of an operating system upgrade. Importantly, unlike the WLAN QoS configuration discussed previously, AVC has the ability to remark the original packet’s DSCP value. Figure 11-25 illustrates the functionality of AVC in Cisco wireless controllers.

Images
Images

Figure 11-25 The Function of AVC in a Cisco Wireless Controller

With DSCP remarking capabilities, better QoS handling in the downstream direction can be achieved. Since AVC operates on the controller in centralized mode, the effect on wireless QoS is only in the downstream direction. Note that in FlexConnect mode, AVC operates on the AP, whereas in centralized mode it operates only on the controller. This also means that for upstream traffic, AVC takes effect only as the traffic leaves the controller toward the wired network; from the AP to the controller over the CAPWAP tunnel, AVC has no effect on upstream traffic until it reaches the controller.

The following summarizes the interaction of AVC and QoS in the controller in both the upstream and downstream directions:

Upstream direction (from wireless client to wired network):

  1. A packet is sent from a wireless client.

  2. The DSCP is mapped to an 802.11e UP value and is transmitted to the AP.

  3. The AP trusts the DSCP value on the incoming packet and maps this to the CAPWAP tunnel but does not touch the inner DSCP marking.

  4. The controller receives the incoming packet via CAPWAP.

  5. Using AVC, the controller examines the inner packet at the application layer and applies an AVC policy (such as remarking DSCP to the configured value).

  6. The packet is sent to the wired network with the new DSCP value.

Downstream direction (from wired network to wireless client):

  1. A packet is sent to the controller from the wired network.

  2. Using AVC, the controller examines the IP packet and applies an AVC policy (such as rewriting the DSCP into the original packet header).

  3. The controller compares this new DSCP value to the WLAN QoS profile and uses the lower of the two to write the DSCP value into the CAPWAP header (in other words, the profile’s ceiling can cap the AVC-marked value on the CAPWAP header, but it can never raise it).

  4. When the AP receives the packet, it examines the CAPWAP header and maps the DSCP value to an 802.11e UP value as per RFC 8325.

  5. The packet is transmitted to the wireless client.
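The downstream steps above can be condensed into a short sketch (names are assumed; this mirrors step 3, where the lower of the AVC-marked DSCP and the profile ceiling is written to the CAPWAP header):

```python
# Downstream AVC interaction: AVC rewrites the inner packet's DSCP, and the
# CAPWAP header gets the lower of that value and the WLAN profile's ceiling.
PROFILE_CEILING = {"Platinum": 46, "Gold": 34, "Silver": 0, "Bronze": 10}

def avc_downstream(avc_dscp, profile):
    inner_dscp = avc_dscp  # AVC remarked the original IP header
    capwap_dscp = min(avc_dscp, PROFILE_CEILING[profile])
    return inner_dscp, capwap_dscp
```

For example, an AVC rule that marks a flow to DSCP 46 on a Gold WLAN leaves the inner packet at 46 but writes 34 to the CAPWAP header.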

From a visibility and monitoring perspective, the controller can collect and display various wireless performance metrics, such as bandwidth usage for individual clients and applications. This reporting information can be displayed locally on the controller or exported through NetFlow to a management tool.

Through this technology, the controller has the ability to identify applications such as Oracle, SAP, Citrix, BitTorrent, MS Exchange, Skype, Facebook, and many others. Figure 11-26 illustrates the visibility offered by AVC (an IOS-XE controller is shown here).

Images

Figure 11-26 AVC Visibility Shown on an IOS-XE Controller

Implementing AVC on a Cisco Wireless Controller

To configure AVC in an AireOS controller, the following steps must be followed:

Step 1. Create an AVC policy.

Step 2. Create rules for the policy.

Step 3. Attach the policy to the WLAN.

The following provides these steps in detail:

Step 1. To create the AVC policy, navigate to Wireless > Application Visibility and Control. Under this menu, select AVC Profiles and create a new profile.

Step 2. Once the profile is created, rules must be added. Profiles are composed of a series of rules that are used to first identify an application and then take an action. To make things simpler, the applications are collected into logical groupings. Figure 11-27 illustrates the voice-and-video application group.

Images

Figure 11-27 AVC Application Groupings in a New Rule

Once you have identified the correct application group, the next step is to identify the specific application you want to create a rule for. Figure 11-28 illustrates how an application can be identified.

Images

Figure 11-28 Selecting the AVC Application Identifier for a Rule

Once you select the application, a rule may be configured. Figure 11-29 illustrates how this may be done. AVC can either remark the DSCP, rate-limit the application through the controller, or drop the packets. Note in this example that the Mark option allows five different marking options:

Images

Figure 11-29 Creating an AVC Policy Rule

  • Platinum (packets are remarked to DSCP 46)

  • Gold (packets are remarked to DSCP 34)

  • Silver (packets are remarked to DSCP 0)

  • Bronze (packets are remarked to DSCP 10)

  • Custom (you can remark to whatever DSCP you choose)

If packets are remarked with one of the precious metal profiles, the DSCP values will map to the 802.11e UP values discussed previously. If you choose a custom DSCP, this value is marked on the downstream CAPWAP tunnel and is in turn mapped to the appropriate 802.11e UP value according to RFC 8325. Also, you can select the direction (upstream or downstream) in which the rule is applied.

Figure 11-30 illustrates a series of rules that have been created on the controller. Note from this example that multiple rules have been created with different controls: some remark, another drops traffic, and another limits rate.


Figure 11-30 Example of Different AVC Rules Created on the Controller

As the list of rules grows, you might wonder what happens to applications that are not covered by any rule in the list. To handle this, a default rule can be added to remark any unidentified application to a set value. In Figure 11-30, the class-default application at the bottom of the list catches all remaining traffic and remarks it to DSCP 0.
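The first-match behavior of such a rule list, with class-default acting as a catch-all, can be sketched as follows. The application names, actions, and rate values here are illustrative placeholders, not controller internals:

```python
# Hypothetical sketch of first-match AVC rule evaluation. Each rule pairs
# an application with an action; class-default matches anything that no
# earlier rule matched, mirroring the profile shown in Figure 11-30.
RULES = [
    ("webex-media",   ("mark", 34)),
    ("bittorrent",    ("drop", None)),
    ("youtube",       ("rate-limit", 512)),  # kbps, illustrative value
    ("class-default", ("mark", 0)),          # catch-all: remark to DSCP 0
]

def apply_rules(application: str):
    """Return the action of the first rule that matches the application;
    class-default matches unconditionally."""
    for app, action in RULES:
        if app == application or app == "class-default":
            return action
    return None
```

With this ordering, `apply_rules("webex-media")` returns the remark action, while an application with no explicit rule falls through to class-default and is remarked to DSCP 0.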

Step 3. The final step is to add the policy to the correct WLAN. Two things are required: first, enable AVC for the WLAN, and then attach the correct AVC profile. Figure 11-31 illustrates how to add the AVC profile to your WLAN.


Figure 11-31 Attaching the AVC Profile to a WLAN

The prior steps demonstrated how AVC can be implemented on an AireOS controller, but it can also be implemented in a similar way on an IOS-XE controller, with a few improvements over AireOS. One example is that IOS-XE allows you to create custom AVC inspection policies. For instance, you can create an AVC rule that matches on a specific value in the HTTP header, such as a URL. The customized policy is defined using regular expressions, as shown in Example 11-1.

Example 11-1 Creating a Custom AVC Rule in IOS-XE

C9800(config)# ip nbar custom my_http http url "latest/whatsnew.html"
C9800(config)# ip nbar custom my_http http host "www.anydomain.com"
C9800(config)# ip nbar custom my_http http url "latest/whatsnew" host "www.anydomain.com"
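Conceptually, the match such a custom rule performs is a pattern test against the HTTP host and URL fields. The short Python sketch below is my illustration of that matching logic only; real NBAR performs this classification inside the packet inspection engine:

```python
import re

# Illustrative: classify an HTTP flow as "my_http" when both the host
# and the URL match the patterns from the custom NBAR rule above.
URL_RE = re.compile(r"latest/whatsnew")
HOST_RE = re.compile(r"www\.anydomain\.com")

def matches_my_http(host: str, url: str) -> bool:
    """Return True if both the host and URL patterns match."""
    return bool(HOST_RE.search(host)) and bool(URL_RE.search(url))
```

A request to www.anydomain.com for /latest/whatsnew.html would match, while the same URL on another host would not.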

Implementing AutoQoS with Fastlane

As this chapter has shown, implementing QoS in a wireless LAN has many aspects. To make the task of configuring QoS easier, AutoQoS macros are supported in both AireOS and IOS-XE. In AireOS, the AutoQoS macro is called Fastlane. Fastlane configures all the QoS features of the controller according to best practices in a single click. This includes configuration of the Platinum profile, the DSCP-to-UP mappings, EDCA profiles, CAC features, as well as a generic best-practices AVC profile, called AUTOQOS-AVC-PROFILE, that covers a sampling of common business applications.

Fastlane is enabled under the WLAN > QoS configuration menu, as shown in Figure 11-32.


Figure 11-32 Enabling Fastlane on an AireOS Controller

As soon as Fastlane is enabled, the AutoQoS macro will execute and generate the QoS configuration. Example 11-2 shows the AVC profile that is created by Fastlane.


Example 11-2 The AVC Profile Created by the Fastlane Command

(Cisco Controller) >show run-config commands

. . . snip . . .

avc profile AUTOQOS-AVC-PROFILE create
avc profile AUTOQOS-AVC-PROFILE rule add application cisco-phone-audio mark 46
avc profile AUTOQOS-AVC-PROFILE rule add application cisco-jabber-audio mark 46
avc profile AUTOQOS-AVC-PROFILE rule add application ms-lync-audio mark 46
avc profile AUTOQOS-AVC-PROFILE rule add application citrix-audio mark 46
avc profile AUTOQOS-AVC-PROFILE rule add application cisco-phone-video mark 34
avc profile AUTOQOS-AVC-PROFILE rule add application cisco-jabber-video mark 34
avc profile AUTOQOS-AVC-PROFILE rule add application ms-lync-video mark 34
avc profile AUTOQOS-AVC-PROFILE rule add application webex-media mark 34
avc profile AUTOQOS-AVC-PROFILE rule add application citrix mark 26
avc profile AUTOQOS-AVC-PROFILE rule add application pcoip mark 26
avc profile AUTOQOS-AVC-PROFILE rule add application vnc mark 26
avc profile AUTOQOS-AVC-PROFILE rule add application vnc-http mark 26
avc profile AUTOQOS-AVC-PROFILE rule add application skinny mark 24
avc profile AUTOQOS-AVC-PROFILE rule add application cisco-jabber-control mark 24
avc profile AUTOQOS-AVC-PROFILE rule add application sip mark 24
avc profile AUTOQOS-AVC-PROFILE rule add application sip-tls mark 24
avc profile AUTOQOS-AVC-PROFILE rule add application cisco-jabber-im mark 18
avc profile AUTOQOS-AVC-PROFILE rule add application ms-office-web-apps mark 18
avc profile AUTOQOS-AVC-PROFILE rule add application salesforce mark 18
avc profile AUTOQOS-AVC-PROFILE rule add application sap mark 18
avc profile AUTOQOS-AVC-PROFILE rule add application dhcp mark 16
avc profile AUTOQOS-AVC-PROFILE rule add application dns mark 16
avc profile AUTOQOS-AVC-PROFILE rule add application ntp mark 16
avc profile AUTOQOS-AVC-PROFILE rule add application snmp mark 16
avc profile AUTOQOS-AVC-PROFILE rule add application ftp mark 10
avc profile AUTOQOS-AVC-PROFILE rule add application ftp-data mark 10
avc profile AUTOQOS-AVC-PROFILE rule add application ftps-data mark 10
avc profile AUTOQOS-AVC-PROFILE rule add application cifs mark 10
avc profile AUTOQOS-AVC-PROFILE rule add application netflix mark 8
avc profile AUTOQOS-AVC-PROFILE rule add application youtube mark 8
avc profile AUTOQOS-AVC-PROFILE rule add application skype mark 8
avc profile AUTOQOS-AVC-PROFILE rule add application bittorrent mark 8
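A quick way to sanity-check output like Example 11-2 is to parse the rule lines into (application, DSCP) pairs. The small helper below is my own sketch and assumes the exact line format shown in the example:

```python
import re

# Parse AireOS "avc profile <name> rule add application <app> mark <dscp>"
# lines into (application, dscp) tuples, assuming the format shown in
# Example 11-2.
RULE_RE = re.compile(
    r"avc profile \S+ rule add application (\S+) mark\s+(\d+)"
)

def parse_avc_rules(config_text: str):
    """Return a list of (application, dscp) pairs found in the text."""
    return [(m.group(1), int(m.group(2)))
            for m in RULE_RE.finditer(config_text)]
```

Running this over the example output makes it easy to confirm, for instance, that all four audio applications are marked EF (DSCP 46).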

Summary

This chapter focused on implementing QoS in a wireless network. In this chapter you have learned the following:

  • The fundamentals of how DCF was developed to manage shared media access in Wi-Fi systems

  • Improvements made through EDCA to support QoS through the use of access categories and the various parameters that are used in each access category to offer differentiated services

  • The upstream and downstream marking schemes used in a centralized wireless implementation

  • How QoS is implemented in both the AireOS and IOS-XE wireless controllers

  • How QoS is controlled on wireless clients and the implications for upstream QoS handling over the wireless LAN

  • The implementation of AVC in wireless networks, including Fastlane

References

For additional information, refer to these resources:

Enterprise Wireless Design Guide 8.5—QoS Chapter: https://www.cisco.com/c/en/us/td/docs/wireless/controller/8-5/Enterprise-Mobility-8-5-Design-Guide/Enterprise_Mobility_8-5_Deployment_Guide/ch5_QoS.html

RFC 8325: Mapping Diffserv to IEEE 802.11: https://tools.ietf.org/html/rfc8325

QoS Design and Deployment for Wireless LANs: https://www.ciscolive.com/c/dam/r/ciscolive/us/docs/2018/pdf/BRKRST-2515.pdf

AutoQoS on the Catalyst 9800: https://www.cisco.com/c/en/us/td/docs/wireless/controller/9800/config-guide/b_wl_16_10_cg/wireless-auto-qos.html

Wireless QoS on the Catalyst 9800: https://www.cisco.com/c/en/us/td/docs/wireless/controller/9800/config-guide/b_wl_16_10_cg/quality-of-service.html

802.11 QoS Tutorial: http://www.ieee802.org/1/files/public/docs2008/avb-gs-802-11-qos-tutorial-1108.pdf

Wireless QoS: Five-Part Series: http://www.revolutionwifi.net/revolutionwifi/2010/07/wireless-qos-part-1-background_7048.html

Exam Preparation Tasks

As mentioned in the section “How to Use This Book” in the Introduction, you have a few choices for exam preparation: the exercises here, Chapter 18, “Final Preparation,” and the exam simulation questions in the Pearson Test Prep Software Online.

Review All Key Topics

Review the most important topics in this chapter, noted with the Key Topic icon in the outer margin of the page. Table 11-7 lists these key topics and the page numbers on which each is found.

Table 11-7 Key Topics for Chapter 11

Key Topic Element

Description

Page Number

Figure 11-5

Access Categories and their UP mappings

251

Table 11-2

Mapping of DSCP to WMM UP value to Access Category

251

Table 11-6

The four QoS profiles in AireOS controllers

259

Figure 11-12

The QoS profile configuration menu in AireOS

261

Figure 11-25

The function of AVC in a Cisco wireless controller

270

Example 11-2

The AVC profile created by the Fastlane command

276

Define Key Terms

Define the following key terms from this chapter and check your answers in the glossary:

Carrier Sense Multiple Access / Collision Avoidance (CSMA/CA)

Distributed Coordination Function (DCF)

contention

Enhanced Distributed Channel Access (EDCA)

Access Category (AC)

Arbitration Interframe Space Number (AIFSN)

Contention Window (CW)

Transmission Opportunity (TXOP)

Transmission Specification (TSpec)

User Priority (UP)

Wireless Multimedia (WMM)

Differentiated Services Code Point (DSCP)

Application Visibility and Control (AVC)

Fastlane