802.11ax
Although not called out directly in the first version of the CCNP exam blueprint, 802.11ax is at the heart of Wi-Fi innovations for the first half of the 2020 decade. If you work in Wi-Fi, having some knowledge of the protocol, and of what it changes, will go a long way toward helping you understand why Wi-Fi 6 so deeply changes Wi-Fi.
In fact, you will often hear Wi-Fi 6 and 5G compared. 5G, a cellular technology developed by 3GPP, was designed throughout the 2010s as the next generation of radio interface standards for mobile systems. The goal was to prepare the cellular world for massive machine-type communications as well as ultra-reliable and low-latency communications. The ability to communicate over higher frequencies (suitable only for medium-range communications, a few hundred meters at most) and in unlicensed bands (the same bands that Wi-Fi uses today) was added. The first releases of this fifth generation of cellular standards were published and implemented in 2018 and 2019. Its characteristics led some actors to claim that 5G would be sufficient for all needs and that soon Wi-Fi would no longer be needed.
However, this rather partisan view tended to compare 5G to older Wi-Fi technologies, like 802.11a and 802.11n. Just like their cellular counterparts, the 802.11 experts at the IEEE were also designing the next generation of Wi-Fi protocols with similar concerns: addressing super-high-density and low-latency communications (like AR/VR), for which delay, jitter, or loss can be highly destructive to the user quality of experience. The outcome is 802.11ax. Just as for 802.11ac and 802.11n, the industry excitement was so high that the Wi-Fi Alliance (WFA) decided to design a first 802.11ax certification based on a stable 802.11ax draft from 2018 (draft 3.0). Also recognizing that certifications bearing code names (like "802.11ac wave 1" or similar) could cause confusion for the general public, the WFA decided to adopt a consistent naming convention for all certifications relative to what we call PHY technologies. These are technologies that implement new modulations, new data rates, and so on, while the WFA also publishes many new MAC-based certifications that implement new features (for example, around security, quality of service [QoS], and so on).
Figure A-1 represents these Wi-Fi generations, following the WFA’s new naming convention.
Figure A-1 The WFA 6 Generations of Wi-Fi PHY-Based Certifications
You will hear many experts say that Wi-Fi 6 brings to 802.11 technologies the same groundbreaking improvements that 5G did for cellular. In essence, these improvements can be placed in three groups: efficiency (much along the same lines as 802.11ac and previous generations), a new scheduling method, and Internet of Things (IoT).
Until 2009, 802.11 channels were 20MHz wide and carried a single signal at any given time. 802.11n introduced the idea of grouping two adjacent channels in a single transmission, thus allowing 40MHz transmissions. With OFDM modulation and its 64 subcarriers, this transmission more than doubled the potential throughput by allowing the reuse of subcarriers that were at the edge of the channel. These subcarriers were kept flat (unused) in 20MHz transmissions to create a margin at the edge of the channel. With 40MHz, the upper part of the lower channel and the lower part of the upper channel could now also actively send data, leaving flat only the subcarriers at the bottom of the lower channel and at the top of the upper channel.
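A quick back-of-the-envelope check shows why bonding "more than doubles" the throughput. The subcarrier counts below are the standard 802.11n figures (52 data tones in a 20MHz channel, 108 in a bonded 40MHz channel); they come from the standard, not from the text above:

```python
# 802.11n data subcarrier counts: 52 in a 20MHz channel, 108 in a
# bonded 40MHz channel (the reclaimed edge tones provide the bonus).
data_tones_20mhz = 52
data_tones_40mhz = 108

gain = data_tones_40mhz / data_tones_20mhz
print(f"40MHz carries {gain:.2f}x the data tones of 20MHz")  # ~2.08x
```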
802.11n also allowed up to four concurrent transmissions (from a single transmitter), called spatial streams. Via careful coordination between these streams, 802.11n allowed for increased range or increased throughput. In theory, if you send four signals at the same time, you can send four times as many bits in the same time window. This technology was called Multiple Input, Multiple Output (MIMO).
Finally, as chipsets became more efficient, allowing a receiver to differentiate between two or more concurrent streams, 802.11n also improved the OFDM modulations. In its higher data rates, 802.11 OFDM uses a quadrature amplitude modulation (QAM) transmission technique, where each subcarrier varies its amplitude (intensity) and phase so that the peak of the signal matches a target position. Although the process occurs in the time domain, an easy way to represent it is to imagine a target with a vertical and a horizontal line passing through the center, forming four quadrants, as represented in Figure A-2.
Figure A-2 OFDM QAM Transmissions
Each target position represents a specific code (for example, 45 degrees up and to the left at mid-intensity might represent 001 101). To limit losses, a percentage of the signal is repeated (for example, 25% repeats, coded as "3/4 of new symbols in all transmissions"). Naturally, more repeats decrease the risk of losses but also reduce the amount of new information transmitted. Thus, schemes with fewer repeats are suited to transmissions in quieter RF conditions. Similarly, because of RF noise, the various targets in each quadrant are never reached exactly. Therefore, a system with more targets in each quadrant requires chipsets of better quality and a quieter RF channel. 802.11a and 802.11g allowed for up to 64-QAM 3/4, which means 64 different possible signal positions (16 in each quadrant) and 3/4 of new symbols in all transmissions. 802.11n extended this scheme, still using 64-QAM, but allowing for the 5/6 coding scheme.
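The useful bits per subcarrier follow directly from the constellation size and the coding rate. This small helper (the function name is ours, not a standard term) makes the progression across generations explicit:

```python
from math import log2

def data_bits_per_tone(qam_order: int, coding_rate: float) -> float:
    """Useful data bits one subcarrier carries per symbol: log2 of the
    constellation size, scaled by the fraction of non-repeated bits."""
    return log2(qam_order) * coding_rate

print(data_bits_per_tone(64, 3/4))    # 802.11a/g best case: 4.5
print(data_bits_per_tone(64, 5/6))    # 802.11n adds 5/6: 5.0
print(data_bits_per_tone(256, 5/6))   # 802.11ac: ~6.67
print(data_bits_per_tone(1024, 5/6))  # 802.11ax: ~8.33
```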
802.11ac continued this trend, allowing 80MHz and even 160MHz transmissions, 256-QAM 5/6, and up to eight spatial streams (SS). Practically, though, no vendor implemented more than four SS (because it's complicated), and the WFA did not certify beyond four SS. 802.11ac also allowed Multi-User MIMO (MU-MIMO), by which an Access Point (AP) could send spatial streams to different users (up to four SS = up to four stations receiving the AP transmission at the same time, with each station receiving its own data in its own stream).
802.11ax continues that same trend, still allowing 160MHz transmissions and eight SS, but also allowing 1024-QAM 5/6. 802.11ax also allows upstream MU-MIMO (UL MU-MIMO). This new mode became possible by improving the clocks on the 802.11ax chipsets, thus enabling the stations (STAs) to carefully coordinate their upstream transmissions (upon a trigger from the AP) so that their signals combine (and do not collide randomly with one another). The result of these improvements is a theoretical 9.6Gbps throughput per radio, with a design goal of up to four times the per-station throughput of 802.11ac in dense environments.
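You can sanity-check the 9.6Gbps figure from the numbers in this section. The subcarrier count below is the standard 802.11ax value for a 160MHz channel (1960 data tones), and the 13.6-microsecond symbol time (12.8 microseconds plus the shortest guard interval) is detailed later in this appendix; treat this as a sketch, not an official derivation:

```python
# Reconstructing the 9.6Gbps peak rate from standard 802.11ax values.
data_tones = 1960         # data subcarriers in a 160MHz channel
bits_per_tone = 10 * 5/6  # 1024-QAM (10 bits) at coding rate 5/6
symbol_time = 13.6e-6     # 12.8us symbol + 0.8us guard interval
streams = 8               # maximum spatial streams

rate = data_tones * bits_per_tone * streams / symbol_time
print(f"{rate / 1e9:.2f} Gbps")  # ~9.61 Gbps
```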
802.11ax also introduced the concept of Basic Service Set (BSS) coloring. In high-density environments, you can expect that two neighboring APs will be on the same channel, especially in settings where large channels (80MHz or 160MHz) are used. The APs may not hear one another, especially if they use directional antennas (for example, in a stadium) or if an obstacle is placed between them. However, clients positioned between these APs will suffer (collisions with traffic from the neighboring cell may happen while the client is attempting to send traffic to, or receive traffic from, its AP). With 802.11ax, such clients can send a Basic Service Set (remember, this means the AP cell) collision report. At that point, the AP marks, and asks its clients to mark, all frames with a specific series of bits (a sort of cell-specific label, called the "color," although it really has no relationship with a color). The clients also reduce their sensitivity (so as to ignore a bit more of the noise coming from the neighboring cell, where clients proceed with the same logic). Then, with the assumption that the neighboring cell is "farther away" than the local cell, the clients detect whether transmissions carry their cell color (in which case a client or the AP in their cell is transmitting, and they should stay quiet to avoid collisions) or another cell color (in which case the transmission is just noise from the neighbors and can be ignored; the station can send if it needs to, knowing that stations in the neighboring cell have reduced their sensitivity and will ignore that STA's signal). This mechanism allows for higher cell density and better coexistence in OBSS (Overlapping BSS, on the same channel) scenarios.
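The logic reduces to a two-threshold test. The sketch below is illustrative only: the threshold values and function names are ours (the actual 802.11ax spatial-reuse rules are more nuanced), but the shape of the decision follows the paragraph above:

```python
# Illustrative BSS coloring decision, not the actual 802.11ax state
# machine. Threshold values are hypothetical placeholders.
MY_COLOR = 12
INTRA_BSS_THRESHOLD_DBM = -82   # normal CCA sensitivity
OBSS_THRESHOLD_DBM = -62        # relaxed: ignore weaker neighbor frames

def may_transmit(detected_color: int, rssi_dbm: float) -> bool:
    if detected_color == MY_COLOR:
        # Same cell: defer at normal sensitivity to avoid collisions.
        return rssi_dbm < INTRA_BSS_THRESHOLD_DBM
    # Other cell ("just noise from the neighbors"): defer only if the
    # overlapping signal is strong enough to actually corrupt our frame.
    return rssi_dbm < OBSS_THRESHOLD_DBM

print(may_transmit(12, -70))  # False: our own cell is talking
print(may_transmit(33, -70))  # True: weak neighbor, treated as noise
```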
The major revolution in 802.11ax is undoubtedly OFDMA (Orthogonal Frequency Division Multiple Access), a multi-user transmission technique that complements the regular OFDM (Orthogonal Frequency Division Multiplexing) leveraged in 802.11g/a/n/ac. OFDMA brings multiple major enhancements to improve operations in high-density environments, but also for IoT. Of course, 802.11ax transmitters can still use OFDM, but the implementation of the OFDMA scheme dramatically changes the channel efficiency.
The first major improvement is a change in the subcarrier structure. With OFDM under 802.11a/g/n/ac, a 20MHz channel is split into 64 subcarriers (or tones). Each subcarrier's center frequency is 312.5kHz away from the next subcarrier's center frequency (312.5 * 64 = 20,000). Each subcarrier transmits bits organized in what is called a symbol. With legacy OFDM, the transmission of a symbol takes 3.2 microseconds. There is then 0.8 microsecond (with the standard "guard interval") or 0.4 microsecond (with the "short" guard interval) of meaningless signal (giving a space where echoes and reflections can come back to the main signal without affecting the transmitted message) before the next symbol.
With 802.11ax, the space between subcarriers is 78.125kHz, thus allowing for 256 subcarriers in a 20MHz channel. However, the symbol duration was extended to 12.8 microseconds (with 0.8-, 1.6-, or 3.2-microsecond guards between symbols). This change means that more symbols can be sent in parallel, but they are sent at a slower pace, thus better resisting interference. Four times more tones, but a four times slower signal, may give the impression that both models provide the same overall throughput. This is "almost" true. With OFDMA, more of these subcarriers actively carry data (instead of being used as references), thus allowing for a 10 to 20% throughput increase (depending on the mode), with the benefit of a much better resistance to interference. This model is very useful outdoors or in noisy indoor environments.
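The two sets of numbers are tied together by one relation: the useful symbol duration is the inverse of the subcarrier spacing. A quick check reproduces all four figures given above:

```python
# Subcarrier spacing determines both the tone count and symbol time.
ofdm_spacing = 312.5e3     # Hz, 802.11a/g/n/ac
ofdma_spacing = 78.125e3   # Hz, 802.11ax

print(20e6 / ofdm_spacing)   # 64.0 subcarriers in 20MHz
print(20e6 / ofdma_spacing)  # 256.0 subcarriers in 20MHz
print(1 / ofdm_spacing)      # 3.2e-06 s: the 3.2us legacy symbol
print(1 / ofdma_spacing)     # 1.28e-05 s: the 12.8us 802.11ax symbol
```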
Another improvement to the subcarrier structure is that subcarriers can be accessed, or addressed, almost individually. Tones are grouped in Resource Units (RUs) of various sizes: 26, 52, 106, 242, 484, or 996 tones. Obviously, the last two are only possible in 40MHz and 80MHz transmissions, respectively. A 26-tone RU occupies about 2MHz. These numbers also account for side-tones that are left unused at the edge of each RU and at the edge of the channel.
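Multiplying each RU size by the 78.125kHz spacing gives its approximate footprint (approximate, because pilots and the unused side-tones are ignored here):

```python
# Approximate occupied width of each Resource Unit size at the
# 802.11ax spacing of 78.125kHz per tone.
RU_SIZES = [26, 52, 106, 242, 484, 996]
for tones in RU_SIZES:
    print(f"{tones:>4}-tone RU ~ {tones * 78.125e-3:.2f} MHz")
# 26-tone: ~2.03 MHz ... 242-tone: ~18.91 MHz (a full 20MHz channel)
```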
This change is the one that is seen as revolutionary. With OFDM, only one station can send at a time. With standard contention methods (CSMA/CA), a station gains access to the medium and sends a frame. With large channels (80MHz, for example), it may be that the sender does not really need the full channel and may simply send over 20 or 40MHz. As the AP holds the entire 80MHz, the other 40 or 60MHz are simply not used if the transmission is narrower. This is a clear waste of resources. In an ideal world where each station sends one after the other, you get the scheme illustrated on the left side of Figure A-3, where each station has to wait on average seven contention cycles before being able to transmit, while space is still available on the channel during those seven cycles.
Figure A-3 OFDM vs. OFDMA Transmissions
In this scheme, as more stations join the cell, latency and jitter increase accordingly.
OFDMA changes everything. With the concept of RU, several stations can send at the same time, as illustrated on the right side of Figure A-3, making transmissions much more deterministic.
This transmission scheme works as follows (a simplified scheduler sketch follows the list):
At regular intervals, the AP performs sounding. This technique has existed since 802.11n and MIMO. It allows the AP to group stations that are “RF-compatible” (that is, the transmission from one would not be destructive to the transmissions of the others).
At regular intervals, or upon AP trigger, each station sends to the AP a Buffer Status Report (BSR). This report lists, for each 802.11 access category (AC_VO, AC_VI, AC_BE, AC_BK), the buffer depth and characteristics of the station. In other words, the station is able to say, for example, “I have a lot of voice packets ready to send” or “I have a few best-effort and a few background bytes to send.”
Based on these BSRs and its own scheduling algorithm, the AP switches to OFDMA trigger-based mode. In this mode, the AP defines a transmission opportunity period (TXOP, typically around 2.5 milliseconds) and allocates to each station in a given group a number of RUs.
Starting at the exact same time, the stations send symbols only in the RUs they were allocated. This allows the transmission to occupy the full channel, permits multiple stations to send at the same time, and maximizes the overall system efficiency.
The AP can then switch back to the standard contention-based CSMA/CA (unscheduled) method before going back to scheduled periods.

This method is revolutionary not only because it increases the efficiency of the channel, but also because this scheduling allows the AP to provide very deterministic access to the medium for the stations that need it. For example, if your voice application needs to send one frame of about one RU every 20 ms, the AP can allocate exactly that amount, at exactly that interval, removing the uncertainties of contention and collisions in multistation environments. This process opens the door to the support of applications that need high reliability and very low latency or jitter. And with multiple RUs, 1024-QAM, and the other improvements, massive machine-type communications (with low tolerance for losses and retries) also become possible. As you can see, 802.11ax, pursuing similar goals as 5G at about the same time, developed solutions providing comparable efficiency. This is not entirely surprising, as the designers in both groups worked in the same general cultural and technical contexts. The main difference is that Wi-Fi does not require a user to pay a monthly (or per-GB) fee to access the RF channel.
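As promised, here is a deliberately naive sketch of that cycle. Nothing below comes from the standard or any vendor implementation: the data class, the per-RU byte sizing, and the nine-RU budget (nine 26-tone RUs do fit in a 20MHz channel) are assumptions chosen only to make the "collect BSRs, then allocate RUs" flow concrete:

```python
from dataclasses import dataclass

AC_PRIORITY = {"AC_VO": 0, "AC_VI": 1, "AC_BE": 2, "AC_BK": 3}
TOTAL_RUS = 9  # nine 26-tone RUs fit in one 20MHz channel

@dataclass
class BufferStatusReport:
    """One station's report: queued bytes for one access category."""
    station: str
    access_category: str
    queued_bytes: int

def allocate_rus(bsrs: list[BufferStatusReport]) -> dict[str, int]:
    """Grant RUs for the next TXOP, highest-priority traffic first."""
    grants: dict[str, int] = {}
    remaining = TOTAL_RUS
    for bsr in sorted(bsrs, key=lambda b: AC_PRIORITY[b.access_category]):
        if remaining == 0:
            break
        # Hypothetical sizing rule: one RU per ~500 queued bytes.
        wanted = max(1, min(remaining, bsr.queued_bytes // 500))
        grants[bsr.station] = grants.get(bsr.station, 0) + wanted
        remaining -= wanted
    return grants

reports = [
    BufferStatusReport("sta1", "AC_VO", 300),   # a few voice packets
    BufferStatusReport("sta2", "AC_BE", 3000),  # bulk best-effort
    BufferStatusReport("sta3", "AC_BK", 800),   # background bytes
]
print(allocate_rus(reports))  # {'sta1': 1, 'sta2': 6, 'sta3': 1}
```

All granted stations then transmit simultaneously, each in its own RUs, when the AP's trigger frame starts the TXOP.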
We place the next set of improvements in the Internet of Things (IoT) category because they were designed with IoT in mind. However, keep in mind that these improvements also benefit regular stations. Also keep in mind that Wi-Fi is not alone in this effort. LTE (4G) and 5G also brought radio efficiency improvements targeted to IoT devices.
IoT stations tend to have power and CPU constraints, so they need to minimize the cost of modulating and transmitting a signal (or receiving it). With narrower subcarriers and longer symbols, transmission in OFDMA is simpler than with OFDM. A narrow tone means that transmission costs less energy. A longer symbol means that the computation and the modulation of the symbol take less processing. Even if the transmission duration is longer, the overall result is that a simpler, cheaper Wi-Fi module can be implemented in IoT devices and can transmit with less energy consumption than with OFDM.
A major roadblock in Wi-Fi adoption for IoT devices was indeed related to energy. 802.11 was initially designed with laptops in mind. It was extended, of course, to phones and tablets, but these devices have batteries that can be charged daily. Their requirements are very different from those of a battery-operated sensor, which has a battery the size of a coin and a lifetime that needs to be 5 years or more. Such characteristics were not compatible with Wi-Fi. With 802.11, a station would need to associate and send keepalives at regular intervals, even if it had nothing to send. Each AP has a session timeout that would cause the station to be removed from the list of associated clients if the station failed to exchange data with the AP for too long. The initial 802.11 did not even bother to create a way for the AP to tell the station what the timeout would be. 802.11 introduced some enhancements over the years, and 802.11ax introduced a radical improvement: the Target Wake Time (TWT).
With TWT, the station can tell the AP how often it will wake up (the AP can negotiate or override this interval). Then the station can sleep for a long time, without sending anything and without losing its association to the AP. The AP keeps any incoming traffic for the station. Then, at the time the station is supposed to wake up, the AP can send that traffic directly, without waiting for the station to signal its return (because the AP knows that the station must be back, as scheduled). This process allows battery-operated devices to appear in Wi-Fi networks. The longest possible sleeping period is also gigantic (about 5 years!), allowing applications like rust sensors in walls (sending updates only every few months) to become possible. Additionally, as the AP can override the schedule, super-high density becomes possible: the AP can organize a large number of clients into smaller groups that wake up and communicate at rotating intervals.
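You can sanity-check the "years of sleep" claim from the way the wake interval is encoded in the TWT element: a 16-bit mantissa multiplied by 2 raised to a 5-bit exponent, in microseconds. Taking both fields at their maximum:

```python
# Longest TWT wake interval: 16-bit mantissa * 2^(5-bit exponent), in us.
mantissa_max = 2**16 - 1   # 65535
exponent_max = 2**5 - 1    # 31

max_interval_us = mantissa_max * 2**exponent_max
years = max_interval_us / 1e6 / 3600 / 24 / 365
print(f"~{years:.1f} years")  # ~4.5 years between wake-ups
```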
This concern for low power and IoT is pervasive in OFDMA, and the main IoT-friendly features are represented in Figure A-4. By allowing a station to send only over a single RU, power is also saved (as the station does not need to modulate an 80MHz-wide signal beyond the preamble and can just send a 2MHz-wide signal). In fact, it can even send only a 20MHz preamble. This is also useful because most IoT objects do not need to send a lot of data (they do not need 1Gbps!). With such a small transmission and a simple (and power-efficient) modulation (like Binary Phase Shift Keying [BPSK]), the IoT object can send traffic at 375Kbps, which is more than enough.
Figure A-4 802.11ax OFDMA Improvements for IoT
A last improvement of 802.11ax for IoT solves the IoT nightmare of retries. Being highly battery-sensitive, IoT objects are very vulnerable to the cost of retries. If a transmission is not received (not acknowledged), the IoT object has to wait (for a duration called the extended interframe space, or EIFS) and then attempt to resend, thus incurring again the entire cost of computing the modulation and sending the preamble and the symbols. In most cases, the transmission failed because of a narrowband interference that affected only a few symbols of the transmission.
To save on this cost, 802.11ax allows a mode called dual carrier modulation (DCM). With this technique, the IoT object sends its frame in a redundant mode, duplicating each symbol on two sets of subcarriers far apart from each other. This is economical, because the station only needs to modulate a single preamble and only needs to compute the modulation once. The station does spend twice the energy at the time of the symbol transmission, but this is (from a power standpoint) cheaper than waiting for an EIFS and then retransmitting everything if the first transmission fails. DCM is optional but can be useful in noisy environments where a high level of retries is measured. A last important detail is that 802.11ax is allowed in the 2.4GHz band and the 5GHz band (while 802.11ac is only allowed in the 5GHz band), which is great, as many simple Wi-Fi chipsets (targets for IoT) were designed to operate in 2.4GHz.
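With DCM defined, the 375Kbps figure quoted earlier can be reconstructed. The parameter set below is our own reading (a 26-tone RU with 24 data tones, BPSK at coding rate 1/2, the longest 3.2-microsecond guard interval, and DCM halving the useful rate), so take it as one consistent interpretation rather than an official derivation:

```python
# Reconstructing the 375Kbps low-power IoT data rate.
data_tones = 24                 # a 26-tone RU: 24 data + 2 pilot tones
bits_per_tone = 1 * 1/2         # BPSK (1 bit) at coding rate 1/2
symbol_time = 12.8e-6 + 3.2e-6  # 16us with the long guard interval
dcm_factor = 1/2                # DCM: every symbol is sent twice

rate = data_tones * bits_per_tone * dcm_factor / symbol_time
print(f"{rate / 1e3:.0f} Kbps")  # 375 Kbps
```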
All these features make 802.11ax ready for high-density, real-time, and IoT environments. With Wi-Fi 6, the Wi-Fi Alliance certifies 20-, 40-, 80-, and 160MHz channels, 1024-QAM, downlink MU-MIMO, BSS coloring, TWT, and OFDMA.
At the same time, operations in 6GHz are envisioned for a second certification phase. This is likely to introduce major changes for your networks as well, because operations in 6GHz will not have to coexist with legacy systems (like operations in 5GHz or 2.4GHz do). This will allow these operations to be “pure 802.11ax,” directly with high efficiency and without the need to implement any overhead to avoid collisions with stations running older technologies.
Meanwhile, the IEEE 802.11be working group is designing the next generation of Wi-Fi. It has in mind the possibility for a station to communicate with several APs at the same time (thus ensuring maximum throughput and zero delay or drops when roaming), as well as the possibility for several APs to communicate with a target station at the same time (thus ensuring hyper-high throughput as the station moves). The future of Wi-Fi looks bright.