Chapter 2

Global cellular IoT standards

Abstract

This chapter first presents the Third Generation Partnership Project (3GPP), including its ways of working, its organization, and its linkage to the world's largest regional standardization development organizations (SDOs).

Then, after providing a basic overview of the 3GPP cellular systems architecture, 3GPP's work on Cellular IoT is introduced. This introduction includes a summary of the early work performed by 3GPP in the area of massive machine-type communications (mMTC). The Power Saving Mode (PSM) and extended Discontinuous Reception (eDRX) features are discussed together with the feasibility studies of the technologies Extended Coverage Global System for Mobile Communications Internet of Things (EC-GSM-IoT), Narrowband Internet of Things (NB-IoT), and Long-Term Evolution for Machine-Type Communications (LTE-M).

To introduce the work on critical MTC (cMTC), the 3GPP Release 14 feasibility Study on Latency reduction techniques for Long Term Evolution is presented. It triggered the specification of several features for reducing latency and increasing reliability in LTE.

Support for cMTC is a pillar in the design of the fifth generation (5G) New Radio (NR) system. To put the 5G cMTC work in context, an overview of NR is provided. This includes the Release 14 NR study items, the Release 15 normative work, and the work on qualifying NR, and LTE, as IMT-2020 systems.

Finally, an introduction to the MulteFire Alliance (MFA) and its work on mMTC radio systems operating in unlicensed spectrum is given. The MulteFire Alliance modifies 3GPP technologies to comply with regional regulations and requirements specified to support operation in unlicensed frequency bands.

2.1. 3GPP

Third Generation Partnership Project (3GPP) is the global standardization forum behind the evolution and maintenance of the Global System for Mobile Communications (GSM), the Universal Mobile Telecommunications System (UMTS), the Long Term Evolution (LTE) and the fifth generation (5G) cellular radio access technology known as the New Radio (NR). The project is coordinated by seven regional standardization development organizations representing Europe, the United States, China, Korea, Japan, and India. 3GPP has since its start in 1998 organized its work in release cycles and has in 2019 reached Release 16.
A release contains a set of work items where each typically delivers a feature that is made available to the cellular industry at the end of the release cycle through a set of technical specifications (TSs). A feature is specified in four stages where stage 1 contains the service requirements, stage 2 a high-level feature description, and stage 3 the detailed description that is needed to implement the feature. The fourth and final stage contains the development of the performance requirements and conformance testing procedures for ensuring proper implementation of the feature. Each feature is implemented in a distinct version of the 3GPP TSs that maps to the release within which the feature is developed. At the end of a release cycle the version of the specifications used for feature development is frozen and published. In the next release a new version of each technical specification is created and edited as needed for new features associated with that release. Each release contains a wide range of features providing functionality spanning across GSM, UMTS, LTE, and NR as well as providing interworking between the four. In each release it is further ensured that GSM, UMTS, LTE, and NR can coexist in the same geographical area. That is, the introduction of, for example, NR into a frequency band should not have a negative impact on GSM, UMTS or LTE operation.
The technical work is distributed over a number of technical specification groups (TSGs), each supported by a set of working groups (WGs) with technical expertise representing different companies in the industry. The 3GPP organizational structure is built around three TSGs:
  1. TSG Service and System Aspects (SA),
  2. TSG Core Network (CN) and Terminals (CT), and,
  3. TSG Radio Access Network (RAN).
TSG SA is responsible for the system architecture and service requirements, i.e., the stage 1 requirements, and TSG CT for CN aspects and specifications. TSG RAN is responsible for the design and maintenance of the RANs. So, while TSG CT for example is working on the 4G Evolved Packet Core (EPC) and the 5G Core network (5GC), TSG RAN is working on the corresponding radio interfaces known as LTE and NR, respectively.
The overall 3GPP project management is handled by the Project Coordination Group (PCG) that, for example, holds the final right to appoint TSG Chairmen, to adopt new work items and approve correspondence with external bodies of high importance, such as the International Telecommunications Union (ITU). Above the PCG are the seven SDOs: ARIB (Japan), CCSA (China), ETSI (Europe), ATIS (US), TTA (Korea), TTC (Japan), and TSDSI (India). Within 3GPP these standardization development organizations are known as the Organizational Partners that hold the ultimate authority to create or terminate TSGs and are responsible for the overall scope of 3GPP.
The Release 13 massive MTC (mMTC) specification work on EC-GSM-IoT, NB-IoT, and LTE-M was led by TSG GSM EDGE RAN (GERAN) and TSG RAN. TSG GERAN was at the time responsible for the work on GSM/Enhanced Data Rates for GSM Evolution (EDGE) and initiated the work on EC-GSM-IoT and NB-IoT through a feasibility study resulting in technical report (TR) 45.820 Cellular System Support for Ultra-Low Complexity and Low Throughput Internet of Things [1]. It is common that 3GPP, before starting normative specification work on a new feature, performs a study of the feasibility of that feature and records the outcome of the work in a TR. In this specific case the report recommended to continue with normative work items on EC-GSM-IoT and NB-IoT. While TSG GERAN took on the responsibility for the EC-GSM-IoT work item, the work item on NB-IoT was transferred to TSG RAN. TSG RAN also took responsibility for the work item associated with LTE-M, which just as NB-IoT is part of the LTE series of specifications.
After 3GPP Release 13, i.e. after completion of the EC-GSM-IoT specification work, TSG GERAN and its WGs GERAN1, GERAN2, and GERAN3 were closed and their responsibilities were transferred to TSG RAN and its WGs RAN5 and RAN6. Consequently, TSG RAN is responsible for NB-IoT and GSM, including EC-GSM-IoT, in addition to being responsible for UMTS, LTE and the development of NR.
Fig. 2.1 gives an overview of the 3GPP organizational structure during Release 16, indicating the four levels: The Organizational Partners (OP) including the regional standards development organizations, the PCG, the three active TSGs, and the WGs of each TSG.

2.2. Cellular system architecture

2.2.1. Network architecture

In the case of LTE the radio network is known as the Evolved Universal Terrestrial Radio Access Network (E-UTRAN), while the CN is named the EPC. Together E-UTRAN and the EPC define the Evolved Packet System (EPS). In the EPC the Packet Data Network Gateway (P-GW) provides the connection to an external packet data network. The Serving Gateway (S-GW) routes user data packets from the P-GW to an evolved Node B (eNB), which transmits them over the LTE radio interface (Uu) to an end user device. The connection between the P-GW and the device is established by means of a so-called EPS bearer, which is associated with certain Quality of Service (QoS) requirements. These correspond to, for example, the data rate and latency requirements expected from the provided service.
Data and control signaling are separated by means of the user plane and the control plane. The Mobility Management Entity (MME), which, e.g., is responsible for idle mode tracking, is connected to the eNB via the control plane. The MME also handles subscriber authentication and is for this purpose connected to the Home Subscriber Server (HSS) database. It maps the EPS bearer to radio bearers that provide the needed QoS over the LTE radio interface.
In the GPRS core the Gateway GPRS Support Node (GGSN) acts as the link to the external packet data networks. The Serving GPRS Support Node (SGSN) fills a role similar to the MME and handles idle mode functions as well as authentication toward the Home Location Register (HLR), which keeps track of the subscriber information. It also routes the user data to the radio network. In an LTE network the eNB is the single infrastructure node in the RAN. In the case of GERAN the eNB functionality is distributed across a Base Station Controller and the Base Transceiver Station. One of the most fundamental differences between the GSM/EDGE and EPS architectures is that GSM/EDGE supports a circuit switched domain for the handling of voice calls, in addition to the packet switched domain. The EPS only operates in the packet switched domain. The Mobile Switching Center (MSC) is the GSM CN node that connects the classic Public Switched Telephone Network (PSTN) to GERAN. The focus of this book lies entirely in the packet switched domain.
Section 2.4 provides an architectural overview of NR and the 5G CN.

2.2.2. Radio protocol architecture

Understanding the 3GPP radio protocol stack and its applicability to the nodes and interfaces depicted in Fig. 2.2 is a good step towards understanding the overall system architecture. Fig. 2.3 depicts the LTE radio protocol stack including the control and user plane layers as seen from the device.
In the user plane protocol stack the highest layer is an IP layer, which carries application data and terminates in the P-GW. IP is obviously not a radio protocol, but it is still mentioned here to introduce the interface between the device and the P-GW. The IP packet is transported between the P-GW, the S-GW and the eNB using the GPRS Tunneling Protocol (GTP).
The Non-Access Stratum (NAS) and Radio Resource Control (RRC) layers are unique to the control plane. A message-based IP transport protocol known as the Stream Control Transmission Protocol (SCTP) is used between the eNB and MME for carrying the NAS messages. It provides a reliable message transfer between the eNB and MME.
The RRC handles the overall configuration of a cell, including the Packet Data Convergence Protocol (PDCP), Radio Link Control (RLC), Medium Access Control (MAC) and physical (PHY) layers. It is responsible for connection control, including connection setup, (re-)configuration, handover and release. The system information messages described in Section 5.3.1.2 are a good example of RRC information.
The PDCP, RLC, MAC and PHY layers are common to the control and user planes. The PDCP performs Robust Header Compression (RoHC) on incoming IP packets and manages integrity protection and ciphering of the control plane as well as ciphering of the user plane data sent over the access stratum. It acts as a mobility anchor for devices in RRC connected mode. It buffers, and if needed retransmits, packets received during a handover between two cells. The PDCP packets are transferred to the RLC layer, which handles a first level of retransmission in an established connection and makes sure that received RLC packets are delivered in sequence to the PDCP layer.
The RLC layer handles concatenation and segmentation of PDCP protocol data units (PDUs), which form the RLC service data units (SDUs). The RLC SDUs are mapped onto RLC PDUs, which are transferred to the MAC layer. Each RLC PDU is associated with a radio bearer and a logical channel. Two types of radio bearers are supported: signaling radio bearers (SRBs) and data radio bearers (DRBs). The SRBs are sent over the control plane and carry the logical channels known as the Broadcast, Common and Dedicated Control Channels (BCCH, CCCH, DCCH). The DRBs are sent over the user plane and are associated with the Dedicated Traffic Channel (DTCH). The distinction provided by the bearers and the logical channels allows a network to apply suitable access stratum configurations to provide a requested QoS for different types of signaling and data services.
MAC manages multiplexing of bearers and their logical channels with MAC control elements according to specified and configured priorities. The MAC control elements are used to convey information related to an ongoing connection such as the data buffer status report. MAC is also responsible for the random-access procedure and hybrid automatic repeat request (HARQ) retransmissions. The MAC PDUs are forwarded to the physical layer which is responsible for the physical layer functions and services such as encoding, decoding, modulation and demodulation.
Fig. 2.4 shows the data transfer through the protocol stack. At each layer a header (H) is appended to the SDU to form the PDU, and at the physical layer a cyclic redundancy check (CRC) is also attached to the transport block.
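The layered encapsulation in Fig. 2.4 can be illustrated with a short sketch. The header contents and the byte-string markers below are purely illustrative placeholders, not the actual 3GPP header formats:

```python
# Illustrative sketch of Fig. 2.4: each layer prepends a header (H) to the
# SDU it receives, and the physical layer appends a CRC to the transport
# block. Layer names are from the text; header contents are placeholders.

def encapsulate(app_payload: bytes) -> bytes:
    pdu = app_payload
    for layer in ("PDCP", "RLC", "MAC"):
        header = f"[{layer}-H]".encode()
        pdu = header + pdu       # the PDU of one layer is the SDU of the next
    crc = b"[CRC]"               # the PHY attaches a CRC to the transport block
    return pdu + crc

block = encapsulate(b"ip-packet")
print(block)  # b'[MAC-H][RLC-H][PDCP-H]ip-packet[CRC]'
```

Running the sketch shows how each layer's PDU becomes the SDU of the layer below it, with the CRC attached last.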
The GPRS protocol stack also includes RLC, MAC and PHY layers. Although the same naming conventions are used in GPRS and LTE, it should be understood that the functionality belonging to the different layers has evolved. GPRS non-access stratum signaling between the device and the SGSN is defined by means of the Logical Link Control (LLC) and Sub-Network Dependent Convergence Protocol (SNDCP) protocols. LLC handles encryption and integrity protection, while SNDCP manages RoHC. This functionality is similar to that provided by the LTE PDCP for compression and access stratum security. As a comparison, remember that the PDCP terminates in the E-UTRAN while LLC and SNDCP terminate in the GPRS CN.

2.3. From machine-type communications to the cellular internet of things

2.3.1. Access class and overload control

This section presents the early work done by 3GPP for GSM and LTE in the area of MTC from the very start of 3GPP in Release 99 until Release 14. UMTS is not within the scope of this overview, but the interested reader should note that many of the features presented for LTE are also supported by UMTS.
In 2007, during the Release 8 time frame, TSG SA WG1, working on 3GPP service requirements, published TR 22.868 Study on Facilitating Machine to Machine Communication in 3GPP Systems [2]. It highlights use cases such as metering and health, which are still of vital interest as 3GPP continues with the 5G specification effort. 3GPP TR 22.868 provides considerations in areas such as handling large numbers of devices, addressing of devices, and the level of security needed for machine-to-machine applications.
In 3GPP, TSG SA typically initiates the work on a given feature by first agreeing on a corresponding set of general service requirements and architectural considerations. In this case the SA WG1 work also triggered a series of Stage 1–3 activities in 3GPP Release 10 denoted Network Improvement for Machine-Type Communications [3]. The main focus of the work was to provide functionality to handle large numbers of devices, including the ability to protect existing networks from the overload conditions that may appear in a network aiming to support a very large number of devices. For GSM/EDGE the overload control features Extended Access Barring (EAB) [4] and Implicit Reject [5] were specified as part of these Release 10 activities.
Already in the Release 99 specifications, i.e., the first 3GPP release covering GSM/EDGE, support for the Access Class Barring (ACB) feature is specified. It allows a network to bar devices of different access classes regardless of their registered Public Land Mobile Network (PLMN) identity. Each device is pseudo-randomly, i.e., based on the last digit of its International Mobile Subscriber Identity (IMSI), configured to belong to 1 of 10 normal access classes. In addition, five special access classes are defined, and a device may also belong to one of these special classes. The GSM network regularly broadcasts in its system information a bitmap as part of the Random Access Channel Control Parameters to indicate if devices in any of these 15 access classes are barred. EAB is built around this functionality and reuses the 10 normal access classes. However, contrary to ACB, which applies to all devices, EAB is only applicable to the subset of devices that are configured for EAB. It also allows a network to enable PLMN-specific and domain-specific, i.e., packet switched or circuit switched, barring of devices. For GSM/EDGE, data services belong to the packet switched domain, while voice services belong to the circuit switched domain. In GSM/EDGE, System Information message 21 broadcasted in the network contains the EAB information. In case a network is shared among multiple operators, or more specifically among multiple PLMNs, EAB can be configured on a per-PLMN basis. Up to four additional PLMNs can be supported by a network. System Information message 22 contains the network sharing information for these additional PLMNs and, optionally, the corresponding EAB information for each of the PLMNs [5].
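As a minimal sketch of the barring logic for the normal access classes described above, a device could derive its access class from the last IMSI digit and test it against the broadcast bitmap. Function and variable names are illustrative, and a real device additionally handles the five special access classes:

```python
# Sketch of ACB/EAB-style barring for the 10 normal access classes: the
# class follows pseudo-randomly from the last digit of the IMSI, and the
# network broadcasts a bitmap telling which classes are currently barred.

def normal_access_class(imsi: str) -> int:
    """Last IMSI digit maps the device to normal access class 0..9."""
    return int(imsi[-1])

def is_barred(imsi: str, barring_bitmap: list) -> bool:
    """barring_bitmap[c] is True when access class c is barred."""
    return barring_bitmap[normal_access_class(imsi)]

bitmap = [False] * 10
bitmap[7] = True                              # bar access class 7
print(is_barred("240991234567897", bitmap))   # -> True (last digit 7)
print(is_barred("240991234567890", bitmap))   # -> False (class 0 not barred)
```

Because the IMSI digits are roughly uniformly distributed over the subscriber population, barring one class bars approximately 10% of the devices.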
The GSM Implicit Reject feature introduces an Implicit Reject flag in a number of messages sent on the downlink (DL) Common Control CHannel (CCCH). Before accessing the network, a device configured for Low Access Priority [6] is required to decode a message on the DL CCCH and read the Implicit Reject flag therein. The support for low access priority is signaled by a device over the Non-Access Stratum (NAS) interface using the Device Properties information element [7] and over the Access Stratum in the Packet Resource Request message [6]. In case the Implicit Reject flag is set to “1” the device is not permitted to access the GSM network (NW) and is required to await the expiration of a timer before attempting a new access. Because it does not require the reading of the system information messages, Implicit Reject has the potential benefit of being a faster mechanism than either ACB or EAB type-based barring. When the Implicit Reject flag is set to “1” in a given downlink CCCH message then all devices that read that message when performing system access are barred from network access. By toggling the flag with a certain periodicity within each of the messages sent on the downlink CCCH, a partial barring of all devices can be achieved. Setting the flag to “1” within all downlink CCCH messages sent during the first second of every 10-s time interval will, for example, bar 10% of all devices. A device that supports the Implicit Reject feature may also be configured for EAB.
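The 10% figure in the partial-barring example can be checked numerically. Assuming access attempts are uniformly distributed in time, the fraction of barred attempts equals the fraction of time the Implicit Reject flag is set (the function name and sampling approach are illustrative):

```python
# Numeric check of the partial-barring example: with the Implicit Reject
# flag set during the first flag_on_s seconds of every period_s-second
# interval, an access attempt is barred when it falls in that window.

def barred_fraction(flag_on_s: float, period_s: float, samples: int = 10000) -> float:
    # Sample access instants uniformly over one barring period and count
    # how many fall inside the window where the flag is set to "1".
    barred = sum(1 for k in range(samples)
                 if (k * period_s / samples) < flag_on_s)
    return barred / samples

print(barred_fraction(flag_on_s=1.0, period_s=10.0))  # -> 0.1
```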
For LTE, ACB was included already in the first release of LTE, i.e., 3GPP Release 8, while the low-priority indicators were introduced in Release 10 [8]. A NAS low-priority indication was defined in the NAS signaling [15] and an Establishment Cause indicating delay tolerant access was introduced in the RRC Connection Request message sent from the device to the base station [9]. These two indicators support congestion control of delay tolerant MTC devices. In case the RRC Connection Request message signals that the access was made by a delay tolerant device, the base station has the option to reject the connection in case of congestion and, via the RRC Connection Reject message, request the device to wait for the duration of a configured extended wait timer before making a new attempt.
In Release 11 the MTC work continued with the work item System Improvements for MTC [10]. In TSG RAN, EAB was introduced in the LTE specifications. A new System Information Block 14 (SIB14) was defined to convey the EAB-related information [9]. To allow for fast notification of SIB14 updates, the paging message was equipped with a status flag indicating an update of SIB14. As for GSM/EDGE, barring of 10 different access classes is supported. In case of network sharing, a separate access class bitmap can, just as for GSM/EDGE, be signaled per PLMN sharing the network. A device with its low-priority indicator set needs to support EAB.
Table 2.1 summarizes the GSM/EDGE and LTE 3GPP features designed to provide overload control in different releases. It should be noted that ETSI was responsible for the GSM/EDGE specifications until 3GPP Release 99 when 3GPP took over the responsibility for the evolution and maintenance of GSM/EDGE. ACB was, for example, part of GSM/EDGE already before 3GPP Release 99. Note that after Release 99, the 3GPP release numbering was restarted from Release 4.

Table 2.1

3GPP features until Release 13 related to MTC overload control.
Release   GSM                                      LTE
99        Access class barring                     —
8         —                                        Access class barring
10        Extended access class barring            Low priority and access delay
          Implicit reject                          tolerant indicators
          Low priority and access delay
          tolerant indicators
11        —                                        Extended access class barring

2.3.2. Small data transmission

In Release 12 the work item Machine-Type Communications and other mobile data applications communications [11] triggered a number of activities going beyond the scope of the earlier releases, which to a considerable extent were focused on managing large numbers of devices. It resulted in TR 23.887 Study on Machine-Type Communications (MTC) and other mobile data applications communications enhancements [12], which introduces solutions to efficiently handle small data transmissions and to optimize the energy consumption of devices dependent on battery power.
MTC devices are to a large extent expected to transmit and receive small data packets, especially when viewed at the application layer. Consider, for example, street lighting controlled remotely where turning the light bulb on and off is the main activity. On top of the small application layer payload needed to provide the on/off indication, overhead from higher-layer protocols, for example, the User Datagram Protocol and the Internet Protocol, and from the radio interface protocols needs to be added, thereby forming a complete protocol stack. For data packets ranging up to a few hundred bytes the protocol overhead from layers other than the application layer constitutes a significant part of the data transmitted over the radio interface. To optimize the power consumption of devices with a traffic profile characterized by small data transmissions it is of interest to reduce this overhead. In addition to the overhead accumulated over the different layers in the protocol stack, it is also vital to make sure various procedures are streamlined to avoid unnecessary control plane signaling that consumes radio resources and increases the device power consumption. Fig. 2.5 shows an overview of the message flow associated with an LTE mobile originated (MO) data transfer where a single uplink (UL) data packet is sent between the user equipment (UE) and the eNB. It is clear from the depicted signaling flow that several signaling messages are transmitted before the uplink and downlink data packets are sent.
One of the most promising solutions for the support of small data transmission is the RRC Resume procedure [9]. It aims to reduce the number of signaling messages needed to set up a connection in LTE. Fig. 2.5 indicates the part of the connection setup that becomes redundant with the RRC Resume procedure, including the Security mode command and the RRC connection reconfiguration messages. The key to this solution is to resume configurations established in a previous connection. Part of the possible optimizations is to suppress the RRC signaling associated with measurement configuration. This simplification is justified by the short data transfers expected for MTC. For these devices measurement reporting is less relevant than when long transmissions of data dominate the traffic profile. In 3GPP Release 13 this solution was specified together with the Control plane CIoT EPS Optimization [9] as two alternative solutions adopted for streamlining the LTE setup procedure to facilitate small and infrequent data transmission [13]. These two solutions are highly important for optimizing the latency and power consumption of LTE-M and NB-IoT, evaluated in Chapters 6 and 8, respectively.

2.3.3. Device power savings

The 3GPP Release 12 study on MTC and other mobile data applications communications enhancements introduced two important solutions to optimize the device power consumption, namely Power Saving Mode (PSM) and extended Discontinuous Reception (eDRX). PSM was specified both for GSM/EDGE and LTE and is a solution where a device enters a power saving state in which it reduces its power consumption to a bare minimum [14]. While in the power saving state the mobile does not monitor paging and consequently becomes unreachable for mobile terminated (MT) services. In terms of power efficiency this is a step beyond the typical idle mode behavior where a device still performs energy consuming tasks such as neighbor cell measurements and maintaining reachability by listening for paging messages. The device leaves PSM when higher layers in the device trigger an MO access, e.g., for an uplink data transfer or for a periodic Tracking Area Update/Routing Area Update (TAU/RAU). After the MO access and the corresponding data transfer have been completed, a device using PSM starts an Active timer. The device remains reachable for MT traffic by monitoring the paging channel until the Active timer expires. When the Active timer expires the device reenters the power saving state and remains unreachable until the next MO event. To meet the MT reachability requirements of a service, a GSM/EDGE device using PSM can be configured to perform a periodic RAU with a configurable periodicity ranging from seconds up to a year [7]. For an LTE device the same behavior can be achieved through the configuration of a periodic TAU timer [15]. Compared to simply turning off a device, PSM has the advantage of supporting the mentioned MT reachability via RAU or TAU. In PSM the device stays registered in the network and may maintain its higher layer configurations.
As such, when leaving the power saving state in response to an MO event the device does not need to first attach to the network, as it would otherwise need to do when turned on after a complete power off. This reduces the signaling overhead and optimizes the device power consumption.
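The PSM reachability cycle described above can be summarized as a small state machine. The state and method names below are illustrative, not taken from the 3GPP specifications:

```python
# Minimal state-machine sketch of PSM: the device is unreachable in the
# power saving state, becomes reachable after an MO event (uplink data or
# a periodic TAU/RAU) while the Active timer runs, and reenters the power
# saving state when the Active timer expires.

class PsmDevice:
    def __init__(self, active_timer_s: float):
        self.active_timer_s = active_timer_s
        self.state = "POWER_SAVING"      # unreachable for MT services
        self.timer_expiry = None

    def mobile_originated_event(self, now_s: float):
        # An MO access starts the Active timer; the device monitors paging
        # and stays reachable until the timer expires.
        self.state = "IDLE_REACHABLE"
        self.timer_expiry = now_s + self.active_timer_s

    def tick(self, now_s: float):
        if self.state == "IDLE_REACHABLE" and now_s >= self.timer_expiry:
            self.state = "POWER_SAVING"

    def reachable(self) -> bool:
        return self.state == "IDLE_REACHABLE"

dev = PsmDevice(active_timer_s=60.0)
dev.mobile_originated_event(now_s=0.0)
print(dev.reachable())   # True while the Active timer runs
dev.tick(now_s=61.0)
print(dev.reachable())   # False: back in the power saving state
```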
Fig. 2.6 depicts the operation of a device configured for PSM when performing periodic RAUs and reading paging messages according to the idle mode DRX cycle applicable while the Active timer is running. The RAU procedure is significantly more costly than the reading of a paging message, as indicated by Fig. 2.6. While the Active timer runs the device is in idle mode and is required to operate accordingly. Upon expiry of the Active timer the device reenters the energy-efficient power saving state.
In Release 13 eDRX was specified for GSM and LTE. The general principle of eDRX is to extend the previously specified DRX cycles to allow a device to remain longer in a power saving state between paging occasions and thereby minimize its energy consumption. The advantage over PSM is that the device remains periodically available for MT services without the need to first perform, e.g., a Routing or Tracking Area Update to trigger a limited period of downlink reachability. The Study on power saving for Machine-Type Communication (MTC) devices [16] considered, among other things, the energy consumption of devices using eDRX or PSM. The impacts of using PSM and eDRX on device battery life, assuming a 5 Watt-hour (Wh) battery, were characterized as part of the study. More specifically, the battery life of a device was predicted for a range of triggering intervals and reachability periods. A trigger may, e.g., correspond to the start of an MT data transmission wherein an application server requests a device to transmit a report. After reception of the request the device is assumed to respond with the transmission of the requested report. The triggering interval is defined as the interval between two adjacent MT events. The reachability period is, on the other hand, defined as the period between opportunities for the network to reach the device using a paging channel. Consider, for example, an alarm condition that might only trigger on average once per year, but when it does occur there is a near real-time requirement for an application server to know about it. For this example the ongoing operability of the device capable of generating the alarm condition can be verified by the network sending it a page request message and receiving a corresponding page response. Once the device operability is verified, the network can send the application layer message that serves to trigger the reporting of any alarm condition that may exist.
Fig. 2.7 presents the estimated battery life for a GSM/EDGE device in normal coverage when using PSM or eDRX. For PSM, reachability was achieved by the device performing a periodic RAU, which initiates a period of network reachability that continues until the expiration of the Active timer. Both in the case of eDRX and PSM it was assumed that the device, before reading a page or performing a RAU, must confirm the serving cell identity and measure the signal strength of the serving cell. This is to verify that the serving cell remains the same and continues to be suitable from a signal strength perspective. When deriving the results depicted in Fig. 2.7, the energy costs of confirming the cell identity, estimating the serving cell signal strength, reading a page, performing a RAU, and finally transmitting the report were all taken from results provided within the Study on power saving for MTC devices [16]. A dependency on both the reachability period and the triggering interval is seen in Fig. 2.7. For eDRX a very strong dependency on the triggering interval is seen. The reason is that the cost of sending the report overshadows the cost of being paged. Remember that the reachability period corresponds to the paging interval. For PSM the cost of performing a RAU is in this example of a similar magnitude as sending the actual report, so the reachability period becomes the dominating factor, while the dependency on the triggering interval becomes less pronounced. For a given triggering interval, Fig. 2.7 shows that eDRX outperforms PSM when a short reachability period is required, while PSM excels when the reachability requirement is in the same range as, or relaxed compared to, the actual triggering interval.
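The qualitative trade-off in Fig. 2.7 can be reproduced with a back-of-the-envelope model in which the average power is the sum of the per-event energies divided by their periods. All numbers below are invented placeholders chosen for illustration; the actual energy figures used in the study are found in [16]:

```python
# Simple battery-life model in the spirit of the evaluations discussed
# above: average power = reachability cost per reachability period
# + report cost per triggering interval + sleep floor.

WH_TO_J = 3600.0
SECONDS_PER_YEAR = 365 * 24 * 3600

def battery_life_years(battery_wh: float,
                       e_reach_j: float,        # energy per reachability event
                                                # (page read for eDRX, RAU for PSM)
                       reach_period_s: float,
                       e_report_j: float,       # energy per triggered report
                       trigger_interval_s: float,
                       p_sleep_w: float) -> float:
    avg_power_w = (e_reach_j / reach_period_s
                   + e_report_j / trigger_interval_s
                   + p_sleep_w)
    return battery_wh * WH_TO_J / avg_power_w / SECONDS_PER_YEAR

# Hypothetical numbers: a 5 Wh battery, a cheap 50 mJ page read every
# 10 min, a costly 5 J report once per day, and a 10 uW sleep floor.
print(round(battery_life_years(5.0, 0.05, 600, 5.0, 86400, 10e-6), 1))
```

Varying `reach_period_s` and `trigger_interval_s` in this toy model reproduces the behavior described above: a cheap reachability event (page read) favors short reachability periods, while a costly one (RAU) makes the reachability period the dominating factor.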
In the end, GSM/EDGE eDRX cycles ranging up to 13,312 51-multiframes, or roughly 52 min, were specified. A motivation for not extending the eDRX cycle further is that devices with an expected reachability beyond 1 h may use PSM and still reach an impressive battery life, as seen in Fig. 2.7. The eDRX cycle can also be compared to the legacy maximum DRX cycle length of 2.1 s, which can be extended to 15.3 s if the feature Split Paging Cycle is supported [17].
For GSM/EDGE 3GPP went beyond PSM and eDRX and specified a new mode of operation denoted Power Efficient Operation (PEO) [18]. In PEO a device is required to support either PSM or eDRX, in combination with relaxed idle mode behavior. A PEO device is, for example, only required to verify the suitability of its serving cell shortly before its nominal paging occasions or just before an MO event. Measurements on a reduced set of neighbor cells are only triggered under a limited set of conditions, such as when a device detects that the serving cell has changed or that the signal strength of the serving cell has dropped significantly. PEO is mainly intended for devices relying on battery power where device power consumption is of higher priority than, e.g., mobility and latency, which may be negatively impacted by the reduced idle mode activities. Instead of camping on the best cell, the aim of PEO is to ensure that the device is served by a cell that is good enough to provide the required services.
For LTE, Release 13 specifies idle mode eDRX cycles ranging between 1 and 256 hyperframes. As one hyperframe corresponds to 1024 radio frames, or 10.24 s, 256 hyperframes correspond to roughly 43.7 min. As a comparison, the maximum LTE idle mode DRX cycle length used before Release 13 equals 256 frames, or 2.56 s.
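These cycle lengths follow directly from the GSM and LTE frame numerology (a GSM TDMA frame lasts 120/26 ms, an LTE radio frame 10 ms), as the following check shows:

```python
# Verifying the maximum eDRX cycle lengths quoted above. A GSM 51-multiframe
# spans 51 TDMA frames of 120/26 ms each (~235.4 ms); an LTE hyperframe is
# 1024 radio frames of 10 ms, i.e. 10.24 s.

TDMA_FRAME_S = 0.120 / 26            # ~4.615 ms
MULTIFRAME_51_S = 51 * TDMA_FRAME_S  # ~235.4 ms

gsm_max_edrx_s = 13312 * MULTIFRAME_51_S   # 13,312 51-multiframes
lte_max_edrx_s = 256 * 1024 * 0.010        # 256 hyperframes

print(round(gsm_max_edrx_s / 60, 1))   # -> 52.2 (roughly 52 min)
print(round(lte_max_edrx_s / 60, 1))   # -> 43.7 (roughly 43.7 min)
```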
Table 2.2 summarizes the highest configurable MT reachability periodicities for GSM and LTE when using idle mode eDRX, connected mode DRX, or PSM. For PSM the assumption is that the MT reachability periodicity is achieved through the configuration of periodic RAU or TAU.
In general, it is expected that the advantage of eDRX over PSM for short reachability periods is smaller for LTE than what can be expected in GSM/EDGE. The reason is that a typical GSM/EDGE device uses 33 dBm output power, while LTE devices typically use 23 dBm output power. This implies that the cost of a transmission and a RAU or TAU, relative to the cost of receiving a page, is much higher for GSM/EDGE than for LTE.
Table 2.3 summarizes the features discussed in this section and specified in Release 12 and 13 to optimize the mobile power consumption.
Chapters 3, 5, and 7 will further discuss how the concepts of relaxed monitoring of serving and neighbor cells, PSM, paging, idle, and connected mode DRX and eDRX have been designed for EC-GSM-IoT, LTE-M and NB-IoT. It will then be seen that the DRX cycles mentioned in Table 2.2 have been further extended for NB-IoT to support low power consumption and long device battery life.

Table 2.2

The maximum configurable mobile terminated reachability periodicities for GSM and LTE when using idle mode eDRX, connected mode eDRX, or PSM with RAU/TAU based reachability.

                      GSM              LTE
Idle mode eDRX        ∼52 min          ∼44 min
Connected mode eDRX   –                10.24 s
PSM                   >1 year (RAU)    >1 year (TAU)

Table 2.3

3GPP Release 12 and 13 features related to device power savings.

Release   GSM                          LTE
12        Power Saving Mode            Power Saving Mode
13        Extended DRX,                Extended DRX
          Power Efficient Operation

2.3.4. Study on provision of low-cost MTC devices based on LTE
A number of solutions for lowering the complexity and cost of the radio frequency (RF) and baseband parts of an LTE modem were proposed in the scope of the LTE-M study item. It was concluded that a reduction in transmission and reception bandwidths and peak data rates, in combination with adopting a single RF receive chain and half-duplex operation, would make the cost of an LTE device modem comparable to the cost of an EGPRS modem. A reduction in the maximum supported transmission and reception bandwidths and adopting a single RF receive chain reduces the complexity in both the RF and the baseband because of, e.g., reduced RF filtering cost, reduced sampling rate in the analog-to-digital and digital-to-analog conversion (ADC/DAC), and a reduced number of baseband operations. The peak data rate reduction helps reduce the baseband complexity in both the demodulation and decoding parts. Going from full-duplex operation, as supported by Category 1 devices, to half-duplex allows the duplex filter(s) in the RF front end to be replaced with a less costly switch. A reduction in the transmission power can also be considered, which relaxes the requirements on the RF front-end power amplifier and may support integration of the power amplifier on the chip, which is expected to reduce device complexity and manufacturing costs. Table 2.4 summarizes the findings recorded in the LTE-M study item for the individual cost reduction techniques and indicates the expected impact on coverage from each of the solutions. As the cost savings are not additive in all cases, refer to Table 6.15 for cost estimates of combinations of multiple cost reduction techniques. The main impact on downlink coverage is caused by going to a single RF chain, i.e., one receive antenna instead of two. If a lower transmit power is used in the uplink, this will cause a corresponding uplink coverage reduction.
Reducing the maximum signal bandwidth to 1.4   MHz may cause coverage loss due to reduced frequency diversity. This can however be partly compensated for by use of frequency hopping.

Table 2.4

Overview of measures supporting an LTE modem cost reduction [19].

Objective                                                                             Modem cost reduction   Coverage impact
Limit full-duplex operation to half-duplex                                            7%–10%                 None
Peak rate reduction through limiting the maximum transport block size (TBS)
to 1000 bits                                                                          10.5%–21%              None
Reduce the transmission and reception bandwidth for both RF and baseband to 1.4 MHz   39%                    1–3 dB DL coverage reduction due to loss in frequency diversity
Limit RF front end to support a single receive branch                                 24%–29%                4 dB DL coverage reduction due to loss in receive diversity
Transmit power reduction to support PA integration                                    10%–12%                UL coverage loss proportional to the reduction in transmit power
Besides studying means to facilitate low device complexity the LTE-M study item [19] provided an analysis of the existing LTE coverage and presented means to improve it by up to 20   dB. Table 2.5 summarizes the frequency-division duplex LTE maximum coupling loss (MCL) calculated as:
MCL = PTX − (SNR + 10·log10(k·T·BW) + NF)   (2.1)
PTX equals the transmitted output power, SNR is the supported signal to noise ratio, BW is the signal bandwidth, NF the receiver Noise Figure, T equals an assumed ambient temperature of 290   K, and k is Boltzmann's constant.
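As an illustration, Eq. (2.1) can be evaluated numerically to reproduce entries of Table 2.5. The sketch below (Python; the helper name mcl_db is ours) adds 30 dB to the 10·log10(k·T·BW) term to convert the noise power from dBW to dBm, so that all terms are expressed in dBm:

```python
import math

BOLTZMANN = 1.380649e-23  # Boltzmann's constant k [J/K]
TEMP_K = 290.0            # assumed ambient temperature T [K]

def mcl_db(ptx_dbm, snr_db, bw_hz, nf_db):
    # Eq. (2.1): MCL = PTX - (SNR + 10*log10(k*T*BW) + NF),
    # with the thermal noise converted from dBW to dBm (+30 dB).
    noise_dbm = 10 * math.log10(BOLTZMANN * TEMP_K * bw_hz) + 30
    return ptx_dbm - (snr_db + noise_dbm + nf_db)

# PUSCH column of Table 2.5: 23 dBm, SNR -4.3 dB, 360 kHz, NF 5 dB
print(round(mcl_db(23, -4.3, 360e3, 5), 1))  # 140.7
# PUCCH column of Table 2.5: 23 dBm, SNR -7.8 dB, 180 kHz, NF 5 dB
print(round(mcl_db(23, -7.8, 180e3, 5), 1))  # 147.2
```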
It was assumed that the eNB supports two transmit and two receive antennas. The reference LTE device was assumed to be equipped with a single transmit and two receive antennas. The results were obtained through simulations assuming downlink Transmission Mode 2, i.e., downlink transmit diversity [20]. It is seen that the Physical Uplink Shared Channel (PUSCH) limits the LTE coverage to an MCL of 140.7 dB.
The initial target of the LTE-M study item [19] was to provide 20 dB extra coverage for low-cost MTC devices, leading to an MCL of 160.7 dB. After investigating the feasibility of extending the coverage of each of the channels listed in Table 2.5 to 160.7 dB through techniques such as transmission time interval (TTI) bundling, Hybrid Automatic Repeat Request (HARQ) retransmissions, and repetitions, it was concluded that a coverage improvement of 15 dB, leading to an MCL of 155.7 dB, was an appropriate initial target for low-complexity MTC devices based on LTE.
The LTE-M study item triggered a 3GPP Release 12 work item [21] introducing a low-complexity LTE device category (Cat-0), and a Release 13 work item [22] introducing an even lower-complexity LTE-M device category (Cat-M1) for low-end MTC applications, together with the functionality needed to extend the coverage for LTE and LTE-M devices. Chapters 5 and 6 present in detail the design and performance of LTE-M that resulted from these two work items and from the two that followed in Releases 14 and 15.

Table 2.5

Overview of LTE maximum coupling loss performance [19].

                   Downlink coverage                          Uplink coverage
Physical channel   PSS/SSS   PBCH      PDCCH     PDSCH        PRACH     PUCCH       PUSCH
                                       Format 1A                        Format 1a
Data rate [kbps]   –         –         –         20           –         –           20
Bandwidth [kHz]    1080      1080      4320      360          1080      180         360
Power [dBm]        36.8      36.8      42.8      32           23        23          23
NF [dB]            9         9         9         9            5         5           5
#TX/#RX            2TX/2RX   2TX/2RX   2TX/2RX   2TX/2RX      1TX/2RX   1TX/2RX     1TX/2RX
SNR [dB]           -7.8      -7.5      -4.7      -4           -10       -7.8        -4.3
MCL [dB]           149.3     149       146.1     145.4        141.7     147.2       140.7

2.3.5. Study on cellular system support for ultra-low complexity and low throughput internet of things

In 3GPP Release 13 the study item on Cellular System Support for Ultra-Low Complexity and Low Throughput Internet of Things [1], here referred to as the Cellular IoT study item, was started in 3GPP TSG GERAN. It shared many commonalities with the LTE-M study item [19], but it went further both in terms of requirements and in that it was open to GSM backward compatible solutions as well as to non-backward compatible radio access technologies. The work attracted considerable interest, and 3GPP TR 45.820 Cellular system support for ultra-low complexity and low throughput IoT, capturing the outcome of the work, contains several solutions: some based on GSM/EDGE, some based on LTE, and some non-backward compatible, so-called Clean Slate, solutions.
Just as in the LTE-M study item, improved coverage was targeted, this time by 20 dB compared to GPRS. Table 2.6 presents the GPRS reference coverage calculated by 3GPP. It is based on the minimum GSM/EDGE Block Error Rate performance requirements specified in 3GPP TS 45.005 Radio transmission and reception [23]. For the downlink, the specified device receiver sensitivity of -102 dBm was assumed to be valid for a device noise figure (NF) of 9 dB. When adjusted to an NF of 5 dB, which was assumed suitable for IoT devices, the GPRS reference sensitivity ended up at -106 dBm. For the uplink, 3GPP TS 45.005 specifies a GPRS single antenna base station sensitivity of -104 dBm that was assumed valid for an NF of 5 dB. Under the assumption that a modern base station supports an NF of 3 dB, the uplink sensitivity reference also ended up at -106 dBm. To make the results applicable to a base station supporting receive diversity, a 5 dB processing gain was added to the uplink reference performance.

Table 2.6

Overview of GPRS maximum coupling loss performance [8].

#    Link direction                                                          DL                UL
1    Power [dBm]                                                             43                33
2    Thermal noise [dBm/Hz]                                                  -174              -174
3    NF [dB]                                                                 5                 3
4    Bandwidth [kHz]                                                         180               180
5    Noise power [dBm] = (2)+(3)+10log10((4)·1000)                           -116.4            -118.4
6    Single antenna receiver sensitivity according to 3GPP TS 45.005 [dBm]   -102 @ NF 9 dB    -104 @ NF 5 dB
7    Single antenna receiver sensitivity according to 3GPP TR 45.820 [dBm]   -106 @ NF 5 dB    -106 @ NF 3 dB
8    SINR [dB] = (7)-(5)                                                     10.4              12.4
9    RX processing gain [dB]                                                 0                 5
10   MCL [dB] = (1)-((7)-(9))                                                149               144

The resulting GPRS MCL ended up at 144 dB because of the limiting uplink performance. As the target of the Cellular IoT study item was to provide 20 dB coverage improvement on top of GPRS, this led to a stringent MCL requirement of 164 dB. The Cellular IoT study also specified stringent performance objectives in terms of supported data rate, latency, battery life, system capacity, and device complexity.
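The link budget of Table 2.6 can be reproduced with a few lines of arithmetic. The following sketch (Python; the helper names are ours) evaluates the noise power and MCL rows:

```python
import math

def receiver_noise_dbm(nf_db, bw_hz):
    # Row 5: thermal noise floor of -174 dBm/Hz plus noise figure,
    # integrated over the signal bandwidth
    return -174 + nf_db + 10 * math.log10(bw_hz)

def gprs_mcl_db(ptx_dbm, sensitivity_dbm, rx_gain_db):
    # Row 10: MCL = transmit power minus the receiver sensitivity,
    # improved by any receive processing gain
    return ptx_dbm - (sensitivity_dbm - rx_gain_db)

print(round(receiver_noise_dbm(5, 180e3), 1))  # DL noise power: -116.4 dBm
print(gprs_mcl_db(43, -106, 0))                # DL MCL: 149 dB
print(gprs_mcl_db(33, -106, 5))                # UL MCL: 144 dB
```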
After the Cellular IoT study item had concluded, normative work began in 3GPP Release 13 on EC-GSM-IoT [24] and NB-IoT [25]. Chapters 3, 4, 7, and 8 go into detail and present how EC-GSM-IoT and NB-IoT were designed to meet all the objectives of the Cellular IoT study item.
When comparing the initially targeted coverage for EC-GSM-IoT, NB-IoT, and LTE-M, it is worth noting that Tables 2.5 and 2.6 are based on different assumptions, which complicates a direct comparison between the LTE-M target of 155.7 dB MCL and the EC-GSM-IoT and NB-IoT target of 164 dB. Table 2.5 assumes, e.g., a base station NF of 5 dB, while Table 2.6 uses an NF of 3 dB. If those assumptions had been aligned, the LTE reference MCL would have ended up at 142.7 dB and the LTE-M initial MCL target at 157.7 dB. If one takes into account that the LTE-M coverage target is assumed to be fulfilled for 20-dBm LTE-M devices, but that all the LTE-M coverage enhancement techniques are also available to 23-dBm LTE-M devices, the difference between the LTE-M coverage target and the 164-dB target shrinks to 3.3 dB. The actual coverage performance of EC-GSM-IoT, LTE-M, and NB-IoT is presented in Chapters 4, 6, and 8, respectively.
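The dB bookkeeping in this comparison can be summarized as follows (Python; purely illustrative arithmetic with the values quoted above):

```python
lte_ref_mcl = 140.7        # LTE PUSCH MCL from Table 2.5, base station NF 5 dB
nf_alignment_db = 5 - 3    # align to the 3 dB base station NF used in Table 2.6
lte_ref_aligned = lte_ref_mcl + nf_alignment_db   # 142.7 dB
ltem_target = lte_ref_aligned + 15                # 157.7 dB initial LTE-M target
power_delta_db = 23 - 20   # 23 dBm devices vs the 20 dBm target assumption
gap = 164 - (ltem_target + power_delta_db)        # remaining difference

print(round(ltem_target, 1), round(gap, 1))  # 157.7 3.3
```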

2.3.6. Study on Latency reduction techniques for LTE

In 3GPP Release 14 the Study on Latency reduction techniques for LTE [26] was carried out. It initiated the work to reduce the latency of LTE toward enabling support for critical MTC (cMTC). The attention of 3GPP in the area of IoT services had until Release 14 been on massive MTC, but from this point onwards 3GPP focused on two parallel cellular IoT streams: mMTC and cMTC.
The latency reduction study focused on optimizations of the connected mode procedures. As part of the study, reduced uplink transmission latency was investigated by means of shortening the Semi-Persistent Scheduling uplink grant periodicity. Shortening of the minimum transmission time interval (TTI) to below 1 ms and a reduction of the device processing times were also considered candidate techniques for achieving latency improvements. This study led to the normative work on cMTC in LTE carried out in 3GPP Releases 14 and 15, which is presented in detail in Chapter 9. From Release 15, NR also supports cMTC, as presented in Chapters 11 and 12.

2.4. 5G

2.4.1. IMT-2020

In 2017 the International Telecommunication Union Radiocommunication sector (ITU-R) defined the fundamental framework for the work on 5G by the publication of the report Minimum requirements related to technical performance for IMT-2020 radio interface(s) [27]. It presents the use cases and requirements associated with the so-called International Mobile Telecommunication-2020 (IMT-2020) system. IMT-2020 is what in layman's terms is referred to as 5G. IMT-2020 is intended to support three major categories of use cases referred to as:
  • mMTC,
  • cMTC, and
  • enhanced mobile broadband (eMBB).
The scope of this book includes mMTC and cMTC, which may also be referred to as ultra-reliable and low latency communications (URLLC). In this book we use the term cMTC for the set of critical use cases, and URLLC when discussing the technologies supporting the cMTC services and applications.
For each of the use cases, a set of requirements is defined that needs to be met by a 5G system. For mMTC, the 5G requirement to meet is known as connection density. It defines a set of deployment scenarios for which it needs to be shown that connectivity can be provided to 1,000,000 devices per square kilometer.

2.4.2. 3GPP 5G

2.4.2.1. 5G feasibility studies

3GPP took on the challenge to develop its 5G System (5GS) starting in Release 14. Following the regular procedures introduced in Section 2.1, the work began with a range of feasibility studies. The Study on Scenarios and Requirements for Next Generation Access Technologies [28] presents the 3GPP requirements on NR, which is the 3GPP 5G radio interface. It builds on and extends the set of IMT-2020 requirements. Table 2.8 presents the 3GPP 5G requirements in terms of connection density, coverage, latency, and device battery life agreed for mMTC. The coverage, latency, and battery life requirements are recognized from the Cellular IoT study introduced in Section 2.3.5. Chapters 6 and 8 discuss these requirements in detail.

Table 2.7

IMT-2020 mMTC and cMTC performance requirements [27].

mMTC connection density   cMTC latency           cMTC reliability
1,000,000 devices/km2     User plane: 1 ms       99.999%
                          Control plane: 20 ms

Table 2.8

3GPP 5G mMTC performance requirements.

Connection density      Coverage   Latency   Device battery life
1,000,000 devices/km2   164 dB     10 s      10 years

For cMTC the requirement categories match those specified for IMT-2020. 3GPP did, however, go beyond ITU and tightened the latency requirements compared to the IMT-2020 requirements. Table 2.9 presents the 3GPP cMTC requirements. For latency, a pair of user and control plane requirements with no associated reliability requirement is defined. The reliability requirement is associated with a 1 ms user plane latency. Chapters 10 and 12 discuss the interpretation of these requirements in further detail.
The Study on scenarios and requirements for next generation access technologies also specifies a set of operational requirements. Worth noticing is the requirement to provide support for a frequency range up to 100 GHz. Through this requirement 3GPP extends the range of frequencies supported in 4G by far, to enable both increased capacity and enhanced mobile broadband services. In the Study on channel model for frequencies from 0.5 to 100 GHz [29] 3GPP defined new channel models applicable to this extended frequency range for evaluating the NR capabilities. Finally, in the Study on NR Access Technology Physical layer aspects [30] 3GPP considered the feasibility of various radio technologies for meeting the set of IMT-2020 and 3GPP 5G requirements.
In Release 15 the Study on self-evaluation toward IMT-2020 [31] was initiated. It collected the 3GPP NR and LTE performance for the set of evaluation scenarios associated with the IMT-2020 requirements. The results from this work were used as the basis for the formal 3GPP 5G submission to ITU. The evaluated performance included the mMTC connection density and the cMTC latency and reliability items. These evaluations are presented in detail in Chapters 6, 8, 10, and 12.

Table 2.9

3GPP cMTC performance requirements.

Category        Latency   Reliability
User plane      0.5 ms    –
User plane      1 ms      99.999%
Control plane   10 ms     –

2.4.2.2. 5G network architecture

In 3GPP Release 15 the normative work on 5G started. The 3GPP 5G System is defined by the 5G Core network (5GC) and the Next-Generation RAN (NG-RAN). The NG-RAN includes both the LTE Release 15 and the NR technologies, which means that both LTE and NR can connect to the 5GC. A base station connecting to the 5GC for providing NR services to the devices is known as a gNB. A base station providing LTE services is known as an ng-eNB. In Release 15 the 5GC does not support the Cellular IoT optimizations, including, e.g., PSM and eDRX, which are needed to make a connection to LTE-M or NB-IoT relevant. Release 16 has started to address the enhancements needed to prepare the 5GC for IoT.
The 5G system supports several architectural options in its first release, i.e. 3GPP Release 15 [33]. In the Standalone Architecture NR operates on its own as a standalone system with the gNB responsible for both the control plane signaling and user plane data transmissions as depicted on the right side in Fig. 2.8.
An alternative and highly relevant setup is defined by the Non-Standalone Architecture option, which is based on E-UTRAN and NR Dual Connectivity [34]. In this architecture, exemplified in Fig. 2.9, a primary LTE cell with a master eNB carries the EPC control plane signaling from the Mobility Management Entity (MME) over the S1 interface, and optionally also user plane data traffic. A secondary NR cell, served by an en-gNB, is configured by the primary cell to carry user plane data from the Serving Gateway (S-GW) over the S1-U interface to add more bandwidth and capacity. This arrangement is intended to facilitate an initial deployment phase of NR, during which the system's overall area coverage is expanding. In the E-UTRAN and NR Dual Connectivity solution LTE is intended to provide continuous coverage for both the user and control planes. NR can be seen as a complement for boosting the user plane performance toward lower latencies, higher data rates, and an overall higher capacity where the coverage permits.

2.4.2.3. 5G radio protocol architecture

The NG-RAN radio protocol stack is divided into a control plane and a user plane. Fig. 2.10 depicts the radio protocol stack and the interfaces to the 5GC as seen from a device when the device connects to a gNB over NR. Compared to the LTE radio protocol stack shown in Fig. 2.3, the Service Data Adaptation Protocol (SDAP) is added to the user plane stack. SDAP is responsible for the QoS handling and maps data packets to radio bearers according to their QoS flow. The QoS flow is associated with attributes such as the required packet delay budget and packet error rate. An SDAP header containing a QoS Flow Identifier (QFI), identifying the QoS flow, is added to the IP packets.
Next, Section 2.4.2.4 provides a high-level overview of the NR physical layer, while Section 2.4.2.5 introduces the mechanisms specified for supporting NR, LTE-M, and NB-IoT coexistence.

2.4.2.4. NR physical layer

2.4.2.4.1. Modulation
The NR physical layer is similar to LTE in that the physical layer definitions are based on Orthogonal Frequency Division Multiplexing (OFDM) modulation. LTE supports cyclic prefix (CP) based OFDM (CP-OFDM) in the downlink and Single Carrier Frequency Division Multiple Access (SC-FDMA), also known as DFT-Spread-OFDM, in the uplink. NR supports CP-OFDM in both link directions, and SC-FDMA in the uplink. In the uplink CP-OFDM is intended to facilitate high throughput, e.g., by the use of multiple input multiple output (MIMO) transmissions, while SC-FDMA with its reduced peak-to-average power ratio is intended for coverage limited scenarios.
The NR waveforms support the modulation schemes π/2-BPSK, QPSK, 16QAM, 64QAM, and 256QAM.
2.4.2.4.2. Numerology
NR supports a scalable numerology with subcarrier spacings of 15, 30, 60, 120, and 240 kHz. For the 15 kHz subcarrier spacing the normal CP is identical to LTE with a length of 4.7 µs. For the 30, 60, 120, and 240 kHz options, the NR normal CP length decreases in proportion to the increase in subcarrier spacing relative to the basic 15 kHz option, as seen in Fig. 2.11. The minimum required CP length is determined by the anticipated channel delay spread. The channel delay spread hence sets a direct lower limit on the CP length, which for a given accepted CP overhead sets an upper limit on the acceptable subcarrier spacing. To support flexible use of the higher subcarrier spacing options, the 60 kHz configuration supports, in addition to the normal CP length of 1.2 µs, an extended CP of length 4.2 µs.
It is worth mentioning that the delay spread is expected to be fairly independent of the carrier frequency. But it is typically larger for macro cells in outdoor deployment scenarios, where low frequency bands are commonly used, than for indoor small cells, where high frequency bands are a popular choice. The 15 and 30 kHz subcarrier spacing options, with longer CP lengths in absolute terms, are suitable for providing macro cell coverage, while the larger subcarrier spacings with shorter CP lengths are more suitable for small cell types of deployments with lower delay spreads. There are obviously exceptions to this rule, and the larger subcarrier spacings are useful as soon as the CP covers the delay spread anticipated in the targeted deployment.
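The scaling described above can be summarized in a small sketch (Python; nr_numerology is an illustrative helper of ours, the values are approximate, and the slightly longer CP used for the first symbol in each half-subframe is ignored):

```python
def nr_numerology(mu):
    # Subcarrier spacing scales as 15 * 2^mu kHz; the symbol and normal CP
    # durations shrink in proportion, and the number of slots per 1 ms
    # subframe grows by the same factor.
    scs_khz = 15 * 2 ** mu
    symbol_us = 1000.0 / scs_khz     # useful OFDM symbol duration
    normal_cp_us = 4.7 / 2 ** mu     # normal cyclic prefix length
    slots_per_subframe = 2 ** mu     # 14-symbol slots per 1 ms subframe
    return scs_khz, symbol_us, normal_cp_us, slots_per_subframe

for mu in range(5):  # 15, 30, 60, 120, 240 kHz
    print(nr_numerology(mu))
```

For mu = 2 (60 kHz) the helper returns a normal CP of about 1.2 µs, matching the figure quoted in the text.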
2.4.2.4.3. Time and frequency resources
NR supports carrier bandwidths of up to 100 MHz in the lower frequency range, and up to 400 MHz in the mm-wave region. The carrier can be divided into multiple parts, each known as a Bandwidth Part (BWP) with an individually configurable numerology. Due to the potentially very large system bandwidth, a device does not need to support the full system bandwidth, as in LTE (with the exception of LTE-M), and can instead operate on one of the BWPs. BWP adaptation supports adaptation of the receive and transmit bandwidths and frequency location, as well as of the used numerology. This allows for reduced device complexity, device power savings, and the use of service optimized numerologies.
The NR frequency grid is defined by physical resource blocks (PRBs), which just as for LTE are defined by 12 subcarriers. The absolute frequency width of a PRB scales with the configured subcarrier numerology.
Also the frame structure is dependent on the chosen numerology. The basic radio frame is of length 10 ms and contains 10 subframes, each 1 ms long. The subframe contains 1 slot for the 15 kHz numerology. The slot is defined by 14 OFDM symbols in the case of normal CP, and 12 in the case of extended CP. The slot length decreases, and the number of slots per subframe increases, as the numerology scales up.
In NR the smallest scheduling unit is no longer a subframe. The concept of mini-slots allows 2, 4, or 7 OFDM symbols to be scheduled in the downlink to support low latency services, including cMTC. In the uplink a mini-slot can be of any length.
NR does, just as LTE, support both frequency and time division duplex modes. Contrary to LTE, in the time division duplex bands the scheduling of subframes can be flexibly configured for uplink or downlink traffic, to accommodate dynamically changing traffic patterns.
2.4.2.4.4. Initial access and beam management
NR initial access builds on concepts established in LTE. NR does, just as LTE, make use of the Primary Synchronization Signal (PSS) and Secondary Synchronization Signal (SSS) for physical cell ID acquisition and synchronization to the downlink frame structure of a cell. The NR PBCH carries the master information block, which contains the most critical system information such as the system frame number, system information scheduling details, and the cell barring indication.
The PSS, SSS, and PBCH combination is referred to as the Synchronization Signal/Physical Broadcast Channel (SS/PBCH) block. The SS/PBCH block spans 240 subcarriers and 4 OFDM symbols. Contrary to LTE, where the PSS, SSS, and PBCH have a fixed position in the center of the system bandwidth, NR supports a flexible SS/PBCH block location. To avoid blind decoding by the device, the SS/PBCH block format used for initial cell access is associated with a default numerology coupled to the NR frequency band.
The SS/PBCH blocks may be transmitted in a set of narrow spatial beams, and the uplink time and frequency resources dedicated for the NR Physical Random-Access Channel (PRACH) are associated with these beams. After determining which of the SS/PBCH blocks, and transmit beams, offers the best coverage, a device selects a PRACH time and frequency resource that is associated with a receive beam providing similar spatial coverage as the transmit beam of the selected SS/PBCH block.
Two PRACH sequence lengths are defined. The first is, just as the LTE PRACH, based on a length-839 Zadoff-Chu sequence, while the second is based on a length-139 Zadoff-Chu sequence. For the long Zadoff-Chu sequence four different PRACH formats are defined, supporting cell radii roughly in the range of 15–120 km. The first three formats have inherited the LTE PRACH subcarrier spacing of 1.25 kHz. The fourth is based on a 5 kHz subcarrier spacing catering to high speed scenarios. For the short sequence nine different formats are defined for the subcarrier spacings 15, 30, 60, and 120 kHz. These are mainly intended for the high frequency bands.
2.4.2.4.5. Control and data channels
In the downlink NR supports the Physical Downlink Control CHannel (PDCCH) and the Physical Downlink Shared CHannel (PDSCH). In the uplink the Physical Uplink Control CHannel (PUCCH) and Physical Uplink Shared CHannel (PUSCH) are specified. The functionality of these channels is inspired by and closely related to the corresponding LTE physical channels. Notable is that the PDCCH, in contrast to LTE, does not need to span the full system bandwidth. The frequency location of the PUCCH is also flexible, and not restricted to the edges of the system bandwidth as is the case for LTE.
NR supports Low-Density Parity-Check coding, Polar coding, and Reed-Muller block codes. Low-Density Parity-Check coding is used for the NR data channels and offers good performance for large transport block sizes (TBS). The Polar code gives good performance for short block sizes and is used for the NR control and broadcast channels, with the exception of the shortest control messages, where Reed-Muller block codes are used.
NR has a significant focus on MIMO technologies and supports up to eight downlink layers and four uplink layers. Single-user MIMO, multi-user MIMO, and beamforming are supported. Beamforming is of particular importance for the high frequency bands to overcome the high attenuation associated with mm-waves.

2.4.2.5. NR and LTE coexistence

2.5. MFA

The main technical work in MFA is performed in a set of working groups (WGs) that are organized under a single technical specification group (TSG). The Radio WG is the largest WG. It focuses on the lower layers in the radio protocol stack. The Minimum performance specification WG defines the MFA radio requirements. The End-to-end architecture WG focuses on architectural aspects and higher layer specifications.
Besides the TSG, MFA contains the Industry WG and the Certification group. The Industry WG works on the identification of service requirements with a focus on industrial applications. In general, MFA aims to give significant attention to industrial use cases. The Certification group defines the test specifications for the certification of MFA devices.