This chapter aims to provide an introduction to the concept of broadband satellite networking and the related B-ISDN ATM technology. Although all networks are evolving towards all-IP networking, the new-generation Internet networks have started to adopt the basic principles and techniques developed for ATM networks to support quality of service (QoS), class of service (CoS), fast packet switching, traffic control and traffic management. When you have completed this chapter, you should be able to:
In the early 1990s, research and development in broadband communications based on ATM and fibre optic transmission generated a significant demand for cost-effective interconnection of private and public broadband ATM LANs (also called ATM islands), and for broadband access to these broadband islands via satellite. However, there was a shortage of terrestrial networks to provide broadband connections in wide areas, particularly in more remote or rural areas where terrestrial lines are expensive and uneconomical to install and operate. Satellite networking was considered as an alternative solution for ‘broadband for all’, complementing terrestrial broadband networks due to its flexibility and immediate global coverage. It was also expected to provide distribution and broadcasting services.
In the commercial arena, the need to provide broadband networks over satellites was also expected to significantly increase broadband services. Examples of the identified applications included linking remote office sites (e.g. oil rigs) to the enterprise backbone and providing broadband entertainment services to mobile platforms (e.g. aeroplanes, ships). Other examples included emergency and disaster relief scenarios and remote/rural medical care, where the infrastructure was either disrupted or lacking.
One of the key networking issues was to provide interconnection and also access to geographically dispersed broadband islands in the context of B-ISDN ATM networks with the required QoS and bandwidth. Due to their global coverage and broadcasting nature, satellite networks can also be best used for broadband mobile and broadcasting services, where the major technology challenge is how to design small satellite terminals at low cost but with high-speed transmission for broadband services.
The design of satellite networks was also expected to be directly compatible with the terrestrial networks. It is widely recognised that the development of B-ISDN was not revolutionary but evolutionary. This also required satellites to be able to interconnect the broadband networks as well as existing data networks such as LANs and MANs.
Like other packet network technologies, ATM is a set of protocols using the asynchronous transfer mode to support broadband services; it is not a transmission technology, but can be transported over different types of transmission technologies and media including wireless, cable and satellite networks. ATM has been standardised by the ITU-T, and the ITU-R has produced standards to exploit the potential of satellite ATM networks.
By the late 1990s, the emerging WWW services and applications, based on the Internet, changed the landscape of the telecommunications and data communications industries. It became mandatory to support Internet protocol (IP) solutions, and also to support QoS. This led to the convergence of user terminals, networks, services and applications in the telecoms industry and the Internet towards the next generation of the Internet, taking advantage of both IP and ATM networks.
The principal advantages of satellite systems are their wide coverage and broadcasting capabilities. Satellites can provide broadband connections anywhere in the world, and their cost and complexity are independent of distance. There are clear advantages in extending broadband capabilities to rural and remote areas. Satellite links are quick and easy to install with fewer geographical constraints. They make long-distance connections more cost-effective within the coverage areas, particularly for point-to-multipoint and broadcasting services. Satellites can also complement terrestrial and mobile networks.
In a broadband networking environment, satellite networking can be used for user access mode and also for network transit mode. In the user access mode, the satellite system is positioned at the border of the broadband network. It provides access links to a large number of users directly or via local networks. The interfaces to the satellite system in this mode are of the user network interface (UNI) type on one side and the network node interface (NNI) type on the other side.
In the network transit mode, the satellite systems provide high bit-rate links to interconnect the B-ISDN network nodes or network islands. The interfaces on both sides are NNI type. Figure 5.1 illustrates an example of a configuration of the satellite system for broadband network access and mobile access and Figure 5.2 shows the interconnection of broadband islands/networks.
Figure 5.1 Example of user access mode via satellite ATM network
Figure 5.2 Example of network transit mode via a satellite ATM network
Satellite networks are fundamentally different from terrestrial networks in terms of delay, error and bandwidth characteristics, and these characteristics can have an adverse impact on the performance of network traffic, congestion control procedures and transport protocol operations.
The propagation delay for the packets of a connection consists of the following three quantities: the uplink propagation delay from the source ground terminal to the satellite ($t_{up}$); the inter-satellite link propagation delays ($t_{isl}$, if ISLs are used); and the downlink propagation delay from the satellite to the destination ground terminal ($t_{down}$).

The uplink and downlink satellite–ground terminal propagation delays ($t_{up}$ and $t_{down}$, respectively) represent the time taken for the signal to travel from the source ground terminal to the first satellite in the network and the time taken for the signal to reach the destination ground terminal from the last satellite in the network. They can be calculated as:

$$t_{up} = \frac{d_{up}}{c}, \qquad t_{down} = \frac{d_{down}}{c}$$

where $d_{up}$ and $d_{down}$ are the distances between the ground terminals and the corresponding satellites, and $c$ is the speed of light.
The end-to-end delay also depends on LEO/MEO constellation designs. In contrast to GEO satellites, the LEO uplink and downlink propagation delay is much shorter but variable over time.
We can also denote the transmission delay as $t_t$, the inter-satellite link delay as $t_{isl}$, the on-board switching and processing delay as $t_s$, the buffering delay as $t_q$ and the delay due to the terrestrial networks (the terrestrial tail) as $t_{tail}$. The inter-satellite, on-board switching, processing and buffering delays are cumulative over the path traversed by a connection. The delay variation is caused by orbital dynamics, buffering, adaptive routing (in LEO) and on-board processing. The end-to-end delay ($T$) can then be calculated as:

$$T = t_{up} + \sum t_{isl} + \sum t_s + \sum t_q + t_t + t_{down} + t_{tail}$$
The transmission delay ($t_t$) is the time taken to transmit a single data packet at the network data rate:

$$t_t = \frac{L}{R}$$

where $L$ is the packet size in bits and $R$ is the link data rate.
For broadband networks with high data rates, the transmission delays become negligible in comparison to the satellite propagation delays. For example, it takes only about 212 µs to transmit an ATM cell (424 bits) on a 2 Mbit/s link, which is much less than the propagation delays in satellite networks. Compared with the propagation delays, all the $t_t$, $t_s$ and $t_q$ values are very small and hence can be neglected in calculation.
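As a quick numerical check of these relations, the following sketch (not from the source; the LEO altitude is an illustrative assumption) evaluates the propagation delay $t = d/c$ for GEO and LEO paths and the transmission delay $t_t = L/R$ of one ATM cell at 2 Mbit/s.

```python
# A minimal sketch (not from the source) comparing propagation and
# transmission delays; the altitudes and the 2 Mbit/s example rate
# follow the chapter's figures, the LEO altitude is an assumption.

C = 3.0e8  # speed of light (m/s)

def propagation_delay(distance_m: float) -> float:
    """t = d / c, the time for the signal to traverse the path."""
    return distance_m / C

def transmission_delay(packet_bits: int, rate_bps: float) -> float:
    """t_t = L / R, the time to clock a packet onto the link."""
    return packet_bits / rate_bps

ATM_CELL_BITS = 53 * 8  # 424 bits

# Zenith distances: GEO altitude 35 786 km; a typical LEO altitude ~1000 km.
for name, d in [("GEO", 35_786e3), ("LEO (~1000 km)", 1_000e3)]:
    print(f"{name}: one-way propagation ~ {propagation_delay(d)*1e3:.1f} ms")

# Transmission delay of one ATM cell at 2 Mbit/s: ~212 microseconds,
# negligible next to the GEO propagation delay (~119 ms one way).
print(f"ATM cell at 2 Mbit/s: {transmission_delay(ATM_CELL_BITS, 2e6)*1e6:.0f} us")
```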
The inter-satellite link delay ($t_{isl}$) is the sum of the propagation delays of the inter-satellite links (ISLs) traversed by the connection. These may be in-plane or cross-plane links. In-plane links connect satellites within the same orbit plane, while cross-plane links connect satellites in different orbit planes.
In GEO systems, ISL delays can be assumed to be constant over a connection's lifetime because GEO satellites are almost stationary over a given point on the earth, and with respect to one another. In LEO constellations, the ISL delays depend on the orbital radius, the number of satellites per orbit and the inter-orbital distance (or the number of orbits). All in-plane ISL delays in circular orbits can be considered constant. Cross-plane ISL delays change over time, break at the highest latitudes and must be reformed. As a result, LEO systems can exhibit a high variation in ISL delay.
LEO satellites have lower propagation delays due to their lower altitudes, but many satellites are needed to form a constellation providing global coverage and service. While LEO systems have lower propagation delays, they exhibit higher delay variation due to connection hand-overs and other factors related to orbital dynamics.
The large delays in GEO, and delay variations in LEO, affect both real-time and non-real-time applications. Many real-time applications are sensitive to the large delay experienced in GEO systems, as well as to the delay variation experienced in LEO systems. In an acknowledgement and time-out based congestion control mechanism, performance is inherently related to the delay–bandwidth product of the connection.
Moreover, round-trip time (RTT) measurements are sensitive to delay variations that may cause false time-outs and retransmissions for acknowledgement-based data services. As a result, the congestion control issues for broadband satellite networks are somewhat different from those of low-latency terrestrial networks. Both interoperability and performance issues between satellite and terrestrial networks must be addressed before data, voice and video services can be provided over satellite networks.
The attenuation of free space (called free-space loss, $L_{FS}$) represents the ratio of transmitted to received power in a link between two isotropic antennas:

$$L_{FS} = \left(\frac{4\pi d}{\lambda}\right)^2$$

where $d$ is the propagation distance and $\lambda$ is the wavelength. For a GEO satellite and a station situated exactly under the satellite, the distance between the satellite and the station is 35 786 km (equal to the altitude of the satellite). Therefore $L_{FS}$ is of the order of 200 dB at C band and 207 dB at Ku band. Attenuation is also affected by other effects such as rain, clouds, snow, ice and gas in the atmosphere.
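The following sketch (not from the source; the example carrier frequencies are assumptions) evaluates the free-space loss formula for the GEO sub-satellite path, reproducing the quoted orders of magnitude at C and Ku band.

```python
# A minimal sketch (not from the source) evaluating the free-space loss
# L_FS = (4*pi*d/lambda)^2 in dB for the GEO sub-satellite distance;
# the frequency choices (6 GHz C band, 14 GHz Ku band) are assumptions.
import math

def free_space_loss_db(distance_m: float, freq_hz: float) -> float:
    wavelength = 3.0e8 / freq_hz
    return 20 * math.log10(4 * math.pi * distance_m / wavelength)

d = 35_786e3  # station exactly under the GEO satellite
for band, f in [("C band (6 GHz)", 6e9), ("Ku band (14 GHz)", 14e9)]:
    print(f"{band}: L_FS = {free_space_loss_db(d, f):.1f} dB")
# Roughly 199 dB and 206 dB, matching the ~200/207 dB orders of magnitude.
```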
Satellite communication bandwidth is a limited resource and will continue to be a precious asset. Achieving availability of 99.95% at a very low bit error rate (BER) is costly; lowering the required availability by even 0.05% dramatically lowers satellite link costs. The optimum availability level is therefore a compromise between cost and performance.
There are general constraints on the choice of satellite link parameters due to regulations, operational constraints and propagation conditions. The regulations are administered by the ITU-R, ITU-T and ITU-D. They define space radio-communication services in terms of transmission and/or reception of radio waves for specific telecommunication applications. The concept of a radio communication service is applied to the allocation of frequency bands and analysis of conditions for sharing a given band among compatible services. The operational constraints relate to realisation of a required carrier-to-noise ratio, provision of an adequate satellite antenna beam for coverage of a service area with a specified value of satellite antenna gain, the level of interference between satellite systems, the orbital separation between satellites operating in identical frequency bands and minimisation of total cost.
Therefore the design of high-speed transmission faces great challenges in achieving its error performance objectives.
In this section, the discussion on GEO satellite based broadband networking architecture is based on the design of the CATALYST project. The CATALYST project was the first satellite project funded within the European Framework Programme Research in Advanced Communication in Europe phase II (RACE II) to develop an experimental broadband satellite network for interconnection to geographically dispersed broadband networks called ‘broadband islands’. The CATALYST demonstration took place in 1992–1993 and involved the first transmission of ATM cells over satellite in Europe.
A modular approach was used in the design to interface different networks and the satellite, converting network packets to and from ATM cells. The network architecture and concepts developed in the project are still applicable to modern broadband networks and services. The functions of the main building blocks of the demonstrator are described here.
To make use of the existing satellite systems, development was focused mainly on the ground segment. Many modules were developed, each with buffer(s) for packet/cell conversion and/or traffic multiplexing. The buffers were also used for absorbing high-speed burst traffic.
Therefore, the satellite ATM system can be designed to be capable of interconnecting different networks with capacities in the range of 10–150 Mbit/s (10 Mbit/s for Ethernet, 34 Mbit/s for DQDB, 100 Mbit/s for FDDI and 150 Mbit/s for ATM networks). Figure 5.3 illustrates the model of the ground equipment.
Figure 5.3 Ground segment modules
For internetworking purpose, different modules were developed including the following:
In the demonstrator system, the EUTELSAT II satellite was used, making use of the 36 MHz bandwidth of a transponder in good weather conditions. It achieved a transmission capacity of approximately 20 Mbit/s. This capacity had to be shared by a number of earth stations when multiple broadband islands were interconnected, requiring a trade-off between providing the required QoS and efficient utilisation of the satellite resources (bandwidth and transmission power).
Compared to the propagation delay, the delay within the ground segment was insignificant. Buffering in the ground-segment modules could cause variation of delay, which was affected by the traffic load on the buffer. Most of the variation was caused in the TIM-ATM buffer, which caused an estimated average delay of 10 ms and a worst-case delay of 20 ms. Cell loss occurred when the buffer overflowed. The effects of delay, delay variation and cell loss in the system could be kept to a minimum by controlling the number of applications and the amount of traffic load, and by allocating adequate bandwidth for each application.
The TDMA system was used with a frame length of 20 ms, which was shared by the earth stations. Each earth station was limited to the time slots corresponding to its allocated transmission capacity, up to a maximum of 960 cells per frame (equivalent to 20.352 Mbit/s). The general TDMA format is shown in Figure 5.4.
Figure 5.4 TDMA frame format (earth station to satellite)
There are three levels of resource management (RM) mechanisms. The first level is controlled by the network control centre (NCC) and allocates the bandwidth capacity to each earth station. The allocation is in the form of burst time plans (BTP). Within each BTP, burst times are specified for the earth station, which limit the number of cells in bursts the earth station can transmit. In the CATALYST demonstrator, the limit was that each BTP was less than or equal to 960 ATM cells and the sum of the total burst times was less than or equal to 1104 cells.
The second level is the management of the virtual paths (VPs) within each BTP. The bandwidth capacity that can be allocated to a VP is restricted by the BTP. The third level is the management of the virtual channels (VCs), which is subject to the available bandwidth resource of the VP. Figure 5.5 illustrates the resource management mechanisms for the bandwidth capacity. Each station is allocated a time slot within the burst time plan. Each time slot is further divided and allocated according to the requirements of the VPIs and VCIs. The allocation of the satellite bandwidth is done when the connections are established. Dynamic changing, allocation, sharing or re-negotiation of the bandwidth during the connection is also possible.
Figure 5.5 Satellite resource management
To effectively implement resource management, the allocation of the satellite link bandwidth can be mapped into the VP architecture in the ATM networks and each connection mapped into the VC architecture. The BTP can be a continuous burst or a combination of a number of sub-burst times from the TDMA frame.
The burst-time plan, data arrival rate and buffer size of the ground station have an important impact on system performance. To avoid buffer overflow, the system needs to control the traffic arrival rate, the burst size or the allocation of the burst-time plan. For a given buffer size, the maximum traffic rate that prevents buffer overflow is a function of the burst-time plan and burst size, and the cell loss ratio is a function of the traffic arrival rate and the allocated burst-time plan.
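As an illustration of these relations, a minimal sketch follows (not from the source; the buffer size and arrival rates are illustrative). It converts a BTP allocation in cells per 20 ms frame into throughput, reproducing the 20.352 Mbit/s figure, and applies a simple fluid-model overflow check.

```python
# A minimal sketch (not from the source) relating a burst time plan (BTP)
# allocation to throughput in the 20 ms TDMA frame, with a simple
# buffer-overflow check; buffer size and rates here are illustrative.

ATM_CELL_BITS = 53 * 8      # 424 bits
FRAME_S = 0.020             # 20 ms TDMA frame

def btp_rate_bps(cells_per_frame: int) -> float:
    """Throughput an earth station gets from its BTP allocation."""
    return cells_per_frame * ATM_CELL_BITS / FRAME_S

# The maximum allocation of 960 cells per frame gives 20.352 Mbit/s.
print(f"960 cells/frame -> {btp_rate_bps(960)/1e6:.3f} Mbit/s")

def overflows(arrival_cells_per_s: float, cells_per_frame: int,
              buffer_cells: int, horizon_frames: int = 100) -> bool:
    """Fluid approximation: backlog grows by (arrivals - service) per frame."""
    backlog = 0.0
    for _ in range(horizon_frames):
        backlog = max(0.0, backlog + arrival_cells_per_s * FRAME_S
                      - cells_per_frame)
        if backlog > buffer_cells:
            return True
    return False

# Sustained overload (60 000 cells/s against 48 000 cells/s served) -> True.
print(overflows(arrival_cells_per_s=60_000, cells_per_frame=960,
                buffer_cells=500))
```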
CAC is defined as the set of actions taken by the network at the call set-up phase in order to establish a connection, if sufficient resources are available for the call through the whole network at its required QoS, while maintaining the agreed QoS of all existing calls. This also applies to re-negotiation of connection parameters within a given call. In a B-ISDN environment, a call can require more than one connection for multimedia or multiparty services such as video-telephony or videoconferencing.
A connection may be required by an on-demand service, or by permanent or reserved services. The information about the traffic descriptor and QoS is required by the CAC mechanism to determine whether the connection can be accepted or not. The CAC in the satellite has to be the integrated part of the whole-network CAC mechanisms.
Network policing functions make use of usage parameter control (UPC) mechanisms between user terminals and network nodes, and network parameter control (NPC) mechanisms between network nodes. UPC and NPC monitor and control traffic to protect the network (particularly the satellite link) and enforce the negotiated traffic contract during the call. The peak cell rate has to be controlled for all types of connections. Other traffic parameters may also be subject to control, such as average cell rate, burstiness and peak duration.
At cell level, cells are allowed to pass through the connection if they comply with the negotiated traffic contract. If violations are detected, actions such as cell tagging or discarding are taken to protect the network.
Apart from UPC/NPC tagging, users may also generate traffic flows of different priority by using the cell loss priority (CLP) bit. This is called priority control (PC). As a result, a user's low-priority traffic may not be distinguishable from network-tagged cells, since both user and network use the same CLP bit in the ATM header. Traffic shaping can also be implemented in the satellite equipment to achieve a desired modification of the traffic characteristics. For example, it can be used to reduce the peak cell rate, limit burst length and reduce delay variation by suitably spacing cells in time.
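As an illustration of peak-cell-rate policing, the following sketch (not from the source) implements the virtual-scheduling form of the generic cell rate algorithm commonly used for UPC/NPC; the arrival times, rate and tolerance are illustrative assumptions.

```python
# A minimal sketch (not from the source) of peak-cell-rate policing in the
# spirit of UPC, using the virtual scheduling form of the generic cell
# rate algorithm (GCRA); T is the target cell inter-arrival time (1/PCR)
# and tau the tolerance. The arrival times below are illustrative.

def gcra(arrivals, T: float, tau: float):
    """Yield (time, conforming) for each cell arrival time."""
    tat = 0.0  # theoretical arrival time
    for t in arrivals:
        if t < tat - tau:
            yield t, False            # non-conforming: tag or discard
        else:
            tat = max(t, tat) + T     # conforming: schedule next slot
            yield t, True

# PCR = 1000 cells/s -> T = 1 ms; tolerance 0.2 ms.
cells = [0.0000, 0.0010, 0.0015, 0.0030, 0.0031]
for t, ok in gcra(cells, T=0.001, tau=0.0002):
    print(f"t={t*1e3:4.1f} ms  {'conform' if ok else 'violate'}")
```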
Although preventive control tries to prevent congestion before it actually occurs, the satellite system may still experience congestion due to the earth-station multiplexing buffer or switch output buffer overflow. In this case, where the network relies only on the UPC and no feedback information is exchanged between the network and the source, no action can be taken once congestion has occurred.
Congestion is defined as the state where the network is unable to meet the negotiated QoS objectives for the connections already established. Congestion control (CC) is the set of actions taken by the network to minimise the intensity, spread and duration of congestion. Reactive CC becomes active when there is indication of any network congestion.
Many applications, mainly those handling data transfer, have the ability to reduce their sending rate if the network requires them to do so. Likewise, they may wish to increase their sending rate if there is extra bandwidth available within the network. These kinds of applications are supported by the ABR service class. The bandwidth allocated for such applications is dependent on the congestion state of the network.
Rate-based control is recommended for ABR services, where information about the state of the network is conveyed to the source through special control cells called resource management (RM) cells. Rate information can be conveyed back to the source in two forms: as a binary congestion indication (a single bit set in the RM cell), or as an explicit rate value computed by the network and carried back in the RM cells.
The earth stations may determine congestion status either by measuring the traffic arrival rate or by monitoring the buffer status.
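A minimal sketch of rate-based feedback follows (not from the source; the field names and increase/decrease factors are assumptions): the source updates its allowed cell rate from the congestion indication and explicit rate carried in returned RM cells.

```python
# A minimal sketch (not from the source) of rate-based ABR control: the
# source adjusts its allowed cell rate (ACR) from feedback carried in RM
# cells. Field names and the increase/decrease factors are assumptions.

from dataclasses import dataclass

@dataclass
class RMCell:
    ci: bool          # congestion indication (binary feedback)
    er: float         # explicit rate suggested by the network (cells/s)

def update_acr(acr: float, rm: RMCell, pcr: float, mcr: float,
               rif: float = 1/16, rdf: float = 1/16) -> float:
    """Apply one returned RM cell to the source's allowed cell rate."""
    if rm.ci:
        acr -= acr * rdf              # decrease on congestion
    else:
        acr += rif * pcr              # cautious increase otherwise
    acr = min(acr, rm.er, pcr)        # never exceed the explicit rate
    return max(acr, mcr)              # never drop below the minimum rate

acr = 5_000.0
for rm in [RMCell(False, 20_000), RMCell(True, 8_000), RMCell(False, 8_000)]:
    acr = update_acr(acr, rm, pcr=20_000, mcr=1_000)
    print(f"ACR -> {acr:.0f} cells/s")
```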
Until the launch of the first regenerative INTELSAT satellite in January 1991, all satellites were transparent. Although regenerative, multibeam and on-board switching satellites have potential advantages, they increase complexity, with implications for reliability, flexibility of use, the ability to cope with unexpected changes in traffic demand (in both volume and nature) and new operational procedures. Advanced broadband satellite networks have tried to exploit the benefits of on-board processing and switching, multibeam satellites and LEO/MEO constellations, although complexity is still the main concern for satellite payloads.
The radio access layer (RAL) for satellite access must take into account the performance requirements for satellite systems. A frequency-independent specification is preferred. Parameters to be specified include range, bit rates, transmit power, modulation/coding, framing formats and encryption. Techniques need to be considered for dynamically adjusting to varying link conditions and coding techniques for achieving maximum bandwidth efficiencies.
The medium access control (MAC) protocol is required to support the shared use of the satellite channels by multiple switching nodes. A primary requirement for the MAC protocol is to ensure bandwidth provisioning for all the traffic classes, as identified in UNI. The protocol should satisfy both the fairness and efficiency criteria.
The data link control (DLC) layer is responsible for the reliable delivery of data frames across the satellite link. Since higher layer performance is extremely sensitive to cell loss, error control procedures need to be implemented. Special cases for operation over simplex (or highly bandwidth-asymmetric) links need to be developed. DLC algorithms tailored to specific QoS classes also need to be considered.
Wireless control is needed for support of control plane functions related to resource control and management of the physical, MAC and DLC layers specific to establishing a wireless link over satellites. This also includes meta-signalling for mobility support.
Transparent satellites consist of nothing more than amplifiers, frequency changers and filters. These satellites adapt to changing demands, but at the cost of high space segment tariffs and high-cost, complex earth terminals. OBP increases the complexity in the satellite, but reduces the cost of the use of the space segment and the cost of the earth terminals. There are varying degrees of processing on board satellites:
They may not all be present in one payload, and the exact mix will depend on applications. The advantages rendered by the use of OBP can be summarised as follows:
These add up to much reduced complexity and cheaper ground terminals.
There are potential advantages in performance and flexibility for the support of services by placing switching functions on board satellites. This is particularly important for satellite constellations with spot-beam coverage and/or inter-satellite communications, as it allows networks to be built upon the constellation satellites, relying less on ground infrastructure. Figure 5.6 illustrates the protocol stacks on board the satellite and on the ground.
Figure 5.6 Satellite with ATM on-board switch
In the case of ATM on-board switch satellites, the satellite acts as a switching point within the network (as shown in Figure 5.6) and is interconnected with more than two terrestrial network end points. The on-board switch routes ATM cells according to the VPI/VCI of the header and the routing table when connections are set up. It also needs to support the signalling protocols used for UNI as access links and for NNI as transit links.
On-board switching (OBS) satellites with high-gain multiple spot beams have been considered as key elements of advanced satellite communications systems. These satellites support small, cost-effective terminals and provide the required flexibility and increased utilisation of resources in a burst multimedia traffic environment.
Although employing an on-board switch function results in more complexity on board the satellite, the following are the advantages of on-board switches:
One of the most critical design issues for on-board processing satellites is the selection of an on-board baseband switching architecture. The following types of on-board switches are possible:
These have some advantages and disadvantages, depending on the services to be carried, which are summarised in Table 5.1.
Table 5.1 Comparison of various switching techniques
| Switching architecture | Advantages | Disadvantages |
| --- | --- | --- |
| Circuit switching | Bandwidth-efficient when traffic is constant or smooth | Wastes capacity under burst traffic |
| Fast packet switching | Carries both packet- and circuit-switched traffic | Slightly lower bandwidth efficiency due to packet overheads |
| Hybrid switching | Can be optimal when the circuit/packet traffic mix is known | Risk of over- or under-dimensioning when the traffic mix is unknown |
| Cell switching (ATM switching) | Fixed cell size suits both traffic types; high efficiency through statistical multiplexing | Relatively large overhead per cell |
From a bandwidth efficiency point of view, circuit switching is advantageous when the major portion of the network traffic is constant or smoothly varying, or consists of long bursts. However, for burst traffic, circuit switching results in a lot of wasted bandwidth capacity.
Fast packet switching may be an attractive option for a satellite network carrying both packet-switched traffic and circuit-switched traffic. The bandwidth efficiency for burst traffic will be slightly less due to packet overheads.
In some situations, a mixed-switch configuration, called a hybrid switch and consisting of both circuit and packet switches, may provide the optimal on-board processor architecture. However, the distribution of the traffic is unknown, which makes the implementation of such a switch run the risk of over-dimensioning or under-dimensioning.
For satellite networking, fixed-size fast packet (cell) switching is an attractive solution for both circuit- and packet-switched traffic due to the deterministic nature of the fixed cell size. Using statistical multiplexing of packets, it can achieve the highest bandwidth efficiency despite a relatively large overhead per packet.
In addition, given on-board mass and power-consumption limitations, packet switching is especially well suited to satellite switching because it relies solely on digital communications. It is important that satellite networking follows the trends of terrestrial technologies for seamless integration.
A multibeam satellite features several antenna beams which provide coverage of different service zones, as illustrated by Figure 5.7. As received on board the satellite, the signals appear at the output of one or more receiving antennas. The signals at the repeater outputs must be fed to various transmitting antennas.
Figure 5.7 Multibeam satellite
Spot-beam satellites benefit the earth-station segment by improving the satellite's figure of merit. It is also possible to reuse the same frequency band several times in different spot beams, increasing the total capacity of the network without increasing the allocated bandwidth. However, there is interference between the beams.
One of the current techniques for interconnections between coverage areas is on-board satellite-switched TDMA (SS/TDMA). It is also possible to have packet-switching on-board multibeam satellites.
One of the major disadvantages of GEO satellites is the large distance between the satellites and the earth stations. GEO satellites have traditionally been used mainly to offer fixed telecommunication and broadcast services. In recent years, satellite constellations in low/medium earth orbit (LEO/MEO) have been developed for global communication, with small terminals to support mobility; the distance is greatly reduced. A MEO constellation requires more satellites (plus spares) than GEO to provide global coverage, and a LEO constellation requires even more satellites than a MEO one.
Compared to GEO networks, LEO/MEO networks are much more complicated, but provide a lower end-to-end delay, less free-space loss and higher overall capacity. However, due to the relatively fast movement of satellites in LEO/MEO orbit relative to user terminals, satellite handover is an important issue.
Constellations of LEO/MEO satellites can also be an efficient solution for offering highly interactive services with a very short round-trip propagation time over the space segment (typically 20/100 ms for LEO/MEO, compared to 500 ms for geostationary systems). Such systems can offer performance similar to terrestrial networks, thus allowing the use of common communication protocols, applications and standards.
The use of ISLs for traffic routing has to be considered in LEO/MEO satellite constellations. It must be justified that this technology brings enough benefit to make its inclusion worthwhile, and to what extent on-board switching, or some other form of packet switching, can be incorporated into its use.
The issues that need to be considered when deciding on the use of ISL include:
The mass and power consumption of ISL payloads are factors in the choice of whether to include them in the system, in addition to the possible benefits and drawbacks. Also the choice between RF and optical payloads is now possible because optical payloads have become more reliable and offer higher link capacity. The tracking capability of the payloads must also be considered, especially if the inter-satellite dynamics are high. This may be an advantage for RF ISL payloads.
Advantages of ISLs can be summarised as the following:
Disadvantages of ISLs can be summarised as the following:
Hand-over control is a basic mobile network capability that allows for the migration of terminals across the network backbone without dropping an ongoing call.
Because of the geographical distances involved, hand-over for access over GEO satellites is not expected to be an issue in most applications. In some instances, for example intercontinental flights, a slow hand-over between GEO satellites with overlapping coverage areas will be required. For LEO/MEO satellite networks, hand-over should be implemented to avoid any disruption to the existing connections.
Location management refers to the capability of one-to-one mapping between mobile node ‘name’ and current ‘routing-id.’ Location management primarily applies to the scenario involving switching on board the satellite.
Satellite constellations can use the Ku band (11/14 GHz) for connections between user terminals and gateways. High-speed transit links between gateways will be established using either the Ku or the Ka band (20/30 GHz). Research has also been carried out on the Q band (40 GHz) and V band (50 GHz) to achieve much higher transmission rates, up to terabit/s, for future broadband satellite communication networks.
According to the ITU radio regulation, GEO satellite networks have to be protected from any harmful interference from non-geostationary systems. This protection is achieved through angular separation using a predetermined hand-over procedure based on the fact that the positions of geostationary and constellation satellites are permanently known and predictable. When the angle between a gateway, the LEO/MEO satellite in use by the gateway and the geostationary satellite is smaller than one degree, the LEO/MEO transmissions are stopped and handed over to another LEO/MEO satellite, which is not in similar interference conditions.
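A minimal sketch of this hand-over rule follows (not from the source; the geometry helpers and positions are illustrative): the gateway measures the angle between the serving LEO/MEO satellite and the GEO satellite and triggers a hand-over when it falls below one degree.

```python
# A minimal sketch (not from the source) of the in-line interference
# hand-over rule described above: if the angle at the gateway between the
# serving LEO/MEO satellite and the GEO satellite drops below one degree,
# traffic is handed over to another satellite. Geometry is simplified.
import math

def angle_deg(gw, sat_a, sat_b) -> float:
    """Angle at the gateway between the directions to two satellites."""
    va = [s - g for s, g in zip(sat_a, gw)]
    vb = [s - g for s, g in zip(sat_b, gw)]
    dot = sum(a * b for a, b in zip(va, vb))
    na = math.sqrt(sum(a * a for a in va))
    nb = math.sqrt(sum(b * b for b in vb))
    return math.degrees(math.acos(dot / (na * nb)))

def must_handover(gw, leo_sat, geo_sat, threshold_deg: float = 1.0) -> bool:
    return angle_deg(gw, leo_sat, geo_sat) < threshold_deg

# Illustrative positions (km, Earth-centred frame): nearly in-line case.
gw = (6371, 0, 0)
leo = (7371, 10, 0)     # ~1000 km above the gateway
geo = (42164, 80, 0)    # GEO radius, almost aligned with the LEO path
print(must_handover(gw, leo, geo))  # True: transmissions must be handed over
```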
The constellations provide a cost-effective solution offering a global access to broadband services. The architectures should be capable of supporting a large variety of services; reducing costs and technical risks related to the implementation of the system; ensuring a seamless compatibility and complement with terrestrial networks; providing flexibility to accommodate service evolution with time as well as differences in service requirements across regions; and optimising the use of the frequency spectrum.
ITU-T Recommendation I.356 defines parameters for quantifying the ATM cell transfer performance of a broadband ISDN (B-ISDN) connection. This recommendation includes provisional performance objectives for cell transfer, some of which depend on the user's selection of QoS class.
ITU-T I.356 defines a layered model of performance for B-ISDN, as shown in Figure 5.8.
Figure 5.8 Layered model of performance for B-ISDN (Source: ITU 2000 [3]. Reproduced with permission of ITU.)
It can be seen that the network performance (NP) provided to B-ISDN users depends on the performance of three layers:
ITU-T I.356 also defines a set of ATM cell transfer performance parameters using the cell transfer outcomes. All parameters may be estimated on the basis of observations at the measurement points (MPs). Following is a summary of ATM performance parameters:
Figure 5.9 Cell delay variation parameter definitions (Source: ITU 2000 [3]. Reproduced with permission of ITU.)
The two-point CDV for cell $k$ between measurement points MP1 and MP2 is the difference between the absolute cell transfer delay $x_k$ of cell $k$ from MP1 to MP2 and a defined reference cell transfer delay $d_{1,2}$:

$$v_k = x_k - d_{1,2}$$

The absolute cell transfer delay $x_k$ of cell $k$ between MP1 and MP2 is the difference between the cell's actual arrival time at MP2 and its actual arrival time at MP1. The reference cell transfer delay $d_{1,2}$ between MP1 and MP2 is the absolute cell transfer delay experienced by cell 0 from MP1 to MP2.
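A minimal sketch of these definitions follows (not from the source; the timestamps are illustrative), computing $x_k$ and $v_k$ from arrival times observed at the two measurement points.

```python
# A minimal sketch (not from the source) computing the I.356 two-point
# CDV: the absolute transfer delay x_k of cell k is its MP2 arrival time
# minus its MP1 arrival time, and v_k = x_k - d_12, where d_12 is the
# delay experienced by cell 0. The timestamps below are illustrative.

def two_point_cdv(mp1_times, mp2_times):
    delays = [t2 - t1 for t1, t2 in zip(mp1_times, mp2_times)]  # x_k
    d_ref = delays[0]                    # reference delay d_12: cell 0
    return [x - d_ref for x in delays]   # v_k for each cell k

mp1 = [0.000, 0.001, 0.002, 0.003]       # arrival times at MP1 (s)
mp2 = [0.250, 0.2512, 0.2519, 0.2530]    # arrival times at MP2 (s)
print([f"{v*1e6:+.0f} us" for v in two_point_cdv(mp1, mp2)])
# Cell 0 defines the reference, so v_0 is always 0.
```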
ATM was designed for transmission on a physical medium with excellent error characteristics, such as optical fibre, which has improved dramatically in performance since the 1970s. Therefore, many of the features included in protocols that cope with an unreliable channel were removed from ATM. While this results in considerable protocol simplification in the optical fixed networks ATM was designed for, it also causes severe problems when ATM is transmitted over an error-prone channel, such as the satellite, wireless and mobile networks.
The most important impact of burst errors on the functioning of the ATM layer is the dramatic increase in the cell loss ratio (CLR). The eight-bit header error control (HEC) field in the ATM cell header can correct only single-bit errors in the header. However, in a burst error environment, if a burst of errors hits a cell header, it is likely to corrupt more than a single bit. Thus the HEC field becomes ineffective against burst errors and the CLR rises dramatically.
It has been shown by a simplified analysis and confirmed by actual experiments that for random errors, CLR is proportional to the square of the bit error rate (BER); and for burst errors, CLR is linearly related to BER. Hence, for the same BER, in the case of burst errors, the CLR value (proportional to BER) is orders of magnitude higher than the CLR value for random errors (proportional to the square of BER). Also, since for burst errors, CLR is linearly related to BER, the reduction in CLR with reduction in BER is not as steep as in the case of channels with random errors (where CLR is proportional to the square of BER). Finally, for burst errors, the CLR increases with decreasing average burst length. This is because for the same number of total bit errors, shorter error bursts mean that a larger number of cells are affected.
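The simplified analysis for random errors can be sketched as follows (not from the source): a cell is lost when its 40-bit header suffers two or more bit errors, since the HEC can correct only one, giving a leading term proportional to BER squared.

```python
# A minimal sketch (not from the source) of the simplified analysis: with
# random errors a cell is lost when its 40-bit header takes 2 or more bit
# errors (HEC corrects one), so CLR ~ C(40,2)*BER^2 to leading order;
# with burst errors a single burst can take out a header, so CLR scales
# linearly with BER.
from math import comb

def clr_random(ber: float, header_bits: int = 40) -> float:
    """Leading term: probability of >= 2 errored bits in the header."""
    return comb(header_bits, 2) * ber ** 2

for ber in (1e-4, 1e-6):
    print(f"BER={ber:g}: CLR(random) ~ {clr_random(ber):.2e}")
# At BER=1e-6 the random-error CLR is ~7.8e-10, while a linear burst-error
# model (CLR = k*BER) would sit orders of magnitude higher.
```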
Another negligible but interesting problem is that of misinserted cells. Since the eight HEC bits in the ATM cell header are determined by the 32 other bits in the header, there are only $2^{32}$ valid ATM header patterns out of $2^{40}$ possibilities (for 40 ATM header bits). Thus for a cell header hit by a burst of errors, there is a $2^{32}/2^{40} = 2^{-8}$ chance that the corrupted header is a valid one. Moreover, if the corrupted header differs from a valid header by only a single bit, the HEC will ‘correct’ that bit and accept the header as valid. Thus for every valid header bit pattern (out of $2^{32}$ possibilities), there are 40 other patterns (obtained by inverting one bit out of 40) that can be ‘corrected’. The probability that the error burst leaves the header in one of these patterns is $40 \times 2^{-8}$. Overall, there is therefore a $41 \times 2^{-8} \approx 1/6$ chance that a random bit pattern, emerging after an ATM cell header is hit by a burst of errors, will be taken as a valid header. In that case a cell that should have been discarded is accepted as a valid cell. (Errors in the payload must be detected by the transport protocol at the end points.) Such a cell is called a ‘misinserted’ cell. The probability $P_{mis}$ that a cell will be misinserted in a channel with burst errors is thus around one-sixth of the cell loss ratio on the channel, that is:

$$P_{mis} \approx \frac{CLR}{6}$$

Since CLR can be written as a constant times BER, the misinserted cell probability is also a constant times BER, that is:

$$P_{mis} = k \cdot BER$$

The cell insertion rate $R_{ins}$, the rate at which cells are misinserted into a given connection, is obtained by multiplying this probability by the number of ATM cells transmitted per second ($r$), divided by the total possible number of ATM connections ($2^{24}$, for the 24-bit VPI/VCI address field), that is:

$$R_{ins} = \frac{P_{mis} \cdot r}{2^{24}}$$
Because of the very large number of possible ATM connections, the cell insertion rate is negligible (about one inserted cell per month) even for a high BER (≈10⁻⁴) and high data rates (≈34 Mbit/s). The dominant effect of the transition from random to burst errors is therefore the significant rise in the ATM CLR metric.
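The arithmetic above can be reproduced with a short sketch (not from the source; the CLR value and data rate are illustrative), showing that misinsertions on a single connection occur on the order of months apart.

```python
# A minimal sketch (not from the source) reproducing the misinsertion
# arithmetic above: a random 40-bit header is accepted with probability
# 41/2^8 (a valid pattern, or one bit-flip away from one), giving
# P_mis ~ CLR/6, and the insertion rate is spread over the 2^24 possible
# VPI/VCI values. The CLR and data rate are illustrative assumptions.

def misinsertion_rate(clr: float, cells_per_s: float) -> float:
    p_accept = 41 / 2**8                  # ~1/6.2: burst-hit header accepted
    p_mis = clr * p_accept                # misinsertion probability per cell
    return p_mis * cells_per_s / 2**24    # rate on one VPI/VCI connection

# 34 Mbit/s -> ~80 000 cells/s; assume a pessimistic CLR of 1e-4.
rate = misinsertion_rate(clr=1e-4, cells_per_s=34e6 / 424)
print(f"~{rate:.2e} insertions/s -> one every {1/rate/86400:.0f} days")
```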
The cyclic error detection codes employed by AAL protocols of type 1, 3/4 and 5 are susceptible to error bursts in the same way as the ATM HEC code. A burst of errors that passes undetected through these codes may cause failure of the protocol's mechanisms or corruption of data. The AAL type 1 segmentation and reassembly (SAR) header consists of four bits of sequence number (SN) protected by a three-bit CRC code and a single-bit parity check. There is a 15/255 (approximately 1/17) chance that an error burst on the header will not be detected by the CRC code and parity check. Such an undetected error at the SAR layer may lead to synchronisation failure at the receiver's convergence sublayer. AAL 3/4 uses a 10-bit CRC at the SAR level.
Here, burst errors and scrambling on the satellite channel increase the probability of undetected error. However, full byte interleaving of the ATM cell payload can reduce the undetected error rate by several orders of magnitude by distributing a burst error over two AAL 3/4 payloads. The price paid for distributing the burst error over two AAL payloads is a doubling of the detected error rate and of the AAL 3/4 payload discard rate. AAL type 5 uses a 32-bit CRC code that detects all burst errors of length 32 or less. For longer bursts, the error detection capability of this code is much stronger than that of the AAL 3/4 CRC. Moreover, it uses a length check field, which detects loss or gain of cells in an AAL 5 payload even when the CRC code fails to do so. Hence it is unlikely that a burst error in an AAL 5 payload would go undetected.
It can be seen that AAL 1 and AAL 3/4 are susceptible to burst errors, as fewer redundant bits are used for protection. AAL 5 is more robust against burst errors because it uses more redundant bits.
There are three types of error control mechanisms for improving the quality of broadband traffic over satellite: re-transmission mechanisms, forward error correction (FEC) and interleaving techniques.
Satellite ATM networks try to maintain a very low BER in clear-sky operation 99% of the time. The burst error characteristics of FEC-coded satellite channels adversely affect the performance of the physical, ATM and AAL protocols. Interleaving mechanisms reduce the burst error effect of the satellite links.
A typical example of FEC is to use an outer Reed–Solomon (RS) coding/decoding in concatenation with ‘inner’ convolutional coding/Viterbi decoding. Outer RS coding/decoding will perform the function of correcting error bursts resulting from inner coding/decoding. RS codes consume little extra bandwidth (e.g. 9% at 2 Mbit/s).
HEC codes used in ATM and AAL layer headers are able to correct single-bit errors in the header. Thus, if the bits of $I$ consecutive headers are interleaved before encoding and de-interleaved after decoding, a burst of errors is spread over the $I$ headers, such that the headers emerging after de-interleaving will most probably never have more than a single bit in error each. The HEC code will then be able to correct these single-bit errors and, through its dual mode of operation, no cell/AAL PDU will be discarded. Interleaving involves only a reshuffling of bits on the channel, with no overhead bits involved. However, the process of interleaving and de-interleaving requires additional memory and introduces delay at both sender and receiver.
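A minimal sketch of block interleaving follows (not from the source; the depth of eight headers is an illustrative assumption): bits are written header by header and read column-wise, so a contiguous burst shorter than the depth leaves at most one errored bit per header.

```python
# A minimal sketch (not from the source) of block interleaving: bits from
# I consecutive cell headers are written row-wise and read column-wise,
# so a contiguous error burst on the channel lands at most one bit in
# each header after de-interleaving. The depth I=8 is illustrative.

def interleave(headers, depth):
    """headers: list of equal-length bit lists, len(headers) == depth."""
    return [headers[r][c] for c in range(len(headers[0]))
            for r in range(depth)]

def deinterleave(stream, depth, header_len):
    headers = [[0] * header_len for _ in range(depth)]
    i = 0
    for c in range(header_len):
        for r in range(depth):
            headers[r][c] = stream[i]
            i += 1
    return headers

depth, hlen = 8, 40
headers = [[0] * hlen for _ in range(depth)]   # all-zero headers for clarity
stream = interleave(headers, depth)
for k in range(100, 106):                      # a 6-bit burst on the channel
    stream[k] ^= 1
damaged = deinterleave(stream, depth, hlen)
print([sum(h) for h in damaged])               # at most 1 errored bit each
```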
Burst errors can be mitigated by using FEC and ‘interleaving’ techniques. The performance of these schemes is directly related to the code rate (bandwidth efficiency) and/or the coding gains (power efficiency), provided the delay involved is acceptable to the application.
In broadband satellite networks, we have to exploit FEC coding and interleaving, and trade off transmission quality, in terms of bit error performance, against satellite resources such as bandwidth and power:
Hence some enhancement techniques can be developed to make the transmission of ATM cells over the satellite link more robust. The performance of these techniques is directly related to the code rate (bandwidth efficiency) and/or the coding gain (power efficiency) with additional processing delay.
For large earth stations operating at high data rates, the enhancement techniques try to deal with burst errors.
For small and portable terminals, rapid deployment and relocation are important requirements. The transmission bit rates can be up to, but are normally below, 2.048 Mbit/s. When inter-cell interleaving is not feasible because only a few cells may be transmitted from the terminal, mechanisms that protect single cells have to be found. Interleaving within an entire ATM cell (not only the header), so-called intra-cell interleaving, leads to very small performance gains.
This can be improved by using additional coding to protect the ATM cells. Note that this introduces additional overheads and therefore reduces the useful data bit rate. There are several reasons why FEC or concatenated FEC may not be suitable for enhancing ATM performance over satellite links. First, if only FEC coding is used, then symbol interleaving is usually needed to spread the burst errors over several ATM cell headers. The resulting interleaving delay (which is inversely proportional to the data rate) may be too large at low rates for certain applications. Second, if RS codes are used in concatenation with FEC to correct bursts of errors, either additional bandwidth has to be provided or the data rate has to be reduced.
It is also possible to improve network performance by enhancing equipment which optimises the protocols over a satellite link. This allows the data link layer to be optimised using a combination of protocol conversions and error control techniques. At the transmitter, standard ATM cells are modified to suit the satellite link. At the receiver, error recovery techniques are performed and the modified ATM cells (S-ATM cells) are converted into standard ATM cells.
The main aim of modifying the standard ATM cell is to minimise the rather large ATM header overhead of 5 bytes per 48-byte payload. Of the ATM header information, the address field (which is divided into the VPI and VCI) occupies 24 bits. This allows up to 16 million VCs to be set up. Considering that the cells of a constant bit rate (CBR) connection in particular all carry the same address information in their headers, methods can be devised to avoid duplicating this information. The use of 24 bits of address space may be considered a waste of bandwidth for this scenario.
One method to protect the ATM cell header when interleaving is not possible is to compress the 24-bit address space to eight bits, so that the saved bits can be used to store a duplicate of the header information (except the HEC field) of the previous cell. The HEC is still computed over the first four bytes of the header and inserted into the fifth byte. Therefore, if a cell header contains errors, the receiver can store the payload in a buffer and recover the header information from the next cell, provided that its header does not also contain errors. This method does not protect the payload. Studies show that it provides considerable improvements in CLR compared to standard ATM transmission and even compared to interleaving.
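A minimal sketch of this recovery idea follows (not from the source; the field layout and helper names are illustrative assumptions): each cell carries a copy of the previous cell's compressed header, so a corrupted header can be restored from the next cell.

```python
# A minimal sketch (not from the source) of the header-protection idea
# above: with the address compressed to 8 bits, each cell carries a copy
# of the previous cell's (compressed) header, so a header corrupted on
# the satellite link can be recovered from the next cell. The field
# layout and class/function names are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class SAtmCell:
    header: bytes            # compressed header (address in 8 bits, etc.)
    prev_header_copy: bytes  # duplicate of the previous cell's header
    header_ok: bool          # result of the HEC check on arrival
    payload: bytes = b""

def recover_headers(cells: list[SAtmCell]) -> list[Optional[bytes]]:
    """Return the best-known header for each cell (None if unrecoverable)."""
    out: list[Optional[bytes]] = []
    for i, cell in enumerate(cells):
        if cell.header_ok:
            out.append(cell.header)
        elif i + 1 < len(cells) and cells[i + 1].header_ok:
            out.append(cells[i + 1].prev_header_copy)  # recover from next
        else:
            out.append(None)    # two consecutive corrupted headers: lost
    return out

H = b"\x2a\x01\x00\x00"          # all cells of a CBR connection share this
cells = [SAtmCell(H, b"", True),
         SAtmCell(b"????", H, False),   # header hit by an error burst
         SAtmCell(H, H, True)]
print(recover_headers(cells))    # middle header recovered from the next cell
```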
An alternative is to replace the one-byte HEC, which is inadequate for the satellite environment, with a three-byte HEC.
While fibre optics is rapidly becoming the preferred carrier for broadband communication services, satellite systems can still play an important role. Satellite network configurations and capacity can complement terrestrial broadband networks during the evolution towards broadband for all.
The role of satellites in broadband networking will evolve according to the evolution of the terrestrial networks. However, two main roles can be identified in two scenarios of the broadband network development:
In the first scenario, satellite links provide high bit-rate links between broadband nodes or broadband islands. The CATALYST demonstrator provided an example of this scenario and of the considerations for compatibility between satellite and terrestrial networks. The interfaces with satellite links in this mode are of the NNI type. This scenario is characterised by a relatively small number of large earth stations operating at relatively high speeds.
In the second scenario the satellite can also be located at the border of broadband networks to provide access links to a large number of users. This scenario is characterised by a large number of earth stations whose average and peak bit rates are limited. The traffic at the earth station is expected to show large fluctuations. Dynamic bandwidth allocation mechanisms are used for flexible multiple access.
The problem of efficient use of satellite resources is due to the unpredictable nature of burst traffic and the long delay of the satellite link for reallocating and managing satellite resources. Efficient multiple access schemes for satellite systems remain a challenging research topic for further study. The use of OBP satellites with switching capabilities and spot beams would halve this delay and bring several advantages for interconnecting a large number of users. By using on-board switching, the utilisation of the satellite bandwidth can be maximised by statistically multiplexing the traffic in the sky.
The use of GEO satellites to deliver broadband services has proven feasible. However, delivery of high-speed broadband services to transportable or mobile terminals via satellite is still a big challenge, requiring low delays, low terminal power and high minimum elevation angles. It is a natural evolution path to exploit satellites at much lower altitudes, such as MEO and LEO orbit heights. Satellites at these lower altitudes have much smaller delays and lower terminal power requirements than satellites in GEO orbit. Research is still going on to find the most suitable orbits, new suitable frequency bands and multiple access schemes to deliver broadband services to small portable and mobile terminals.
The major factor affecting the direction of satellite broadband networking comes from terrestrial networks where networks are evolving towards all-IP solutions. Therefore, it is a logical step to investigate all IP solutions for satellite networking and IP routers on board satellites.
1 Explain the design issues and concepts concerning B-ISDN ATM over broadband satellites.
2 Explain the CATALYST GEO satellite ATM networking and advanced satellite networking with LEO/MEO constellations.
3 Use a sketch to explain the major roles of satellites in broadband networks and also the protocol stacks of the broadband network interconnection and terminal access configurations.
4 Explain the differences between satellites with transparent and on-board switching payload for broadband networks, and discuss advantages and disadvantages.
5 Explain broadband network performance issues and enhancement techniques for satellite networks.
6 Explain the concepts of on-board processing and on-board switching techniques, and discuss their advantages and disadvantages.
7 Discuss the advantages and disadvantages of broadband networks based on GEO, MEO and LEO satellites.