Chapter 8
Next Generation Internet (NGI) over Satellite

This chapter aims to introduce next generation Internet (NGI) over satellite. Satellites are considered an integral part of the Internet, and future networks and services are evolving towards an all-IP solution. First, this chapter introduces new services and applications, modelling and traffic engineering and multi-protocol label switching (MPLS); then it introduces Internet protocol version 6 (IPv6), including addressing and transitions, and in particular it explains IPv6 over satellite, including tunnelling and translation techniques for IPv6 over satellite networks. Finally, as a conclusion, it discusses the future development of satellite networking. When you have completed this chapter, you should be able to:

8.1 Introduction

In recent years, we have seen the tremendous spread and growth of mobile networks. The new generations of mobile telephones have become more and more sophisticated, with increasing capabilities of email, WWW access, multimedia messaging, streaming voice and video broadcasting, which go far beyond the original definition of mobile phones.

In terms of software, the mobile phone is more like a computer than a telephone. There are full Internet protocol stacks (TCP/IP) implemented plus transmission technologies (infrared, wireless, USB, etc.), and various peripheral devices. In computer networks, Ethernet and wireless LANs dominate LANs. In mobile networks, the GSM and 3G/4G mobile networks are evolving towards 5G networks. They are converging towards an all-IP solution. The Internet protocol (IP) has also evolved to cope with demands from networking technologies and new services and applications.

Inevitably, satellite networking is also evolving towards an all-IP solution and is following the trends in the terrestrial mobile and fixed networks. In addition to user terminals, services and applications are also converging, that is, satellite network terminals aim to be the same as terrestrial network terminals providing the same user interface and functionality. As the current satellite networks integrate with terrestrial networks, it is not difficult to see that future satellite terminals will be fully compatible with standard terrestrial network terminals, but with a different air interface in the lower layers of the protocol stack (physical and link layers only).

In traditional computer networks, network designers were not very concerned with QoS and traffic engineering. For real-time services, QoS and traffic engineering are important and have been successfully implemented in telephony networks for nearly a century. As more and more people own portable computers and smartphones, mobility is now a new requirement. More and more business transactions, commercial and public uses of the Internet make security a very important Internet issue. More and more TV programmes are streamed through the Internet.

The original design of the Internet did not take all these new requirements, nor the large scale of today's Internet, into consideration. Though IPv4 has almost run out of IP addresses, ever more new devices need to be connected to the Internet, such as electricity meters, gas meters, water meters, even microwave ovens, rice cookers, washing machines, cars and so on, in addition to laptop computers, smartphones and TVs. Although IPv6 has started to address these issues, we are still a long way from a perfect solution.

So far, we have completed our discussion of the protocol layers from the physical layer up to the transport layer. Now it is time to discuss the application layer, new services and applications (starting from information processing) and the development of satellite networks and related issues, including traffic modelling and characterisation, MPLS, traffic engineering and IPv6.

8.2 New Services and Applications

We have discussed various kinds of network services which we expect to support over satellite networks. The services information has to be encoded in proper formats to be suitable for transmission, and decoded at the receiver. The new services and applications can include major components of high-quality digital voice, image and video (and combinations of these) across broadband networks. Here we briefly discuss some of these related topics.

8.2.1 Internet Integrated Services

One of the principal functions of network layer protocols is to offer universal connectivity, and a uniform service interface, to higher layer protocols—in particular, to transport layer protocols—independent of the nature of the underlying physical network. Correspondingly, the function of transport layer protocols is to provide session control services (e.g. reliability) to applications, without being tied to particular networking technologies.

Unless applications run over common network and transport protocols, interoperability for the same applications running on different networks would be difficult, if not impossible. Most multimedia applications will continue to build upon enhancements of current Internet protocols, and deploy a wide variety of high-speed Internet networking technologies.

In the specific case of IP, the Internet Engineering Task Force (IETF) has developed the notion of Internet integrated services. This envisages a set of enhancements to IP to allow it to support integrated or multimedia services. These enhancements include traffic management mechanisms that closely match the traffic management mechanisms of telecommunication networks.

Network protocols rely upon a flow specification to characterise the expected traffic patterns for streams of IP packets, which the network can process through packet-level policing, shaping and scheduling mechanisms to deliver a requested QoS. In other words, a flow is a layer-three connection, since it identifies and characterises a stream of packets, even though the protocol remains connectionless.

8.2.2 Elastic and Inelastic Traffic

There are two main classifications of Internet traffic generated by services and applications:

  • Elastic traffic: this type of traffic is essentially based on TCP, that is, it uses TCP as the transport protocol. Elastic traffic is defined as traffic that is capable of adapting its flow rate according to changes in delay and throughput across the network. This capability is built into the TCP flow control mechanisms. This type of traffic is also known as opportunistic traffic, that is, if resources are made available, these applications try to consume them; on the other hand, if the resources are temporarily unavailable they can wait (withholding transmission) without adversely affecting the applications. Examples of elastic traffic include email, file transfers, network news, and interactive applications such as remote login (telnet) and web access (HTTP). These applications can cope well with delay and variable throughput in the network. This type of traffic can be further categorised into long-lived and short-lived responsive flows, depending on the length of time the flows are active. FTP is an example of a long-lived responsive flow while HTTP represents a short-lived flow.
  • Inelastic traffic: this type of traffic is essentially based on UDP, that is, it uses UDP as the transport protocol. Inelastic traffic is exactly the opposite of elastic traffic: it is incapable of varying its flow rate when faced with changes in delay and throughput across the network. A minimum amount of resources is required to ensure the application works well; otherwise, the application will not perform adequately. Examples of inelastic traffic include conversational multimedia applications such as voice or video over IP, interactive multimedia applications such as network games or distributed simulations, and non-interactive multimedia applications such as distance learning or audio/video broadcasts, where a continuous stream of multimedia information is involved. These real-time applications can cope with small delays but cannot tolerate jitter (variations in delay). This stream traffic is also known as long-lived non-responsive flow.
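The contrast between the two traffic classes can be sketched in a few lines of Python. This is a toy illustration, not a model of real TCP: the elastic source follows an additive-increase/multiplicative-decrease (AIMD) rule of the kind built into TCP flow control, while the inelastic source keeps its codec rate regardless of congestion. The function names and constants are our own.

```python
# Toy sketch of elastic vs inelastic sources. An elastic source adapts
# its rate with additive increase / multiplicative decrease (AIMD); an
# inelastic source keeps sending at its fixed rate regardless of
# congestion.

def elastic_rate(rate, congested, increase=1.0, decrease=0.5):
    """AIMD update: grow linearly when the path is clear, halve on congestion."""
    return rate * decrease if congested else rate + increase

def inelastic_rate(rate, congested):
    """An inelastic (e.g. voice) source cannot adapt: same rate either way."""
    return rate

rate = 1.0
for congested in [False, False, False, True, False]:
    rate = elastic_rate(rate, congested)
print(round(rate, 2))  # rate grows to 4, halves to 2 on loss, then grows to 3
```

The inelastic source would ignore the congestion signal entirely, which is exactly why it needs a minimum amount of network resources to perform adequately.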

In terms of applications, the Internet has to carry the existing computer data traffic. Traditional applications include file transfer (using the FTP protocol), remote login sessions (telnet) and email (SMTP). However, these applications have been somewhat overshadowed by the World Wide Web (HTTP). Voice over IP and video and audio streaming over IP applications are quickly emerging and are contributing significantly to the composition of Internet traffic; indeed, they are expected to be the major bandwidth consumers in the future. While the protocol composition remains roughly the same in proportion, UDP applications are expected to show an increase in the RTP/RTCP (real-time) portion. This is due to increases in audio/video streaming and online gaming applications. Potentially, HDTV and 3D TV programmes are also streamed across the Internet.

8.2.3 QoS Provision and Network Performance

As defined in the QoS architecture, best-effort service is the default service that a network gives to an IP datagram between the source and destination in today's Internet. Among other implications, this means that when a datagram is handled as best effort, all the flow controls that normally apply to best-effort traffic also apply to it, and the network gives no guarantee of delivery, delay or throughput.

The controlled load service is intended to support a broad class of applications in the Internet that is highly sensitive to overload conditions. Important members of this class are the 'adaptive real-time applications'. These applications work well on networks under light load conditions, but degrade quickly under overload conditions. A service which mimics an unloaded network serves these applications well.

Guaranteed service means that a datagram will arrive within a limited time with limited packet loss ratio, if the flow's traffic stays within its specified traffic parameters. This service is intended for applications requiring a firm guarantee of delay within a certain time limit for the traffic to reach its destination. For example, some audio and video ‘playback’ applications are intolerant of any datagram arriving after their playback time. Applications that have hard real-time requirements also require guaranteed service.

In playback applications, datagrams often arrive far earlier than the delivery deadline and have to be buffered at the receiving system until it is time for the application to process them.

8.3 Traffic Modelling and Characterisation

Future network infrastructures will have to handle a huge amount of IP traffic from different types of services, including a significant portion of real-time services. The multi-service characteristics of such a network infrastructure demand one clear requisite: the ability to support different classes of services with different QoS requirements. Moreover, Internet traffic is more variable in time and data rate than traditional telecommunication traffic, and it is not easily predictable. This means that networks have to be flexible enough to react adequately to traffic changes. Besides the requirements of flexibility and multi-service capability that lead to different levels of QoS requirements, there is also a need to reduce complexity.

8.3.1 Traffic Engineering Techniques

Multi-service networks need to support a varied set of applications. These applications contain one or (often) a combination of the following components: data, audio and video. More widely termed multimedia applications, these components, together with the applications' requirements, generate a heterogeneous mixture of traffic with different statistical and temporal characteristics. These applications and services require resources to perform their functions; of special interest is resource sharing among application, system and network. Traffic engineering is a network function that controls a network's response to traffic demands and other stimuli (such as failures) and encompasses traffic and capacity/resource management. In order for multi-service networks to support these applications efficiently, while at the same time optimally utilising the networks' resources, traffic engineering mechanisms need to be devised. These mechanisms relate intrinsically to the characteristics of the traffic entering the network. Devising efficient resource and traffic management schemes requires an understanding of the source traffic characteristics and the development of appropriate traffic models. Hence, source traffic characterisation and modelling is a crucial first step in the overall network design and performance evaluation process. Indeed, traffic modelling is identified as one of the key subcomponents of the traffic engineering process model.

8.3.2 Traffic Modelling

Traffic characterisation describes what traffic patterns the application/user generates. The goal is to develop an understanding of the nature of the traffic and to devise tractable models that capture the important properties of the data traffic, which can eventually lead to accurate performance prediction. Tractability is an important feature, as it means that the traffic models used in subsequent analysis readily lend themselves to numerical computation, simulation and analytical solutions. Such models must also capture traffic behaviour over a wide range of time scales.

Traffic modelling summarises the expected behaviour of an application or an aggregate of applications. Among the primary uses of traffic characteristics are:

  • long-range planning activities (network planning, design and capacity management);
  • performance prediction, real-time traffic control/management and network control.

Traffic models can be utilised in three different applications:

  • As a source for generating synthetic traffic to evaluate network protocols and designs. This complements the theoretical part of the analysis, which increases in complexity as networks become complicated.
  • As traffic descriptors for a range of traffic and network resource management functions. These include call admission control (CAC), usage parameter control (UPC) and traffic policing. These functions are key to meeting network QoS targets while achieving high multiplexing gains.
  • As source models for queuing analysis. Queuing systems are used extensively as the primary method for evaluating network performance and as a tool in network design; a reasonably good match between model and real network traffic makes analytical results more useful in practical situations.

8.3.3 Statistical Methods for Traffic Modelling

The main aim of traffic modelling is to map accurately the statistical characteristics of actual traffic to a stochastic process from which synthetic traffic can be generated.

For a given traffic trace (TT), the model finds a stochastic process (SP) defined by a small number of parameters such that:

  • TT and SP give the same performance when fed into a single server queue (SSQ) for any buffer size and service rate.
  • TT and SP have the same mean and autocorrelation (goodness-of-fit).
  • Preferably, SP and SSQ are amenable to analysis.

There are many traffic models that have been developed over the years.

8.3.4 Renewal Models

A renewal process is defined as a discrete-time stochastic process, {Xn, n = 1, 2, ...}, where the Xn are independent identically distributed (iid), non-negative random variables with a general distribution function. Independence here implies that the observation at time n does not depend on past or future observations, that is, there is no correlation between the present observation and previous observations.

Analysing renewal processes is mathematically simple. However, there is one major shortcoming with this model: the absence of an autocorrelation function. Autocorrelation is a measure of the relationship between two time instances of a stochastic process. It is an important parameter and is used to capture the temporal dependency and burstiness of the traffic. As mentioned previously, temporal dependencies are important in a multimedia traffic stream, while bursty traffic is expected to dominate broadband networks. Therefore, models that capture the autocorrelated nature of traffic are essential for evaluating the performance of these networks.

Nevertheless, because of its simplicity, the renewal process model is still widely used to model traffic sources. Examples of a renewal process include the Poisson and Bernoulli processes.
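As a quick numerical illustration of the renewal property (names and parameters here are our own), the following Python sketch generates iid exponential inter-arrival times, i.e. a Poisson arrival process, and estimates the lag-1 autocorrelation of the sequence, which should come out close to zero:

```python
# Renewal process sketch: iid exponential inter-arrival times form a
# Poisson arrival process. Because the samples are independent, the
# lag-1 autocorrelation of the inter-arrival sequence is statistically
# indistinguishable from zero.
import random

random.seed(42)
interarrivals = [random.expovariate(1.0) for _ in range(10000)]

def lag1_autocorr(xs):
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    cov = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1)) / n
    return cov / var

r = lag1_autocorr(interarrivals)
print(abs(r) < 0.05)  # near zero: renewal models carry no correlation
```

This absence of correlation is precisely the shortcoming discussed above: a renewal model cannot reproduce the burstiness of real multimedia traffic.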

8.3.5 Markov Models

The Poisson and Bernoulli processes display memory-less properties in the sense that the future does not depend on the past, that is, the occurrences of new arrivals do not depend on the history of the process. This in turn results in the non-existence of the autocorrelation function since there is no dependency among the random sequence of events.

Markov-based traffic models overcome this shortcoming by introducing dependency into the random sequence. Consequently, the autocorrelation is now non-zero and can capture the burstiness of the traffic. Markov dependency, or a Markov process, is defined as a stochastic process {Xn} where, for any n and given the values of X1, X2, ..., Xn, the distribution of Xn+1 depends only on Xn. This implies that the next state in a Markov stochastic process only depends on the current state of the process and not on states assumed previously; this is the minimum possible dependence that can exist between successive states. How the process arrives at the current state is irrelevant.

Another important implication of this Markov property is that the next state only depends on the current state and not on how long the process has already been in that (current) state. This means that the state residence times (also called sojourn times) must be random variables with memory-less distribution. Examples of Markov models include on-off and Markov modulated Poisson process (MMPP).
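A minimal on-off Markov source can be simulated as below. This is an illustrative sketch with arbitrary transition probabilities p (ON to OFF) and q (OFF to ON); the geometrically distributed sojourn times reflect the memory-less property just discussed.

```python
# Two-state on-off Markov source: in the ON state it emits at the peak
# rate, in the OFF state it is silent. Per-step transition probabilities
# p (ON->OFF) and q (OFF->ON) give geometric (memory-less) sojourn times.
import random

def on_off_source(steps, p=0.2, q=0.4, peak_rate=64, seed=1):
    rng = random.Random(seed)
    state = 'ON'
    rates = []
    for _ in range(steps):
        rates.append(peak_rate if state == 'ON' else 0)
        if state == 'ON' and rng.random() < p:
            state = 'OFF'
        elif state == 'OFF' and rng.random() < q:
            state = 'ON'
    return rates

rates = on_off_source(5000)
# Long-run fraction of ON time tends to q / (p + q) = 0.4 / 0.6, about 0.67
frac = sum(1 for r in rates if r > 0) / len(rates)
print(round(frac, 2))
```

Superposing many such independent on-off sources is one classic way to build an aggregate voice-traffic model.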

8.3.6 Fluid Models

Fluid traffic models view traffic as a stream of fluid characterised by a flow rate (e.g. bits per second), so the model tracks traffic volume rather than counts of individual traffic units. Fluid models are based on the assumption that the number of individual traffic units (packets or cells) generated during the active period is so large that the traffic appears like a continuous flow of fluid. In other words, a single unit of traffic would have little significance and its impact on the overall flow is negligible, that is, individual units only add infinitesimal information to the traffic stream.

An important benefit of the fluid traffic model is that it can achieve enormous savings in computing resources when simulating streams of traffic as described above. For example, in an ATM network scenario supporting the transmission of high-quality video, it requires a large number of ATM cells for a compressed video at 30 frames per second. If a model were to distinguish between cells and consider the arrival of each ATM cell as a separate event, processing cell arrivals would quickly consume vast amounts of CPU and memory, even if the simulated time were in the order of a few minutes.

By assuming that the incoming fluid flow remains (roughly) constant over much longer periods, a fluid flow simulation performs well: a change in flow rate signals the event that the traffic is fluctuating. Because these changes occur far less frequently than the arrivals of individual cells, the computing overhead involved is greatly reduced.
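The saving can be illustrated with a small Python sketch (the trace and rates below are hypothetical): we count one simulation event per cell versus one event per rate change.

```python
# Fluid-model sketch: instead of one simulation event per cell, record
# an event only when the flow *rate* changes. Each constant-rate period
# then collapses into a single event.

def cell_level_events(periods):
    """periods: list of (duration_s, rate_cells_per_s). One event per cell."""
    return sum(int(d * r) for d, r in periods)

def fluid_level_events(periods):
    """One event per rate change, regardless of how many cells flow."""
    return len(periods)

# A hypothetical 60 s trace alternating between 10 000 and 2 000 cells/s
trace = [(1.0, 10000), (1.0, 2000)] * 30
print(cell_level_events(trace), fluid_level_events(trace))  # 360000 vs 60
```

Four orders of magnitude fewer events is what makes fluid simulation of high-rate video streams tractable.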

8.3.7 Auto-regressive and Moving Average Models

Auto-regressive traffic models define the next random variable in the sequence {Xn} as an explicit function of previous variables within a time window stretching from present to past. Some of the popular auto-regressive models are:

  • Linear auto-regressive processes, AR(p), described as:

    Xn = a1·Xn−1 + a2·Xn−2 + ... + ap·Xn−p + Wn

    where {Xn} is the family of random variables, a1, ..., ap are real constants and {Wn} are zero-mean, uncorrelated random variables, also called white noise, which are independent of Xn.

  • Moving average processes, MA(q), described as:

    Xn = b0·Wn + b1·Wn−1 + ... + bq·Wn−q

  • Auto-regressive moving average processes, ARMA(p, q), described as:

    Xn = a1·Xn−1 + ... + ap·Xn−p + b0·Wn + b1·Wn−1 + ... + bq·Wn−q
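The AR(1) special case is easy to experiment with. The sketch below (parameters are illustrative) generates Xn = a·Xn−1 + Wn and estimates the lag-1 autocorrelation, which for a stationary AR(1) process equals the coefficient a:

```python
# Minimal AR(1) generator: X[n] = a*X[n-1] + W[n] with Gaussian white
# noise W[n]. Unlike renewal models, successive values are correlated:
# the theoretical lag-1 autocorrelation of a stationary AR(1) equals a.
import random

def ar1(n, a=0.8, seed=7):
    rng = random.Random(seed)
    x, xs = 0.0, []
    for _ in range(n):
        x = a * x + rng.gauss(0, 1)   # white noise term W[n]
        xs.append(x)
    return xs

xs = ar1(20000)
mean = sum(xs) / len(xs)
var = sum((v - mean) ** 2 for v in xs) / len(xs)
cov = sum((xs[i] - mean) * (xs[i + 1] - mean)
          for i in range(len(xs) - 1)) / len(xs)
print(round(cov / var, 2))  # close to a = 0.8
```

This built-in correlation is what makes auto-regressive models attractive for coded video traffic within a scene.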

8.3.8 Self-similar Models

The development of these models was based on the observation that Internet traffic dynamics, resulting from interactions among users, applications and protocols, are best represented by the notion of 'fractals', a well-established theory with wide applications in physics, biology and image processing. Therefore, it is natural to apply traffic models that are inherently fractal to characterise Internet traffic dynamics and to generate synthesised traffic in a computationally efficient manner.

Wavelet modelling offers a powerful and flexible technique for mathematically representing network traffic at multiple time scales. A wavelet is a mathematical function having principles similar to those of Fourier analysis; it is widely used in digital signal processing and image compression techniques.

8.4 The Nature of Internet Traffic

Internet traffic is due to a very large pool of uncoordinated, that is, independent users accessing and using the various applications. Each Internet communication consists of a transfer of information from one computer to another, for example for downloading web pages or sending/receiving emails. Packets containing bits of information transmitted over the Internet are the result of simultaneous active communications between two or more computers or smartphones on the Internet.

8.4.1 World Wide Web (WWW)

A web page, when downloaded, typically consists of multiple elements or objects. These objects are loaded using separate HTTP GET requests, serialised over one or more parallel TCP connections to the corresponding server(s). In practice, web access is request-response oriented, with bursts of numerous requests and small, unidirectional responses. Retrieval of a complete web page requires separate requests for the text and for each embedded image, thus making the traffic inherently bursty. Figure 8.1 illustrates a typical message sequence in a web surfing session.


Figure 8.1 Web surfing message sequence

The characteristics of web traffic have been studied over the years to understand the nature of the traffic. One of the key findings is that web traffic comes in bursts, rather than in steady flows, and the same patterns of bursts repeat themselves regardless of whether the time interval studied is a few seconds long or a millionth of a second. This particular type of traffic is called the self-similar or fractal or scale-invariant traffic. Fractals are objects whose appearances are unchanged at different time scales. Essentially, a self-similar process behaves in a similar way (looks statistically the same) over all time scales.

Analyses of actual web traffic traces can help us understand the causes of this phenomenon. Statistical parameters of traffic traces include the size of HTTP files, the number of files per web page, user browsing behaviour (user think times and successive document retrievals) and the distribution properties of web requests, file sizes and file popularity. Studies indicated that these measured quantities are highly variable. It was found that the best distributional model for such highly variable datasets as file sizes and request inter-arrivals is one with a heavy tail. The self-similarity phenomenon is due to the superposition of many on/off sources, each of which exhibits the infinite variance syndrome.

Processes with heavy-tailed sojourn-time distributions have long-term (slowly decaying) correlations, also known as long-range dependence (LRD). The autocorrelation function of such processes has the form:

8.1  r(k) ~ c·k^(−β) as k → ∞, where c > 0 and 0 < β < 1

The autocorrelation function thus decays hyperbolically, which is much slower than exponential decay. In addition, the sum of the autocorrelation values approaches infinity (the autocorrelation function is non-summable) since 0 < β < 1. One of the consequences of this non-degenerate correlation is the 'infinite' influence of LRD on the data. Aggregation of LRD sources produces a traffic stream with self-similar characteristics, as indicated by actual traffic traces.
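The difference between summable and non-summable autocorrelations can be checked numerically. In this sketch (decay constants chosen purely for illustration), the exponentially decaying function has partial sums that converge, while the hyperbolic one with β = 0.5 < 1 grows without bound:

```python
# Short-range vs long-range dependence: an exponentially decaying
# autocorrelation r(k) = 0.5**k has a finite sum, while a hyperbolically
# decaying r(k) = k**(-0.5) (beta = 0.5 < 1) keeps growing as more lags
# are included, i.e. it is non-summable.

def partial_sum(r, lags):
    return sum(r(k) for k in range(1, lags + 1))

exp_decay = lambda k: 0.5 ** k        # short-range dependent
hyp_decay = lambda k: k ** -0.5       # long-range dependent (beta = 0.5)

for lags in (100, 10000):
    print(lags,
          round(partial_sum(exp_decay, lags), 3),
          round(partial_sum(hyp_decay, lags), 1))
# The exponential sum settles near 1.0; the hyperbolic sum keeps growing.
```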

8.4.2 Pareto Distribution Model for Self-similar Traffic

One of the classes of distributions that are heavy-tailed is the Pareto distribution; its probability density function (pdf) is defined as:

8.2  f(x) = α·b^α / x^(α+1),  x ≥ b

Its cumulative distribution function (cdf) is given by:

8.3  F(x) = 1 − (b/x)^α,  x ≥ b

The mean and variance of the Pareto distribution are given respectively by:

8.4  E[X] = α·b / (α − 1),  α > 1

8.5  Var[X] = α·b² / ((α − 1)²·(α − 2)),  α > 2

Here α is the shape parameter and b the location parameter; hence α > 2 for this distribution to have a finite mean and variance. However, from Equation (8.5), when α ≤ 2, as required for the heavy-tail definition to hold, the Pareto distribution becomes a distribution with an infinite variance. A random variable whose distribution has an infinite variance implies that the variable can take on extremely large values with non-negligible probability.
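Since the Pareto cdf inverts in closed form, samples can be drawn by the inverse-transform method, x = b / (1 − u)^(1/α) for uniform u. The following sketch (sample size and parameters are illustrative) shows the heavy tail in action: with α = 1.5 the mean is finite but the variance is infinite, so very large samples appear with non-negligible probability.

```python
# Inverse-transform sampling of the Pareto distribution: solving
# F(x) = 1 - (b/x)**alpha = u for x gives x = b / (1 - u)**(1/alpha).
# With alpha <= 2 the variance is infinite, so occasional huge samples
# dominate: the heavy-tail behaviour behind self-similar web traffic.
import random

def pareto_sample(alpha, b, rng):
    u = rng.random()
    return b / (1 - u) ** (1 / alpha)

rng = random.Random(0)
alpha, b = 1.5, 1.0                          # finite mean, infinite variance
xs = [pareto_sample(alpha, b, rng) for _ in range(100000)]
theoretical_mean = alpha * b / (alpha - 1)   # = 3.0 for alpha = 1.5
print(round(theoretical_mean, 2), round(max(xs), 1))  # note the huge maximum
```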

8.4.3 Fractional Brownian Motion (FBM) Process

There have been advances in the development of reliable analytical models representing self-similar traffic. The fractional Brownian motion (FBM) process can be used as the basis of a workload model for generating synthesised self-similar traffic, resulting in a simple but useful relationship between the number of customers in the system, N, and the system utilisation, ρ. Assuming an infinite buffer with a constant service time, this relationship is given as:

8.6  N = ρ^(1/(2(1−H))) / (1 − ρ)^(H/(1−H))

where H is the Hurst parameter (0.5 ≤ H < 1), which is often used as a measure of the degree of self-similarity in a time series. Note that when H = 0.5 the above equation reduces to the classical M/M/1 result N = ρ/(1 − ρ) (a queuing system with exponential inter-arrival time and exponential service time). Hence, a value of 0.5 represents a memory-less process, whereas a value of one corresponds to a process that looks the same in all respects at whatever time scale.

Using the above relationship, we plotted the distribution of the average number of packets in the system as a function of the system utilisation for a range of H values and compared this with exponential traffic (see Figure 8.2). We can see that the traces show the same trend, with a characteristic 'knee' beyond which the number of packets increases rapidly. We can also see that as the H parameter value increases, that is, as the traffic becomes more self-similar, it becomes difficult to achieve high utilisation factors. Operating at high system utilisation requires considerable buffer provision to avoid overflow. That is to say, if we were to design the system according to what is predicted by exponential traffic, we would not be able to operate at high utilisation when subjected to self-similar traffic, because the buffer would very quickly overflow.
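The relationship in Equation (8.6) is easy to evaluate directly. The sketch below computes the mean number in the system at a fixed utilisation for several Hurst parameter values; at H = 0.5 it reproduces the M/M/1 value ρ/(1 − ρ), and larger H inflates the queue sharply at the same utilisation.

```python
# FBM mean-queue relationship: N = rho**(1/(2*(1-H))) /
# (1 - rho)**(H/(1-H)). At H = 0.5 this reduces to the M/M/1 result
# N = rho/(1 - rho); higher H (more self-similar traffic) inflates the
# queue dramatically at the same utilisation.

def mean_in_system(rho, H):
    return rho ** (1 / (2 * (1 - H))) / (1 - rho) ** (H / (1 - H))

rho = 0.8
for H in (0.5, 0.7, 0.9):
    print(H, round(mean_in_system(rho, H), 1))
```

The steep growth with H is the numerical counterpart of the 'knee' seen in Figure 8.2: self-similar traffic cannot sustain high utilisation without very large buffers.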


Figure 8.2 Comparison between self-similar traffic and exponential traffic

8.4.4 Consideration of User Behaviour in Traffic Modelling

Traffic sources are random, or stochastic, in nature, and the only way to describe them is in statistical terms. Numerous models have been developed to capture and represent the randomness of this behaviour in the form of tractable mathematical equations.

Among the traffic characteristics of interest are arrival rate, inter-arrival time, packet sizes, burstiness, duration of connection and the distribution of arrival times between application invocations. Another important characteristic is the correlation between successive arrivals or between arrivals from different sources. Correlation functions are important measurements used to describe temporal dependencies between sources and bursts of traffic. A temporal or timing relation between sources is especially important in multimedia traffic.

The most widely used assumption in modelling these characteristics has been to treat them as independent identically distributed (iid) random arrival events. Under this assumption, the joint distribution of two or more random variables factorises, and there is no correlation between the variables. This implies that users are independent of each other: the generation of traffic from one user does not affect the behaviour of another user. This property can simplify the mathematical analysis and gives rise to a unique formula representing certain characteristics of interest. While this assumption has been useful, it also gives rise to an independent, uncorrelated arrival process. In real-life scenarios, traffic often has complex correlation structures, especially with video applications.

Several modelling approaches have attempted to capture these correlation structures. Two such approaches, the auto-regressive model and the Markov-modulated fluid model, capture the effects of coded video within a scene. We can also use an augmented auto-regressive model to capture the effects of scene changes, or consider a multimedia source as a superposition of on-off processes to model the individual components of the multimedia source (voice/audio, video and data).

User behaviour is another important factor that can affect the characteristics of traffic, even more so with the explosive growth of the Internet and the corresponding increase in Internet-related traffic. Models that capture this behaviour (also called behavioural modelling) are useful for modelling both packet generation and user interaction with applications and services, by representing the user behaviour as a profile. This profile defines a hierarchy of independent processes and different types of stochastic process for modelling them. Another related characteristic of Internet or web traffic currently being researched is the structure of the web server itself, as this has a bearing on web page response times (document transfer times), which in turn affect the user session. Developments in traffic modelling often result in specialised software to generate workload for stress-testing web servers.

8.4.5 Voice Traffic Modelling

Here we consider multi-service packet-switched networks. Analogue speech is first digitised into pulse code modulation (PCM) signals by a speech/voice codec (coder-decoder). The PCM samples pass to a compression algorithm, which compresses the voice into packet format prior to transmission on the packet-switched network. At the destination end, the receiver performs the same functions in reverse order. Figure 8.3 shows this flow of end-to-end packet voice. Voice applications that utilise IP-based packet networks are commonly referred to as Internet telephony or voice over IP (VoIP).


Figure 8.3 Packet voice end-to-end flow

The most distinctive feature of speech signals is that conversational speech contains alternating periods of signal (speech) and no signal (silence): human speech consists of an alternating sequence of active intervals (during which one is talking) and silent or inactive intervals (during which one pauses or listens to the other party). Since the encoded bit rate for speech signals is at most 64 kbit/s, it is acceptable to treat the maximum rate during intervals of speech as 64 kbit/s. However, there are speech-coding techniques that result in 32, 16 or 8 kbit/s rates, in which case the maximum rate during intervals of speech assumes the corresponding coding rate.

As an example, the G.729 coder-decoder (codec) has a default payload of two 10 ms voice frames, sampled at an 8 kHz rate. With a coding rate of 8 kbit/s, this results in a payload size of 20 bytes. This payload is then packetised, in the case of VoIP, into IP packets consisting of real-time transport protocol (RTP)/UDP/IP and multi-link PPP (MLPPP) headers. RTP is a media packet protocol for transmitting real-time media data; it provides mechanisms for sending and receiving applications to support streaming data (it facilitates the delivery and synchronisation of media data). MLPPP is an extension of the point-to-point protocol (PPP) that allows the combination of multiple PPP links into one logical data pipe. Note that the addition of this header is dependent on the link layer; in this case, it is a PPP link.

Without RTP header compression, the RTP/UDP/IP overhead amounts to 40 bytes (this reduces to 2 bytes with compression, offering significant bandwidth saving), while the MLPPP header is 6 bytes. The resulting voice packet size is then 66 bytes with RTP or 28 bytes with compressed RTP (cRTP). Table 8.1 shows the voice payload and packet sizes for the different speech codecs.

Table 8.1 Parameters for G.711, G.729, G.723.1 and G.726 codecs

Codec               Bit rate   Frame size   Voice payload   Voice packet (bytes)
                    (kbit/s)   (ms)         (bytes)         With cRTP   Without cRTP
G.711               64         10           160             168         208
G.729 Annex A       8          10           20              28          66
G.723.1 (ACELP)     5.3        30           20              28          66
G.723.1 (MP-MLQ)    6.3        30           24              32          70
G.726               32         5            80              88          146
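As a quick check on these figures, the payload and packet sizes can be derived from the codec bit rate, the packetisation interval and the header sizes stated above (40-byte RTP/UDP/IP, 2-byte cRTP, 6-byte MLPPP). This is a sketch with an illustrative function name; the example reproduces the G.729 row of Table 8.1.

```python
# Sketch: derive VoIP payload and packet sizes from codec bit rate and
# packetisation interval, using the header sizes stated in the text:
# 40-byte RTP/UDP/IP (2 bytes with cRTP) plus a 6-byte MLPPP header.

RTP_UDP_IP = 40   # bytes, uncompressed
CRTP = 2          # bytes, with RTP header compression
MLPPP = 6         # bytes

def voice_packet_sizes(bit_rate_kbps, packetisation_ms):
    """Return (payload, packet with cRTP, packet without cRTP) in bytes."""
    payload = int(round(bit_rate_kbps * packetisation_ms / 8))
    return (payload,
            payload + CRTP + MLPPP,
            payload + RTP_UDP_IP + MLPPP)

# G.729: 8 kbit/s, two 10 ms frames per packet -> 20 ms of speech
print(voice_packet_sizes(8, 20))   # (20, 28, 66), matching Table 8.1
```

The same arithmetic applies to the other codecs, with the packetisation interval set to the amount of speech carried per packet.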

An important feature of the above codecs is the voice activity detection (VAD) scheme. When voice is packetised, both speech and silence intervals are packetised. Using VAD, silence packets can be suppressed, allowing data traffic to interleave with packetised voice traffic and making more efficient use of the finite network bandwidth. It is estimated that VAD can save a further 30–40% of bandwidth.

VoIP is a real-time service, that is, data representing the actual conversation must be processed as it is created. This processing affects the ability to carry out a conversation over the communications channel (in this case the Internet). Excessive delays will mean that this ability is severely restricted. Variations in this delay (jitter) can insert pauses or even break up words, making the voice communication unintelligible. This is why most packetised voice applications use UDP, avoiding the delays that recovery of lost or errored packets would introduce.

The ITU-T considers network delay for voice applications in Recommendation G.114. This recommendation defines three bands of one-way delay as shown in Table 8.2.

Table 8.2 Network delay specifications for voice applications (ITU-T Recommendation G.114)

Range (ms)    Description
0–150         Acceptable for most services and applications by users
150–400       Acceptable provided that administrators are aware of the transmission time and its impact on the transmission quality of user applications
Above 400     Unacceptable for general network planning purposes; however, some exceptional cases exceed this limit

8.4.6 On-off Model for Voice Traffic

It is widely accepted that modelling packet voice can be conveniently based on mimicking the characteristics of conversation: the alternating active and silent periods. A two-phase on-off process can represent a single packetised voice source. Measurements indicate that the average active interval is 0.352 s in length while the average silent interval is 0.650 s. An important characteristic of a voice source to capture is the distribution of these intervals. A reasonably good approximation for the distribution of the active interval is an exponential distribution; however, this distribution does not represent the silent interval well. Nevertheless, it is often assumed that both intervals are exponentially distributed when modelling voice sources. The duration of voice calls (call holding time) and the inter-arrival time between calls can be characterised using telephony traffic models.

During the active (on) interval, voice generates fixed size packets with a fixed inter-packet spacing. This is the nature of voice encoders with fixed bit rate and fixed packetisation delay. This packet generation process follows a Poisson process with exponentially distributed inter-arrival times of mean 1/λ seconds, that is, a rate of λ packets per second (pps). As mentioned above, both the on and off intervals are exponentially distributed, giving rise to a two-state Markov-modulated Poisson process (MMPP) model. No packets are generated during the silent (off) interval. Figure 8.4 represents a single voice source.

c08f004

Figure 8.4 A single voice source, represented by a two-state MMPP

The mean on period is 1/α while the mean off period is 1/β. The mean packet inter-arrival time is 1/λ s. A superposition of N such voice sources results in an (N + 1)-state birth–death model, Figure 8.5, where a state represents the number of sources in the on state.

c08f005

Figure 8.5 Superposition of N voice sources with exponentially distributed inter-arrivals
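The on-off source described above can be sketched as a short simulation, using the measured mean on and off intervals (0.352 s and 0.650 s) and an assumed packet rate of 50 pps (the G.729 value, one 66-byte packet every 20 ms); the function and parameter names are illustrative.

```python
import random

# Sketch: simulate one packet-voice source as a two-state on-off process.
# On and off periods are exponentially distributed (means 0.352 s and
# 0.650 s, from the measurements quoted in the text); during "on",
# packets arrive as a Poisson stream of rate lam packets per second.

def simulate_voice_source(duration, mean_on=0.352, mean_off=0.650,
                          lam=50.0, seed=1):
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while t < duration:
        on_end = t + rng.expovariate(1.0 / mean_on)    # active interval
        while True:                                    # Poisson arrivals
            t += rng.expovariate(lam)
            if t >= on_end or t >= duration:
                break
            arrivals.append(t)
        t = on_end + rng.expovariate(1.0 / mean_off)   # silent interval
    return arrivals

pkts = simulate_voice_source(1000.0)
# Expected rate ~ lam * mean_on / (mean_on + mean_off), about 17.6 pps
print(len(pkts) / 1000.0)
```

Over a long run the measured packet rate approaches the activity factor (about 35%) times the on-interval packet rate, which is the quantity the birth–death model of Figure 8.5 aggregates over N sources.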

This approach can model different voice codecs, with varying mean opinion score (MOS). MOS is a system of grading the voice quality of telephone connections. A wide range of listeners judge the quality of a voice sample on a scale of one (bad) to five (excellent). The scores are averaged to provide the MOS for the codec. The respective scores are 4.1 (G.711), 3.92 (G.729) and 3.8 (G.726). The parameters for this model are given in Table 8.1, with the additional parameter representing packet inter-arrival time calculated using the following formula:

(8.7)   1/λ = (voice payload size in bytes × 8) / (codec bit rate in bit/s)

where:

(8.8)   λ is the mean packet arrival rate, in packets per second, during the on interval

The mean off interval is typically 650 ms, while the mean on interval is 350 ms.
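Putting these parameters together, the mean offered load of a superposition of sources follows directly. The sketch below uses illustrative names; the example plugs in the G.729 figures from Table 8.1 (20-byte payload in a 66-byte packet at 8 kbit/s) with the 350/650 ms on/off intervals.

```python
# Sketch: mean offered load of N superposed on-off voice sources. With a
# mean on time of 0.350 s and mean off time of 0.650 s, a source is
# active 35% of the time; while active it emits one packet every
# T = payload_bytes * 8 / rate seconds.

def mean_load_bps(n_sources, payload_bytes, packet_bytes, rate_bps,
                  mean_on=0.350, mean_off=0.650):
    t = payload_bytes * 8 / rate_bps           # packet inter-arrival (s)
    activity = mean_on / (mean_on + mean_off)  # fraction of time active
    pps = activity / t                         # mean packets/s per source
    return n_sources * pps * packet_bytes * 8  # mean offered load, bit/s

# 100 G.729 sources, 20-byte payload in 66-byte packets at 8 kbit/s:
print(mean_load_bps(100, 20, 66, 8000))   # 924000.0 bit/s
```

Note that the load is computed over the full packet size (including headers), while the inter-arrival time depends only on the voice payload and the coding rate.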

8.4.7 Video Traffic Modelling

An emerging service of future multi-service networks is packet video communication. Packet video communication refers to the transmission of digitised and packetised video signals in real time. The recent development of video compression standards, such as ITU-T H.261, ITU-T H.263, ISO MPEG-1, MPEG-2 and MPEG-4, has made it feasible to transport video over the Internet. Video images are represented by a series of frames in which the motion of the scene is reflected in small changes between sequentially displayed frames. Frames are displayed on the screen at some constant rate (e.g. 30 frames/s), enabling the human eye to integrate the differences between successive frames into a moving scene.

In terms of the amount of bandwidth consumed, video streaming is high on the list. Uncompressed video at a resolution of 300 × 200 pixels and a playback rate of 30 frames/s requires a data rate of 1.8 Mbyte/s. Apart from the high throughput requirements, video applications also place stringent requirements on loss and delay.

There are several factors affecting the nature of video traffic. Among these are compression techniques, coding time (on- or off-line), adaptiveness of the video application, supported level of interactivity and the target quality (constant or variable). The output bit rate of the video encoder can either be controlled to produce a constant bit-rate stream whose video quality can vary significantly (CBR encoding), or left uncontrolled to produce a variable bit-rate stream with a more constant video quality (VBR encoding). Variable bit-rate encoded video is expected to become a significant source of network traffic because of its advantages in statistical multiplexing gains and consistent video quality.

Statistical properties of a video stream are quite different from that of voice or data. An important property of video is the correlation structure between successive frames. Depending on the type of video codecs, video images exhibit the following correlation components:

  • Line correlation is defined as the level of correlation between data at one part of the image with data at the same part of the next line; also called spatial correlation.
  • Frame correlation is defined as the level of correlation between data at one part of the image with data at the same part of the next image; also called temporal correlation.
  • Scene correlation is defined as the level of correlation between sequences of scenes.

Because of this correlation structure, it is no longer sufficient to capture the burst of video sources. Several other measurements are required to characterise video sources as accurately as possible. These measurements include:

  • autocorrelation function: measures the temporal variations;
  • coefficient of variation: measures the multiplexing characteristics when variable rate signals are statistically multiplexed;
  • bit-rate distribution: indicates, together with the average bit rate and the variance, an approximate requirement for the capacity.

As mentioned previously, VBR encoded video is expected to be the dominant video traffic source in the Internet. There are several statistical VBR source models, grouped into four categories: auto-regressive (AR), Markov-based, self-similar and analytical/IID models. These models were developed based on several attributes of the actual video source.

For instance, a video conferencing session based on the H.261 standard would have very few scene changes, and it is recommended to use the dynamic AR (DAR) model. To model numerous scene changes (as in MPEG-coded movie sequences), Markov-based models or self-similar models can be used. The choice between them is based on the number of parameters needed by the model and the computational complexity involved. Self-similar models require only a single parameter (the Hurst or H parameter) but their computational complexity in generating samples is high (because each sample is calculated from all previous samples). Markov chain models, on the other hand, require many parameters (in the form of transition probabilities to model the scene changes), which again increases the computational complexity because many calculations are required to generate a sample.
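As a rough illustration of low-scene-change modelling, a first-order autoregressive process in the spirit of the DAR model suggested for H.261 video conferencing can be sketched as follows. The mean frame size, correlation coefficient and noise level here are illustrative assumptions, not measured values.

```python
import random

# Sketch: a first-order autoregressive (AR(1)) frame-size model in the
# spirit of the DAR model for video conferencing. Successive frame sizes
# are correlated (coefficient a), mimicking the temporal correlation of
# video discussed in the text. All parameter values are illustrative.

def ar1_frame_sizes(n, mean=8000.0, a=0.9, sigma=500.0, seed=7):
    rng = random.Random(seed)
    sizes, x = [], mean
    for _ in range(n):
        # next frame size pulls back towards the mean plus Gaussian noise
        x = mean + a * (x - mean) + rng.gauss(0.0, sigma)
        sizes.append(max(x, 0.0))      # frame sizes cannot be negative
    return sizes

frames = ar1_frame_sizes(10000)
print(sum(frames) / len(frames))       # close to the 8000-bit mean
```

A single coefficient controls the correlation structure, which is why such models suit sequences with few scene changes; abrupt scene changes would call for the Markov-based or self-similar alternatives named above.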

8.4.8 Multi-layer Modelling for WWW Traffic

Internet operations consist of a chain of interactions between the users, applications, protocols and the network. This structured mechanism can be attributed to the layered architecture employed in the Internet: a layering methodology was used in designing the Internet protocol stack. Hence, it is only natural to try to model Internet traffic by taking into account the different effects each layer of the protocol stack has on the resulting traffic.

The multi-layer modelling approach attempts to replicate the packet generation mechanism as activated by the human users of the Internet and the Internet applications themselves. In a multi-layer approach, packets are generated in a hierarchical process. It starts with a human user arriving at a terminal and starting one or more Internet applications. This action of invoking an application will start the chain of a succession of interactions between the application and the underlying protocols on the source terminal and the corresponding protocols and application on the destination terminal, culminating in the generation of packets to be transported over the network.

These interactions can generally be seen as ‘sessions’; the definition of a session is dependent on the application generating it. An application generates at least one, but usually more, sessions. Each session comprises one or more ‘flows’; each flow in turn comprises packets. Therefore, there are three layers or levels encountered in this multi-layer modelling approach—session, flow and packet levels.

Take a scenario where a user arrives at a terminal and starts a WWW application by launching a web browser. The user clicks on a web link (or types in the web address) to access the web sites of interest. This action generates what we call HTTP sessions. The session is defined as the downloading of web pages from the same web server over a limited period; this does not discount the fact that other definitions of a session are also possible. The sessions in turn generate flows. Each flow is a succession of packets carrying the information pertaining to a particular web page and packets are generated within flows. This hierarchical process is depicted in Figure 8.6.

c08f006

Figure 8.6 Multi-layer modelling

Depicted in the diagram are the suggested parameters for this model. More complex models attempting to capture the self-similarity of web traffic might use heavy-tailed distributions to model any of the said parameters. Additional parameters such as user think time and packet sizes are also modelled by heavy-tailed distributions. While this type of model might be more accurate in capturing the characteristics of web traffic, it comes at the cost of added parameters and complexity.
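The session/flow/packet hierarchy of Figure 8.6 can be sketched as a simple generator. All the distributions and parameter values below are illustrative assumptions (geometric counts, exponential gaps); as noted above, real web-traffic models often substitute heavy-tailed distributions such as Pareto.

```python
import random

# Sketch of the session/flow/packet hierarchy of multi-layer modelling.
# Each session spawns 1-5 flows, each flow spawns 1-20 packets; all
# inter-arrival gaps are exponential. Every value here is an assumption
# made for illustration, not a fitted parameter.

def generate_www_traffic(n_sessions=3, seed=3):
    rng = random.Random(seed)
    t, packets = 0.0, []
    for _ in range(n_sessions):                  # session level
        t += rng.expovariate(1 / 30.0)           # session inter-arrival
        for _ in range(1 + rng.randrange(5)):    # flow level
            t += rng.expovariate(1 / 2.0)        # flow inter-arrival
            for _ in range(1 + rng.randrange(20)):  # packet level
                t += rng.expovariate(1 / 0.01)      # packet gap
                packets.append(t)
    return packets

pkts = generate_www_traffic()
print(len(pkts))   # total packets generated across all sessions
```

The three nested loops correspond directly to the three levels of the model: a user action starts a session, sessions generate flows (web pages) and flows generate packets.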

8.5 Traffic Engineering

A dilemma emerges for carriers and network operators: the cost of upgrading the infrastructure as it stands today for fixed and mobile telephone networks is too high to be supported by revenues from Internet services. In fact, revenues from voice-based services are declining relative to those derived from current Internet services. The growth in revenues is also very slow in comparison with the rapid increase in Internet traffic. Therefore, to achieve cost effectiveness it is necessary to design and manage networks so that they make effective use of bandwidth or, in a broader sense, of network resources.

Traffic engineering (TE) is a solution that enables the fulfilment of all those requirements, since it allows network resources to be used when necessary, where necessary and for the desired amount of time. TE can be regarded as the ability of the network to control traffic flows dynamically in order to prevent congestion, to optimise the availability of resources, to choose routes for traffic flows while taking into account traffic loads and network state, to move traffic flows towards less congested paths and to react to traffic changes or failures in a timely manner.

The Internet has seen tremendous growth in the past few years. This growth has correspondingly increased the requirements for network reliability, efficiency and service quality as well as revenues. In order for Internet service providers to meet these requirements, they need to examine every aspect of their operational environment critically, assessing the opportunities to scale their networks and optimise performance.

However, this is not a trivial task. The main problem is with the simple building block on which the Internet was built—namely IP routing based on the destination address and simple metrics like hop count or link cost. While this simplicity allows IP routing to scale to very large networks, it does not always make good use of network resources. Traffic engineering (TE) has thus emerged as a major consideration in the design and operation of large public Internet backbone networks. While its beginnings can be traced back to the development of the public switched telephone networks (PSTN), TE is fast finding a more crucial role to play in the design and operation of the Internet.

8.5.1 Traffic Engineering Principles

Traffic engineering is concerned with the performance optimisation of networks. It seeks to address the problem of efficient allocation of network resources to meet user constraints and to maximise service provider benefit. The main goal of TE is to balance service and cost. The most important task is to calculate the right amount of resources: if too many resources are allocated, the cost can be excessive; too few will result in loss of business or lower productivity. As this service/cost balance is sensitive to changes in business conditions, TE is thus a continuous process to maintain an optimum balance.

TE is a framework of processes whereby a network's response to traffic demand (in terms of user constraints such as delay, throughput and reliability) and other stimuli such as failure can be efficiently controlled. Its main objective is to ensure the network is able to support as much traffic as possible at their required level of quality and to do so by optimally utilising its (the network's) shared resources while minimising the costs associated with providing the service. To do this requires efficient control and management of the traffic. This framework encompasses:

  • traffic management through control of routing functions and QoS management;
  • capacity management through network control;
  • network planning.

Traffic management ensures that network performance is maximised under all conditions including load shifts and failures (both node and link failures). Capacity management ensures that the network is designed and provisioned to meet performance objectives for network demands at minimum cost. Network planning ensures that the node and transport capacity is planned and deployed in advance of forecasted traffic growth. These functions form an interacting feedback loop around the network as shown in Figure 8.7.

c08f007

Figure 8.7 The traffic engineering process model

The network (or system) shown in the figure is driven by a noisy traffic load (or signal) comprising predictable average demand components added to unknown forecast errors and load variation components. The load variation components have different time constants ranging from instantaneous variations, hour-to-hour variations, day-to-day variations and week-to-week or seasonal variations. Accordingly, the time constants of the feedback controls are matched to the load variations and function to regulate the service provided by the network through routing and capacity adjustments. Routing control typically applies on minutes, days or possibly real-time time scales while capacity and topology changes are much longer term (months to a year).

Advancement in optical switching and transmission systems enables ever-increasing amounts of available bandwidth. The effect is that the marginal cost (i.e. the cost associated with producing one additional unit of output) of bandwidth is rapidly being reduced: bandwidth is getting cheaper. The widespread deployment of such technologies is accelerating and network providers are now able to sell high-bandwidth transnational and international connectivity simply by over provisioning their networks. Logically, it would seem that in the face of such developments and the abundance of available bandwidth, the need for TE would be invalidated. On the contrary, TE still maintains its importance due principally to the fact that both the number of users and their expectations are exponentially increasing in parallel to the exponential increase in available bandwidth.

A corollary of Moore's law says: ‘As you increase the capacity of any system to accommodate user demand, user demand will increase to consume system capacity’. Companies that have invested in such over provisioned networks will want to recoup their investments. Service differentiation charging and usage-proportional pricing are widely accepted mechanisms for doing so. To make usage-proportional pricing practical, simple and cost-effective means of monitoring usage and of ensuring that customers receive what they request are required. Another important function of TE is to map traffic onto the physical infrastructure to utilise resources optimally and to achieve good network performance. Hence, TE still performs a useful function for both network operators and customers.

8.5.2 Internet Traffic Engineering

Internet TE is defined as the aspect of network engineering dealing with the issue of performance evaluation and performance optimisation of operational IP networks. Internet TE encompasses the application of technology and scientific principles to the measurement, characterisation, modelling and control of Internet traffic. One of the main goals of Internet TE is to enhance the performance of an operational network, both in terms of traffic-handling capability and resource utilisation. Traffic-handling capability implies that IP traffic is transported through the network in the most efficient, reliable and expeditious manner possible. Network resources should be utilised efficiently and optimally while meeting the performance objectives (delay, delay variation, packet loss and throughput) of the traffic.

There are several functions contributing directly to this goal. One of them is the control and optimisation of the routing function, to steer traffic through the network in the most effective way. Another important function is to facilitate reliable network operations. Mechanisms should be provided that enhance network integrity by embracing policies emphasising network survivability. This results in a minimisation of the vulnerability of the network to service outages arising from errors, faults and failures occurring within the infrastructure.

Effective TE is difficult to achieve in public IP networks due to the limited functional capabilities of conventional IP technologies. One of the major problems lies in mapping traffic flows onto the physical topology. In the Internet, the mapping of flows onto a physical topology is heavily influenced by the routing protocols used. Traffic flows simply follow the shortest path calculated by interior gateway protocols (IGP) used within autonomous systems (AS), such as open shortest path first (OSPF) or intermediate system to intermediate system (IS-IS), and by exterior gateway protocols (EGP) used to interconnect ASs, such as border gateway protocol 4 (BGP-4).

These protocols are topology-driven and employ per-packet control. Each router makes independent routing decisions based on the information in the packet headers. By matching this information to a corresponding entry of a local instantiation of a synchronised routing area link state database, the next hop or route for the packet is then determined. This determination is based on shortest path computations (often equated to lowest cost) using simple additive link metrics.

While this approach is highly distributed and scalable, there is a major flaw: it does not consider the characteristics of the offered traffic and network capacity constraints when determining the routes. The routing algorithm tends to route traffic onto the same links and interfaces, significantly contributing to congestion and unbalanced networks. This results in parts of the network becoming over-utilised while other resources along alternate paths remain under-utilised. This condition is commonly referred to as hyper aggregation. While it is possible to adjust the value of the metrics used in calculating the IGP routes, this soon becomes too complicated as the Internet core grows. Continuously adjusting the metrics also adds instability to the network. Hence, congestion is often resolved by adding more bandwidth (over provisioning), which treats the symptom rather than the underlying problem of poor resource allocation or traffic mapping.
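The hyper-aggregation effect can be illustrated with plain shortest-path (Dijkstra) routing over static link metrics, as an IGP would compute. The four-node topology below is hypothetical: every demand between the same endpoints follows the same shortest path, however loaded its links already are, while the alternative path stays idle.

```python
import heapq

# Sketch: Dijkstra over static link metrics, as an IGP would use. On
# this hypothetical topology every A->D demand takes the same shortest
# path regardless of load: the hyper-aggregation effect.

GRAPH = {                      # node -> {neighbour: link metric}
    'A': {'B': 1, 'C': 3},
    'B': {'A': 1, 'D': 1},
    'C': {'A': 3, 'D': 1},
    'D': {'B': 1, 'C': 1},
}

def shortest_path(graph, src, dst):
    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        for v, w in graph[u].items():
            if d + w < dist.get(v, float('inf')):   # relax edge (u, v)
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

# Every A->D flow follows A-B-D (cost 2); the A-C-D path (cost 4) idles.
print(shortest_path(GRAPH, 'A', 'D'))   # ['A', 'B', 'D']
```

Because the metrics are the only input, shifting traffic onto A-C-D would require retuning link costs network-wide, which is exactly the instability problem described above.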

The requirements for Internet TE are not much different from those of telephony networks: to have precise control over the routing function in order to achieve specific performance objectives, both in terms of traffic-related performance and resource-related performance (resource optimisation). However, the environment in which Internet TE is applied is much more challenging due to the nature of the traffic and the operating environment of the Internet itself. Traffic on the Internet is becoming more multi-class (compared to fixed 64 kbit/s voice in telephony networks) with different service requirements but contending for the same network resources.

In this environment, TE needs to establish resource-sharing parameters to give preferential treatment to some service classes in accordance with a utility model. The characteristics of the traffic are also proving to be a challenge—it exhibits very dynamic behaviour, which is still to be understood and tends to be highly asymmetric. The operating environment of the Internet is also an issue. Resources are augmented constantly and they fail on a regular basis. Routing of traffic, especially when traversing autonomous system boundaries, makes it difficult to correlate network topology with the traffic flow. This makes it difficult to estimate the traffic matrix, the basic dataset needed for TE.

An initial attempt at circumventing some of the limitations of IP with respect to TE was the introduction of a secondary technology with virtual circuits and traffic management capabilities (such as ATM) into the IP infrastructure. This is the overlay approach, which consists of ATM switches at the core of the network surrounded by IP routers at the edges. The routers are logically interconnected using permanent virtual circuits (PVC), usually in a fully meshed configuration. This approach allows virtual topologies to be defined and superimposed onto the physical network topology. By collecting statistics on the PVC, a rudimentary traffic matrix can be built. Overloaded links can be relieved by redirecting traffic to under-utilised links.

A switching technology such as ATM was used mainly because of its superior switching performance compared to IP routing at that time. It also afforded QoS and TE capabilities. However, there are fundamental drawbacks to this approach. First, two networks of dissimilar technologies need to be built and managed, adding to the complexity of network architecture and design. Reliability concerns also increase because the number of network elements existing in a routed path increases. Scalability is another issue, especially in a fully meshed configuration, whereby the addition of another edge router to a network of N nodes requires a further N PVC (the ‘N-squared’ problem, since a full mesh of N nodes requires N(N - 1)/2 PVC in total). There is also the possibility of IP routing instability caused by multiple PVC failures following a single link impairment in the core network.
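The PVC scaling arithmetic behind this problem can be made concrete with a small sketch: a full mesh of n edge routers needs n(n-1)/2 bidirectional PVC, so each additional router adds n more.

```python
# Sketch: PVC counts in a fully meshed overlay. A full mesh of n edge
# routers needs n*(n-1)/2 bidirectional PVCs, so adding one more router
# adds n new PVCs: the scaling problem described in the text.

def full_mesh_pvcs(n):
    return n * (n - 1) // 2

for n in (10, 11, 50, 51):
    print(n, full_mesh_pvcs(n))
# Going from 50 to 51 routers adds 1275 - 1225 = 50 new PVCs.
```

The quadratic growth is what makes manual PVC management, and the routing adjacencies carried over those PVC, unmanageable in a large core.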

8.6 Multi-protocol Label Switching (MPLS)

To improve on the best-effort service provided by the IP network layer protocol, new mechanisms such as differentiated services (Diffserv) and integrated services (Intserv), have been developed to support QoS. In the Diffserv architecture, services are given different priorities and resource allocations to support various types of QoS. In the Intserv architecture, resources have to be reserved for individual services. However, resource reservation for individual services does not scale well in large networks, since a large number of services have to be supported, each maintaining its own state information in the network's routers.

Flow-based techniques such as multi-protocol label switching (MPLS) have also been developed to combine layer 2 and layer 3 functions to support QoS requirements. MPLS introduces a new connection-oriented paradigm, based on fixed-length labels (RFC 3031). This fixed-length label-switching concept is similar, but not identical, to that utilised by ATM. Among the key motivations for its development was to provide a mechanism for the seamless integration of IP and ATM.

However, the architectural differences between the two technologies prove to be a stumbling block for their smooth interoperation. Overlay models have been proposed as solutions but they do not provide the single operating paradigm, which would simplify network management and improve operational efficiency.

MPLS is a peer model technology. Compared to the overlay model, a peer model integrates layer 2 switching with layer 3 routing, yielding a single network infrastructure. Network nodes would typically have integrated routing and switching functions. This model also allows IP routing protocols to set up ATM connections without requiring address resolution protocols (RFC 3035). While MPLS has successfully merged the benefits of both IP and ATM, another application area in which MPLS is fast establishing its usefulness is traffic engineering (TE). This also addresses other major network evolution problems: throughput and scalability.

8.6.1 MPLS Forwarding Paradigm

MPLS is a technology that combines layer 2 switching technologies with layer 3 routing technologies. The primary objective of this new technology is to create a flexible networking fabric that provides increased performance and scalability. This includes TE capabilities. MPLS is designed to work with a variety of transport mechanisms in response to various inter-related problems with the current IP infrastructure.

These problems include scalability of IP networks to meet growing demands, enabling differentiated levels of IP services to be provisioned, merging disparate traffic types into a single network and improving operational efficiency in the face of tough competition. Network equipment manufacturers were among the first to recognise these problems and worked individually on their own proprietary solutions including tag switching, IP switching, aggregate route-based IP switching (ARIS) and cell switch router (CSR). MPLS draws on these implementations in an effort to produce a widely applicable standard.

Because the concepts of forwarding, switching and routing are fundamental in MPLS, a concise definition of each one of them is given below:

  • Forwarding is the process of receiving a packet on an input port and sending it out on an output port.
  • Switching is the forwarding process following a chosen path, based on information or knowledge of current network resources and loading conditions. It operates on layer 2 header information.
  • Routing is the process of setting routes to understand the next hop a packet should take towards its destination within and between networks. It operates on layer 3 header information.

The conventional IP forwarding mechanism (layer 3 routing) is based on the destination address gleaned from a packet's header as the packet enters an IP network via a router. The router analyses this information and runs a routing algorithm. The router will then choose the next hop for the packet based on the results of the algorithm calculations (usually based on the shortest path to the next router). More importantly, this packet header analysis must be performed on a hop-by-hop basis, that is, at each router traversed by the packet. Clearly, the IP packet forwarding paradigm is closely coupled to the processor-intensive routing procedure.
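The per-hop lookup described above amounts to a longest-prefix match of the destination address against the router's table. The prefixes and next-hop names below are made up for illustration.

```python
import ipaddress

# Sketch: the per-hop destination lookup a conventional router performs.
# Each router matches the packet's destination against its table and
# picks the longest matching prefix; prefixes and hops are hypothetical.

TABLE = [                       # (prefix, next hop)
    (ipaddress.ip_network('10.0.0.0/8'), 'router-B'),
    (ipaddress.ip_network('10.1.0.0/16'), 'router-C'),
    (ipaddress.ip_network('0.0.0.0/0'), 'router-default'),
]

def next_hop(dst):
    addr = ipaddress.ip_address(dst)
    matches = [(net.prefixlen, hop) for net, hop in TABLE if addr in net]
    return max(matches)[1]      # longest prefix wins

print(next_hop('10.1.2.3'))     # router-C (the /16 beats the /8)
print(next_hop('192.0.2.1'))    # router-default
```

Repeating this analysis at every hop is precisely the processor-intensive step that label switching, described next, replaces with a single table index.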

While the efficiency and simplicity of IP routing is widely acknowledged, there are a number of issues brought about by large routed networks. One of the main issues is the use of software components to realise the routing function. This adds latency to the packet. Higher speed, hardware-based routers are being designed and deployed, but these come at a cost, which could easily escalate for large service providers' or enterprise networks. There is also difficulty in predicting the performance of a large meshed network based on traditional routing concepts.

Layer 2 switching technologies such as ATM and frame relay utilise a different forwarding mechanism, which is essentially based on a label-swapping algorithm. This is a much simpler mechanism that can readily be implemented in hardware, making this approach much faster and yielding a better price/performance advantage when compared to IP routing. ATM is also a connection-oriented technology: between any two points, traffic flows along a predetermined path established prior to the traffic being submitted to the network. Connection-oriented technology makes a network more predictable and manageable.

8.6.2 MPLS Basic Operation

MPLS tries to solve the problem of integrating the best features of layer 2 switching and layer 3 routing by defining a new operating methodology for the network. MPLS separates packet forwarding from routing, that is, separating the data-forwarding plane from the control plane. The same concept is also used in the software defined network (SDN).

While the control plane still relies heavily on the underlying IP infrastructure to disseminate routing updates, MPLS effectively creates a tunnel underneath the control plane using packet tags called labels. The concept of a tunnel is key because it means the forwarding process is no longer IP-based and classification at the entry point of an MPLS network is not restricted to IP-only information. The functional components of this solution are shown in Figure 8.8, which do not differ much from the traditional IP router architecture.

c08f008

Figure 8.8 Functional components of MPLS

The key concept of MPLS is to identify and mark IP packets with labels. A label is a short, fixed-length, unstructured identifier that can be used to assist in the forwarding process. Labels are analogous to the VPI/VCI used in an ATM network. Labels are normally local to a single data link, between adjacent routers and have no global significance (as would an IP address). A modified router or switch will then use the label to forward/switch the packets through the network. This modified switch/router termed label switching router (LSR) is a key component within an MPLS network. LSR is capable of understanding and participating in both IP routing and layer 2 switching. By combining these technologies into a single integrated operating environment, MPLS avoids the problem associated with maintaining two distinct operating paradigms.

Label switching utilised in MPLS is based on the so-called MPLS shim header inserted between the layer 2 header and the IP header. The structure of this MPLS shim header is shown in Figure 8.9. Note that there can be several shim headers inserted between the layer 2 and IP headers. This multiple label insertion is called label stacking, allowing MPLS to utilise a network hierarchy, provide virtual private network (VPN) services (via tunnelling) and support multiple protocols [RFC3032].

c08f009

Figure 8.9 MPLS shim header structure

The MPLS forwarding mechanism differs significantly from conventional hop-by-hop routing. The LSR participates in IP routing to understand the network topology as seen from the layer 3 perspective. This routing knowledge is then applied, together with the results of analysing the IP header, to assign labels to packets entering the network. Viewed on an end-to-end basis, these labels combine to define paths called label switched paths (LSP).

LSP are similar to VCs utilised by switching technologies. This similarity is reflected in the benefits afforded in terms of network predictability and manageability. LSP also enable a layer 2 forwarding mechanism (label swapping) to be utilised. As mentioned earlier, label swapping is readily implemented in hardware, allowing it to operate at typically higher speeds than routing.

To control the path of LSP effectively, each LSP can be assigned one or more attributes (see Table 8.3). These attributes will be considered in computing the path for the LSP. There are two ways to set up an LSP—control-driven (i.e. hop-by-hop) and explicitly routed LSP (ER-LSP). Since the overhead of manually configuring LSP is very high, there is a need on service providers' behalf to automate the process by using signalling protocols. These signalling protocols distribute labels and establish the LSP forwarding state in the network nodes. A label distribution protocol (LDP) is used to set up a control-driven LSP while RSVP-TE and CR-LDP are the two signalling protocols used for setting up ER-LSP (RFC 3468; RFC 5036).

Table 8.3 LSP attributes

Attribute name Meaning of attribute
Bandwidth The minimum requirement on the reservable bandwidth for the LSP to be set up along that path
Path attribute An attribute that decides whether the path for the LSP should be manually specified or dynamically computed by constraint-based routing
Setup priority The attribute that decides which LSP will get the resource when multiple LSPs compete for it
Holding priority The attribute that decides whether an established LSP should be pre-empted by a new LSP
Affinity An administratively specified property of an LSP to achieve some desired LSP placement
Adaptability Whether to switch the LSP to a more optimal path when one becomes available
Resilience The attribute that decides to re-route the LSP when the current path fails

The label swapping algorithm is a more efficient form of packet forwarding, compared to the longest address match-forwarding algorithm used in conventional layer 3 routing. The label-swapping algorithm requires packet classification at the point of entry into the network from the ingress label edge router (LER) to assign an initial label to each packet. Labels are bound to forwarding equivalent classes (FEC). An FEC is defined as a group of packets that can be treated in an equivalent manner for purposes of forwarding (share the same requirements for their transport). The definition of FEC can be quite general. FEC can relate to service requirements for a given set of packets or simply on source and destination address prefixes. All packets in such a group get the same treatment en route to the destination.

In a conventional packet forwarding mechanism, FEC represent groups of packets with the same destination address, each FEC having its respective next hop, and it is the intermediate nodes that perform the FEC grouping and mapping. In MPLS, by contrast, it is the ingress edge router that assigns a packet to a particular FEC when the packet enters the network. Each LSR then builds a table that specifies how to forward packets. This forwarding table, called a label information base (LIB), comprises FEC-to-label bindings.

In the core of the network, LSR ignore the header of network layer packets and simply forward the packet using the label with the label-swapping algorithm. When a labelled packet arrives at a switch, the forwarding component uses the pairing ⟨input port number/incoming interface, incoming label value⟩ to perform an exact-match search of its forwarding table. When a match is found, the forwarding component retrieves the pairing ⟨output port number/outgoing interface, outgoing label value⟩ and the next-hop address from the forwarding table. The forwarding component then replaces the incoming label with the outgoing label and directs the packet to the outbound interface for transmission to the next hop in the LSP.

When the labelled packet arrives at the egress LER (point of exit from the network), the forwarding component searches its forwarding table. If the next hop is not a label switch, the egress LSR pops off (discards) the label and forwards the packet using conventional longest-match IP forwarding. Figure 8.10 shows the label swapping process.
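The exact-match lookup, swap and egress pop described above can be sketched as follows; the table entries, interface numbers and label values are purely illustrative:

```python
# Minimal sketch of the MPLS label-swapping algorithm.
# Label information base (LIB): (incoming interface, incoming label)
# maps to (outgoing interface, outgoing label).
lib = {
    (0, 17): (2, 42),    # core LSR entry: swap label 17 for 42
    (2, 42): (1, None),  # egress entry: next hop is not a label switch
}

def forward(in_if, label):
    # Exact-match search of the forwarding table.
    out_if, out_label = lib[(in_if, label)]
    if out_label is None:
        # Egress LER: pop the label; the packet then continues via
        # conventional longest-match IP forwarding.
        return (out_if, "pop")
    # Core LSR: replace the incoming label with the outgoing label.
    return (out_if, out_label)
```

Because the lookup is an exact match on a short fixed-length key, it maps naturally onto hardware tables, which is the performance advantage noted above.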

c08f010

Figure 8.10 Label swapping and forwarding process

LSP can also be used to minimise the number of hops, meet certain bandwidth requirements, support precise performance requirements, bypass potential points of congestion, direct traffic away from the default path, or simply force traffic across certain links or nodes in the network. Label swapping gives huge flexibility in the way packets are assigned to FEC, because the label-swapping forwarding algorithm can take any type of user traffic, associate it with an FEC, and map the FEC to an LSP that has been specifically designed to satisfy the FEC requirements, therefore allowing a high level of control in the network. These features make MPLS well suited to supporting traffic engineering (TE). We will discuss the application of MPLS to TE further in a later section.

8.6.3 MPLS and Diffserv Interworking

The introduction of a QoS-enabled protocol into a network supporting various other QoS protocols inevitably requires these protocols to interwork with each other in a seamless fashion. This requirement is essential to maintaining the QoS guarantees given to packets traversing the network. Interworking MPLS with Diffserv and with ATM is therefore an important issue.

The combination of MPLS and Diffserv is mutually beneficial. Path-oriented MPLS can provide Diffserv with potentially faster and more predictable path protection and restoration capabilities in the face of topology changes, compared to conventional hop-by-hop routed IP networks. Diffserv, on the other hand, can act as the QoS architecture for MPLS. Combined, MPLS and Diffserv provide the flexibility to give different treatments to QoS classes requiring path protection.

IETF RFC 3270 specifies a solution for supporting Diffserv behaviour aggregates (BA) and their corresponding per-hop behaviours (PHB) over an MPLS network. The key issue for supporting Diffserv over MPLS is how to map Diffserv to MPLS. This is because an LSR cannot see an IP packet's header and the associated DSCP value, which links the packet to its BA and consequently to its PHB; the PHB determines the scheduling treatment and, in some cases, the drop probability of a packet. LSR only look at labels, read their contents and decide the next hop. For an MPLS domain to handle a Diffserv packet appropriately, the labels must carry some information about the treatment to be applied to the packet.

The solution to this problem is to map the six-bit DSCP values to the three-bit EXP field of the MPLS shim header. This solution relies on the combined use of two types of LSP:

  • An LSP that can transport multiple ordered aggregates, so that the EXP field (since renamed the traffic class field) of the MPLS shim header conveys to the LSR the PHB to be applied to the packet (covering both the packet's scheduling treatment and its drop precedence). An ordered aggregate (OA) is a set of BAs sharing an ordering constraint. Such an LSP is referred to as an EXP-inferred-PSC LSP (E-LSP), where a PSC (PHB scheduling class) is the set of one or more PHB applied to the BAs belonging to a given OA. With this method, up to eight DSCPs can be mapped to a single E-LSP.
  • An LSP that can transport only a single ordered aggregate, so that the LSR infers the packet scheduling treatment exclusively from the packet's label value. The packet drop precedence is conveyed in the EXP field of the MPLS shim header or, where the MPLS shim header is not used (e.g. MPLS over ATM), in the encapsulating link layer's specific selective drop mechanism. Such an LSP is referred to as a label-only-inferred-PSC LSP (L-LSP). With this method, an individual L-LSP has a dedicated Diffserv code point.
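The DSCP-to-EXP mapping that underlies an E-LSP can be sketched as a simple table lookup; the particular DSCP-to-EXP assignments below are an example operator policy, not values mandated by RFC 3270:

```python
# Illustrative mapping from 6-bit DSCP values to the 3-bit EXP
# (traffic class) field of the MPLS shim header for an E-LSP.
# The assignments are an example policy only.
DSCP_TO_EXP = {
    0b101110: 5,  # EF (expedited forwarding)
    0b001010: 2,  # AF11
    0b001100: 3,  # AF12 (same OA as AF11, higher drop precedence)
    0b000000: 0,  # best effort
}

def mark_exp(dscp):
    """Return the EXP value an ingress LER would write for this DSCP."""
    # Unknown DSCPs default to best-effort treatment in this sketch.
    return DSCP_TO_EXP.get(dscp, 0)
```

The table illustrates why an E-LSP can carry at most eight DSCPs: the EXP field has only three bits, so distinct PHBs beyond eight require additional L-LSPs.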

8.6.4 MPLS and ATM Interworking

MPLS and ATM can interwork at network edges to support and bring multiple services into the network core of an MPLS domain. In this instance, ATM connections need to be transparent across the MPLS domain over MPLS LSP. Transparency in this context means that ATM-based services should be carried over the domain unaffected.

There are several requirements that need to be addressed concerning MPLS and ATM interworking. Some of these requirements are:

  • the ability to multiplex multiple ATM connections (VPC and/or VCC) into an MPLS LSP;
  • support for the traffic contracts and QoS commitments made to the ATM connections;
  • the ability to carry all the AAL types transparently;
  • transport of RM cells and CLP information from the ATM cell header.

Transport of ATM traffic over MPLS uses a two-level label stack, which specifies two types of LSP. A transport LSP (T-LSP) transports traffic between two ATM-MPLS interworking devices located at the boundaries of the ATM-MPLS networks. This traffic can consist of a number of ATM connections, each associated with an ATM service category. The outer label of the stack (known as the transport label) defines a T-LSP; the S field of its shim header is set to 0 to indicate it is not the bottom of the stack.

The second type of LSP is an interworking LSP (I-LSP), nested within the T-LSP (identified by an interworking label), which carries traffic associated with a particular ATM connection, that is, one I-LSP is used for an ATM connection. I-LSP also provides support for VP/VC switching functions. One T-LSP may carry more than one I-LSP. Because an ATM connection is bi-directional while an LSP is unidirectional, two different I-LSPs, one for each direction of the ATM connection, are required to support a single ATM connection.

Figure 8.11 shows the relationship between T-LSP, I-LSP and ATM connections. In the ATM-to-MPLS direction, the interworking unit (IWU) encapsulates ATM cells into an MPLS frame; in the MPLS-to-ATM direction, the IWU reconstructs the ATM cells.

c08f011

Figure 8.11 ATM-MPLS networks interworking. (a) ATM-MPLS network interworking architecture. (b) The relationship between transport LSP, interworking LSP and ATM link

With regard to supporting ATM traffic contracts and QoS commitments to ATM connections, the mapping of ATM connections to I-LSP, and subsequently to T-LSP, must take into consideration the TE properties of the LSP. There are two methods to implement this.

First, a single T-LSP can multiplex the I-LSP associated with several ATM connections of different service categories. This type of LSP is termed a class multiplexed LSP. The ATM service categories are arranged into groups, and each group is mapped onto a single LSP. For example, the categories may be grouped into real-time traffic (CBR and rt-VBR) and non-real-time traffic (nrt-VBR, ABR, UBR), with the real-time traffic transported over one T-LSP and the non-real-time traffic over another. Class multiplexed LSP can be implemented using either L-LSP or E-LSP. A class multiplexed L-LSP must meet the most stringent QoS requirements of the ATM connections it transports, because an L-LSP treats every packet going through it the same. A class multiplexed E-LSP, on the other hand, identifies the scheduling and dropping treatments applied to a packet from the value of the EXP field inside the T-LSP label. Each LSR can then apply different scheduling treatments to each packet transported over the LSP. This method also requires a mapping between ATM service categories and the EXP bits.

Second, an individual T-LSP is allocated to each ATM service class. This type of LSP is termed a class based LSP. There can be more than one connection per ATM service class, in which case the MPLS domain searches for a path that meets the requirements of the connections.

8.6.5 MPLS with Traffic Engineering (MPLS-TE)

An MPLS domain still requires an IGP, such as OSPF or IS-IS, to calculate routes through the domain. Once a route has been computed, signalling protocols are used to establish an LSP along it. Traffic that satisfies the FEC associated with a particular LSP is then sent down that LSP.

The basic problem addressed by TE is the mapping of traffic onto routes so as to achieve the performance objectives of the traffic while at the same time optimising the use of network resources. A conventional IGP, such as open shortest path first (OSPF), makes use of pure destination-address-based forwarding and selects routes simply on the least cost metric (or shortest path). Traffic from different routers therefore converges on this particular path, leaving other paths under-utilised. If the selected path becomes congested, there is no procedure to off-load some of the traffic onto an alternative path.

For TE purposes, the LSR should build a TE database within the MPLS domain. This database holds additional information regarding the state of particular links. Additional link attributes may include maximum link bandwidth, maximum reservable bandwidth, current bandwidth utilisation, current bandwidth reservation and link affinity or colour (an administratively specified property of the link). These additional attributes are carried by the TE extensions of the existing IGP, OSPF-TE and IS-IS-TE. This enhanced database is then used by the signalling protocols to establish ER-LSP.

The IETF has specified LDP as the signalling protocol for setting up LSP (RFC 5036). LDP is usually used for hop-by-hop LSP set-up, whereby each LSR determines the next interface to route the LSP based on its layer 3 routing topology database. This means that hop-by-hop LSP follow the path that normally routed layer 3 packets would take. Two signalling protocols, RSVP-TE (RSVP with TE extension; RFC 5151) and CR-LDP (constraint-based routing LDP; RFC 3468), control LSP for TE applications. These protocols are used to establish traffic-engineered ER-LSP. An explicit route specifies all the routers across the network as a precise sequence of steps from ingress to egress, and packets must follow this route strictly. Explicit routing is useful for forcing an LSP down a path different from the one offered by the routing protocol. It can also be used to distribute traffic in a busy network, to route around failures or congestion hot spots, or to provide pre-allocated back-up LSP to protect against network failures.

8.7 Internet Protocol Version 6 (IPv6)

Research and development for the next generation Internet (NGI) started in the 1990s, leading to the new IP protocol, IPv6, in the late 1990s (RFC 2460); the transition and deployment of IPv6 started officially on 6 June 2012. The protocol itself is easy to understand, but, like any new protocol and network, it faces great challenges: compatibility with the existing operational networks, balancing the economic cost and benefit of the evolution towards IPv6, and a smooth change-over from IPv4 to IPv6. It is nevertheless a great leap forward. Most of these issues are out of the scope of this book; here we only discuss the basics of IPv6 and issues concerning IPv6 networking over satellites.

8.7.1 Basics of Internet Protocol Version 6 (IPv6)

IP version 6 (IPv6), which the IETF developed as a replacement for the current IPv4 protocol, incorporates support for a flow label within the packet header, which the network can use to identify flows, much as VPI/VCI are used to identify streams of ATM cells. RSVP can associate with each flow a flow specification (flow spec) that characterises the traffic parameters of the flow, much as an ATM traffic contract is associated with an ATM connection.

With such mechanisms and the definition of protocols like RSVP, IPv6 can support integrated services with QoS. It extends the IPv4 protocol to address the problems of the current Internet, namely to:

  • support more host addresses;
  • reduce the size of the routing table;
  • simplify the protocol to allow routers to process packets faster;
  • have better security (authentication and privacy);
  • provide QoS to different types of services including real-time data;
  • aid multicasting (allow scopes);
  • allow mobility (roam without changing address);
  • allow the protocol to evolve;
  • permit coexisting of old and new protocols.

Compared to IPv4, IPv6 has made significant changes to the IPv4 packet format in order to achieve the objectives of the next generation Internet with the network layer functions. Figure 8.12 shows the IPv6 packet header format.

c08f012

Figure 8.12 IPv6 packet header format

The functions of its fields are summarised as the following:

  • The version field (four bits) has the same function as IPv4. It is six for IPv6 and four for IPv4.
  • The traffic class field (eight bits) identifies packets with different real-time delivery requirements.
  • The flow label field (20 bits) is used to allow source and destination to set up a pseudo-connection with particular properties and requirements.
  • The payload length field (16 bits) gives the number of bytes following the 40-byte header, unlike the total length field in IPv4.
  • The next header field (eight bits) tells which transport handler to pass the packet to, like the protocol field in the IPv4.
  • The hop limit field (eight bits) is a counter used to limit packet lifetime to prevent the packet staying in the network forever, like the time to live field in IPv4.
  • The source and destination addresses (128 bits each) indicate the network number and host number, with four times as many bits as IPv4.
  • There are also extension headers, like the options in IPv4. They are coded as separate headers between the IPv6 header and the upper-layer header. Table 8.4 shows the IPv6 extension headers.
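The field layout above can be sketched by packing a 40-byte fixed header; this is a minimal illustration of the field widths listed, with the function name and example values our own:

```python
import struct

def ipv6_header(payload_len, next_header, hop_limit, src, dst,
                traffic_class=0, flow_label=0):
    """Pack the 40-byte IPv6 fixed header.
    src and dst are 16-byte addresses (bytes objects)."""
    # First 32 bits: version (4) | traffic class (8) | flow label (20).
    first_word = (6 << 28) | (traffic_class << 20) | flow_label
    # Then payload length (16), next header (8), hop limit (8),
    # followed by the two 128-bit addresses.
    return struct.pack("!IHBB", first_word, payload_len,
                       next_header, hop_limit) + src + dst
```

Note that, unlike IPv4, there is no header checksum field and no in-header fragmentation fields; those functions moved to extension headers or were dropped to let routers process packets faster.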

Table 8.4 IPv6 extension headers

Extension header Description
Hop-by-hop options Miscellaneous information for routers
Destination options Additional information for the destination
Routing Loose list of routers to visit
Fragmentation Management of datagram fragments
Authentication Verification of the sender's identity
Encrypted security payload Information about the encrypted contents

Each extension header consists of a next header field and fields of type, length and value. In IPv6, formerly optional features become mandatory: security, mobility, multicast and transitions. IPv6 aims to achieve an efficient and extensible IP datagram in that:

  • The IP header contains fewer fields, enabling efficient routing and better performance.
  • The extensibility of the header offers better options.
  • The flow label enables efficient processing of IP datagrams.

8.7.2 IPv6 Addressing

IPv6 has introduced a large addressing space to address the shortage of IPv4 addresses. It uses 128 bits for addresses, four times the 32 bits of the current IPv4 address. This allows about 3.4 × 10³⁸ possible addressable nodes, equivalent to well over 10²⁸ addresses per person on the planet. Therefore, we should never exhaust IPv6 addresses in the future Internet.

In IPv6, there is no hidden network and host. All hosts can be servers and are reachable from outside. This is called global reachability. It supports end-to-end security, flexible addressing and multiple levels of hierarchy in the address space.

It allows autoconfiguration, link-address encapsulation, plug and play, aggregation, multi-homing and renumbering.

The address format is x:x:x:x:x:x:x:x, where x is a 16-bit hexadecimal field. For example, here is an IPv6 address:

2001:0001:4F3A:0000:0000:0000:20F6:AE14

It is case-insensitive and is the same as the following address:

2001:0001:4f3a:0000:0000:0000:20f6:ae14

Leading zeros in a field are optional:

2001:1:4F3A:0:0:0:20F6:AE14

Successive fields of 0 can be written as ‘::’. For example:

2001:1:4F3A::20F6:AE14

We can also rewrite the following addresses:

FF01:0:0:0:0:0:0:1 → FF01::1
0:0:0:0:0:0:0:1 → ::1
0:0:0:0:0:0:0:0 → ::

But we can only use ‘::’ once in an address. An address like this is not valid:

2001::4F3A::AE14

IPv6 addresses are also handled differently in a URL. Normally a URL carries a fully qualified domain name (FQDN); a literal IPv6 address must be enclosed in brackets, such as http://[2001:1:4F3A::20F6:AE14]:8080/index.html. URL parsers therefore have to be modified, which could be a barrier for users.
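The textual rules above can be checked with Python's standard ipaddress module, reusing the address from the URL example:

```python
import ipaddress

# Full form of the address from the URL example in the text.
full = ipaddress.IPv6Address("2001:0001:4F3A:0000:0000:0000:20F6:AE14")

# Case-insensitive: upper- and lower-case hex parse to the same address.
assert full == ipaddress.IPv6Address("2001:0001:4f3a:0000:0000:0000:20f6:ae14")

# Leading zeros and the longest run of zero fields compress to '::'.
assert str(full) == "2001:1:4f3a::20f6:ae14"

# '::' may appear only once; a second occurrence is rejected.
try:
    ipaddress.IPv6Address("2001::4f3a::ae14")
except ValueError:
    pass  # invalid, as expected
```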

IPv6 address architecture defines different types of address: unicast, multicast and anycast. There are also the unspecified and loop back addresses. The unspecified address can be used as a placeholder when no address is available, such as in an initial DHCP request and in duplicate address detection (DAD). The loop back address identifies the node itself as the local host: 127.0.0.1 in IPv4 and 0:0:0:0:0:0:0:1, or simply ::1, in IPv6. It can be used to test IPv6 stack availability, for example, 'ping6 ::1'.

The scope of IPv6 addresses allows link-local and site-local. It allows aggregatable global addresses including multicast and anycast, but there is no broadcast address in IPv6.

The link-local scoped address is new in IPv6: its scope is the local link (i.e. WLAN, subnet). It can only be used between nodes on the same link and cannot be routed. It allows autoconfiguration on each interface using a prefix plus an interface identifier (based on the MAC address) in the format 'FE80:0:0:0:<interface identifier>'. It gives every node an IPv6 address for start-up communications.

The site-local scoped address has site scope (a network of links). It can only be used between nodes of the same site and cannot be routed outside the site; it is very similar to IPv4 private addresses. There is no default configuration mechanism to assign it. It has the format 'FEC0:0:0:<subnet id>:<interface id>', where the <subnet id> has 16 bits, capable of addressing 65 536 subnets. It can be used to number a site before connecting to the Internet or for private addresses (e.g. local printers).
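As a sketch of the link-local autoconfiguration mentioned above, the interface identifier can be derived from a 48-bit MAC address using the modified EUI-64 method (insert FF:FE in the middle and flip the universal/local bit); the example MAC address below is hypothetical:

```python
def link_local_from_mac(mac):
    """Derive an FE80::/64 link-local address from a MAC address
    using the modified EUI-64 interface identifier."""
    b = bytes(int(x, 16) for x in mac.split(":"))
    # Flip the universal/local bit of the first byte and
    # insert FF:FE between the OUI and the device part.
    eui64 = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:]
    # Format as four 16-bit hex groups (leading zeros dropped).
    groups = [f"{(eui64[i] << 8) | eui64[i + 1]:x}" for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)
```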

The aggregatable global address is for generic use and allows global reachability. The address is allocated by the Internet Assigned Numbers Authority (IANA) with a hierarchy of tier-1 providers as top-level aggregators (TLA), intermediate providers as next-level aggregators (NLA), and finally sites and subnets at the bottom, as shown in Figure 8.13.

c08f013

Figure 8.13 Structure of the aggregatable global address

IPv6 supports multicast, that is, one-to-many communications. There is no broadcast in IPv6; multicast is used instead, mostly on local links. The scope of the addresses can be node, link, site, organisation or global. Unlike IPv4, IPv6 multicast does not use time to live (TTL). IPv6 multicast addresses have the format 'FF<flags><scope>::<multicast group>'. Any IPv6 node should recognise the following addresses as identifying itself (see Table 8.5):

  • link-local address for each interface;
  • assigned (manually or automatically) unicast/anycast addresses;
  • loop back address;
  • all-nodes multicast address;
  • solicited-node multicast address for each of its assigned unicast and anycast address;
  • multicast address of all other groups to which the host belongs.

Table 8.5 Some reserved multicast addresses

Address Scope Use
FF01::1 Interface-local All nodes
FF02::1 Link-local All nodes
FF01::2 Interface-local All routers
FF02::2 Link-local All routers
FF05::2 Site-local All routers
FF02::1:FFXX:XXXX Link-local Solicited nodes
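The solicited-node address in the last row of Table 8.5 is formed from the low-order 24 bits of a unicast (or anycast) address; a minimal sketch, reusing the address from the URL example earlier:

```python
import ipaddress

def solicited_node(unicast):
    """Form the solicited-node multicast address FF02::1:FFXX:XXXX
    from the last 24 bits of the given unicast address."""
    low24 = int(ipaddress.IPv6Address(unicast)) & 0xFFFFFF
    return ipaddress.IPv6Address(
        f"ff02::1:ff{low24 >> 16:02x}:{low24 & 0xFFFF:04x}")
```

This is the address a node joins per assigned unicast/anycast address, as listed in the bullet points above.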

The anycast address is one-to-nearest, which is great for discovery functions. Anycast addresses are indistinguishable from unicast addresses, as they are allocated from the unicast address space. Some anycast addresses are reserved for specific uses, for example, router-subnet, mobile IPv6 home-agent discovery and DNS discovery. Table 8.6 shows the IPv6 address architecture.

Table 8.6 IPv6 addressing architecture

Prefix Hex Size Allocation
0000 0000 0000-00FF 1/256 Reserved
0000 0001 0100-01FF 1/256 Unassigned
0000 001 0200-03FF 1/128 NSAP
0000 010 0400-05FF 1/128 Unassigned
0000 011 0600-07FF 1/128 Unassigned
0000 1 0800-0FFF 1/32 Unassigned
0001 1000-1FFF 1/16 Unassigned
001 2000-3FFF 1/8 Aggregatable:
IANA to registry
010, 011, 100, 101, 110 4000-CFFF 5/8 Unassigned
1110 D000-EFFF 1/16 Unassigned
1111 0 F000-F7FF 1/32 Unassigned
1111 10 F800-FBFF 1/64 Unassigned
1111 110 FC00-FDFF 1/128 Unassigned
1111 1110 0 FE00-FE7F 1/512 Unassigned
1111 1110 10 FE80-FEBF 1/1024 Link-local
1111 1110 11 FEC0-FEFF 1/1024 Site-local
1111 1111 FF00-FFFF 1/256 Multicast

When a node has many IPv6 addresses, selecting which to use as the source and destination addresses for a given communication raises the following issues:

  • scoped addresses are unreachable depending on the destination;
  • preferred versus deprecated addresses;
  • IPv4 or IPv6 when DNS returns both;
  • IPv4 local scope (169.254/16) and IPv6 global scope;
  • IPv6 local scope and IPv4 global scope;
  • mobile IP addresses, temporary addresses, scope addresses and so on.

8.7.3 IPv6 Networks over Satellites

We have learnt throughout the book to treat satellite networks as generic networks with particular characteristics, and IP networks as interworking with other networking technologies. Therefore, all the concepts, principles and techniques can be applied to IPv6 over satellites. Though IP has been designed for internetworking purposes, the implementation and deployment of any new version or type of protocol always faces problems. These have potential impacts on all the protocol layers, including trade-offs between processing power, buffer space, bandwidth, complexity, implementation costs and human factors. To be concise, we only summarise the issues and scenarios on internetworking between IPv4 and IPv6 as follows:

  • Satellite network is IPv6 enabled: this raises issues for user terminals and terrestrial IP networks, since it is not practical to upgrade them all at the same time. Hence, one of the great challenges is how to evolve from current IP networking over satellite towards the next generation network over satellite. Tunnelling from IPv4 to IPv6, or from IPv6 to IPv4, is inevitable, generating great overheads. Even if all networks are IPv6 enabled, there is still a bandwidth efficiency problem due to the large overhead of IPv6.
  • Satellite network is IPv4 enabled: this faces similar problems to the previous scenario; however, satellite networks may be forced to evolve to IPv6 if all terrestrial networks and terminals start to run IPv6. In terrestrial networks, where bandwidth is plentiful, the evolution can be delayed; in satellite networks, such a strategy may not be practical. Hence timing, stable IPv6 technologies and evolution strategies all play an important role.

8.7.4 IPv6 Transitions

The transition of IPv6 towards next-generation networks is a very important aspect. Many new technologies failed because of a lack of transition scenarios and tools. IPv6 was designed with transition in mind from the beginning. For end systems, it uses a dual stack approach, as shown in Figure 8.14; for network integration, it uses tunnels and forms of translation between IPv6-only and IPv4-only networks (RFC 4213).

c08f014

Figure 8.14 Illustration of dual stack host

Figure 8.14 illustrates a node that has both IPv4 and IPv6 stacks and addresses. The IPv6-enabled application requests both IPv4 and IPv6 destination addresses. The DNS resolver returns IPv6, IPv4 or both addresses to the application. IPv6/IPv4 applications choose the address and then can communicate with IPv4 nodes via IPv4 or with IPv6 nodes via IPv6.

8.7.5 IPv6 Tunnelling Through Satellite Networks

Tunnelling IPv6 in IPv4 is a technique used to encapsulate IPv6 packets into IPv4 packets, indicated by protocol field value 41 in the IPv4 packet header (see Figure 8.15). Many topologies are possible, including router to router, host to router and host to host. The tunnel endpoints take care of the encapsulation, and the process is transparent to the intermediate nodes. Tunnelling is one of the most vital transition mechanisms.

c08f015

Figure 8.15 Encapsulation of IPv6 packet into IPv4 packet
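The encapsulation in Figure 8.15 amounts to prepending an IPv4 header with protocol value 41 to the IPv6 packet. A minimal sketch follows; field values such as the TTL are illustrative, and the header checksum is left at zero for brevity (a real endpoint must compute it):

```python
import struct

def encapsulate_6in4(ipv6_packet, src_v4, dst_v4):
    """Wrap an IPv6 packet in a minimal 20-byte IPv4 header with
    protocol 41, as a dual-stack tunnel endpoint would.
    src_v4 and dst_v4 are 4-byte addresses (bytes objects)."""
    total_len = 20 + len(ipv6_packet)
    hdr = struct.pack("!BBHHHBBH4s4s",
                      0x45,        # version 4, IHL 5 (20-byte header)
                      0,           # type of service
                      total_len,
                      0, 0,        # identification, flags/fragment offset
                      64,          # TTL (illustrative)
                      41,          # protocol 41 = IPv6-in-IPv4
                      0,           # header checksum (omitted in sketch)
                      src_v4, dst_v4)
    return hdr + ipv6_packet
```

The intermediate IPv4 routers forward this packet like any other; only the endpoints examine protocol 41 and strip the outer header, which is why the tunnel is transparent to them.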

In the tunnelling technique, the tunnel endpoints are explicitly configured and must be dual stack nodes with reachable IPv4 addresses. Tunnel configuration implies manual configuration of the source and destination IPv4 addresses and the source and destination IPv6 addresses. A tunnel can be configured between two hosts, between a host and a router (as shown in Figure 8.16), or between two routers of two IPv6 networks (as shown in Figure 8.17).

c08f016

Figure 8.16 Host to router tunnelling through satellite access network

c08f017

Figure 8.17 Router to router tunnelling through satellite core network

8.7.6 The 6to4 Translation via Satellite Networks

The 6to4 translation is a technique used to interconnect isolated IPv6 domains over an IPv4 network with automatic establishment of tunnels. It avoids the explicit tunnels of the tunnelling technique by embedding the IPv4 destination address in the IPv6 address. It uses the prefix '2002::/16', reserved for 6to4, and gives a full 48-bit prefix to a site based on its external IPv4 address. The IPv4 external address is embedded as 2002:<ipv4 ext address>::/48, giving addresses of the format '2002:<ipv4add>:<subnet>::/64'. Figures 8.18 and 8.19 show the 6to4 translation techniques.

c08f018

Figure 8.18 The 6to4 translation via satellite access network

c08f019

Figure 8.19 The 6to4 translation via satellite core network

To support 6to4, the egress router implementing 6to4 must have a reachable external IPv4 address and must be a dual-stack node; it is often configured using a loop back address. Individual nodes do not need to support 6to4 and do not need to be dual stack; they may receive the 2002 prefix from router advertisements.
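The prefix embedding described above can be sketched with the standard ipaddress module; the IPv4 address used is a documentation example:

```python
import ipaddress

def sixto4_prefix(ipv4_external):
    """Embed a site's external IPv4 address into the reserved 2002::/16
    prefix, yielding the site's /48 (format 2002:<ipv4add>::/48)."""
    v4 = int(ipaddress.IPv4Address(ipv4_external))
    # 2002 occupies the top 16 bits; the IPv4 address fills the next 32.
    addr = ipaddress.IPv6Address((0x2002 << 112) | (v4 << 80))
    return ipaddress.IPv6Network(f"{addr}/48")
```

Because the IPv4 address is recoverable from bits 16-47 of any 2002::/16 destination, a 6to4 router can construct the outer IPv4 header automatically, which is what removes the need for explicitly configured tunnels.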

8.7.7 Issues with 6to4

The IPv4 external address space is much smaller than the IPv6 address space. If the egress router changes its IPv4 address, the whole internal IPv6 network needs to be renumbered. There is also only one entry point available, making it difficult to provide multiple network entry points for redundancy.

Concerning the application aspects of IPv6 transitions, there are also problems with IPv6 at the application layer: support for IPv6 in operating systems (OS) and in applications is unrelated; dual stack does not mean having both IPv4 and IPv6 applications; DNS does not indicate which IP version to use; and it is difficult to support many versions of an application.

Therefore, the application transitions of different cases can be summarised as the following (also see Figure 8.20):

  • For IPv4 applications in a dual-stack node, the first priority is to port applications to IPv6.
  • For IPv6 applications in a dual-stack node, use the IPv4-mapped IPv6 address '::FFFF:x.y.z.w' so that the IPv6 application can communicate with IPv4-only nodes through the dual stack.
  • For IPv4/IPv6 applications in a dual-stack node, it should have a protocol-independent API.
  • For IPv4/IPv6 applications in an IPv4-only node, it should be dealt with on a case-by-case basis, depending on applications/OS support.
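The IPv4-mapped form in the second case can be computed with Python's standard ipaddress module (an illustrative sketch; the address used is a documentation example):

```python
import ipaddress

def ipv4_mapped(ipv4_str):
    """Build the IPv4-mapped IPv6 address ::ffff:x.y.z.w from an IPv4 address."""
    v4 = ipaddress.IPv4Address(ipv4_str)
    # The mapped form places 0xffff in bits 32-47 and the
    # 32-bit IPv4 address in the low 32 bits.
    return ipaddress.IPv6Address((0xFFFF << 32) | int(v4))

addr = ipv4_mapped("192.0.2.1")
print(addr)              # the ::ffff:... mapped form
print(addr.ipv4_mapped)  # recovers the original IPv4 address
```

The dual-stack socket layer uses this form internally: an IPv6 application can accept a connection from an IPv4 peer and simply sees its address as ::FFFF:x.y.z.w.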
c08f020

Figure 8.20 IPv6 application transitions
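The protocol-independent API mentioned in the third case can be illustrated with the standard socket interface, where getaddrinfo hides the choice of IP version from the application (a simplified sketch; host and port values are placeholders):

```python
import socket

def open_connection(host, port):
    """Protocol-independent connect: getaddrinfo returns IPv6 and/or IPv4
    candidates in order; try each until one succeeds."""
    last_err = None
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        try:
            sock = socket.socket(family, socktype, proto)
            sock.connect(sockaddr)
            return sock
        except OSError as err:
            last_err = err
    raise last_err if last_err else OSError("no addresses found")

# Example (hypothetical host): sock = open_connection("www.example.com", 80)
```

Because the application never hard-codes AF_INET or AF_INET6, the same binary works on IPv4-only, IPv6-only and dual-stack nodes.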

8.7.8 Future Development of Satellite Networking

It is difficult, sometimes impossible, to predict the future, but it is not too difficult to predict the trends of future development given enough past and current knowledge. In addition to integrating satellites into the global Internet infrastructure, one of the major tasks is to create new services and applications to meet the needs of people. Figure 8.21 illustrates an abstract vision of future satellite networking.

c08f021

Figure 8.21 An illustration of future development of satellite networking

The main difficulties are due to evolution, integration and convergence:

  • It becomes difficult to separate satellite networking concepts from others.
  • It will not be easy to tell the difference between general protocols and satellite-friendly protocols due to network convergence (see Figure 8.22), except in the physical and link layers.
c08f022

Figure 8.22 Protocol convergence

The trends are due to the following reasons:

  • The services and applications will converge to common applications for both satellite networking terminals and terrestrial mobile networking terminals. Even satellite-specific services such as the global positioning system (GPS) have been integrated into the new generation of 3G and 4G mobile terminals (see Figures 8.21 and 8.22).
  • The hardware platforms and networking technologies will be well developed, powerful and standardised. This will allow quick and economic development of specialised user terminals.
  • We will see significant development in system software, and will face the challenge of managing the complexity of large software systems.

In the last 35 years, satellite capacity has increased tremendously due to technology development. Satellite mass has increased from 50 kg to 3000 kg, and power from 40 W to 1000 W; these will increase to 10 000 kg and 20 000 W in the near future. Satellite earth terminal antennas have decreased from 20–30 m to 0.5–1.5 m in diameter, and handheld terminals have also been developed. Such trends will continue, though perhaps in different ways, such as constellations and clusters of satellites. User terminals can also function as interworking devices to private networks or as hubs of sensor networks.

From a satellite networking point of view, we will see end systems such as onboard servers providing information services directly from satellites, multimedia terminals on board satellites watching over and safeguarding our planet, and onboard routers acting as network nodes to extend the Internet into space.

We will see more and more Ka-band satellites for broadband services, with capacities of several hundred Mbit/s and even Gbit/s, launched in the coming few years as the result of decades of research. Research into even higher bands (40/50 GHz and 90 GHz), as well as laser communications at up to Tbit/s, is under way.

We will also see satellite terminals become much smaller and compatible with standard smartphones. The integration of satellite systems with terrestrial systems will become much tighter, and satellite networks will be able to connect a much wider range of terminals beyond conventional communications and broadcasting systems.

We will also see different types of satellites interworking in the future for applications such as remote sensing, global positioning, environmental monitoring and space exploration.

Though the coverage of this book is necessarily limited, the potential future roles of satellites and their applications are much greater.

It is also worth watching the recent development of terrestrial communications systems and networks. More control and management functions are relying on software, allowing more integration and flexibility in future satellite systems and networks.

Satellites are mysterious stars. We create them and know them better than any other stars. The capability of satellite technologies and human creativity will exceed our current imaginations. Thank you for reading through this book and please feel free to contact me should you need any help on teaching satellite networking based on this textbook.

Further Readings

  1. [1] RFC 3209, RSVP-TE: Extensions to RSVP for LSP Tunnels, IETF, Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V. and Swallow, G., December 2001.
  2. [2] RFC 3272, Overview and Principles of Internet Traffic Engineering, IETF (Informational), Awduche, D., Chiu, A., Elwalid, A., Widjaja, I. and Xiao. X., May 2002.
  3. [3] RFC 2475, An Architecture for Differentiated Services, IETF (Informational), Blake, S., Black, D., Carlson, M., Davies, E., Wang, Z. and Weiss, W., December 1998.
  4. [4] RFC 1633, Integrated Services in the Internet Architecture: an Overview, IETF (Informational), Braden, R., Clark, D. and Shenker, S., June 1994.
  5. [5] RFC 2205, Resource ReSerVation Protocol (RSVP)—Version 1 Functional Specification, IETF (Standard Track), Braden, R., Zhang, L., Berson, S., Herzog, S. and Jamin, S., September 1997.
  6. [6] RFC 1752, The Recommendation for the IP Next Generation Protocol, IETF (Standard Track), Bradner, S. and Mankin, A., January 1995.
  7. [7] RFC 3035, MPLS using LDP and ATM VC Switching, IETF (Standard Track), Davie, B., Lawrence, J., McCloghrie, K., Rosen, E., Swallow, G., Rekhter, Y. and Doolan, P., January 2001.
  8. [8] RFC 3246, An Expedited Forwarding PHB (Per Hop Behaviour), IETF (Proposed Standard), Davie, B., Charny, A., Bennet, J.C.R., Benson, K., Le Boudec, J-Y., Courtney, W., Davari, S., Firoiu, V. and Stiliadis, D., March 2002.
  9. [9] RFC 2460, Internet Protocol, version 6 (IPv6) Specification, IETF (Standard Track), Deering, S. and Hinden, R., December 1998.
  10. [10] RFC 3270, Multi-Protocol Label Switching (MPLS) support of Differentiated Services, IETF (Standard Track), Faucheur, F. Le, Davie, B., Davari, S., Vaananen, P., Krishnan, R., Cheval, P. and Heinanen, J., May 2002.
  11. [11] RFC 2893, Transition Mechanisms for IPv6 Hosts and Routers, IETF (Standard Track), Gilligan, R. and Nordmark, E., August 2000.
  12. [12] RFC 2597, Assured Forwarding PHB Group, (Standard Track), Heinanen, J., Baker, F., Weiss, W. and Wroclawski, J., June 1999.
  13. [13] ISO/IEC 11172, Information technology: coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s, (MPEG-1), 1993.
  14. [14] ISO/IEC 13818, Generic coding of moving pictures and associated audio information, (MPEG-2), 1996.
  15. [15] ISO/IEC 14496, Coding of audio-visual objects, (MPEG-4), 1999.
  16. [16] ITU-T G.723.1, Speech coders: dual rate speech coder for multimedia communications transmitting at 5.3 and 6.3 kbit/s, 05/2006.
  17. [17] ITU-T G.729, Coding of speech at 8 kbit/s using conjugate-structure algebraic-code-excited linear-prediction (CS-ACELP), 06/2012.
  18. [18] ITU-T E.800, Terms and definitions related to quality of service and network performance including dependability, 09/2008.
  19. [19] ITU-T H.261, Video codec for audiovisual services at px64 kbit/s, 03/1993.
  20. [20] ITU-T H.263, Video coding for low bit rate communication, 01/2005.
  21. [21] RFC 5151, Inter-Domain MPLS and GMPLS Traffic Engineering—Resource Reservation Protocol—Traffic Engineering (RSVP-TE) Extensions, A. Farrel, A. Ayyangar, J.P. Vasseur, IETF, 02/2008.
  22. [22] RFC 2474, Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers, IETF (Standard Track), Nichols, K., Blake, S., Baker, F. and Black, D., December 1998.
  23. [23] RFC 2375, IPv6 Multicast Address Assignments, R. Hinden, July 1998.
  24. [24] RFC 2529, Transmission of IPv6 over IPv4 Domains without Explicit Tunnels, B. Carpenter, C. Jung, IETF, March 1999.
  25. [25] RFC 4699, Reasons to Move the Network Address Translation—Protocol Translation (NAT-PT) to Historic Status, G. Tsirtsis, P. Srisuresh, IETF, July 2007.
  26. [26] RFC 6535, Dual Stack Hosts using ‘Bump-in-the-Stack’ (BIS), B. Huang, H. Deng, T. Savolainen, IETF, February 2012.
  27. [27] RFC 4213, Basic Transition Mechanisms for IPv6 Hosts and Routers, E. Nordmark, R. Gilligan, IETF, October 2005.
  28. [28] RFC 3031, Multiprotocol Label Switching Architecture, IETF (Standard Track), Rosen, E., Viswanathan, A. and Callon, R., January 2001.
  29. [29] RFC 2998, A Framework for Integrated Service Operation over Diffserv Networks, Y. Bernet, P. Ford, R. Yavatkar, F. Baker, L. Zhang, M.Speer, R. Braden, B. Davie, J. Wroclawski, IETF, November 2000.
  30. [30] RFC 3032, MPLS Label Stack Encoding, E. Rosen, D. Tappan, G. Fedorkow, Y. Rekhter, D. Farinacci, A. Conta, IETF, January 2001.
  31. [31] RFC 2212, Specification of Guaranteed Quality of Service, IETF (Standard Track), Shenker, S., Partridge, C. and Guerin, R., September 1997.
  32. [32] RFC 5036, LDP Specification, L. Andersson, I. Minei, B. Thomas, IETF, October 2007.
  33. [33] RFC 3468, The Multiprotocol Label Switching (MPLS) Working Group Decision on MPLS Signaling Protocols, L. Andersson, G. Swallow, February 2003.
  34. [34] RFC 3053, IPv6 Tunnel Broker, A. Durand, P. Fasano, I. Guardini, D. Lento, IETF, January 2001.
  35. [35] RFC 3056, Connection of IPv6 Domains via IPv4 Clouds, B. Carpenter, K. Moore, February 2001.
  36. [36] ITU-T G.114, One-Way Transmission Time, 03/2005.
  37. [37] ITU-T G.726, 40, 32, 24, 16 kbit/s Adaptive Differential Pulse Code Modulation (ADPCM) Corresponding ANSI-C Code is Available in the G.726 Module of the ITU-T G.191 Software Tools Library, 12/1990.
  38. [38] T. Nadeau, P. Pan, Framework for Software Defined Networks, draft-nadeau-sdn-framework-01, 31 October 2011.

Exercises

1 Explain the concepts of new services and applications in future networks and terminals.

2 Discuss the basic principles and techniques for traffic modelling and traffic characterisation.

3 Describe the concepts of traffic engineering in general and Internet traffic engineering in particular.

4 Explain the principles of MPLS and interworking with different technologies and traffic engineering concepts.

5 Explain IPv6 and its main differences from IPv4.

6 Explain different techniques for IPv6 over satellites, such as IPv6 tunnelling through satellite networks and 6to4 translation through satellite networks.

7 Discuss the new development of IPv6 over satellites and future development of satellite networking.