Chapter 1

Introduction 1

1.1. Introduction

Packet-switched networks form a complex world that is difficult to control. In a circuit-switched network, if all circuits are busy, the network simply cannot accept additional clients. In networks that move information in packets, the limit at which they should stop accepting new clients is vague. The primary objective of IP network control is to determine that limit. Other major objectives include avoiding congestion when a node is completely blocked, putting security components in place, managing client mobility, etc.

This chapter is meant as an overview of some of the important control mechanisms in IP networks. We will start with flow control, which can be carried out in different ways, such as opening another node with appropriately high priorities on that node, or the statistical use of resources, as will be shown with DiffServ technology.

The major requirement for efficient control is the presence of messages capable of transmitting control information. The system that generates these messages is called a signaling network: events and decisions must be flagged. The transmission of signaling information is a major component of network infrastructure. One can go so far as to say that the future of networks resides in our capacity to drive and automate their configuration. The objective of signaling is to flag information, for example to control and activate the set-up of a new route, or to reserve part of the infrastructure so that a software application can run efficiently. Signaling has long been studied by standardization groups, especially the ITU-T. It has evolved greatly in the last 10 years and must continue to adjust as the IP world changes. The Internet’s standardization group, the IETF, has partially taken over this work, particularly for the integration of telephony into IP environments.

Internet flows also require control. If we want to achieve QoS (Quality of Service), it is imperative that we control the flows: the network must be capable of slowing them down or accelerating them according to their importance. Another way of controlling a network is to implement rules according to users’ requests. This solution was developed a few years ago and is called Policy-Based Management (PBM).

Some network functionalities also require rigorous control, such as security and mobility. We will start by introducing security control mechanisms and then move to mobility management in networks where terminals can move while remaining connected. In this book, we will detail these extremely important control mechanisms. Finally, we will go to the core of the network as we discuss optical networks.

These control mechanisms will be examined briefly in this chapter. The first section of this chapter is a quick overview of signaling. This section will introduce some basic notions with examples, then we will examine flow and congestion control mechanisms, followed by PBM and security and mobility management. We will finish with a discussion on the management of the core of the network.

1.2. Signaling

Signaling refers to the steps that must be put in place for information to be transmitted, such as the set-up or tearing down of a path. It is present in all networks, including those such as IP that keep signaling in its most basic form in order to preserve the system’s simplicity. Signaling must therefore be able to function in all network environments, especially IP networks.

Signaling usually needs to function in routing mode. Indeed, it is essential to indicate to whom the signaling is addressed and, in order to do that, the complete address of the receiver must be indicated in the signaling packet. Therefore, all switched networks need a routing process in order to activate signaling.

Signaling functionality is capable of taking over services at different levels of the architecture. For example, it must be able to negotiate an SLA (Service Level Agreement), to request user authentication, to collect information on available resources, etc. Signaling protocols must be expandable in order to easily accept new services. Furthermore, they must be modular and flexible in order to respond accurately to the needs of each software application. Modularity facilitates the addition of new modules during development phases.

1.2.1. Signaling operation

A signaling protocol has two operation modes: inband and outband. In inband mode, signaling messages are transmitted along the data path, whereas in outband mode they are independent of the path followed by the data.

Another characteristic of signaling is the possibility of path-coupling or path-decoupling. In the case of path-coupling, signaling follows the data, inband or outband, through the same sequence of nodes. For example, the RSVP protocol is path-coupled and the SIP protocol is path-decoupled.

Signaling must be able to operate in inter-domain or intra-domain modes. Signaling must also function in end-to-end, border-to-border and end-to-edge (signaling between an end-host and an edge-node) modes.

In the current heterogeneous Internet environment, there are a good number of signaling protocols, generally adapted to the many existing applications. This has led the IETF to create the NSIS (Next Steps in Signaling) working group, whose responsibility is to come up with a single new standard designed to combine all previous protocols.

As a general rule, a signaling protocol must be able to cooperate with other protocols. In order to do this, it must be able to transport messages from other signaling protocols. It is also possible to define interfaces that transform a message of one protocol into a message of another protocol.

Signaling must support the management of all resources in the network. It controls the information flow enabling applications to request allocation and reservation of resources. To achieve this, signaling interacts with specific entities such as resource management servers, for example bandwidth brokers. Finally, signaling must support SLA negotiation between a user and a supplier, or between suppliers, and the configuration of the network entities according to the new SLA.

Signaling can support monitoring of services and entity states in the network and control the invoicing of services within the network.

In order to validate service requests from a user, signaling is also responsible for authentication. In this case, it allows the transmission of the information required for this interaction. This transmission must be open enough to accommodate existing and future mechanisms.

1.2.2. Signaling for security

Signaling is a very important component of network security. In the first place, signaling must secure itself: the primitives must be authenticated to guarantee that they are not coming from hackers. Signaling must also implement ways to protect signaling messages against malicious tampering. It must furthermore be able to detect whether an old message is being reused, thus avoiding replay attacks, and be able to hide network topology information. Finally, it must support information confidentiality mechanisms, such as encryption.

Signaling protocols must be able to cooperate with authentication and key agreement protocols, in order to negotiate security associations.

Signaling must also have ways to negotiate security mechanisms based on the needs of applications and users.

1.2.3. Signaling for mobility management

Signaling plays an important role in mobility management. It intervenes in the many operations to be completed when the mobile changes cell, when it roams, when it negotiates its SLA or when an application is executed.

When a handover happens, signaling must be able to quickly and efficiently reconnect and reconstruct the installed states in the new base station. The recovery process may be local or end-to-end. If the mobile network is overloaded, handover signaling must have a higher priority than the signaling from a new connection.

1.2.4. Signaling for network flow management

In a normal situation, signaling traffic makes up only a small part of the overall network traffic. However, in certain situations, such as congestion or failure, signaling traffic can increase significantly and create serious signaling congestion within the network. For example, a signaling packet routing error can start a chain reaction of notification messages. A signaling protocol must be able to maintain signaling stability.

Signaling must be robust, effective and use the least amount of resources in the network. It must be able to function even when there is massive congestion.

The network must be able to give priority to signaling messages. This will reduce signaling transit delays for high priority applications. Denial-of-service attacks are also a threat to be aware of, as they can overload the network with high priority signaling messages.

A signaling protocol must allow for the grouping of signaling messages. This may include, for example, grouping refresh messages, as in RSVP, thus avoiding refreshing each soft state individually.

Signaling must be scalable, meaning it has to be able to function within a small network as well as in a major network with millions of nodes. It must also be able to control and modify the multiple security mechanisms according to the applications’ performance needs.

1.3. Flow control and management techniques

Flow control and management techniques are imperative in the networking world. Frame or packet-transfer networks are like highways: if there is too much traffic, nobody gets anywhere. It is therefore imperative to control both the network and the traffic flows within it. Flow control is preventive: it limits the amount of information transferred to what the physical transmission capacity allows. The objective of congestion control is to avoid congestion within the nodes and to resolve jams when they appear.

Both terms, flow control and congestion control, can be defined in more detail. Flow control is an agreement between two entities (source and destination) to limit service transmission flow by taking into account the available resources in the network. Congestion control is made up of all the actions undertaken to avoid and eliminate congestions caused by a lack of resources.

Under these definitions, flow control can be considered as one particular component of congestion control. Both help ensure QoS.

QoS is defined in ITU-T recommendation E.800 as the “collective effect of service performance which determines the degree of satisfaction of a user of the service”. This very broad definition is refined in recommendation I.350, which defines QoS and network performance (NP).

NP is evaluated according to parameters that are significant to the network operator and that are used to measure the system, its configuration, its behavior and its maintenance. NP is defined independently of the terminal and of the user’s actions. QoS is measured under variable conditions and can be monitored and measured wherever and whenever the user accesses the service.

Figure 1.1 illustrates how QoS and NP concepts can be applied in a networked environment. Table 1.1 establishes the distinctions between QoS and NP.

A 3 × 3 matrix has been developed by ITU-T in the appendix of recommendation I.350 to help determine the parameters to take into account when evaluating QoS and the NP.

This matrix is illustrated in Figure 1.2. It is composed of six zones that must be explicitly defined. For example, looking at the first column, we must determine the access capacity, the user information transfer capacity and, finally, the capacity that may need to be maintained when a user disengages. The second column corresponds to the parameters that ensure the validity of the access, transfer and disengagement actions. The last column takes into account the parameters that ensure the secure operation of access, transfer and disengagement.

Figure 1.1. Quality of Service (QoS) and network performance (NP)


Table 1.1. Distinction between QoS and NP

Quality of Service (QoS)      Network performance (NP)
Client oriented               Network oriented
Service attribute             Connection attribute
Observable by the user        Used for planning, performance control and maintenance
Between access nodes          Between the two network connections

Figure 1.2. 3 × 3 matrix defining QoS and NP


1.3.1. Flow control techniques

The ITU-T and the IETF have defined a multitude of flow control techniques. Among the leading techniques, we find the following.

UPC/NPC (Usage Parameter Control/Network Parameter Control)

Usage Parameter Control/Network Parameter Control (UPC/NPC) encompasses all actions taken by the network to monitor and control user traffic at the network access and the compliance of an open connection with its contract. The main objective of this technique is to protect the network against violations of the traffic shaper that could degrade the quality of service of other user connections.

Priority management

Priority management generally distinguishes three service classes: a high priority class designed for real-time applications such as voice telephony; an average priority class that maintains good packet transmission but offers no guarantee on transit times; and a low priority class with no guarantee whatsoever.
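
As an illustration only (not taken from the source), the following Python sketch shows the strict-priority behavior implied by these three classes: the node always serves the highest non-empty queue first. Class names and packet labels are hypothetical.

from collections import deque

HIGH, AVERAGE, LOW = 0, 1, 2          # real-time, assured, best-effort classes

queues = {HIGH: deque(), AVERAGE: deque(), LOW: deque()}

def enqueue(packet, service_class):
    queues[service_class].append(packet)

def dequeue():
    # Always serve the highest-priority non-empty queue first.
    for cls in (HIGH, AVERAGE, LOW):
        if queues[cls]:
            return queues[cls].popleft()
    return None                       # nothing to transmit

enqueue("voice-1", HIGH)
enqueue("web-1", LOW)
enqueue("video-1", AVERAGE)
print(dequeue())                      # "voice-1": real-time traffic leaves first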

NRM (Network Resource Management)

Network Resource Management (NRM) groups together the provisions for network resource allocation used to optimize traffic spacing according to service properties.

Feedback technique

Feedback techniques are the set of actions taken by the users and the network to regulate traffic on its many connections. This solution is used in operator networks, such as ATM networks with procedures like ABR (Available Bit Rate), as well as in IP networks with the Slow Start and Congestion Avoidance techniques, which lower the value of the transmission window as soon as the round-trip time goes beyond a certain limit.
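
The following Python sketch illustrates, under simplifying assumptions, the feedback principle just described: the transmission window grows while the measured return time stays below a limit and is cut back as soon as that limit is exceeded. It is a rough caricature of Slow Start and Congestion Avoidance, not the exact TCP algorithm, and the numeric values are arbitrary.

def update_window(cwnd, ssthresh, rtt, rtt_limit):
    # Implicit congestion signal: the return time exceeded the limit.
    if rtt > rtt_limit:
        ssthresh = max(cwnd // 2, 1)  # remember half of the current window
        cwnd = 1                      # restart in slow start
    elif cwnd < ssthresh:
        cwnd *= 2                     # slow start: exponential growth
    else:
        cwnd += 1                     # congestion avoidance: linear growth
    return cwnd, ssthresh

cwnd, ssthresh = 1, 16
for rtt in (10, 11, 12, 40, 12, 11):  # measured return times (ms), illustrative
    cwnd, ssthresh = update_window(cwnd, ssthresh, rtt, rtt_limit=30)
print(cwnd, ssthresh)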

Among the traffic management methods that make it possible to avoid network overload, we find traffic control mechanisms whose function is to ensure that traffic complies with the traffic shaper, the Fast Reservation Protocol (FRP) and Explicit Forward Congestion Indication/Backward Congestion Notification (EFCI/BCN).

The biggest challenge is to design flow control mechanisms that enable efficient utilization of network resources while satisfying the required QoS. In traditional networks, window-based flow control is the most widely used mechanism. In ATM networks, on the other hand, “send and wait” type protocols do not perform adequately because the propagation delay is too long compared to the transmission time. Many other adaptive flow control methods can also be implemented in the upper layers. In general, these methods act on the window size or on the throughput, with parameter values decided by the destination node according to the state of the network.

The implicit assumptions of these systems, such as knowledge of the state of the network or the timely receipt of information on that state, can also cause problems. Even if congestion is detected in the network, it is difficult to estimate its duration, to locate the congested node in time, to measure the severity of the congestion and, therefore, to reduce the window size appropriately.

ITU-T has defined numerous access and traffic control mechanisms. The role of UPC/NPC is to protect network resources from malicious users and from involuntary operations that can degrade the QoS of previously established connections. UPC/NPC is used to detect shaper violations and to take appropriate actions.

In order to avoid cell loss at the UPC/NPC level, UPC/NPC emulation can be executed at the sender. This function is called Source Traffic Smoothing (STS), to distinguish it from UPC/NPC. From the user’s standpoint, the STS function is a nuisance, since it introduces an additional delay and needs more buffer space.

The Virtual Scheduling Algorithm (VSA) of recommendation I.371 represents a first way to detect irregular situations and bring traffic back to a flow acceptable under the traffic shaper. Its role is to monitor the peak rate of a connection while guaranteeing a jitter limit. Simply put, if a frame arrives sooner than expected, it is held until the moment when it should have arrived; only at that moment is it transmitted on the network, and it becomes compliant again. If the frame arrives later than expected, either it arrives within a short enough interval to remain within the jitter limit, and it is compliant, or it arrives too late to be within the acceptable limit and it becomes non-compliant.
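
The sketch below gives a minimal, assumed rendering of the virtual scheduling principle in its policing form: each arrival is compared with a theoretical arrival time (TAT) derived from the monitored peak rate and a jitter tolerance. A shaper, as described above, would instead hold an early frame until its theoretical time; here the early frame is simply declared non-compliant. Parameter values are illustrative.

def make_vsa(T, tau):
    # T = ideal inter-arrival time (1/peak rate), tau = jitter tolerance.
    state = {"tat": 0.0}              # theoretical arrival time of the next frame

    def conforms(arrival_time):
        tat = state["tat"]
        if arrival_time < tat - tau:  # frame is too early, beyond the tolerance
            return False              # non-compliant; TAT is left unchanged
        state["tat"] = max(arrival_time, tat) + T
        return True                   # compliant frame

    return conforms

check = make_vsa(T=10.0, tau=2.0)
print([check(t) for t in (0, 10, 15, 21, 60)])   # the third arrival is too early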

The Leaky Bucket (LB) is another mechanism used for UPC/NPC and STS. It consists of a counter (c), a threshold (t) and a leak rate (l). The counter is incremented by one each time a frame arrives in the buffer and decremented at the leak rate. If a frame arrives when the counter value is equal to the threshold, it is not stored in the buffer; in other words, when the buffer is full, the arriving frame is rejected.
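
A minimal sketch of this leaky bucket counter, with the names taken from the text (threshold t, leak rate l); the continuous leak is approximated by draining the counter for the time elapsed between arrivals, and the numeric values are arbitrary.

class LeakyBucket:
    def __init__(self, threshold, leak_rate):
        self.counter = 0.0            # c in the text
        self.threshold = threshold    # t in the text
        self.leak_rate = leak_rate    # l in the text
        self.last_time = 0.0

    def arrival(self, now):
        # Drain the counter for the elapsed time, never below zero.
        elapsed = now - self.last_time
        self.counter = max(0.0, self.counter - self.leak_rate * elapsed)
        self.last_time = now
        if self.counter >= self.threshold:
            return False              # counter at the threshold: frame rejected
        self.counter += 1             # frame accepted and counted
        return True

bucket = LeakyBucket(threshold=2, leak_rate=0.5)   # leaks 0.5 frame per time unit
print([bucket.arrival(t) for t in (0, 0.1, 0.2, 0.3, 5.0)])   # fourth frame rejected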

1.3.2. Congestion control methods

Congestion control methods also vary, all the more so depending on whether we are referring to label-switched (also called packet-switched) or routed (also called packet-routed) networks. Packet-switched networks correspond to the telecommunications operators’ view: packets of the same flow have to follow the same path. Packet-routed networks are symbolized by the IP world: packets of the same flow can be routed over different routes.

In a packet-switched network, even if each source respects its traffic shaper, congestion may still arise from the superposition of multiple traffic flows. Several recommendations have proposed methods to selectively reject frames in order to relieve the network when there is congestion. For example, in an ATM environment, when the CLP (Cell Loss Priority) bit of the cell header is set (CLP = 1), the cell is destroyed first when congestion is detected. These methods can be useful to relieve the network without much degradation of the QoS. However, they can result in a waste of network resources and of its intermediary nodes, especially if the congestion lasts too long. The CLP bit can be set either by source terminals, indicating that the cell carries inessential information, or by the UPC/NPC mechanism, specifying that the cell violates the traffic limit negotiated with the CAC.

In the case of packet-routed networks, congestion control is handled by the packets themselves, independently of the network’s structure. The most traditional solution is to place a lifetime in the packet which, when it expires, causes the packet to be destroyed. This lifetime is carried in the TTL (Time To Live) field of IP packets. In fact, to avoid comparing timers that are rarely synchronized, the IP world prefers to use a hop count in the TTL field, decremented at each node crossed, so that when the packet makes more than a certain number of hops, 16 for example, it is destroyed. This may be somewhat removed from a congestion control solution, but it favors the destruction of lost packets and of packets looping in the network.
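
The hop-count rule can be summarized by the following sketch, in which each traversed node decrements the TTL field and destroys the packet when it reaches zero; the packet structure is a deliberately simplified assumption.

def forward(packet):
    # Each node decrements the TTL; a packet whose TTL reaches zero is destroyed.
    packet["ttl"] -= 1
    if packet["ttl"] <= 0:
        return None                   # packet destroyed at this node
    return packet                     # packet sent on to the next hop

packet = {"dst": "10.0.0.1", "ttl": 3}
hops = 0
while packet is not None:
    packet = forward(packet)
    hops += 1
print(hops)                           # the packet survives at most its initial TTL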

1.3.3. Priority technique

A flow control solution that we have not yet examined consists of associating a priority with each packet or frame and processing these entities according to their priority. This priority can be either fixed or variable in time; the latter case is referred to as variable priority. Several publications have shown that priority sequencing in a transfer node can bring about a relatively high resource usage rate for the node. The fixed priority method is the simplest one.

In IP networks, “premium” (or platinum) clients always have a higher priority than those in the class just below, the “Olympic” clients (generally subdivided into three classes: gold, silver and bronze), who themselves have a higher priority than the lowest class, the “best effort” clients.

As opposed to a fixed priority, a variable priority changes according to the control point. For example, delay-sensitive services have priority for frames leaving the buffer. However, an operator can, if he wishes, put these packets back into the Olympic flow in order to transmit them at a lower cost, if the transmission rate is sufficiently high. Loss-sensitive services have priority for frames entering the buffer: if a loss-sensitive frame requests entry into a full memory, a delay-sensitive frame will be rejected instead. There are several variable priority methods:

– In the Queue Length Threshold (QLT) method, priority is given to frames sensitive to loss if the number of frames in the queue crosses a threshold. Otherwise, delay sensitive frames have priority.

– In the Head Of the Line with Priority Jumps (HOL-PJ) method, several priority classes are taken into account. The highest priority is given to the traffic class that requires strict delays. Non-pre-emptive priority is given to high priority frames. Finally, low priority frames can jump to a higher priority queue when their maximum waiting delay has been reached.

– In the push-out and partial buffer sharing methods, selective rejection is executed within the switching elements. With push-out, an unmarked frame can enter a saturated buffer if marked frames are awaiting transmission: one of the marked frames is rejected and the unmarked frame enters the buffer. If the buffer contains only unmarked frames, the arriving unmarked frame is rejected. In the partial buffer sharing method, when the number of frames within the buffer reaches a predetermined threshold, only unmarked frames can still enter the buffer (a sketch of this rule follows this list). The push-out method can be improved in several ways. For example, instead of destroying the oldest or most recent marked frame in the queue, it is possible to destroy a larger number of marked frames corresponding to a single message. Indeed, if we destroy one frame, all frames belonging to the same message will be destroyed at arrival anyway; it therefore makes sense to destroy them directly within the network. That is the goal of the improved push-out method.
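
The following sketch illustrates only the partial buffer sharing rule from the list above: beyond the threshold, marked frames are refused while unmarked frames are still admitted until the buffer is full. The push-out variant, which ejects an already queued marked frame, is not shown; all values are illustrative.

def admit(queue, frame, capacity, threshold):
    if len(queue) >= capacity:
        return False                  # buffer completely full: frame rejected
    if frame["marked"] and len(queue) >= threshold:
        return False                  # beyond the threshold only unmarked frames enter
    queue.append(frame)
    return True

queue = []
arrivals = [{"marked": False}, {"marked": True}, {"marked": True}, {"marked": False}]
print([admit(queue, f, capacity=3, threshold=2) for f in arrivals])
# [True, True, False, True]: the second marked frame is refused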

1.3.4. Reactive congestion control

Reactive congestion control is essential when simultaneous bursts generate instant overloads in the nodes. Congestion can occur as a result of uncertainty about the traffic or of incorrect modeling of the statistical behavior of traffic sources.

The EFCI/BCN mechanism was first introduced by the ITU-T in its recommendations. The role of the Explicit Forward Congestion Indication (EFCI) mechanism is to transmit congestion information along the path between the transmitter and the receiver. The frames or packets that go through an overloaded node are marked in their header. In ATM networks, the receipt by the destination node of cells marked with congestion indicators (PTI = 010 or 011) indicates congestion in certain nodes of the path. The Backward Congestion Notification (BCN) mechanism returns the congestion information to the transmission node, which can then decrease its traffic. The notification to the transmitter is carried by a supervision flow. This method requires an efficient flow control mechanism that reacts to internal congestion.

In traditional networks, window-based flow control has been the most widely used. Recent studies propose adaptive window-based flow control methods, in which the window size is calculated by the recipient or automatically increased by the arrival of an acknowledgement. These methods were developed for data services and can be combined with error control. A propagation delay that is very long compared to the transmission time makes the use of a window-based flow control mechanism difficult. Furthermore, these methods rely on strong assumptions, such as knowledge of the network’s state or a propagation time short enough to return control information in time.

1.3.5. Rapid resource management

It is possible to obtain better control by adapting resource reservation within the network to the incoming traffic. Obviously, this control is complicated to implement because of the discrepancy between the transmission speed and the propagation delay. The Fast Reservation Protocol (FRP) method had strong support at the beginning of the 1990s as a means to attain QoS. It comes in two variants, FRP/DT (Fast Reservation Protocol/Delayed Transmission) and FRP/IT (Fast Reservation Protocol/Immediate Transmission). In the first case, the source transmits only after securing the necessary resources for the flow of frames at every intermediary node. In the second, the frames are preceded by a resource allocation request frame and followed by a resource deallocation frame.
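
A minimal sketch of the FRP/DT principle under an assumed, highly simplified node model: the source asks every intermediary node for the required rate and transmits only if all of them grant it, releasing any partial reservation otherwise.

def reserve_path(nodes, requested_rate):
    granted = []
    for node in nodes:
        if node["free"] >= requested_rate:
            node["free"] -= requested_rate
            granted.append(node)
        else:
            # One refusal cancels the whole reservation: release what was granted.
            for g in granted:
                g["free"] += requested_rate
            return False
    return True                       # every node granted: the source may transmit

path = [{"free": 100}, {"free": 30}, {"free": 80}]   # residual capacity per node
print(reserve_path(path, requested_rate=50))   # False: the second node cannot grant
print(reserve_path(path, requested_rate=20))   # True: the burst may now be sent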

1.4. Policy-based management

Telecommunications operators and network administrators must automate their node configuration and network management processes. The two goals of this automation are to control the information flows transmitted through these nodes and to manage networks more easily. These needs have been translated into policy-based management systems, which also include control, an integral part of any management system.

The goal of this section is to present this new paradigm, which consists of managing and controlling networks through policies. We start by introducing the policies themselves and then detail the architecture and the signaling protocol used in this environment.

A policy takes the form “if condition then action”. For example, “if the application is voice over IP, then give all its packets Premium priority”. Chapter 6 reviews in detail the policies and their use for control, as well as the signaling protocol responsible for deploying policy parameters and the different solutions available to put policy-based management in place. Here are some of the basic elements of policy-based management.
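
A policy rule of this “if condition then action” form can be sketched as follows; the field names, class names and the first-match evaluation order are illustrative assumptions, not the standardized information model discussed below.

policies = [
    {"condition": lambda flow: flow["application"] == "voice",
     "action":    lambda flow: flow.update(priority="premium")},
    {"condition": lambda flow: True,                  # default rule
     "action":    lambda flow: flow.update(priority="best-effort")},
]

def apply_policies(flow):
    # Evaluate the rules in order; the first matching condition triggers its action.
    for rule in policies:
        if rule["condition"](flow):
            rule["action"](flow)
            break
    return flow

print(apply_policies({"application": "voice"}))   # gets "premium" priority
print(apply_policies({"application": "ftp"}))     # falls through to "best-effort"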

A policy can be defined at multiple levels. The highest level corresponds to the user level, since the choice of a policy is determined through a consultation between the user and the operator. This consultation can be carried out in natural language or through rules put in place by the network operator; in the latter case, the user can only choose the policy he wants to see applied from among the network operator’s rules. This policy is expressed at the business level and must be translated into a network-level language in order to determine the network protocol for quality of service management and its relevant parameters. Finally, this network language must be translated into a lower level language that will be used to program the network nodes, i.e. the node configuration.

These different levels of language (business, network and configuration) are maintained by an IETF working group called Policy. The underlying model comes from another group, the DMTF (Distributed Management Task Force), and is called CIM (Common Information Model). Nowadays the two groups work together to develop its extensions.

The goal of standardizing information models for the different language levels is to create a template that can serve as the basis for per-domain information models, as well as a representation that is independent of equipment and implementations. Chapter 6 is dedicated to this solution.

1.5. Security

Security is at the heart of all networks. Since we do not directly see the person with whom we communicate, we must have a way to identify him. Since we do not know where all of our information goes, we need to encrypt it. Since we do not know if someone will modify our transmission, we must verify its integrity. We could go on and on about security issues which networks have to be able to handle all the time.

Globally, security can be divided into two parts: security when opening a session and security during the transmission of data. A great number of techniques are used to implement these two security modes, and new ones are invented every day. Each time an attack is blocked, hackers find new ways to thwart systems. This game of cat and mouse does not make the presentation and implementation of security mechanisms easy. In this book, we will limit ourselves to the control of security in network environments, without analyzing the security of the equipment and software applications themselves.

This section offers a general overview of the security elements within a network, following ISO’s security recommendations, which were drawn up at the same time as the reference model. We will then present the more traditional security control mechanisms, such as authorization, authentication, encryption, signature, etc.

1.5.1. General overview of security elements

The security of information transmission is a major concern in network environments. For many years, complete system security required the machine to be in total isolation from external communication. It is still the case in many instances today.

In IT, security means everything surrounding the protection of information. The ISO has researched and catalogued all the measures necessary to secure data during transmission. This work led to an international architecture standard, ISO 7498-2 (OSI Basic Reference Model – Part 2: Security Architecture). This architecture is very useful for anyone who wants to implement security elements in a network, as it details the major capabilities and their positioning within the reference model.

Three major concepts have been defined:

– security functions, determined by the actions that can compromise the security of a company;

– security mechanisms, which define the algorithms to put in place;

– security services, which are the applications and hardware that hold the security mechanisms so that users can have the security functions that they need.

Figure 1.3 explains security services and the OSI architecture levels where they must be put in place.

Five security service types have been defined:

– confidentiality, which must protect data from unauthorized access;

– authentication, which must make sure that the person trying to connect is indeed the one corresponding to the name entered;

– integrity, which guarantees that the information received is exactly the same as that transmitted by the authorized sender;

– non-repudiation, which ensures that a message has really been sent by a known source and received by a known recipient;

– access control, which governs the prevention/notification of access to resources under defined conditions and by specific users.

Within each one of these services, there can be special conditions, explained in Figure 1.3.

Figure 1.3. Security and OSI architecture levels


By using the five security services presented earlier and studying the needs of the sender and recipient, we obtain the following process:

1. The message must only get to its recipient.

2. The message must get to the correct recipient.

3. The message sender must be identified with certainty.

4. There must be identity between the received message and the sent message.

5. The recipient cannot contest the receipt of the message.

6. The sender cannot contest the sending of the message.

7. The sender can access certain resources only if authorized.

Number 1 corresponds to confidentiality, numbers 2 and 3 to authentication, number 4 to data integrity, numbers 5 and 6 to non-repudiation, and number 7 to access control.

1.6. Mobile network control

Networks are becoming global: a client can connect at any moment, anywhere, with a large throughput. The access networks that provide Internet access are wireless or mobile networks. Their resources are limited, and control and management are imperative in order to ensure quality of service. Furthermore, clients can be nomadic or mobile. Nomadic clients can connect from different places and resume whatever they were working on earlier. Mobile clients can continue to work while moving, staying connected; handovers, i.e. changes of receiving antenna, can happen without affecting communication. These environments need control and management so that these features can be guaranteed.

Together, nomadism and mobility are part of a bigger setting, which we call global mobility. Global mobility is a vast concept combining terminal mobility, personal mobility and services mobility. This global mobility has become the decisive advantage of third generation networks over today’s mobile networks. Controlling this global mobility is an important concept and, in this chapter, we want to delve deeper into this issue.

Terminal mobility is the capacity of the terminal to access telecommunications services, wherever the terminal may be and regardless of its traveling speed. This mobility implies that the network is able to identify, locate and follow the users, regardless of their moves, and then to route the calls to their location. A precise mapping of the user and his terminal’s location must be maintained. Roaming is linked to the terminal’s mobility, since it allows a user to travel from one network to another.

Personal mobility corresponds to the capacity of a user to access inbound and outbound telecommunications services from any terminal, anywhere. On the basis of a unique personal number, the user can make and receive calls from any terminal. Personal mobility implies that the network is able to identify users when they travel in order to service them according to their services profile and to locate the user’s terminal in order to address, route and bill the user’s calls.

Services mobility, also called services portability, refers to the capacity of the network to supply subscribed services wherever the terminal and user are. The actual services that the user can request on his terminal depend on the terminal’s capacity at this location and on the network which serves this terminal. Portability of services is ensured by regular updates of the user’s profile and queries for this profile if necessary. Services mobility links services to a user and not to a particular network access. Services must follow the users when they travel.

Linked to services mobility, VHE (Virtual Home Environment) takes care of roaming users, enabling them to access the services supplied by their services providers in the same way, even when they are out of their area. Due to VHE, a user has access to his services in any network where he is located, in the same way and with the same features as when he is within his own subscriber network. He then has at his disposal a personalized services environment, which follows him everywhere he goes. VHE is offered as long as the networks visited by the user are able to offer the same capabilities as the user’s subscriber network.

Within the user mobility concept, terminal mobility and personal mobility are often grouped.

The control of these mobilities is closely studied by standardization and promotion groups, from ETSI, 3GPP and 3GPP2 to the IETF and the IEEE.

1.7. Optical network control

In the previous sections, we have mostly been interested in the control of the network edge and of the local loop, that is, in the telecommunication links that provide user access to the operator’s core.

We must also mention core networks, especially optical networks that today make up the central part of the interconnection networks.

Optical network control means optimizing bandwidth usage. The technique used until now is circuit switching. The concern with this comes from the capacity of the wavelengths, which enable throughputs of 10, even 40 Gbps, and soon 160 Gbps. No single user is capable of utilizing a wavelength offering end-to-end communication at this throughput. It is therefore important to control the multiplexing of users. Other solutions are being studied, such as the opening and closing of optical circuits over very short times, as in Burst Switching techniques. Burst Switching is basically optical packet switching, but with very long packets that can last hundreds of microseconds. Control of these bursts is tricky since it is not possible to store the bytes of the packet within intermediary elements.

It is also important to control the reliability of core networks. For example, to ensure good telephony, the availability of the network must reach five “9”s, that is, 99.999% of the time. Control has a role to play in this functionality.

We will examine all these controls within optical networks at the end of this book.

1.8. Conclusion

An uncontrolled IP network cannot work. Minimal control gives us the Internet as we know it today. Introducing QoS is a complex task, but it is quickly becoming a necessity. In order to achieve this, an excellent knowledge of the network is essential, and to achieve this knowledge, a high level of control is mandatory.


1 Chapter written by Guy PUJOLLE.