Chapter 13

Internet Security

13.1. Introduction

The importance of security has greatly increased following attacks against certain large-scale sites such as Amazon or Yahoo. These attacks, along with increasing fears of cyber-terrorism since September 11th, 2001, have encouraged researchers to find means and methods of protection for users and machines.

The aim of this chapter is to give a view of the state of the art of security in the Internet. The first part addresses a few elements of security. Next, we will present Internet security according to its field of application. Firstly, we will address the security of user data. Then, we will discuss the security of the Internet infrastructure. Finally, we will look at protecting user installations.

13.2. Elements of security

The international standard ISO 7498-2 [NF 90] defines security services and their associated mechanisms. Firstly, we will give the definitions of some of the security services. Then, we will give a few elements of cryptography before describing some of the security mechanisms listed in the ISO document. Finally, we will broach the problem of key management in the Internet.

13.2.1. Security services

Authentication makes it possible to identify a communicating entity and/or the source of data.

Confidentiality ensures the protection of data against all unauthorized disclosures.

Integrity services detect or prevent the modification of a message between its source and its destination.

Access control ensures protection against all unauthorized use of resources accessible via the OSI architecture. These resources can be OSI or non-OSI resources reachable via OSI protocols. In the context of the Internet, these OSI-based definitions extend to the TCP/IP protocols and to the resources accessible via those protocols.

The objective of non-repudiation is to provide the receiver with proof that prevents the sender from denying having sent a message, and to provide the sender with proof that prevents the receiver from denying having received it.

13.2.2. Cryptography

The principle of cryptography is to transform a message so that it becomes unreadable to everyone except authorized people. Cryptography is bi-directional: it enables both the encryption of a message and its decryption, restoring the original form.

Symmetric cryptography requires the source and the destination to share the same key information, used for both encrypting and decrypting. The main issue with this method is that the message sender must communicate the key to the receiver via a secure channel.

In the case of asymmetric cryptography, the sender and receiver each have a private key and a public key, both determined by mathematical methods. This method guarantees that data encrypted with one of the two keys can be decrypted only with the other: for example, data encrypted with a private key can be decrypted with the corresponding public key.
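The shared-key principle of symmetric cryptography can be illustrated with a toy XOR stream cipher. This is only a sketch (the keystream construction and all names are illustrative, and the scheme is not secure); its point is that the same key, known to both ends, performs both encryption and decryption.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from the key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """XOR the data with the keystream; the same call encrypts and decrypts."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

# The key must reach the receiver via a secure channel -- the weak point
# of symmetric cryptography noted above.
shared_key = b"secret shared over a secure channel"
ciphertext = xor_cipher(shared_key, b"hello Bob")
plaintext = xor_cipher(shared_key, ciphertext)   # same key restores the message
```

Because XOR is its own inverse, applying the cipher twice with the same key restores the original message, which is exactly the bi-directionality described above.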

13.2.3. Security mechanisms

Encryption can ensure confidentiality and plays a role in a number of other security mechanisms.

The process of digital signing involves either the encryption of a data unit, or the creation of a cryptographic check value for the data unit, using the signer's private information as a private key. Since the signature can only be created using the signer's private information, verifying the signature proves that only the holder of that private information could have produced it. The signature thus ensures non-repudiation.
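The sign-with-private, verify-with-public pattern can be sketched with textbook RSA. The primes below are deliberately tiny (this is a pedagogical toy, never usable in practice), and reducing the digest modulo n is a simplification of real padding schemes.

```python
import hashlib

# Textbook RSA with tiny primes -- illustrative only, never use such sizes.
p, q = 61, 53
n = p * q                           # public modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def sign(message: bytes) -> int:
    """Hash the message, reduce mod n, then apply the private key."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Anyone holding the public key (n, e) can check the signature."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

sig = sign(b"order 42")
```

Only the holder of d can produce a value that verifies under (n, e), which is what lets the signature support non-repudiation.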

Access control can use the authenticated identity of an entity, or information about the entity, to determine and apply the entity's access rights. If the entity attempts to use an unauthorized resource, the access control rejects the attempt.

Ensuring the integrity of a single data unit involves two processes, one at the sender and the other at the receiver. The sending entity appends to the data unit a value that is a function of the data. This value can be supplementary information, such as a block check code or a cryptographic check value, and can itself be encrypted. The destination entity generates the corresponding value and compares it with the received value to determine whether the data was modified in transit.
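The compute-append-recompute-compare process just described can be sketched with a keyed check value (HMAC); the key and function names here are illustrative, not taken from any particular standard.

```python
import hmac
import hashlib

key = b"shared integrity key"       # assumed pre-shared between the two entities

def protect(data: bytes):
    """Sender side: compute a cryptographic check value over the data."""
    tag = hmac.new(key, data, hashlib.sha256).digest()
    return data, tag

def check(data: bytes, tag: bytes) -> bool:
    """Receiver side: recompute the value and compare it to the received one."""
    expected = hmac.new(key, data, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

data, tag = protect(b"transfer 100 euros")
```

Any modification of the data in transit changes the recomputed value, so the comparison at the receiver fails.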

The following are some of the techniques that can be applied to authentication exchanges:

– the use of authentication information, such as a password (provided by an emitting entity and controlled by the destination entity);

– cryptographic techniques;

– the use of characteristics and/or information unique to the entity.

Properties of data communicated between two or more entities, such as its integrity, origin, date and destination, can be guaranteed by a notarization mechanism. The guarantee is provided by a notary (a third party) that is trusted by the communicating entities and holds the information necessary to provide the required guarantee in a verifiable manner. When this notarization mechanism is invoked, the data is communicated between the entities via protected communication instances and the notary.

13.2.4. Key management issue

We have seen that asymmetric cryptography does not require a secret key to be exchanged, which makes it particularly well suited to applications that use networks (such as the Internet). In fact, only the public key is transmitted, via a directory, and it is used only during encryption.

However, nothing guarantees that the public key is really the one associated with the user. A hacker can corrupt the directory by replacing the public key with his own. The hacker is then in a position to decrypt all messages encrypted with the key currently in the directory. It is for these reasons that the notion of a certificate was introduced. A certificate associates a public key with an entity (a person, a machine, etc.) and attests to its validity. The certificate is, in a way, the identity card of the public key. It is delivered by a trusted third party (TTP) called a certification authority (CA). We then speak of a public key infrastructure (PKI) [PKI 04].

The private key must always be kept in a fixed, safe location. These days, however, users are increasingly mobile and may be required to change workstations. It is both risky and complicated to ask users to carry their private keys with them. With technological advances in the smart card industry, the private key is more and more often stored on a smart card, which guarantees that it cannot be copied and remains under the control of its owner.

The use of biometrics is of great interest for security. Identifying a person uniquely by physiological characteristics (retina, iris, fingerprints, voice, etc.) has many applications. For example, imagine its use in a PKI, where users would carry a smart card with fingerprint identification.

13.3. User data security

Securing data on the Internet requires an analysis in order to clearly define the security needs. What data needs to be secured? Between points A and B, does the entire stream need to be secured, or just a critical subset? Must the protected stream be completely protected, or only partially? In other words, must we apply security mechanisms within the application by adding a protective field, or can we simply ensure protection during the transfer? Or should we go down to the IP level and implement an IPSec VPN [IPS 04], either end to end or over only a portion of the network? The Internet, although built on the TCP/IP protocols, requires IP packets to be transported in frames (level 2) that can take various forms between a source and a destination. So even if the strongest security is available at the IP level and above, what should be done if level 2 is poorly protected? We think notably of the problems linked to the transmission method of Ethernet networks and to wireless broadcasting, such as Wi-Fi, whose use is constantly increasing. Finally, the Internet also relies on physical infrastructures. Despite the security protocols available at level 2 and above, how do we protect ourselves against failures or malicious use of equipment in the physical environment?

We will present an aspect of security at each level and will attempt to answer these few questions.

13.3.1. Application level security

Applications, in the framework of the Internet, comprise the flows that use the services of the underlying layers, starting with the transport layer (or the network layer in the case of raw IP packets). Traditional network applications are concerned, such as databases or network games (often using UDP for transport); so are the network services that have made the Internet a success, such as FTP for file transfer, SMTP and POP for messaging, DNS for machine naming, and others.

We will see the forms of security brought to this level. Firstly, we will look at application security with two examples: PGP and watermarking. Secondly, we will look at security extensions applied to an application protocol, in this case RSVP.

13.3.1.1. Application security

PGP (pretty good privacy) [IPG 04] is cryptographic software in the public domain. It is based on asymmetric cryptography (RSA) and on symmetric cryptography (IDEA with 128-bit keys). Its most common use is the protection of emails. It offers confidentiality by encryption and authentication by digital signature. PGP encrypts the text with IDEA under a secret random key that is different for every session. This session key is encrypted with the recipient's public key and transmitted with the message. Upon reception, PGP decrypts the session key with the private RSA key, then decrypts the data with the IDEA key thus obtained.
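The hybrid scheme PGP uses can be sketched as follows. Real PGP uses IDEA and RSA with proper padding; here a toy XOR keystream stands in for IDEA and textbook RSA with tiny primes stands in for real RSA, so everything below is illustrative only.

```python
import hashlib
import secrets

# Toy stand-ins: tiny-prime RSA (for the recipient's key pair) and an
# XOR keystream (for the symmetric bulk cipher). Illustrative only.
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))

def xor_cipher(key: bytes, data: bytes) -> bytes:
    ks = b""
    i = 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        i += 1
    return bytes(a ^ b for a, b in zip(data, ks))

def pgp_encrypt(message: bytes):
    session_key = secrets.token_bytes(16)      # fresh random key per session
    body = xor_cipher(session_key, message)    # bulk encryption (IDEA in PGP)
    # Encrypt the session key with the recipient's public key, byte by byte
    # (a real implementation wraps it as a single padded block).
    wrapped = [pow(b, e, n) for b in session_key]
    return wrapped, body

def pgp_decrypt(wrapped, body):
    session_key = bytes(pow(c, d, n) for c in wrapped)   # private key recovers it
    return xor_cipher(session_key, body)

wrapped, body = pgp_encrypt(b"meet at noon")
```

The design point is that the slow asymmetric operation is applied only to the short session key, while the fast symmetric cipher handles the message body.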

Before speaking of watermarking, let us recall steganography. Steganography enables information to be hidden inside a document such as an image. One of its interesting applications is watermarking, which no longer hides information in order to conceal it, but marks a document indelibly. The main purpose is the protection of copyright: information identifying the author is inscribed in the document in such a way that nobody else can claim it. This applies to images as well as to software. Every copy of the document then carries the same mark, that of the legal owner.

In these two examples, cryptographic properties are used to apply security mechanisms to application flows. The security services can be seen as a processing layer between the initial information and the TCP/IP layers.

13.3.1.2. Security extensions

RSVP [BRA 97] is a protocol that requests resource reservations in the network. It is a signaling protocol whose objective is to ensure QoS for data streams. RSVP messages are carried directly over IP or encapsulated in UDP.

Take two users, Alice and Bob. Alice wants to hold a videoconference with Bob with a certain QoS; she therefore initiates RSVP signaling in the network so that the adequate resources are allocated.

Here, various security needs appear. First there is the authentication of the entity or of the user. We have to make sure that it is really Alice's entity that is making the request, and that this entity communicates with network entities she trusts. If this is not the case, another entity can usurp the identity of Alice's machine and use the service for which Alice pays. Alternatively, the request from Alice's entity can be redirected to a malicious node. User authentication can be added to entity authentication to ensure the identification of the user of the improved service.

In RSVP, extensions were introduced for services other than authentication, namely integrity and anti-replay. In fact, a usurper could modify Alice's request so that it is never accepted by the network, for example by excessively altering the QoS parameters. The same usurper could also, after overhearing Alice's messages, replay them in an attempt to access the improved services for which Alice is paying.

To counter such risks, the IETF decided to enrich the structure of RSVP messages. A new RSVP object was created to ensure the authentication of entities (the origin of the data), but also to guard against possible modifications (integrity) of the other RSVP objects and against the replay of RSVP messages: this is the Integrity object [BAK 00].

Here is an illustration of the contents of this object, followed by a brief description of the attributes of interest.

Figure 13.1. The RSVP Integrity object


The keyed message digest is used to ensure the integrity of signaling messages by computing a digest of the entire RSVP message with the help of a key. The sequence number is used by the receiver to distinguish a new message from a replayed one.
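The receiver's use of the sequence number can be sketched with a minimal acceptance rule (the class name and the strictly-increasing policy are illustrative simplifications of the actual RSVP rules):

```python
class ReplayDetector:
    """Accept a message only if its sequence number is newer than the last seen."""

    def __init__(self):
        self.last_seq = -1          # no message received yet

    def accept(self, seq: int) -> bool:
        if seq > self.last_seq:     # genuinely new message
            self.last_seq = seq
            return True
        return False                # duplicate or replayed message

r = ReplayDetector()
```

A replayed message necessarily carries a sequence number the receiver has already seen, so it is rejected even though its keyed digest is valid.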

Note that the use of a keyed digest is not specific to the RSVP Integrity object. This technique is widely used to provide integrity and data origin authentication at any level.

13.3.2. Transport level security

In this section, we will give an overview of transport-level security. The most widespread protocol is the secure sockets layer (SSL) [FRE 96], proposed in 1994 by Netscape.

SSL is a client/server protocol that provides data origin authentication, confidentiality and integrity of the exchanged data. It is independent of the application protocols: it is an intermediate layer between application protocols such as HTTP (access to www servers), FTP or Telnet, and the TCP layer. SSL is composed of a key generator, hash functions, the RC4, MD5, SHA and DSS cryptographic algorithms, negotiation and session management protocols (the main one being the handshake protocol) and X509 certificates.

SSL is especially used with the HTTP application (the combination being called HTTPS). Its success is notably due to its ease of use and to its integration in all the browsers on the market. Many commercial companies have found here a means of communicating securely with their clients, notably to obtain payment for services rendered. Typical uses are the transfer of credit card numbers on online sales sites and the viewing of bank account information.

The best-known implementations over HTTP are in the Netscape and Apache Web servers, which, in France, used 40-bit RSA keys. Today, however, SSL is known to be vulnerable to brute force (exhaustive search) attacks when 40-bit keys are used. It is therefore recommended that 128-bit keys be used instead.
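The gap between 40-bit and 128-bit keys is easy to quantify. The attack rate below (one billion keys per second) is an assumed figure for illustration, not a measurement:

```python
rate = 10**9                          # assumed: one billion keys tried per second

seconds_40 = 2**40 / rate             # whole 40-bit keyspace: about 18 minutes
seconds_128 = 2**128 / rate           # whole 128-bit keyspace

print(seconds_40 / 60)                        # minutes for 40-bit keys
print(seconds_128 / (3600 * 24 * 365.25))     # years for 128-bit keys
```

Each added bit doubles the work, so the 88 extra bits multiply the attacker's effort by 2^88: the 128-bit sweep takes on the order of 10^22 years at this rate, which is why moving from 40 to 128 bits defeats exhaustive search.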

Today, a workgroup called transport layer security (TLS) [TLS 04] is active within the IETF to ensure the proper functioning, standardization and evolution of the SSL protocol. The latest evolution, known as TLS, takes SSL and improves it, but is not compatible with it.

SSL/TLS security is distinguished by its capacity to adapt to an application without requiring additional security functions in the application itself.

In terms of security services, we see once again the use of a message authentication code ensuring data origin authentication and integrity; the data can also be made confidential.

13.3.3. Network level security

SSL/TLS is not a layer that secures all applications. Moreover, this protocol was conceived for essentially one-off transactions, which are neither regular nor periodic. Although this security addresses current Internet needs, securing the entire stream, and doing so permanently, is sometimes necessary. It would also be preferable to do so without modifying the application itself. This is what the IP security (IPSec) protocols [IPS 04] propose.

IPSec provides security services to the IP layer by enabling the selection of security protocols (AH [KEN 98a] or ESP [KEN 98b]), the determination of algorithms (DES, 3DES, SHA1, MD5, etc.) and of the keys to be used for these services (through the IKE negotiation protocol). IPSec offers access control, connectionless data integrity, data origin authentication, protection against packet replay and confidentiality. These services, offered at the IP level, are independent of the higher-layer protocols (TCP, UDP, ICMP, etc.).

IPSec has two modes: transport mode and tunnel mode. In transport mode, the security services are applied to the data of the higher-layer protocols; in tunnel mode, they are applied to entire IP packets, which are then encapsulated in new IP packets.
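The difference between the two modes is essentially a difference of encapsulation, which can be sketched with nested records. The field names and addresses below are simplifications for illustration, not actual packet formats:

```python
# Conceptual sketch of the two IPSec modes; field names are simplifications.
original = {"ip_header": {"src": "10.0.0.1", "dst": "10.0.0.2"},
            "payload": "TCP segment"}

def transport_mode(packet):
    """Protect only the transport payload; the original IP header stays visible."""
    return {"ip_header": packet["ip_header"],
            "esp": {"protected": packet["payload"]}}

def tunnel_mode(packet, gw_src, gw_dst):
    """Protect the entire original packet and wrap it in a new IP header."""
    return {"ip_header": {"src": gw_src, "dst": gw_dst},
            "esp": {"protected": packet}}

t = tunnel_mode(original, "192.0.2.1", "198.51.100.1")
tm = transport_mode(original)
```

In tunnel mode, only the gateways' addresses travel in the clear and the original source and destination are hidden inside the protected part, which is why this mode is used between security gateways for VPNs.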

Among the notable shortcomings of IPSec are its parameter setting, key management and "too-full" security.

Parameter setting in IPSec is quite complex, given the multitude of data to be managed between two IPSec nodes A and B: among others, the authentication method, the mode, the protocols, the algorithms and the lifespan of the security associations. Today, even though any two vendors' IPSec stacks are meant to support certain mandatory parameters according to the IETF standards, there are always interoperability problems, either because the standards are not followed to the letter or because of an incorrect configuration of the IPSec nodes.

The key management problem touches not only IPSec but the entire field of data security in general. Whichever authentication method is used by the nodes, a secret must be installed on each node before an IPSec security association can be established. If we consider the X509 certificate, widely used these days following the example of SSL/TLS, a request must first be lodged with a CA to obtain a signed certificate. The difference from a typical use of SSL/TLS is that the client/server asymmetry no longer exists: each of the IPSec nodes must obtain its own certificate. With most Web servers using SSL/TLS, the server possesses a certificate and the connecting client obtains the server's certificate to authenticate it; the average client does not have to worry about obtaining a certificate from a CA. With IPSec, however, he will have to manage this task, since IPSec ensures mutual authentication, whereas in most cases with SSL/TLS only the server is authenticated by the client.

Up until now, we have had an overview of the security available in the TCP/IP layers. Even if, for many, the Internet can be reduced to these principal layers, its proper functioning also relies on the link and physical layers of the networks it crosses.

13.3.4. Link level security

Today, a user who connects to the Internet often does so through a local network, whether a traditional network (Ethernet) or a wireless one (Wi-Fi). Evidently, many other level 2 methods and technologies exist for accessing the network of networks. The goal here is solely to highlight certain threats to the level 2 technologies currently in use.

Ethernet (10 Mbps, Fast or Giga) is known for its broadcast transmission, by which information sent from a machine A to a machine B is visible to all other machines on the same Ethernet segment. Hence, if the information encapsulated in the Ethernet frame is not already secured, a malicious user can read everything that passes. Even if integrity is ensured at some higher level, the information remains readable; to ensure confidentiality, the end machines must perform encryption.

In traditional Ethernet technologies, anybody can plug a PC into the network to find out what is going on. From the moment when information such as the MAC address, the IP address and even the user's DNS name is known, the game is up. Domain control enables user authentication and ensures access security in both Windows and Linux environments. Having said that, there is a plethora of protocol analyzers (sniffers) that can read what is going on in an Ethernet network without connecting to the domain.

For wireless access, the Wi-Fi standard (wireless fidelity, or IEEE 802.11b) is becoming widespread in both corporate and private networks. It presents numerous security gaps. Above all, Wi-Fi communications can be intercepted by an external user. In the absence of a security policy, an intruder can access network resources simply by using the same wireless equipment. A first level of protection is available with WEP (wired equivalent privacy). However, WEP works with 64-bit encryption keys (128-bit keys are optional) that are far too easily cracked. Moreover, there is neither an encryption key distribution mechanism nor any real user authentication. In conclusion, WEP offers only weak security.

To improve WLAN security, the IEEE is moving toward encryption that reuses point-to-point protocol techniques. Many solutions are being considered to improve security in these networks. In the short term, the use of protocols such as SSL, SSH and IPSec is recommended. The Wi-Fi Alliance proposes a standard, called WPA (Wi-Fi protected access), based on 802.1x/EAP authentication mechanisms and on improved encryption (TKIP, temporal key integrity protocol). The IEEE 802.11i workgroup has standardized this environment and proposed an extension using the AES encryption algorithm (WPA2).

13.3.5. Physical level security

All the security reviewed up until now has been of a protocol nature. In this section, we will look at weaknesses linked to the physical environment. They apply to the Internet as well as to networks that do not use the TCP/IP protocols. This is not an exhaustive list of the weaknesses found at the physical level of the Internet, but a fairly representative example.

One weakness of networks, as of the electronics and computing fields in general, is electromagnetic radiation. Radiation gives an opportunity for external attacks, also referred to as side channel attacks. Globally, the Internet is composed of PCs linked by physical links through routers, and each of these components is liable to produce electromagnetic radiation. An interesting example is the PC screen. As stated by Quisquater and Samyde at SECI'02, "it is important to note that the light emitted by a screen contains useful information that is nothing other than the video signal [...]; it is possible to reconstruct a video image from the luminosity of a distant screen".

In the mid-1950s, the American military initiated the Tempest project [TEM 04], whose goal was to develop a technology capable of detecting, measuring and analyzing the electromagnetic radiation emitted by a computer in order to intercept data from a distance. Tempest thus makes it possible to capture a screen's electromagnetic emissions and recreate the image: distant image duplication. Everything that the spied-upon computer displays on its screen, Tempest captures and reconstructs in real-time. Today, the NSA could reportedly read a screen through walls and up to a kilometer away. It is no longer necessary to spy on TCP/IP packets to understand what the victim is doing.

To reduce the safe distance from a kilometer to a few meters, we can make sure that all peripherals are class B (class A, the most common class, offers hardly any protection) and that the cables are shielded (especially between the screen and the system unit); RJ45 cables can serve as antennae if they are poorly shielded.

13.4. Internet infrastructure security

This section attempts to answer the fundamental question: is the Internet reliable and secure? Today, we know that attacks on the Internet infrastructure can cause considerable damage, since its principal components, such as DNS, routers and communication links, maintain implicit trust relationships with one another.

[CHA 02a] gives an overview of the faults in the Internet infrastructure and the responses put forward. It classifies attacks against the Internet infrastructure into four categories: DNS hacking, routing table poisoning, poor packet processing and denial of service.

13.4.1. DNS

The domain name system (DNS) is the global, hierarchical, distributed directory that translates machine/domain names into numerical IP addresses on the Internet. This role has made DNS a critical Internet component: a DNS attack can affect a large portion of the Internet. Its distributed nature is synonymous with robustness, but also with several types of vulnerability.

To reduce response times, DNS servers store information in a cache. If a DNS server stores false information, the result is a poisoned cache: the aggressor can redirect traffic to a site under his control. Another possibility is a DNS server controlled by an adversary (a malicious server), which modifies the data sent to users. Malicious servers are used to poison caches or to mount DoS attacks on other servers. There is also the case of an aggressor who impersonates a DNS server and answers the client with false and/or potentially malicious information. This attack can redirect traffic to a site under the aggressor's control and/or launch a DoS attack on the client himself.
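Why a poisoned cache redirects traffic can be shown with a toy resolver. The domain names and addresses below are made up for illustration, and the cache has no TTL or validation, which is exactly the weakness being demonstrated:

```python
# Toy resolver cache showing why a poisoned entry redirects traffic.
cache = {}

def resolve(name: str, authoritative: dict) -> str:
    """Answer from the cache if possible; otherwise ask the authoritative source."""
    if name not in cache:                        # cache miss
        cache[name] = authoritative[name]
    return cache[name]

authoritative = {"bank.example": "203.0.113.10"}   # hypothetical zone data

resolve("bank.example", authoritative)             # legitimate answer cached
cache["bank.example"] = "198.51.100.66"            # attacker poisons the cache
redirected = resolve("bank.example", authoritative)
```

Once the false entry is in place, every client served from this cache is sent to the attacker's address without the authoritative server ever being consulted again; DNSsec counters this by letting the receiver verify a signature over the data.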

To answer these possible attacks, the IETF has added security extensions collectively known as DNSsec [DNS 04]. DNSsec provides authentication and integrity for DNS data. All the attacks mentioned are mitigated by the addition of data source authentication and of transaction and request authentication. Authentication is provided by the use of digital signatures, which the receiver can verify against the received data. To make DNSsec viable, secure server and secure client environments must be created.

13.4.2. Routing table

Routing tables are used to route packets through the Internet. They are maintained by information exchanges between routers. Poisoning attacks correspond to the malicious modification of routing tables, which can be done by modifying routing protocol update packets so as to plant false entries in the tables.

Work on routing protocols in the Internet has principally followed two directions: distance vector protocols (for example, RIP) and link state protocols (for example, OSPF). These two types of protocol exhibit different characteristics with regard to state information exchange and route calculation. In a distance vector protocol, each node periodically sends its routing distances to its neighbors. A neighbor, upon reception of a distance vector packet, updates its routing table if necessary. In a link state protocol, each node periodically floods the state of its links to all the nodes in the network. Upon reception of the link state updates (called link state advertisements, or LSAs, in OSPF), each router calculates the shortest path tree (SPT) with itself as the root. Distance vector protocols consume more bandwidth than link state protocols. Moreover, unlike link state protocols, they suffer from a lack of complete topology information at each node. This lack of knowledge permits a variety of attacks that are not possible against link state protocols.
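The distance vector update step can be sketched with one Bellman-Ford style merge. The node names and table layout are illustrative; the point is that a node accepts whatever distances its neighbor advertises, with no way to check them against the real topology:

```python
# One Bellman-Ford style distance vector update step (simplified sketch).
def dv_update(my_table, neighbor, neighbor_table, link_cost):
    """Merge a neighbor's advertised distances into our routing table."""
    changed = False
    for dest, dist in neighbor_table.items():
        candidate = link_cost + dist             # cost via this neighbor
        if dest not in my_table or candidate < my_table[dest][0]:
            my_table[dest] = (candidate, neighbor)   # (distance, next hop)
            changed = True
    return changed

table = {"A": (0, None)}                         # we are node A
dv_update(table, "B", {"A": 1, "C": 2}, link_cost=1)
```

Because the table entry for C is built purely from B's word, a malicious B could advertise a false distance of 0 to any destination and attract (or black-hole) that traffic, which is the vulnerability discussed below.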

Routing table poisoning can be carried out through link attacks and router attacks. Unlike router attacks, link attacks are similar for both types of routing protocol.

13.4.2.1. Link attacks

Link attacks occur when the adversary has access to the link. Routing information can be intercepted by an adversary and not propagated any further. However, such interruption is not effective in practice: there is generally more than one path between two nodes, so the victim can obtain the information from other sources. Moreover, most routing protocols acknowledge their updates, so interruptions are detected. However, if links are selectively interrupted, routing tables can become inconsistent across the network, which can create routing loops and DoS. Inconsistent routing tables can also be created if a router suppresses its updates but still sends acknowledgments.

Routing information packets can also be modified or fabricated by an adversary who has access to a network link. Digital signatures are used for the integrity and authenticity of the messages: the sender signs the packets with his private key, and all nodes can verify the signature using the sender's public key. Routing updates grow by the size of the signature (typically between 128 and 1,024 bits). This is a viable solution for link state routing protocols, because their messages are not transmitted frequently. It has also been proposed for distance vector protocols; however, these already consume an excessive amount of bandwidth, so adding a digital signature to every update is not much appreciated by researchers.

Routing table poisoning can also be carried out by replaying old messages: a malicious adversary retains routing updates and replays them later. This type of attack cannot be countered by digital signatures, because the updates are valid, only delayed. Sequence information is used to prevent this attack, in the form of sequence numbers or time stamps. An update is accepted if the sequence number in the packet is greater than or equal to that of the update previously received by the same router. This resolves the replay problem in general; however, when time stamps are used as sequencing information, packets within the same time period can still be replayed. No remedy has been found for this problem, but its effects are limited because it can only be exploited when a router sends multiple updates within the same time period.

13.4.2.2. Router attacks

A compromised router is called malicious. Router attacks differ according to the nature of the routing protocol.

In the case of a link state routing protocol, a malicious router can send incorrect updates about its neighbors, or remain silent even though the link state of a neighbor has changed. The proposed solutions are intrusion detection and techniques added to the protocol. With intrusion detection, a central attack analysis module detects attacks based on possible sequences of alarm events; however, such a module cannot be applied on a large scale. The other solution is to integrate the detection capability into the routing protocol itself, as proposed in SLIP (secure link state protocol) [CHA 02b]: a router believes an update only if it also receives a "confirmation" update from the other node supporting the suspicious link. However, this solution has also proved incomplete.

In the case of distance vector protocols, a malicious router can send false or dangerous updates concerning any node in the network, because the nodes do not have the complete topology of the network. If a malicious router creates a bad distance vector and sends it to all its neighbors, the neighbors accept the update because there is no way to validate it. Since the router itself is malicious, standard techniques such as digital signatures do not work. [SMI 97] proposes a validation scheme that adds extra information (the predecessor) to the distance vector update. Although it performs well in detecting inconsistencies, the algorithm has a few faults: it is incapable of detecting router attacks in which a malicious router changes the updates in an intelligent way.

13.4.3. Poor packet processing

In this type of attack, a malicious router mishandles packets, generating congestion, DoS or a reduction in the connection's bandwidth. The problem becomes difficult (or intractable) if the router selectively drops or misroutes packets, causing routing loops. This type of attack is very difficult to detect.

Adversaries can capture real data packets and mistreat them. This is an attack on the data transmission phase, unlike the poisoning attacks. Poor packet processing attacks have limited efficiency compared to routing table poisoning and DoS attacks: they are limited to a portion of the network, whereas poisoning attacks can affect the entire network. Nevertheless, this type of attack is possible and very difficult to detect.

In a similar fashion to poisoning attacks, an adversary can perform a link attack or a router attack.

13.4.3.1. Link attacks

An adversary who gains access to a link can interrupt, modify/fabricate or replicate data packets. Interrupting TCP packets can reduce the global bandwidth of the network: a source that perceives congestion reduces its transmission window, which lowers the connection bandwidth.

In [ZHA 98], the authors first showed that selectively dropping even a small number of packets can severely degrade TCP performance. They used packet suppression profiles and intrusion detection profiles to identify the attacks. These are the only types of solutions attempted against such attacks, and questions remain about scaling intrusion detection techniques to the whole Internet. As with routing updates, data packets can be modified or fabricated by adversaries. IPSec, the standard series of protocols adding security features to the Internet IP layer, provides authentication and encryption for data packets throughout the Internet. To counter replay attacks, IPSec integrates a small protocol called the anti-replay window protocol. It provides an anti-replay service by including a sequence number in each IPSec message and by using a sliding window.
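The sliding-window mechanism can be sketched as follows. This is an illustrative model of the receiver-side logic (window size and bitmap layout are simplified choices, not the normative IPSec specification): packets above the window advance it, packets inside the window are accepted once, and packets below it or already seen are rejected.

```python
class AntiReplayWindow:
    """Sketch of an IPSec-style anti-replay sliding window.

    Each packet carries a monotonically increasing sequence number. The
    receiver keeps a bitmap of the last `size` sequence numbers; packets
    below the window or already received are rejected as replays.
    """
    def __init__(self, size=64):
        self.size = size
        self.top = 0          # highest sequence number accepted so far
        self.bitmap = 0       # bit i set -> sequence (top - i) already received

    def accept(self, seq):
        if seq <= 0:
            return False
        if seq > self.top:                        # advances the window
            shift = seq - self.top
            self.bitmap = ((self.bitmap << shift) | 1) & ((1 << self.size) - 1)
            self.top = seq
            return True
        offset = self.top - seq
        if offset >= self.size:                   # too old: fell below the window
            return False
        if self.bitmap & (1 << offset):           # duplicate: replay detected
            return False
        self.bitmap |= 1 << offset                # first arrival inside the window
        return True
```

A replayed sequence number is rejected the second time it is seen, while a late packet that is still inside the window is accepted once.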

13.4.3.2. Router attacks

Malicious routers can mount all of the link attacks. Moreover, they can misroute packets. Malicious routing can steer traffic toward heavily loaded links, or can even serve as a DoS attack by directing an uncontrollable number of packets toward a victim. This attack is cited in Cisco’s white papers [CIS 00], where packets received and emitted on the same interface of a router are dropped. This simple filtering scheme defeats a naïve misrouting attack. However, a malicious router can create routing loops, which remains an open problem.

13.4.4. Denial of service (DoS)

DoS attacks target specific machines with the intention of crashing the system or provoking a denial of service. They are carried out by individuals or groups, often for personal notoriety. They become extremely dangerous and difficult to avoid when a group of aggressors coordinates them; this variant is called the distributed DoS (DDoS). Note that a DoS can also be the consequence of routing table poisoning and/or poor packet processing.

In general, DoS attacks come in two types: ordinary and distributed. In an ordinary DoS attack, an aggressor uses a tool to send packets to the target system. These packets are crafted to put the target system out of service or to overwhelm it, often forcing a reboot. The source address of these packets is often spoofed, making it difficult to locate the real source of the attack. In a DDoS attack, there may still be only one aggressor, but the effect of the attack is greatly multiplied by the use of attack servers known as agents7.

Several DoS attacks are well known. UDP flooding sends UDP packets with spoofed return addresses: a hacker links the character generation (chargen) service of one system to the UDP echo service of another. In a TCP/SYN flood, the hacker sends a large quantity of SYN packets to a victim, with spoofed return addresses. The victim queues SYN-ACKs but can never complete the handshakes, because no ACK ever comes back from the spoofed addresses. Finally, in an ICMP/Smurf attack, the hacker broadcasts ICMP ping requests with a spoofed return address toward a large group of machines on a network. The machines send their responses to the victim, whose system is submerged and can no longer provide any service.
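Why the SYN flood is so effective can be seen with a toy model of the victim’s backlog of half-open connections. All parameters below (backlog size, rates, timeout) are illustrative, not taken from the chapter:

```python
def syn_flood_model(backlog=128, attack_rate=200, legit_rate=10, timeout=5, steps=30):
    """Toy model of a TCP/SYN flood: half-open connections occupy slots in
    a finite backlog until they time out. Spoofed SYNs never complete the
    handshake, so the backlog stays full and legitimate SYNs are refused.
    Returns the fraction of legitimate SYNs rejected.
    """
    half_open = []                # expiry times of pending half-open entries
    rejected = legit = 0
    for t in range(steps):
        half_open = [e for e in half_open if e > t]      # expire timed-out entries
        for _ in range(attack_rate):                     # spoofed SYNs arrive first
            if len(half_open) < backlog:
                half_open.append(t + timeout)            # waits for an ACK that never comes
        for _ in range(legit_rate):                      # then legitimate SYNs
            legit += 1
            if len(half_open) >= backlog:
                rejected += 1                            # backlog full: connection refused
    return rejected / legit
```

With an attack rate exceeding the backlog size, the model rejects every legitimate connection attempt; with no attack traffic, none are rejected. Real TCP stacks add mitigations (e.g. SYN cookies) that this toy model deliberately omits.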

The proposed solutions against DoS attacks fall into preventive and reactive categories. Preventive techniques try to detect the DoS before it does damage, relying on prior information to filter packets. Known techniques include unicast reverse path verification, control of the flow of SYN packets, and verification of incoming and outgoing interfaces. Reactive techniques attempt to identify the adversary after the attack has been carried out.
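Unicast reverse path verification can be sketched briefly. In this minimal model (the routing table and interface names are hypothetical), a packet is accepted only if the route back to its source address points out the interface the packet arrived on, which discards many spoofed-source packets:

```python
from ipaddress import ip_address, ip_network

def urpf_accept(routing_table, src, in_iface):
    """Sketch of strict unicast reverse path verification: accept a packet
    only if the longest-prefix-match route toward its source address uses
    the interface the packet arrived on.
    `routing_table` is a list of (prefix, interface) pairs.
    """
    best = None
    for prefix, iface in routing_table:
        net = ip_network(prefix)
        if ip_address(src) in net:
            # keep the most specific (longest) matching prefix
            if best is None or net.prefixlen > best[0].prefixlen:
                best = (net, iface)
    return best is not None and best[1] == in_iface
```

A packet whose source address has no route back, or whose route points out a different interface, is treated as spoofed and dropped.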

This is an active field of research because current identification techniques are completely manual and can stretch over several months. Current solutions consist of testing links hop by hop back to the source, logging data packets in key routers (a very heavy solution), ICMP traceback and IP traceback. In ICMP traceback, each router samples a packet with a low probability (1/20,000); when a packet is sampled, the router sends an ICMP traceback message to the destination. In IP traceback, each router marks a packet with a fixed probability. In both cases, if there is a DoS, the destination can trace the packets back to the source, based on the ICMP messages or on the information in the marked packets.

13.5. Internet access infrastructure security

We have seen the security that can be implemented for user data and for the Internet infrastructure. In this last section, we examine the risks incurred when an entity connects to the Internet. The term entity covers a PC connected with a traditional V.92 modem, a wireless PC linked to a hot-spot, or a company network linked through an access router.

The open nature of the Internet is both a quality and a defect. Once we are connected, other people, whether ill-intentioned or not, can relatively easily access our software and hardware resources. Without any access control, a hacker could see, consult and, in the worst case, destroy the content of our entity. Firewalls were created to counter these accesses. These systems have since evolved to detect intrusions more intelligently, with intrusion detection systems (IDS). Also, with the growing number of “always connected” entities, the virus plague requires more and more serious consideration8.

13.5.1. Access control by firewall

The firewall is a hardware or software device that protects an internal network (or a PC) from external attacks. It is the unique and mandatory point of passage between the internal network and the Internet.

Figure 13.2. Firewall

ch13-fig13.2.gif

Initially, firewalls behaved as authorized address managers; they then evolved to control application accesses. The following section deals with access control in general. Today, firewalls have been adapted to other security needs, of which we will give an overview later on.

13.5.1.1. Access control

Access control makes it possible to accept or reject connection requests, but also to examine the nature of the traffic and validate its content, thanks to filtering mechanisms. Filtering can be static or dynamic.

Static filtering inspects IP packets (header and data) in order to extract the quadruplet “source address, source port, destination address, destination port”, which identifies the current session. This makes it possible to determine the nature of the requested service and to decide whether the IP packet should be accepted or rejected. The filter is configured through an access control list (ACL), formed by concatenating the rules to apply. For example, a first rule indicates that all machines can connect to the Web server on port 80, and the following rule authorizes the Web server to respond to all clients of the service (on a port greater than 1,024). These two rules allow all machines to access the Web. At the end of the list, there must be a rule prohibiting all other communication, for every service and machine, at the entrance or exit of the protected network: the principle consists of prohibiting everything that is not authorized.

This example concerns a TCP service. With TCP, the distinction between incoming and outgoing calls relies on the ACK bit in the header, which characterizes an established connection. This distinction does not exist for UDP, where it is impossible to tell a valid packet from an attack attempt on the service. The same problem arises with applications that answer clients’ requests on dynamically allocated ports (for example, FTP). It is generally impossible to handle this type of protocol satisfactorily without opening access to a larger number of ports, and therefore making the network even more vulnerable.
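The ACL just described can be sketched in a few lines. The addresses, rule layout and rule fields below are illustrative, not a real firewall syntax; the last catch-all rule implements “prohibit everything that is not authorized”:

```python
from ipaddress import ip_address, ip_network

# Hypothetical ACL: (protocol, source net, source ports, dest net, dest ports, verdict).
# Rules are evaluated in order; the final rule denies everything else.
RULES = [
    # any client -> web server 192.0.2.10 on port 80
    ("tcp", "0.0.0.0/0", (0, 65535), "192.0.2.10/32", (80, 80), "permit"),
    # web server replies to client ports > 1,024
    ("tcp", "192.0.2.10/32", (80, 80), "0.0.0.0/0", (1025, 65535), "permit"),
    # implicit final rule: deny all
    ("any", "0.0.0.0/0", (0, 65535), "0.0.0.0/0", (0, 65535), "deny"),
]

def filter_packet(proto, src, sport, dst, dport, rules=RULES):
    """Return the verdict of the first rule matching the packet's quadruplet."""
    for r_proto, r_src, r_sports, r_dst, r_dports, verdict in rules:
        if r_proto != "any" and r_proto != proto:
            continue
        if ip_address(src) not in ip_network(r_src):
            continue
        if not (r_sports[0] <= sport <= r_sports[1]):
            continue
        if ip_address(dst) not in ip_network(r_dst):
            continue
        if not (r_dports[0] <= dport <= r_dports[1]):
            continue
        return verdict
    return "deny"
```

Web requests and their replies are permitted, while anything not explicitly authorized, such as an unsolicited UDP packet, falls through to the final deny rule. Note that this static filter cannot express “only replies to connections we initiated”, which is exactly the UDP weakness discussed above.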

Dynamic filtering follows the principle of static filtering, but its efficiency extends to virtually all currently used protocols (TCP, UDP, etc.), thanks to state tables maintained for each established connection. To cope with the absence of a circuit in UDP and with the dynamic allocation of ports by certain services, dynamic filtering examines information in detail up to the application layer. It interprets the particularities of each application and dynamically creates rules for the duration of the session.
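The state-table idea can be sketched as follows (a minimal Python model; addresses and ports are illustrative, and real stateful firewalls also track timeouts and protocol details): outbound traffic creates an entry, and an inbound packet is accepted only if it is the reverse of a tracked flow, which solves the UDP case that static filtering cannot handle.

```python
class StatefulFilter:
    """Sketch of dynamic (stateful) filtering: outbound connections create
    an entry in a state table, and inbound packets are accepted only if
    they match the reverse of a tracked flow.
    """
    def __init__(self):
        self.table = set()   # established flows, stored as 5-tuples

    def outbound(self, proto, src, sport, dst, dport):
        # record the flow so that the reply direction becomes valid
        self.table.add((proto, src, sport, dst, dport))
        return "permit"

    def inbound(self, proto, src, sport, dst, dport):
        # an external packet is valid only as the reverse of a known flow
        if (proto, dst, dport, src, sport) in self.table:
            return "permit"
        return "deny"
```

An unsolicited UDP packet is rejected, but the same packet is accepted once a matching outbound request (for example, a DNS query) has created the state entry.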

13.5.1.2. Other functionalities

Proxies (application relays) are now integrated into firewalls. A proxy sits between clients and servers for specific surveillance and isolates the network from the outside. All data is routed through it, and it decides which action to take: it verifies the integrity of the data and authenticates the exchanges.

User authentication is indispensable for reliable access control based on user identity. The most commonly used mechanism associates a password with a person’s identifier. However, this password is often sent in the clear, which is not acceptable on the Internet. To remedy this, firewalls propose solutions based on encryption.

Virtual private networks (VPN) use the Internet to securely interconnect the networks of affiliates and/or partner companies. To mitigate the lack of confidentiality on the Internet and to secure the exchanges, solutions based on the SSL protocol or on IPSec are integrated into the firewall. Companies can thus interconnect their different sites and provide access control for their mobile employees.

Figure 13.3. VPN firewall type

ch13-fig13.3.gif

13.5.2. Intrusion detection

At the entrance of a network there is often a simple firewall that blocks unused access paths. However, a firewall does not sufficiently filter the requests that pass through the accesses left open. For example, hackers often use the open port 80 (HTTP) to infiltrate poorly protected systems. A true surveillance system, permanently checking the nature of network requests, must therefore be considered. The role of the IDS is to spot the intruder in the traffic transiting through the ports left open by the firewall. Administrators are often insufficiently aware of the problem, and company networks remain under-equipped.

13.5.2.1. IDS types

13.5.2.1.1. The network IDS

The most traditional approach is the network IDS. Its role is to intercept each request, analyze it, and let it continue along its path only if it does not match an attack referenced in a database.

A quality network IDS has an exhaustive file of attack signatures, centralized and updated by the vendor; the latest update must be downloaded regularly. The other important point is the placement of the network IDS: a probe placed at a bad location can be inefficient. A single filtering system is often not enough: the more complex a network, the more vulnerabilities it presents, and the more difficult it logically becomes to protect. However, each added network IDS is expensive: these are machines greedy for resources. The flows are very heavy, and very powerful machines, or specialized equipment, must be dedicated to the network IDS.

The quality of the detection system varies from one product to another. Since the first versions, the technology has progressed and the products have become more efficient and more relevant. Performance has improved by targeting the detection methods better: today, each block of data is no longer analyzed from top to bottom, as network IDS have learned to target strategic points. The pertinence of intrusion detection systems has also changed. A network IDS no longer signals every dangerous character string, only those in a position to be exploited. If a security expert sends a colleague a mail containing an attack code, the network IDS will not raise an alert; however, if the code is contained in a request whose goal is to saturate a server’s memory, the network IDS will signal it right away.

13.5.2.1.2. The host IDS

The network IDS does not guarantee a 100% security level on its own. To approach that level, another system must be implemented, one that observes the behavior of each block of data and signals everything that seems unusual. This is the role of the host IDS, a probe placed on each system that needs to be protected.

The host IDS detects known and unknown anomalies: if an attack slips through the meshes of the net, the probe will spot it. The host IDS takes a snapshot of the system at a given time, defining everything that is legitimate; everything outside this frame and the system’s habits is considered an attack. For example, a modification of the registry will be blocked and will raise an alert. A host IDS probe is less expensive than a network IDS but must be placed on each machine to be supervised, so it is often reserved for the machines requiring the strongest protection. Host probes are also less greedy for resources than a network IDS: they come as software and are not integrated into a server or specialized equipment.
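The snapshot-and-compare principle can be illustrated with a toy file-integrity checker (a deliberately minimal sketch; a real host IDS also monitors the registry, processes and system calls): the baseline records a hash of every monitored file, and any later deviation, whether a created, deleted or modified file, raises an alert.

```python
import hashlib
import os

def snapshot(paths):
    """Baseline snapshot for a toy host IDS: hash each monitored file."""
    return {p: hashlib.sha256(open(p, "rb").read()).hexdigest()
            for p in paths if os.path.exists(p)}

def detect(baseline, paths):
    """Compare the current state to the baseline; anything outside the
    recorded habits (created, deleted or modified file) raises an alert."""
    current = snapshot(paths)
    alerts = []
    for p in paths:
        if p not in baseline and p in current:
            alerts.append(("created", p))
        elif p in baseline and p not in current:
            alerts.append(("deleted", p))
        elif baseline.get(p) != current.get(p):
            alerts.append(("modified", p))
    return alerts
```

Nothing is reported as long as the system matches its baseline; tampering with a monitored file immediately produces a “modified” alert.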

13.5.2.2. Administration problems

The level of maturity of intrusion detection systems is far from optimal. Administrators are easily drowned under a mass of largely irrelevant alerts, and we end up seeing IDS that are no longer administered and have fallen into disuse. This reality is difficult to accept when we know that the price of entry is around tens of thousands of Euros. To avoid such a loss, we must anticipate that an IDS requires daily administration; it is pointless to invest in one without the possibility of taking the time to attend to it every day. The IDS needs regular supervision, which is not always provided: signatures are not always updated and detections are not always investigated. What makes this task more bearable is to filter the blocks of data as much as possible before they even reach the IDS. The detection system must be the last layer of the anti-intrusion filter; the other layers must stop the trivial attacks. To do this, optimally configured routers and firewalls must be carefully deployed. The rate of alerts that merit signaling will thus be greatly reduced.

13.5.3. Viruses

13.5.3.1. Terminology

A virus is a program that performs a damaging action such as the modification or destruction of files, the erasing of the hard drive, the lengthening of processing times, or generally worrisome visual or audio manifestations. This action can be continuous, sporadic, periodic, or take place on a precise date9. Some variations are the worm, the logic bomb, the Trojan horse and the macro virus. A worm is a network virus. A logic bomb is a device that contains a launch condition, such as a system date. A Trojan horse presents itself as an executable file (utility, game, etc.) that contains an insidious feature capable of causing damage. Macro viruses are linked to the Office suite: these applications include macro commands whose initial goal is to automate repetitive tasks, a goal that has been diverted to destroy or modify files on the infected machine10.

13.5.3.2. Reconnaissance

Many symptoms can reveal an infection: for a file, a change in size, creation date or checksum; for a program, a slow loading time, a different screen appearance, surprising results, etc. The amount of available memory can also be reduced with respect to what we normally see. The regular user will need specialized software capable of a fine and complex analysis of the contents of the memory and of the hard drive.

One category of virus detectors operates on a collection of signatures. Simpler viruses contain a series of instructions unique to them but identifiable, called a signature. A catalog can be established and grown every time a new virus appears. The programs that exploit this method are called scanners. They give very few false alarms11. The drawback of this method is the need for periodic catalog updates, which imposes on the user a paid subscription with the antivirus vendor.
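The scanner principle can be sketched in a few lines. The byte patterns and names below are entirely made up for illustration; real signature databases are far larger and use more elaborate matching:

```python
# Hypothetical signature catalog: byte pattern -> virus name.
SIGNATURES = {
    b"\xeb\xfe\x90\x90": "Toy.Spinner",
    b"DELTREE C:\\":     "Toy.Wiper",
}

def scan(data, signatures=SIGNATURES):
    """Sketch of a signature scanner: report every cataloged pattern found
    in the file contents. Few false alarms, but the catalog must be kept
    up to date, and polymorphic code evades this approach entirely.
    """
    return [name for sig, name in signatures.items() if sig in data]
```

A clean file produces no match, while a file containing a cataloged pattern is flagged; this also makes footnote 11 concrete, since a polymorphic virus that rewrites its own bytes never matches the stored pattern.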

Another method has the advantage of not needing any updates. It is based on heuristic algorithms that see, in certain sequences of instructions, the possibility of a virus. The probability of false alarms is greater than with scanners, but the efficiency is permanent, at least until the appearance of a new general form of attack.

13.5.3.3. Prevention

To protect ourselves against viruses, the recommended measures are controlling newly installed applications, write-protecting storage media when they do not need to be written to, and keeping the antivirus up to date.

Myths, hoaxes and urban legends are falsely presented as security alerts. Their goal is to create confusion around security and to provoke cascading email transmission that overloads email in-boxes (spamming)12. Manipulations of this type have led organizations to sign their messages. It is also easy to verify that a virus alert is a hoax by consulting specialized sites13. In general, only authenticated sources should be trusted. In all cases, the information should be verified before propagating the message, especially to a distribution list; by being credulous, we become participants in the malice.

The platform hosting the most macro viruses is Microsoft Word for Windows. Viruses propagate easily because .doc files contain both the text and all the associated macros. The first precaution to take is to deactivate the execution of macro commands when receiving a Word document (or any other Office suite document), especially if it comes from an unknown source. Fabricating a macro virus is within the reach of any neophyte, and a large quantity of new macro viruses is created every day: the antivirus signature files must therefore be updated at least once a month.

13.6. Summary and conclusion

The Internet is historically of an open nature. At its beginnings, the goal was to link a few computers to each other. Mutual confidence was implicit.

Today, millions of machines use the Internet, and mutual confidence between all communicating parties is no longer present. Hacking is more and more common, with goals ranging from profit to personal notoriety. Security therefore comes into play.

First and foremost, security is not a single service but a notion that groups various services. Among the most commonly used are integrity, confidentiality and non-repudiation; the corresponding, most widespread mechanisms are the message authentication code, encryption and the digital signature.

For an average user, unless the applications used were designed to support secure traffic, protecting data demands an analysis of one’s security needs, which is not within everybody’s reach. SSL is the exception: in most cases, the user does not have to do anything. IPSec is standardized today, but its use and configuration are still complex. Underneath the TCP/IP layers, the transport network also has its own problems, and we must not forget the flaws of the physical layer: these can call into question all the security services grafted onto the higher levels.

The Internet infrastructure is not infallible. A secure DNS is far from being deployed across the entire Internet. Routing protocols present enough flaws to make routing table poisoning attacks possible; in this field, router attacks demand research attention, because very few efforts have been made in this direction. Poor packet processing by a malicious router is an attack that is very difficult to detect: the interruption of packets still poses a problem, and misrouting that creates routing loops remains an open problem. Certain DoS attacks are well known and detectable, but finding the aggressor remains difficult; IP address spoofing is a major obstacle to locating it.

Finally, we saw that installation security is still maturing. Firewalls are necessary but not sufficient. IDS are effective and raise the level of protection of company networks, but there is still progress to be made, and their big problem remains administration. As for viruses, present before the advent of the Internet, they keep evolving and renewing themselves every day, to the point where a monthly update of the antivirus software can be insufficient these days.

13.7. Bibliography

[BAK 00] BAKER F., LINDELL B., TALWAR M., “RSVP Cryptographic Authentication”, IETF RFC, 2747, January 2000.

[BRA 97] BRADEN R., ZHANG L., BERSON S., HERZOG S., JAMIN S., “Resource Reservation Protocol (RSVP) – Version 1 Functional Specification”, IETF RFC, 2205, September 1997.

[CHA 02a] CHAKRABARTI A., MANIMARAN G., “Internet Infrastructure Security: A Taxonomy”, IEEE Network, p. 13-21, November/December 2002.

[CHA 02b] CHAKRABARTI A., MANIMARAN G., “Secure Link State Routing Protocol”, EcpE, Iowa State University, 2002.

[CIS 00] CISCO, Strategies to protect against Distributed Denial Of Service Attacks (DDoS), DoS Cisco White Paper, February 2000.

[DNS 04] DOMAIN NAME SYSTEM SECURITY WORKING GROUP, IETF, http://www.ietf.org/html.charters/old/dnssec-charter.html.

[FRE 96] FREIER A.O., KARLTON P., KOCHER P.C., The SSL Protocol Version 3.0, Netscape Communications, 18th November 1996.

[IPG 04] International PGP Homepage, http://www.pgpi.org.

[IPS 04] IP SECURITY PROTOCOL WORKING GROUP, IETF, http://www.ietf.org/html.charters/ipsec-charter.html.

[KEN 98a] KENT S., ATKINSON R., “IP Authentication Header”, IETF RFC, 2402, November 1998.

[KEN 98b] KENT S., ATKINSON R., “IP Encapsulating Security Payload (ESP)”, IETF RFC, 2406, November 1998.

[NF 90] FRENCH STANDARD, Systèmes de traitement de l’information – Interconnexion de systèmes ouverts – Modèle de référence de base – Part 2: Architecture de sécurité, ISO 7498-2, September 1990.

[PKI 04] PUBLIC KEY INFRASTRUCTURE WORKING GROUP (X.509), IETF, http://www.ietf.org/html.charters/pkix-charter.html.

[SMI 97] SMITH B.R., MURTHY S., GARCIA-LUNA-ACEVES J.J., “Securing Distance Vector Routing Protocols”, Proc. SNDSS, February 1997.

[TEM 04] The complete, unofficial TEMPEST information page, http://www.eskimo.com/~joelm/tempest.html.

[TLS 04] TRANSPORT LAYER SECURITY WORKING GROUP, IETF, http://www.ietf.org/html.charters/tls-charter.html.

[ZHA 98] ZHANG X. et al., “Malicious Packet Dropping: how it might impact the TCP performance and how we can detect it”, Symposium Security Privacy, May 1998.


1 Chapter written by Vedat YILMAZ.

1 If the server that sends the information uses SSL, you will see at the bottom left of the browser a small key or lock that automatically appears.

2 RSA keys are part of the asymmetrical cryptography domain and are especially used during the entity authentication phase.

3 The use of SSL/TLS requires secure socket manipulation functions.

4 IPSec appears to be a unique protocol, but it is really a series of protocols.

5 We can also listen to a network without being connected.

6 The average degree of each node is relatively high (around 3.7).

7 To have an idea of the range of a DDoS, consider that more than 1,000 systems were used at different times in a concerted attack on a single server at the University of Minnesota. The attack not only put this server out of service but also denied access to a large network of universities.

8 The defaults in the physical layer also apply to the Internet access infrastructure. Tempest attacks can be taken into account in this section. We will not return to this subject as it was already introduced in the preceding section.

9 For example, the Michelangelo virus only launches on March 6th.

10 A known macro virus, I love you, led Microsoft to modify Outlook 98.

11 Scanners are inefficient against polymorphic viruses because these viruses can modify their appearance.

12 This practice was inaugurated with the false announcement of the Good Times virus, presented as very dangerous. This false alert continues, several years after its inception, to circulate over the Internet, sometimes with variations (Penpal Greetings, AOL4FREE, PKZIP300, etc.).

13 For example, Hoaxbuster, http://www.hoaxbuster.com.