• A password should be easy for the user to remember but difficult for a perpetrator to guess.
• Initial passwords may be allocated by the security administrator or generated by the system itself. When the user logs on for the first time, the system should force a password change to improve confidentiality.
• If the wrong password is entered a predefined number of times, typically three, the log-on ID should be automatically deactivated, either permanently or at least for a significant period of time.
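To make the lockout rule concrete, here is a minimal sketch in Python of a failed-logon counter. The threshold of three attempts comes from the text; the lockout duration is an assumed policy value, and the in-memory dictionaries stand in for whatever store a real system would use.

```python
# Minimal sketch of a failed-logon lockout counter (illustrative only).
# LOCKOUT_SECONDS is an assumed policy value, not taken from the text.
import time

THRESHOLD = 3           # typical predefined number of attempts
LOCKOUT_SECONDS = 1800  # a "significant period of time", chosen for illustration

failed = {}   # log-on ID -> consecutive failure count
locked = {}   # log-on ID -> time the lockout was imposed

def attempt_logon(user_id: str, password_ok: bool) -> bool:
    """Return True if the logon succeeds, False otherwise."""
    if user_id in locked:
        if time.time() - locked[user_id] < LOCKOUT_SECONDS:
            return False                  # ID is still deactivated
        del locked[user_id]               # lockout window has elapsed
    if password_ok:
        failed.pop(user_id, None)         # reset the counter on success
        return True
    failed[user_id] = failed.get(user_id, 0) + 1
    if failed[user_id] >= THRESHOLD:
        locked[user_id] = time.time()     # deactivate the log-on ID
    return False
```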
Token Devices, One-Time Passwords
A two-factor authentication technique, such as a microprocessor-controlled smart card, generates one-time passwords that are good for only one log-on session. Users enter this password along with a password they have memorized to gain access to the system. This technique involves something you have (a device subject to theft) and something you know (a personal identification number). Such devices gain their one-time password status because of a unique session characteristic (e.g., ID or time) appended to the password.
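As a hedged sketch of how such a token might derive its one-time password from a time-based session characteristic, the following Python follows the HOTP/TOTP pattern (RFC 4226/6238); actual vendor devices vary, and the function names here are illustrative.

```python
# Sketch of a time-based one-time password in the spirit of the token
# devices described above; real devices and algorithms vary by vendor.
import hmac, hashlib, struct, time

def one_time_password(device_secret: bytes, window_seconds: int = 30) -> str:
    """Derive a 6-digit code from the current time window (something you have)."""
    counter = int(time.time() // window_seconds)        # unique session characteristic
    mac = hmac.new(device_secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                             # dynamic truncation, as in RFC 4226
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % 1_000_000
    return f"{code:06d}"

def authenticate(pin_entered: str, code_entered: str,
                 stored_pin: str, secret: bytes) -> bool:
    # Two factors: the memorized PIN (something you know) plus the token code.
    return hmac.compare_digest(pin_entered, stored_pin) and \
           hmac.compare_digest(code_entered, one_time_password(secret))
```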
Biometrics
Biometric ACs are the strongest means of authenticating a user’s identity, relying on a unique, measurable attribute or trait of a human being. This control restricts computer access based on a physical (something you are) or behavioral (something you do) characteristic of the user.
Management of Biometrics
Management of biometrics should address effective security for the collection, distribution, and processing of biometric data.
Management should develop and approve biometric information management and security (BIMS) policy. The auditor should use the BIMS policy to gain a better understanding of the biometric systems in use.
Single Sign-On (SSO)
Users normally require access to a number of resources during the course of their daily routine. For example, users would first log into an operating system and thereafter into various applications. For each operating system, application, or other resource in use, the user must provide a separate set of credentials to gain access. The resulting proliferation of passwords strains users’ ability to remember them all and increases the chance that a user will write passwords down on or near the workstation or work area, thereby raising the risk of a security breach within the organization.
To address this situation, the concept of SSO was developed. SSO can generally be defined as the process for consolidating all organization platform-based administration, authentication, and authorization functions into a single centralized administrative function. This function would provide the appropriate interfaces to the organization’s information resources, which may include:
• Client–server and distributed systems
• Mainframe systems
• Network security including remote access mechanisms
The SSO process begins with the first instance where the user credentials are introduced into the organization’s IT computing environment. The information resource or SSO server handling this function is referred to as the primary domain. Every other information resource, application, or platform that uses those credentials is called a secondary domain.
The authorization process of AC often requires that the system be able to identify and differentiate among users. For example, AC is often based on least privilege, which refers to the granting to users of only those accesses required to perform their duties.
Access rules (authorization) specify who can access what. Access should be on a documented need-to-know and need-to-do basis by type of access.
Having computer access does not always mean unrestricted access. Computer access can be set to many differing levels. When IS auditors review computer accessibility, they need to know what can be done with the access and what is restricted. For example, access restrictions at the file level generally include the following:
• Read, inquiry, or copy only
• Write, create, update, or delete only
• Execute only
• A combination of the above
Authentication of an individual’s identity is a fundamental component of physical and logical AC processes. When an individual attempts to access security-sensitive buildings, computer systems, or data, an AC decision must be made. An accurate determination of identity is needed to make sound AC decisions.
A wide range of mechanisms is employed to authenticate identity, utilizing various classes of identity credentials. For physical access, individual identity has traditionally been authenticated by use of paper or other nonautomated, hand-carried credentials, such as driver’s licenses and badges. Access authorization to computers and data has traditionally been authenticated through user-selected passwords. More recently, cryptographic mechanisms and biometric techniques have been used in physical and logical security applications, replacing or supplementing the traditional credentials.
The strength of the authentication that is achieved varies, depending on the type of credential, the process used to issue the credential, and the authentication mechanism used to validate the credential. This document establishes a standard for a PIV system based on secure and reliable forms of identification credentials issued by the federal government to its employees and contractors. These credentials are intended to authenticate individuals who require access to federally controlled facilities, information systems, and applications.
Systems and communications protection
Network Layer Security
There are multiple methods and techniques employed by organizations for deploying and enhancing network security. All methods provide differing levels of confidentiality, integrity, and availability depending on their technology, location on the network, and method of installation. We will highlight several of these methods and describe how to review, examine, and evaluate each, to ensure that they are functional, operational, and secure. We will start with VPNs, and then discuss wireless networking, IDS, encryption, and firewalls.
VPN
A VPN is a virtual network, built on top of existing physical networks, which can provide a secure communications mechanism for data and other information transmitted between networks. Because a VPN can be used over existing networks, such as the internet, it can facilitate the secure transfer of sensitive data across public networks. This is often less expensive than alternatives such as dedicated private telecommunications lines between organizations or branch offices. We will examine the two most common types of VPNs utilized in today’s networks in order to properly assess them: IPSec and SSL-based VPNs. “IPsec has emerged as the most commonly used network layer security control for protecting communications, while SSL is the most commonly used transport layer security control. Depending on how IPsec and SSL are implemented and configured, both can provide any combination of the following types of protection:
•Confidentiality. IPsec and SSL can ensure that data cannot be read by unauthorized parties. This is accomplished by encrypting data using a cryptographic algorithm and a secret key—a value known only to the two parties exchanging data. The data can only be decrypted by someone who has the secret key.
•Integrity. IPsec and SSL can determine if data has been changed (intentionally or unintentionally) during transit. The integrity of data can be assured by generating a message authentication code (MAC) value, which is a keyed cryptographic checksum of the data. If the data is altered and the MAC is recalculated, the old and new MACs will differ.
•Peer Authentication. Each IPsec endpoint confirms the identity of the other IPsec endpoint with which it wishes to communicate, ensuring that the network traffic and data is being sent from the expected host. SSL authentication is typically performed one-way, authenticating the server to the client; however, SSL VPNs require authentication for both endpoints.
•Replay Protection. The same data is not delivered multiple times, and data is not delivered grossly out of order.
•Traffic Analysis Protection. A person monitoring network traffic cannot determine the contents of the network traffic or how much data is being exchanged. IPsec can also conceal which parties are communicating, whereas SSL leaves this information exposed. Frequency of communication may also be protected depending on implementation. Nevertheless, the number of packets being exchanged can be counted.
•Access Control. IPsec and SSL endpoints can perform filtering to ensure that only authorized users can access particular network resources. IPsec and SSL endpoints can also allow or block certain types of network traffic, such as allowing Web server access but denying file sharing.”23
So we start with IPSec VPNs, and then we will discuss SSL/Transport Layer Security (TLS) VPNs.
IPSec VPN – SP 800-77
IPSec is a framework of open standards for ensuring private communications over public networks. It has become one of the most common network layer security controls, typically used to create a VPN.
There are three primary models for VPN architectures, as follows:
1.Gateway-to-gateway: This model protects communications between two specific networks, such as an organization’s main office network and a branch office network, or two business partners’ networks.
2.Host-to-gateway: This model protects communications between one or more individual hosts and a specific network belonging to an organization. The host-to-gateway model is most often used to allow hosts on unsecured networks, such as traveling employees and telecommuters, to gain access to internal organizational services, such as the organization’s email and web servers.
3.Host-to-host: A host-to-host architecture protects communication between two specific computers. It is most often used when a small number of users need to use or administer a remote system that requires the use of inherently insecure protocols.
VPN Model Comparison

| Feature | Gateway-to-gateway | Host-to-gateway | Host-to-host |
| --- | --- | --- | --- |
| Provides protection between client and local gateway | No | N/A (client is VPN end point) | N/A (client is VPN end point) |
| Provides protection between VPN end points | Yes | Yes | Yes |
| Provides protection between remote gateway and remote server (behind gateway) | No | No | N/A (server is VPN end point) |
| Transparent to users | Yes | No | No |
| Transparent to users’ systems | Yes | No | No |
| Transparent to servers | Yes | Yes | No |
IPSec is a collection of protocols that assist in protecting communications over IP networks. IPSec protocols work together in various combinations to provide protection for communications.
IPSec fundamentals:
•Authentication Header (AH): AH, one of the IPSec security protocols, provides integrity protection for packet headers and data, as well as user authentication. It can optionally provide replay protection and access protection. AH cannot encrypt any portion of packets.
•AH modes: AH has two modes – transport and tunnel. In tunnel mode, AH creates a new IP header for each packet; in transport mode, AH does not create a new IP header. In IPSec architectures that use a gateway, the true source or destination IP address for packets must be altered to be the gateway’s IP address. Because transport mode cannot alter the original IP header or create a new IP header, transport mode is generally used in host-to-host architectures.
•Encapsulating Security Payload (ESP): ESP is the second core IPSec security protocol. In the initial version of IPSec, ESP provided only encryption for packet payload data. Integrity protection was provided by the AH protocol if needed. In the second version of IPSec, ESP became more flexible. It can perform authentication to provide integrity protection, although not for the outermost IP header. Also, ESP’s encryption can be disabled through the Null ESP Encryption Algorithm. Therefore, in all but the oldest IPSec implementations, ESP can be used to provide only encryption, encryption and integrity protection, or only integrity protection.
ESP has two modes: transport and tunnel. In tunnel mode, ESP creates a new IP header for each packet. The new IP header lists the end points of the ESP tunnel (such as two IPSec gateways) as the source and destination of the packet. Because of this, tunnel mode can be used with all three VPN architecture models.
•Internet Key Exchange (IKE): The purpose of the IKE protocol is to negotiate, create, and manage security associations (SAs). SA is a generic term for a set of values that define the IPSec features and protections applied to a connection. SAs can also be manually created, using values agreed upon in advance by both parties, but these SAs cannot be updated; this method does not scale for real-life large-scale VPNs. IKE uses five different types of exchanges to create SAs, transfer status and error information, and define new Diffie–Hellman groups. In IPSec, IKE is used to provide a secure mechanism for establishing IPsec-protected connections.
•IP Payload Compression Protocol (IPComp): In communications, it is often desirable to perform lossless compression on data – to repackage information in a smaller format without losing any of its meaning. IPComp is often used with IPSec: by applying IPComp to a payload first, and then encrypting the packet through ESP, effective compression can be achieved.
IPComp can be configured to provide compression for IPSec traffic going in one direction only (e.g., compress packets from end point A to end point B, but not from end point B to end point A) or in both directions. IPComp also allows administrators to choose from multiple compression algorithms, including DEFLATE and LZS. It provides a simple yet flexible solution for compressing IPSec payloads.
IPComp can provide lossless compression for IPSec payloads. Because applying compression algorithms to certain types of payloads may actually make them larger, IPComp compresses the payload only if it will actually make the packet smaller.
IPSec uses IKE to create SAs, which are sets of values that define the security of IPsec-protected connections. IKE phase 1 creates an IKE SA; IKE phase 2 creates an IPSec SA through a channel protected by the IKE SA. IKE phase 1 has two modes: main mode and aggressive mode. Main mode negotiates the establishment of the bidirectional IKE SA through three pairs of messages, while aggressive mode uses only three messages. Although aggressive mode is faster, it is also less flexible and secure. IKE phase 2 has one mode: quick mode. Quick mode uses three messages to establish a pair of unidirectional IPSec SAs. Quick mode communications are encrypted by the method specified in the IKE SA created by phase 1.
SSL VPNs – SP 800-113
Secure Sockets Layer (SSL) VPNs provide secure remote access to an organization’s resources. An SSL VPN consists of one or more VPN devices to which users connect using their web browsers. The traffic between the web browser and the SSL VPN device is encrypted with the SSL protocol or its successor, the Transport Layer Security (TLS) protocol, so this type of VPN may be referred to as either an SSL VPN or a TLS VPN; this guide uses the term SSL VPN. SSL VPNs provide remote users with access to web applications and client/server applications, and connectivity to internal networks. Despite the popularity of SSL VPNs, they are not intended to replace Internet Protocol Security (IPSec) VPNs. The two VPN technologies are complementary and address separate network architectures and business needs. SSL VPNs offer versatility and ease of use because they use the SSL protocol, which is included with all standard web browsers, so the client usually does not require configuration by the user. SSL VPNs offer granular control for a range of users on a variety of computers, accessing resources from many locations.24
SSL portal VPNs
An SSL portal VPN allows a user to use a single standard SSL connection to a website to securely access multiple network services. The site accessed is typically called a portal because it has a single page that leads to many other resources. SSL portal VPNs act as transport-layer VPNs that work over a single network port, namely the TCP port for SSL-protected HTTP (443).
SSL tunnel VPNs
An SSL tunnel VPN allows a user to use a typical web browser to securely access multiple network services through a tunnel that is running under SSL. SSL tunnel VPNs require that the web browser be able to handle specific types of active content (e.g., Java, JavaScript, Flash, or ActiveX) and that the user be able to run them. (Most browsers that handle such applications and plug-ins also allow the user or administrator to block them from being executed.)
Administering SSL VPN
The administration of both SSL portal VPNs and SSL tunnel VPNs is similar. The gateway administrator needs to specify local policy in at least two broad areas:
•Access. All SSL VPNs allow the administrator to specify which users have access to the VPN services. User authentication might be done with a simple password through a Web form, or through more sophisticated authentication mechanisms.
•Capabilities. The administrator can specify the services to which each authorized user has access. For example, some users might have access to only certain Web pages, while others might have access to those Web pages plus other services.
Different SSL VPNs have very different administrative interfaces and very different capabilities for allowing access and specifying allowed actions for users. For example, many but not all SSL VPNs allow validation of users through the Remote Authentication Dial-In User Service (RADIUS) protocol. As another example, some SSL VPNs allow the administrator to create groups of users who have the same access methods and capabilities; this makes adding new users to the system easier than on gateways that require the administrator to specify both of these for each new user.25
SSL VPN planning and implementation
The five phases of the recommended approach are as follows:
1.Identify Requirements. Identify the requirements for remote access and determine how they can best be met.
2.Design the Solution. Make design decisions in five areas: access control, endpoint security, authentication methods, architecture, and cryptography policy.
3.Implement and Test a Prototype. Test a prototype of the designed solution in a laboratory, test, or production environment to identify any potential issues.
4.Deploy the Solution. Gradually deploy the SSL VPN solution throughout the enterprise, beginning with a pilot program.
5.Manage the Solution. Maintain the SSL VPN components and resolve operational issues. Repeat the planning and implementation process when significant changes need to be incorporated into the solution.26
Note: Many of the cryptographic algorithms used in some SSL cipher suites are not FIPS-approved, and therefore are not allowed for use in SSL VPNs that are to be used in applications that must conform to FIPS-140-2.
SSL VPN architecture
Typical SSL VPN users include people in remote offices, mobile users, business partners, and customers. Client devices include public kiosks, home personal computers (PCs), PDAs, and smartphones, which may or may not be controlled or managed by the organization. The SSL VPN may also be accessed from any location, including an airport, a coffee shop, or a hotel room, as long as the location has connectivity to the internet and the user has a web client that is capable of using the particular SSL VPN. All traffic is encrypted as it traverses public networks such as the internet. The SSL VPN gateway is the end point for the secure connection and provides various services and features (most SSL VPN products are standalone hardware appliances, although there are some software-based solutions that are installed on user-supplied servers).27
SSL protocol basics
The security of the data sent over an SSL VPN relies on the security of the SSL protocol. The SSL protocol allows a client (such as a web browser) and a server (such as an SSL VPN) to negotiate the type of security to be used during an SSL session. Thus, it is critical to make sure that the security agreed to by the remote user and the SSL gateway meets the security requirements of the organization using the SSL VPN.
There are three types of security that the client and the server can negotiate: the version of SSL, the type of cryptography, and the authentication method.
•Versions of SSL and TLS: The terms SSL and TLS are often used together to describe the same protocol. In fact, SSL refers to all versions of the SSL protocol as defined by the Internet Engineering Task Force (IETF), while TLS refers only to versions 3.1 and later of the SSL protocol. Two versions of TLS have been standardized: TLS 1.0 and TLS 1.1. TLS 1.0 is the same as SSL 3.1; there are no versions of SSL after 3.1. As of the writing of this guide, work is being done on TLS version 1.2.
TLS is approved for use in the protection of federal information; SSL versions other than 3.1 are not.
•Cryptography used in SSL sessions: There are many types of cryptographic functions that are used in security protocols. The most widely known cryptographic features are confidentiality (secrecy of data), integrity (the ability to detect even minute changes in the data), and signature (the ability to trace the origin of the data). The combination of these features is an important aspect of the overall security of a communications stream. SSL uses four significant types of features: confidentiality, integrity, signature, and key establishment (the way that a key is agreed to by the two parties).
SSL uses cipher suites to define the set of cryptographic functions that a client and a server use when communicating. This is unlike protocols such as IPSec and Secure/Multipurpose Internet Mail Extensions (S/MIME) where the two parties agree to individual cryptographic functions. That is, SSL exchanges say in effect, “Here is a set of functions to be used together, and here is another set I am willing to use.” IPSec and S/MIME (and many other protocols) instead say, “Here are the confidentiality functions I am willing to use, here are the integrity functions I am willing to use, and here are the signature algorithms I am willing to use,” and the other side creates a set from those choices.
Just as the SSL client and server need to be able to use the same version of SSL, they also need to be able to use the same cipher suite; otherwise, the two sides cannot communicate. The organization running the SSL VPN chooses which cipher suites meet its security goals and configures the SSL VPN gateway to use only those cipher suites.
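As an illustration of restricting a client to organization-approved protocol versions and cipher suites, the following sketch uses Python’s standard ssl module. The host name vpn.example.com and the cipher string are illustrative assumptions, not recommendations or the configuration of any particular product.

```python
# Minimal sketch: constrain the SSL/TLS versions and cipher suites a client
# will negotiate, using Python's standard ssl module.
import socket, ssl

ctx = ssl.create_default_context()             # verifies server certificates by default
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse SSL 3.0 and early TLS versions
ctx.set_ciphers("ECDHE+AESGCM")                # restrict the cipher suites offered

# vpn.example.com is a hypothetical SSL VPN gateway used for illustration.
with socket.create_connection(("vpn.example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="vpn.example.com") as tls:
        # The hostname is checked against the server certificate, mirroring
        # the server-authentication step described in the text.
        print(tls.version(), tls.cipher())
```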
•Authentication used for identifying SSL servers: When a web browser connects to an SSL server such as an SSL VPN gateway, the browser user needs some way to know that the browser is talking to a server the user trusts. SSL uses certificates that are signed by trusted entities to authenticate the server to the web user. (SSL can also use certificates to authenticate users to servers, but this is rarely done.)
The server authentication occurs very early in the SSL process, immediately after the user sends its first message to the SSL server. In that first message, the web browser specifies which type of certificate algorithms it can handle; the two common choices are RSA and DSS. In the second message, the SSL server responds with a certificate of one of the types that the browser said it understands. After receiving the certificate, the web browser verifies that the identity in the certificate (i.e., the domain name listed in the certificate) matches the domain name to which the web browser attempted to connect.
Some SSL VPNs use certificates issued by the vendor of the SSL VPN, and those certificates do not link through a chain of trust to a root certificate that is normally trusted by most users. If that is the case, the user should add the SSL VPN’s own certificate to the user’s list of directly trusted certificates. It is important to note that users should not add the root certificate of the SSL VPN’s manufacturer to the list of certification authorities that the user trusts, since the manufacturer’s security policies and controls may differ from those of the organization. Other SSL VPNs produce self-signed certificates that do not chain to any trusted root certificate; as before, the user should add the SSL VPN’s own certificate to the user’s list of directly trusted certificates.
Transport Layer Security
The Netscape Corporation designed a protocol known as SSL to meet the security needs of client browsers and server applications. Version 1 of SSL was never released. Version 2 (SSL 2.0) was released in 1994 but had well-known security vulnerabilities. Version 3 (SSL 3.0) was released in 1995 to address these vulnerabilities.
During this timeframe, Microsoft Corporation released a protocol known as Private Communications Technology (PCT), and later released a higher performance protocol known as the Secure Transport Layer Protocol (STLP). PCT and STLP never commanded the market share that SSL 2.0 and SSL 3.0 commanded. The IETF (a technical working group responsible for developing internet standards to ensure communications compatibility across different implementations) attempted to resolve, as best it could, security engineering and protocol incompatibility issues between the protocols. The IETF standards track Transport Layer Security Protocol Version 1.0 (TLS 1.0) emerged and was codified by the IETF as [RFC2246].
While TLS 1.0 is based on SSL 3.0, and the differences between them are not dramatic, they are significant enough that TLS 1.0 and SSL 3.0 do not interoperate. TLS 1.0 does, however, incorporate a mechanism by which a TLS 1.0 implementation can negotiate down to SSL 3.0 with requesting entities as if TLS had never been proposed. Because SSL 3.0 is not approved for use in the protection of federal information (Section 7.1 of [FIPS140-2]), TLS must be properly configured to ensure that the negotiation and use of SSL 3.0 never occurs when federal information is to be protected.
The NIST guidelines (SP 800-52, SP 800-77, and SP 800-113) attempt to make clear the impact of selecting and using secure web transport protocols for use in protecting sensitive but unclassified US government information.
Both the TLS 1.0 and the SSL 3.0 protocol specifications use cryptographic mechanisms to implement the security services that establish and maintain a secure TCP/IP connection. The secure connection prevents eavesdropping, tampering, or message forgery. Implementing data confidentiality with cryptography (encryption) prevents eavesdropping, generating a message authentication code (MAC) with a secure hash function prevents undetected tampering, and authenticating clients and servers with public key cryptography-based digital signatures prevents message forgery. In each case – preventing eavesdropping, tampering, and forgery – a key or shared secret is required by the cryptographic mechanism. A pseudorandom number generator and a key establishment algorithm provide for the generation and sharing of these secrets.
The rows in the following table identify the key establishment, confidentiality, digital signature, and hash mechanisms currently in use in TLS 1.0 and SSL 3.0. The FIPS reference column identifies which of those algorithms and hash functions are FIPS-approved.
| Mechanism | SSL 3.0 | TLS 1.0 | FIPS reference |
| --- | --- | --- | --- |
| Key establishment | RSA, DH-RSA, DH-DSS, DHE-RSA, DHE-DSS, DH-Anon, Fortezza-KEA | RSA, DH-RSA, DH-DSS, DHE-RSA, DHE-DSS, DH-Anon | — |
| Confidentiality | IDEA-CBC, RC4-128, 3DES-EDE-CBC, Fortezza-CBC | IDEA-CBC, RC4-128, 3DES-EDE-CBC, Kerberos, AES | FIPS-46-3, FIPS-81 (3DES); FIPS-197 (AES) |
| Signature | RSA, DSA | RSA, DSA, EC | FIPS-186-2 |
| Hash | MD5, SHA-1 | MD5, SHA-1 | FIPS-180-2 (SHA-1); FIPS-198 (HMAC) |
Wireless networking
In today’s networks, many communication methods are used and deployed. Some of these methods take advantage of nonwired connectivity by utilizing wireless technology. Several types of wireless networking methods are currently in use, all based on radio-frequency (RF) communications for transmitting and receiving the signals that carry digital data across and beyond the network. Examples of wireless networks include cell phone networks, Wi-Fi local networks, and terrestrial microwave networks.
Every wireless LAN consists of an AP, such as a wireless router, and one or more wireless adapters. As shown in the two standard deployment modes below, many security controls from SP 800-53 are applicable and necessary to secure wireless LANs and Wi-Fi networks. The assessor needs to focus on the technology deployed, review all design and implementation documents, and test the actual APs (aka hotspots) and the client adapters used, to prove that the encryption and communications employed remain active and constant during all phases of transmission of the data over the airwaves via the RF signals used.
A wireless AP is a device that allows wireless devices to connect to a wired network using Wi-Fi, or related standards. The AP usually connects to a router (via a wired network) as a stand-alone device, but it can also be an integral component of the router itself. An AP is differentiated from a hotspot, which is the physical space where the wireless service is provided. A hotspot is a common public application of APs, where wireless clients can connect to the internet without regard for the particular networks to which they have attached for the moment.
Wireless security is the prevention of unauthorized access or damage to computers using wireless networks, achieved through different methods of encoding and encryption based on the parameters of the RF carrier and bandwidth used by the particular type of Wi-Fi being employed. The most common types of wireless security are Wired Equivalent Privacy (WEP) and Wi-Fi Protected Access (WPA). WEP is a notoriously weak security standard; the password it uses can often be cracked in a few minutes with a basic laptop computer and widely available software tools. WEP is an old IEEE 802.11 standard from 1999, superseded in 2003 by WPA. WPA was a quick alternative to improve security over WEP. The current standard is WPA2, which encrypts the network with a 256-bit key; the longer key length improves security over WEP.
The major terms and security areas of focus include the following:
WPA: Initial WPA version, to supply enhanced security over the older WEP protocol. Typically uses the Temporal Key Integrity Protocol (TKIP) encryption protocol.
WPA2: Also known as IEEE 802.11i-2004. Successor of WPA; replaces the TKIP encryption protocol with Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP) to provide additional security.
TKIP: A 128-bit per-packet key is used, meaning that it dynamically generates a new key for each packet. This is part of the IEEE 802.11i standard. TKIP implements per-packet key mixing with a rekeying system and also provides a message integrity check. These avoid the problems of WEP.
CCMP: An AES-based encryption mechanism that is stronger than TKIP; sometimes referred to as AES instead of CCMP.
EAP: Extensible Authentication Protocol. EAP is an authentication framework that provides for the transport and usage of keying material and parameters generated by EAP methods. In enterprise deployments, EAP authentication is typically backed by a central authentication server.
WPA-Personal: Also referred to as WPA-preshared key (PSK) mode. Is designed for home and small office networks and does not require an authentication server. Each wireless network device authenticates with the AP using the same 256-bit key.
WPA-Enterprise: Also referred to as WPA-802.1X mode, and sometimes just WPA (as opposed to WPA-PSK). Is designed for enterprise networks, and requires a Remote Authentication Dial-In User Service (RADIUS) authentication server.
This requires a more complicated setup, but provides additional security (e.g., protection against dictionary attacks). An EAP method is used for authentication; EAP comes in different flavors (e.g., EAP-TLS, EAP-Tunneled Transport Layer Security (TTLS), and EAP-Subscriber Identity Module (SIM)).
Bluetooth is an open wireless technology standard for exchanging data over short distances.
Cumulatively, the various versions of Bluetooth specifications define four security modes. Each version of Bluetooth supports some, but not all, of the four modes. Each Bluetooth device must operate in one of the four modes, which are described below.
Security Mode 1 is nonsecure. Security functionality (authentication and encryption) is bypassed, leaving the device and connections susceptible to attackers. In effect, Bluetooth devices in this mode are “promiscuous” and do not employ any mechanisms to prevent other Bluetooth-enabled devices from establishing connections. Security Mode 1 is supported only in v2.0 + enhanced data rate (EDR) (and earlier) devices.
In Security Mode 2, a service level-enforced security mode, security procedures are initiated after Link Management Protocol (LMP) link establishment but before Logical Link Control and Adaptation (L2CAP) channel establishment. L2CAP resides in the data link layer and provides connection-oriented and connectionless data services to upper layers. For this security mode, a security manager (as specified in the Bluetooth architecture) controls access to specific services and devices.
The centralized security manager maintains policies for AC and interfaces with other protocols and device users. Varying security policies and trust levels to restrict access may be defined for applications with different security requirements operating in parallel. It is possible to grant access to some services without providing access to other services. In this mode, the notion of authorization – the process of deciding if a specific device is allowed to have access to a specific service – is introduced.
In Security Mode 3, the link level-enforced security mode, a Bluetooth device initiates security procedures before the physical link is fully established. Bluetooth devices operating in Security Mode 3 mandate authentication and encryption for all connections to and from the device. This mode supports authentication (unidirectional or mutual) and encryption.
Similar to Security Mode 2, Security Mode 4 (introduced in Bluetooth v2.1 + EDR) is a service level-enforced security mode in which security procedures are initiated after link setup. Security requirements for services protected by Security Mode 4 must be classified as one of the following: authenticated link key required, unauthenticated link key required, or no security required. Whether or not a link key is authenticated depends on the Secure Simple Pairing association model used.
Cryptography
Three types of cryptography are currently used in security controls:
1.Symmetric: One method of cryptography is symmetric cryptography (also known as secret key cryptography or private key cryptography). Symmetric cryptography is best suited for bulk encryption because it is much faster than asymmetric cryptography. With symmetric cryptography:
a. Both parties share the same key (which is kept secret). Before communications begin, both parties must exchange the shared secret key. Each pair of communicating entities requires a unique shared key. The key is not shared with other communication partners.
Note: Other names – secret key, conventional key, session key, file encryption key, etc.
Pros:
a. Speed/file size:
- Symmetric-key algorithms are generally much less computationally intensive than asymmetric algorithms, allowing faster processing and transmission with little added storage overhead.
Cons:
a. Key management:
- One disadvantage of symmetric-key algorithms is the requirement of a shared secret key, with one copy at each end.
- To ensure secure communications between everyone in a population of n people, a total of n(n − 1)/2 keys is needed. Example: for 10 individuals, 10(10 − 1)/2 = 45 keys (see the short calculation after this list).
- The process of selecting, distributing, and storing keys is known as key management; it is difficult to achieve reliably and securely.
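The n(n − 1)/2 scaling can be checked directly; this tiny Python snippet reproduces the 45-key example above and shows how quickly the count grows.

```python
# The shared-key scaling problem from the text, computed directly:
# n people who must all communicate pairwise need n(n - 1) / 2 unique keys.
def symmetric_keys_needed(n: int) -> int:
    return n * (n - 1) // 2

print(symmetric_keys_needed(10))    # 45, matching the example above
print(symmetric_keys_needed(1000))  # 499500 - key management quickly becomes unwieldy
```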
Symmetric algorithms:

| Method | Characteristics |
| --- | --- |
| Data Encryption Standard (DES) | Created in 1972 and recertified in 1993. Uses a 64-bit block size and a 56-bit key. Can be easily broken. |
| Triple DES (3DES) | Applies DES three times; uses a 168-bit key. Superseded by AES. |
| Advanced Encryption Standard (AES) | Based on the Rijndael block cipher (pronounced “rhine-doll”), which is resistant to all known attacks. Uses 128-bit blocks and key lengths of 128, 192, or 256 bits. |
| Carlisle Adams Stafford Tavares (CAST) | Two implementations: 64-bit block size with a 128-bit key, and 128-bit block size with a 256-bit key. Used by Pretty Good Privacy (PGP) email encryption. |
| International Data Encryption Algorithm (IDEA) | Uses a 64-bit block size with a 128-bit key. Used by PGP email encryption. |
| Rivest ciphers | Various implementations: RC2 with 64-bit blocks and a variable key length; RC4 with 40- and 128-bit keys; RC5 with variable block and key sizes; RC6, an improvement on RC5. |
2.Asymmetric: Asymmetric cryptography is a second form of cryptography. It is scalable for use in very large and ever expanding environments where data is frequently exchanged between different communication partners. With asymmetric cryptography:
a. Each user has two keys: a public key and a private key.
b. Both keys are mathematically related (both keys together are called the key pair).
c. The public key is made available to anyone. The private key is kept secret.
d. Both keys are required to perform an operation. For example, data encrypted with the private key can be decrypted only with the public key, and data encrypted with the public key can be decrypted only with the private key.
e. Encrypting data with the private key creates a digital signature. This ensures the message has come from the stated sender (because only the sender had access to the private key to be able to create the signature).
f. A digital envelope is created by encrypting a message with the recipient’s public key. It serves as a means of AC by ensuring that only the intended recipient can open the message (because only the receiver holds the private key necessary to unlock the envelope); this is also known as receiver authentication.
g. If the private key is ever discovered, a new key pair must be generated.
Asymmetric cryptography is often used to exchange the secret key to prepare for using symmetric cryptography to encrypt data. In the case of a key exchange, one party creates the secret key and encrypts it with the public key of the recipient. The recipient would then decrypt it with their private key. The remaining communication would be done with the secret key being the encryption key. Asymmetric encryption is used in key exchange, email security, web security, and other encryption systems that require key exchange over the public network.
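A hedged sketch of this key-exchange pattern follows, using the third-party pyca/cryptography package (an assumption; any comparable library would do): the sender seals a symmetric key with the recipient’s public key, and the bulk data is then protected symmetrically.

```python
# Sketch of hybrid encryption: an asymmetric key pair protects the exchange
# of a symmetric key, which then encrypts the bulk data.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()

# Sender: create a secret key and seal it with the recipient's public key.
secret_key = Fernet.generate_key()
sealed_key = recipient_public.encrypt(
    secret_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None))

# Recipient: recover the secret key with the private key, then use it
# for the remaining (symmetric) communication.
recovered = recipient_private.decrypt(
    sealed_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None))
ciphertext = Fernet(secret_key).encrypt(b"bulk data protected symmetrically")
print(Fernet(recovered).decrypt(ciphertext))
```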
Pros:
a. Key management:
- Two keys (public and private); the private key cannot be derived from the public key, so the public key can be freely distributed without confidentiality being compromised
- Offers digital signatures, integrity checks, and nonrepudiation
Cons:
a. Speed/file size:
- Asymmetric-key algorithms are much more computationally intensive than symmetric-key algorithms.
- In practice, asymmetric-key algorithms are typically hundreds to thousands of times slower than symmetric-key algorithms.
Asymmetric algorithms:

| Method | Characteristics |
| --- | --- |
| Rivest–Shamir–Adleman (RSA) | Uses a one-way function based on the difficulty of factoring N, a product of two large prime numbers (approximately 200 digits). |
| Diffie–Hellman key exchange | Known as a key exchange algorithm. Uses two system parameters, p and g: p is a prime number, and g is an integer smaller than p, agreed upon by both parties. |
| ElGamal | Extends Diffie–Hellman for use in encryption and digital signatures. |
| Elliptic curve (EC) | Used in conjunction with other methods to reduce the key size. An EC key of 160 bits is equivalent to a 1024-bit RSA key, which means lower computational power and memory requirements. Suitable for hardware applications (e.g., smart cards and wireless devices). |
| Digital Signature Algorithm (DSA) | Used to digitally sign documents. Performs an integrity check by use of SHA hashing. |
3.Hashing: A hash is a function that takes a variable-length string (message), and compresses and transforms it into a fixed-length value.
a. The hashing algorithm (formula or method) is public.
b. Hashing by itself uses no secret value; a keyed hash (e.g., HMAC, described below) adds a secret key to protect the integrity check.
c. Hashing is used to create checksums or message digests (e.g., an investigator can create a checksum to secure a removable media device that is to be used as evidence).
d. The hash ensures data integrity (i.e., the data have not been altered). The receiving device computes a checksum and compares it to the checksum included with the file. If they do not match, the data has been altered.
e. Examples include message digest (MD2, MD4, MD5) and Secure Hashing Algorithm (SHA).
f. Other examples include RACE Integrity Primitives Evaluation Message Digest (RIPEMD) and HAVAL, a hash of variable length.
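As a brief illustration of the checksum use case above, this Python snippet computes a SHA-256 digest of a file so a receiver (or investigator) can verify its integrity; the filename is a placeholder.

```python
# Integrity check with the standard hashlib module: the receiver recomputes
# the digest and compares it with the one provided alongside the file.
import hashlib

def file_digest(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)            # hash the file in fixed-size chunks
    return h.hexdigest()

# If even one byte of the file changes, the digests will not match.
# ("evidence.img" is a placeholder filename.)
# assert file_digest("evidence.img") == published_digest
```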
| Name | Block size (class) | Digest size(s) | Rounds |
| --- | --- | --- | --- |
| MD5 | 512-bit blocks | 128 bits | 4 |
| SHA-1 | 512-bit blocks | 160 bits | 80 |
| SHA-2 (SHA-224/256) | 512-bit blocks | 224 or 256 bits | 64 |
| SHA-2 (SHA-384/512) | 1024-bit blocks | 384 or 512 bits | 80 |
| RIPEMD-160 | — | 128, 160, 256, and 320 bits | — |
| HAVAL | — | 128, 160, 192, 224, and 256 bits | 3, 4, or 5 |
Secure Hash
The secure hash function takes a stream of data and reduces it to a fixed size through a one-way mathematical function. The result is called a message digest and can be thought of as a fingerprint of the data. The message digest can be reproduced by any party with the same stream of data, but it is virtually impossible to create a different stream of data that produces the same message digest.
Secure Hash Standard
The Secure Hash Standard specifies five SHAs: SHA-1, SHA-224, SHA-256, SHA-384, and SHA-512. All five of the algorithms are iterative, one-way hash functions that can process a message to produce a condensed representation called a message digest. These algorithms enable the determination of a message’s integrity: any change to the message will, with a very high probability, result in a different message digest. This property is useful in the generation and verification of digital signatures and MACs, and in the generation of random numbers or bits.
The five algorithms differ most significantly in the security strengths that are provided for the data being hashed. The security strengths of these five hash functions and the system as a whole when each of them is used with other cryptographic algorithms, such as Digital Signature Algorithms (DSAs) and keyed-hash MACs, can be found in SP 800-57 and SP 800-107.
Additionally, the five algorithms differ in terms of the size of the blocks and words of data that are used during hashing.
HMAC
Providing a way to check the integrity of information transmitted over or stored in an unreliable medium is a prime necessity in the world of open computing and communications. Mechanisms that provide such integrity checks based on a secret key are usually called MACs. Typically, MACs are used between two parties that share a secret key in order to authenticate information transmitted between these parties. This standard defines a MAC that uses a cryptographic hash function in conjunction with a secret key. This mechanism is called Hash-Based Message Authentication Code (HMAC). HMAC shall use an approved cryptographic hash function [FIPS-180-3]. It uses the secret key for the calculation and verification of the MACs.
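A minimal HMAC computation with Python’s standard library follows, mirroring the hash-plus-secret-key construction described above; the key and message are illustrative.

```python
# HMAC with the standard library: a cryptographic hash function combined
# with a secret key shared by the two communicating parties.
import hmac, hashlib

key = b"shared-secret-key"               # known only to the two parties
message = b"transfer 100 to account 42"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# The receiver recomputes the MAC and compares it in constant time.
ok = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest())
print(ok)  # True unless the message or tag was altered in transit
```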
So, in review, the table below covers the three types of encryption and their particular uses:
• Encryption provides confidentiality.
• Hashing provides integrity (like a checksum).
• Digital signatures provide authentication and integrity.
• Digitally signed encryption provides confidentiality, authentication, and integrity.
| Mechanism | Data integrity | Confidentiality | Identification and authentication | Nonrepudiation | Key distribution |
| --- | --- | --- | --- | --- | --- |
| Symmetric-key cryptography: encryption | No | Yes | No | No | No |
| Symmetric-key cryptography: message authentication codes | Yes | No | Yes | No | No |
| Symmetric-key cryptography: key transport | No | No | No | No | Yes (requires out-of-band initialization step or a TTP) |
| Secure hash functions: message digest | Yes | No | No | No | No |
| Secure hash functions: HMAC | Yes | No | Yes | No | No |
| Asymmetric cryptography: digital signatures | Yes | No | Yes | Yes (with a TTP) | No |
| Asymmetric cryptography: key transport | No | No | No | No | Yes |
| Asymmetric cryptography: key agreement | No | No | Yes | No | Yes |
Intrusion Detection Systems
Another element in securing networks, complementing firewall implementations, is an IDS. An IDS works in conjunction with routers and firewalls by monitoring network usage anomalies. It can be deployed in a demilitarized zone (DMZ) on the edge of the network, or used as a network-based device inside the network to monitor for specific traffic patterns and alert when those patterns are identified. In this way it helps protect a company’s information system resources from external as well as internal misuse.
An IDS operates continuously on the system, running in the background and notifying administrators when it detects a perceived threat. For example, an IDS detects attack patterns and issues an alert. Broad categories of IDS include:
•Network-based IDSs: Identify attacks within the monitored network and issue a warning to the operator. If a network-based IDS is placed between the internet and the firewall, it will detect all the attack attempts, whether or not they enter the firewall. If the IDS is placed between a firewall and the corporate network, it will detect those attacks that enter the firewall (it will detect intruders). The IDS is not a substitute for a firewall, but it complements the function of a firewall.
•Host-based IDSs: Configured for a specific environment and will monitor various internal resources of the operating system to warn of a possible attack. They can detect the modification of executable programs, detect the deletion of files, and issue a warning when an attempt is made to use a privileged command.
Common Intrusion Detection Methodologies
• Signature-Based Detection
A signature is a pattern that corresponds to a known threat. Signature-based detection is the process of comparing signatures against observed events to identify possible incidents. Examples of signatures are as follows:
A telnet attempt with a username of “root”, which is a violation of an organization’s security policy.
An email with a subject of “Free pictures!” and an attachment filename of “freepics.exe”, which are characteristics of a known form of malware.
An operating system log entry with a status code value of 645, which indicates that the host’s auditing has been disabled.
Signature-based detection is very effective at detecting known threats but largely ineffective at detecting previously unknown threats, threats disguised by the use of evasion techniques, and many variants of known threats.
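As a toy illustration only, the sketch below matches events against the example signatures above; real IDPS signature languages are far richer than simple regular expressions, and the event format here is invented.

```python
# Toy signature-based detection: flag events that match known patterns.
import re

SIGNATURES = [
    ("telnet root login", re.compile(r"telnet .*user=root")),
    ("known malware mail", re.compile(r"subject=Free pictures!.*freepics\.exe")),
    ("auditing disabled", re.compile(r"status=645")),
]

def match_signatures(event: str) -> list[str]:
    """Return the names of all signatures the event matches."""
    return [name for name, pattern in SIGNATURES if pattern.search(event)]

print(match_signatures("telnet 10.0.0.5 user=root"))  # ['telnet root login']
# A previously unseen attack matches nothing - the weakness noted above.
print(match_signatures("ssh 10.0.0.5 user=root"))     # []
```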
• Anomaly-Based Detection
Anomaly-based detection is the process of comparing definitions of what activity is considered normal against observed events to identify significant deviations. An IDPS using anomaly-based detection has profiles that represent the normal behavior of such things as users, hosts, network connections, or applications. The profiles are developed by monitoring the characteristics of typical activity over a period of time. The major benefit of anomaly-based detection methods is that they can be very effective at detecting previously unknown threats.
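A toy sketch of profile-based anomaly detection follows: it learns a mean and standard deviation from a training window and flags large deviations. The baseline numbers are invented for illustration; real products profile many attributes (users, hosts, connections, applications), not a single counter.

```python
# Toy anomaly detection: build a profile of "normal" from a training window,
# then flag observations that deviate far from it.
from statistics import mean, stdev

baseline = [120, 132, 128, 119, 125, 130, 127]   # e.g., emails sent per hour
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(observation: float, k: float = 3.0) -> bool:
    return abs(observation - mu) > k * sigma     # k-sigma deviation threshold

print(is_anomalous(126))   # False - within the learned profile
print(is_anomalous(480))   # True - e.g., a host suddenly sending far more email
```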
• Stateful Protocol Analysis
Stateful protocol analysis is the process of comparing predetermined profiles of generally accepted definitions of benign protocol activity for each protocol state against observed events to identify deviations. Unlike anomaly-based detection, which uses host or network-specific profiles, stateful protocol analysis relies on vendor-developed universal profiles that specify how particular protocols should and should not be used. The “stateful” in stateful protocol analysis means that the IDPS is capable of understanding and tracking the state of network, transport, and application protocols that have a notion of state.
Stateful protocol analysis can identify unexpected sequences of commands, such as issuing the same command repeatedly or issuing a command without first issuing a command upon which it is dependent. Another state tracking feature of stateful protocol analysis is that for protocols that perform authentication, the IDPS can keep track of the authenticator used for each session, and record the authenticator used for suspicious activity. This is helpful when investigating an incident. Some IDPSs can also use the authenticator information to define acceptable activity differently for multiple classes of users or specific users.
The “protocol analysis” performed by stateful protocol analysis methods usually includes reasonableness checks for individual commands, such as minimum and maximum lengths for arguments. If a command typically has a username argument, and usernames have a maximum length of 20 characters, then an argument with a length of 1000 characters is suspicious. If the large argument contains binary data, then it is even more suspicious.28
Types of IDSs include:
•Signature-based: These IDS systems protect against detected intrusion patterns. The intrusive patterns they can identify are stored in the form of signatures.
•Statistical-based: These IDS systems need a comprehensive definition of the known and expected behavior of systems.
•Neural networks: An IDS with this feature monitors the general patterns of activity and traffic on the network and creates a database. This is similar to the statistical model but with added self-learning functionality.
Signature-based IDSs will not be able to detect all types of intrusions due to the limitations of the detection rules. On the other hand, statistical-based systems may report many events outside of defined normal activity but which are normal activities on the network. A combination of signature- and statistical-based models provides better protection.
Uses of IDPS Technologies
• Identifying possible incidents
• Identifying reconnaissance activity
• Identifying security policy problems
• Documenting existing threat to an organization
• Deterring individuals from violating security policies
The table below is a high-level comparison of the four primary IDPS technology types. The strengths listed in the table indicate the roles or situations in which each technology type is generally superior to the others. A particular technology type may have additional benefits over others, such as logging additional data that would be useful for validating alerts recorded by other IDPSs, or preventing intrusions that other IDPSs cannot because of technology capabilities or placement.29
| IDPS Technology Type | Types of Malicious Activity Detected | Scope per Sensor or Agent | Strengths |
| --- | --- | --- | --- |
| Network-Based | Network, transport, and application TCP/IP layer activity | Multiple network subnets and groups of hosts | Able to analyze the widest range of application protocols; only IDPS that can thoroughly analyze many of them |
| Wireless | Wireless protocol activity; unauthorized wireless local area networks (WLANs) in use | Multiple WLANs and groups of wireless clients | Only IDPS that can monitor wireless protocol activity |
| NBA | Network, transport, and application TCP/IP layer activity that causes anomalous network flows | Multiple network subnets and groups of hosts | Typically more effective than the others at identifying reconnaissance scanning and DoS attacks, and at reconstructing major malware infections |
| Host-Based | Host application and operating system (OS) activity; network, transport, and application TCP/IP layer activity | Individual host | Only IDPS that can analyze activity that was transferred in end-to-end encrypted communications |
Key areas which the assessor should focus on when reviewing and evaluating IDS deployments include:
• Recording information related to observed events
• Notifying security administrators of important observed events
• Producing reports
• Response techniques:
  – Stops the attack
  – Changes the security environment
  – Changes the attack’s content
• False-positive adjustments
• False-negative adjustments
• Tuning
• Evasion
Firewalls
Firewall Security Systems – SP 800-41
Every time a corporation connects its internal computer network to the internet, it faces potential danger. Because of the internet’s openness, every corporate network connected to it is vulnerable to attack. Hackers on the internet could theoretically break into the corporate network and do harm in a number of ways: steal or damage important data, damage individual computers or the entire network, use the corporate computer’s resources, or use the corporate network and resources as a way of posing as a corporate employee. Companies should build firewalls as one means of perimeter security for their networks. Likewise, this same principle holds true for very sensitive or critical systems that need to be protected from untrusted users inside the corporate network (internal hackers). A firewall is a device installed at the point where network connections enter a site; it applies rules to control the type of networking traffic flowing in and out. Most commercial firewalls are built to handle the most commonly used internet protocols.
To be effective, firewalls should allow individuals on the corporate network to access the internet and, at the same time, stop hackers or others on the internet from gaining access to the corporate network to cause damage. Generally, most organizations will follow a deny-all philosophy, which means that access to a given resource will be denied unless a user can provide a specific business reason or need for access to the information resource. The converse of this access philosophy, not widely accepted, is the accept-all philosophy under which everyone is allowed access unless someone can provide a reason for denying access.
Firewall General Features
Firewalls are hardware and software combinations that are built using routers, servers, and a variety of software. They should control the most vulnerable point between a corporate network and the internet, and they can be as simple or complex as the corporate information security policy demands. There are many different types of firewalls, but most enable organizations to:
• Block access to particular sites on the internet
• Limit traffic on an organization’s public services segment to relevant addresses and ports
• Prevent certain users from accessing certain servers or services
• Monitor communications between an internal and an external network
• Monitor and record all communications between an internal network and the outside world to investigate network penetrations or detect internal subversion
• Encrypt packets that are sent between different physical locations within an organization by creating a VPN over the internet (i.e., IPSec VPN tunnels)
Firewall Types
Generally, the types of firewalls available today fall into four categories, which include:
•Packet filtering: Packet filtering is a security method of controlling what data can flow to and from a network. It takes place by using Access Control Lists (ACLs), which are developed and applied to a device. The ACL is just lines of text, called rules, which the device will apply to each packet that it receives. The lines of text give specific information pertaining to what packets can be accepted and what packets are denied. For instance, an ACL can have one line that states that any packets coming from the IP range 172.168.0.0 must be denied. Another line may indicate that no packets using the FTP service will be allowed to enter the network, and another line may indicate that no traffic is to be allowed through port 443. Then it can have a line indicating all traffic on port 80 is acceptable and should be routed to a specific IP address, which is the web server. Each time the device receives a packet, it compares the information in the packet’s header to each line in the ACL. If the packet indicates it is using FTP or requests to make a connection to the 443 port, it is discarded. If the packet header information indicates that it wants to communicate through port 80 using HTTP over TCP, then the packet is accepted and redirected to the web server.
This filtering is based on network layer information, which means that the device cannot look very far into the packet itself. It can make decisions based only on header information, which is limited. Many routers apply ACLs in this way, acting as a basic packet filtering firewall in addition to carrying out routing decisions, but they do not provide the level of protection offered by other types of firewalls, which look deeper into the packet. Since packet filtering looks only at the header information, it is not application dependent, as many proxy firewalls are. Packet filtering firewalls also do not keep track of the state of a connection; that function belongs to stateful firewalls, described below.
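To make the rule matching concrete, the following is a minimal Python sketch of first-match ACL evaluation. The rule format, the addresses, and the treatment of FTP as a filterable service mirror the example above but are otherwise illustrative assumptions, not the behavior of any particular product.

from ipaddress import ip_address, ip_network

# Each rule is (action, source network, protocol, destination port).
# None means "match anything" for that field. Rules are evaluated in
# order and the first match wins; the final rule enforces deny-all.
ACL = [
    ("deny",   ip_network("172.168.0.0/16"), None,  None),  # blocked source range
    ("deny",   None,                         "ftp", None),  # no FTP service allowed in
    ("deny",   None,                         None,  443),   # no traffic through port 443
    ("permit", None,                         "tcp", 80),    # web traffic to the web server
    ("deny",   None,                         None,  None),  # deny-all default
]

def filter_packet(src_ip: str, protocol: str, dst_port: int) -> str:
    """Compare a packet's header fields to each ACL line in order."""
    src = ip_address(src_ip)
    for action, net, proto, port in ACL:
        if net is not None and src not in net:
            continue
        if proto is not None and proto != protocol:
            continue
        if port is not None and port != dst_port:
            continue
        return action
    return "deny"

print(filter_packet("203.0.113.7", "tcp", 80))   # permit: routed to the web server
print(filter_packet("203.0.113.7", "tcp", 443))  # deny: port 443 blocked
print(filter_packet("172.168.4.2", "tcp", 80))   # deny: blocked source range

Note how the final deny-all rule implements the deny-all philosophy described earlier: anything not explicitly permitted is dropped.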
Pros:
• Scalable
• Provides high performance
• Application independent
Cons:
• Does not look into the packet past the header information
• Low security relative to other options
• Does not keep track of the state of a connection
Note: Packet filtering cannot protect against mail bomb attacks because it cannot read the content of the packet.
• Application firewall systems: Application-level firewalls inspect the entire packet and make access decisions based on the actual content of the packet. They understand different services and protocols and the commands used within them. An application-level proxy can distinguish between an FTP GET command and an FTP PUT command and make access decisions based on this granular level of information, whereas packet filtering firewalls can only allow or deny FTP requests as a whole, not the commands used within the FTP protocol.
An application-level firewall works for one service or protocol. A computer can have many different types of services and protocols (FTP, Network Time Protocol (NTP), Simple Mail Transfer Protocol (SMTP), Telnet, etc.); thus, there must be one application-level proxy per service. Providing application-level proxy services can be much trickier than it appears. The proxy must totally understand how specific protocols work and what commands within that protocol are legitimate. This is a lot to know and look at during the transmission of data. If the application-level proxy firewall does not understand a certain protocol or service, it cannot protect this type of communication. This is when a circuit-level proxy can come into play because it does not deal with such complex issues. An advantage of circuit-level proxies is that they can handle a wider variety of protocols and services than application-level proxies, but the downfall is that the circuit-level proxy cannot provide the degree of granular control that an application-level proxy can. Life is just full of compromises.
So, an application-level firewall is dedicated to a particular protocol or service. There must be one proxy per protocol because one proxy could not properly interpret all the commands of all the protocols coming its way. A circuit-level proxy works at a lower layer of the Open Systems Interconnection (OSI) model and does not require one proxy per protocol because it is not looking at such detailed information.
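As a rough illustration of that granularity, the sketch below filters FTP control-channel commands (RETR and STOR are the protocol verbs behind a client’s get and put). The command lists and the allow-downloads/deny-uploads policy are illustrative assumptions; a real application-level proxy must also track the full protocol state.

# Minimal sketch of per-command control in an application-level proxy.
# Policy (illustrative): permit downloads and directory listing, deny
# uploads and deletes, drop anything the proxy does not understand.
ALLOWED_FTP_COMMANDS = {"USER", "PASS", "CWD", "PASV", "LIST", "RETR", "QUIT"}
DENIED_FTP_COMMANDS = {"STOR", "STOU", "APPE", "DELE"}  # uploads and deletes

def inspect_ftp_command(line: str) -> bool:
    """Return True if the FTP control command should be relayed."""
    verb = line.strip().split(" ", 1)[0].upper()
    if verb in DENIED_FTP_COMMANDS:
        return False          # e.g., STOR (the "put" verb) is blocked
    return verb in ALLOWED_FTP_COMMANDS

assert inspect_ftp_command("RETR report.pdf")       # download permitted
assert not inspect_ftp_command("STOR payload.exe")  # upload denied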
• Stateful inspection: When regular packet filtering is used, a packet arrives at the router, and the router runs through its ACLs to see whether the packet should be allowed or denied. If the packet is allowed, it is passed on to the destination host or another router, and the router forgets it ever received the packet. This is different from stateful filtering, which remembers what packets went where until each particular connection is closed. Stateful routers also make decisions on which packets to allow or disallow, but their logic goes a step further. For example, a regular packet filtering device may deny any UDP packets requesting service on port 25, while a stateful filtering device may have a rule to allow UDP packets through only if they are responses to outgoing requests. Basically, the stateful firewall wants to allow in only those packets that its internal hosts requested.
If User A sends a request to a computer on a different network, this request will be logged in the firewall’s state table. The table will indicate that User A’s computer made a request and there should be packets coming back to User A. When the computer on the internet responds to User A, these packets will be compared to data in the state table. Since the state table does have information about a previous request for these packets, the router will allow the packets to pass through. If, on the other hand, User A did not make any requests and packets were coming in from the internet to him, the firewall will see that there were no previous requests for this information and then look at its ACLs to see if these packets are allowed to come in.
So, regular packet filtering compares incoming packets to rules defined in its ACLs. When stateful packet filtering receives a packet, it first looks in its state table to see if a connection has already been established and if this data was requested. If there is no previous connection and the state table holds no information about the packets, the packet is compared to the device’s ACLs. If the ACL allows this type of traffic, the packet is allowed to access the network. If that type of traffic is not allowed, the packet is dropped. Although this provides an extra step of protection, it also adds more complexity because this device must now keep a dynamic state table and remember connections. This has opened the door to many types of denial-of-service attacks. There are several types of attacks that are aimed at flooding the state table with bogus information. The state table is a resource like a system’s hard drive space, memory, and CPU. When the state table is stuffed full of bogus information, it can either freeze the device or cause it to reboot. Also, if this firewall has to be rebooted for some reason, it loses its information on all recent connections; thus, it will deny legitimate packets.
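The following minimal sketch illustrates the lookup order just described: state table first, then the ACLs. The table structure, the timeout, and the deny-all fallback are illustrative assumptions.

import time

# Minimal sketch of a stateful filter: outbound requests are recorded in
# a state table, and inbound packets are admitted only if they match a
# recorded connection; otherwise they fall through to the ACL check.
STATE_TABLE = {}     # (remote_ip, remote_port, local_ip, local_port) -> timestamp
STATE_TIMEOUT = 120  # seconds before an idle entry is purged (illustrative)

def record_outbound(local_ip, local_port, remote_ip, remote_port):
    """Log an internal host's outgoing request in the state table."""
    STATE_TABLE[(remote_ip, remote_port, local_ip, local_port)] = time.time()

def check_acl(remote_ip, local_port) -> bool:
    """Stand-in rule base: deny-all default for unsolicited traffic."""
    return False

def admit_inbound(remote_ip, remote_port, local_ip, local_port) -> bool:
    key = (remote_ip, remote_port, local_ip, local_port)
    ts = STATE_TABLE.get(key)
    if ts is not None and time.time() - ts < STATE_TIMEOUT:
        return True                           # reply to a request an internal host made
    return check_acl(remote_ip, local_port)   # no state: fall back to the ACLs

# User A requests a page; the reply is matched against the state table.
record_outbound("10.0.0.5", 51000, "203.0.113.9", 80)
print(admit_inbound("203.0.113.9", 80, "10.0.0.5", 51000))   # True: requested
print(admit_inbound("198.51.100.1", 80, "10.0.0.5", 51000))  # False: unsolicited

Bounding the table’s size and purging idle entries, as real products do, also limits the state-flooding denial-of-service attacks described above.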
Note: Context-based AC pertains to a sequence of events preceding the access request and specifics of the environment within a window of time. Content-based AC pertains to making an AC decision based on the data being protected.
• Circuit or application proxy: A proxy is a middleman. If someone needed to give a box and a message to the President of the United States, this person could not just walk up to him and give him these items. The person would have to go through a middleman who would accept the box and message and thoroughly go through the box to ensure nothing dangerous was inside. This is what a proxy firewall does: it accepts messages either entering or leaving a network, inspects them for malicious information, and, when it decides things are okay, passes the data on to the destination computer.
A proxy will stand between a trusted and untrusted network and will actually make the connection, each way, on behalf of the source. So if a user on the internet requests to send data to a computer on the internal, protected network, the proxy will get this request and look it over for suspicious information. The request does not automatically go to the destination computer; the proxy server acts like the destination computer. If the proxy decides the packet is safe, it sends it on to the destination computer. When the destination computer replies, the reply goes back to the proxy server, which repackages the packet to contain the source address of the proxy server, not the host system on the internal network. All external connections heading to the internal network are terminated at the proxy server. This type of firewall makes a copy of each accepted packet before transmitting it, and it repackages the packet to hide the packet’s true origin.
Just like packet filtering firewalls, proxy firewalls have a list of rules that are applied to packets. When the proxy firewall receives a packet, it runs through this list of rules to see if the packet should be allowed. If the packet is allowed, the proxy firewall repackages the packet and sends it on its way to the destination computer. When users go through a proxy, they do not usually know it. Users on the internet think they are talking directly to users on the internal network and vice versa. The proxy server is the only machine that talks to the outside world. This ensures that no outside computer has direct access to internal computers. It also means that the proxy server is the only computer that needs a valid IP address. The rest of the computers on the internal network can use private addresses (not routable on the internet), since no computers on the outside will see their addresses anyway.
Proxy servers are often used when a company deploys a dual-homed firewall. A dual-homed firewall has two interfaces: one facing the external network and the other facing the internal network. This is different from a computer that has forwarding enabled, which just lets packets pass through its interfaces with no AC enforced. A dual-homed firewall has two network interface cards (NICs) and should have packet forwarding and routing turned off, for safety reasons: if forwarding were enabled, the computer would not apply the ACL rules or other restrictions required of a firewall. Instead, a dual-homed firewall requires a higher level of intelligence to tell it which packets should go where and which types of packets are acceptable. This is where the proxy comes in. When a packet arrives at the external NIC from the untrusted network on a dual-homed firewall, the computer does not know what to do with it, so it passes it up to the proxy software. The proxy software inspects the packet to make sure that it is legitimate, then makes a connection with the destination computer on the internal network and passes on the packet. When the internal computer replies, the packet goes to the internal interface on the dual-homed firewall and up to the proxy software; the proxy inspects the packet, attaches a different header, and passes the packet out the external NIC connected to the external network. A minimal sketch of this relay flow appears after the pros-and-cons list below.
Pros:
• Looks at the information within a packet all the way up to the application layer
• Provides better security than packet filtering
• Breaks the connection between trusted and untrusted systems
Cons:
• May be limited in the applications they can support
• Degrades traffic performance
• Scales poorly in the case of application-based proxy firewalls
Note: A proxy breaks the client/server model, which is good for security but at times bad for functionality.
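The sketch below follows the dual-homed relay flow described above: accept on the external interface, inspect, then open a separate connection to the internal host, so that no packet passes directly between the two networks. The addresses and the inspect() policy are placeholder assumptions, and a production proxy would handle many concurrent connections rather than a single exchange.

import socket

# Minimal sketch of a dual-homed proxy relay. The bind address must
# belong to the host's external NIC; both addresses are placeholders.
EXTERNAL_BIND = ("192.0.2.1", 8080)  # NIC facing the untrusted network
INTERNAL_HOST = ("10.0.0.10", 80)    # protected server on the internal network

def inspect(data: bytes) -> bool:
    """Stand-in for real content inspection; deny anything suspicious."""
    return b"cmd.exe" not in data

def serve_one():
    listener = socket.socket()
    listener.bind(EXTERNAL_BIND)
    listener.listen(1)
    client, _ = listener.accept()
    request = client.recv(65535)
    if inspect(request):
        upstream = socket.socket()
        upstream.connect(INTERNAL_HOST)  # the proxy, not the client, talks inside
        upstream.sendall(request)
        reply = upstream.recv(65535)
        upstream.close()
        if inspect(reply):
            client.sendall(reply)        # reply appears to come from the proxy
    client.close()
    listener.close()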
Firewall Utilization
Most companies have firewalls to restrict access into their network from internet users. They may also have firewalls to restrict one internal network from accessing another internal network. An organizational security policy gives high-level instructions on acceptable and unacceptable actions as they pertain to security. The firewall will have a more defined and granular security policy that dictates what services are allowed to be accessed, what IP addresses and ranges are to be restricted, and what ports can be accessed. The firewall is described as a “choke point” in the network, since all communication should flow through it and this is where traffic is inspected and restricted. A firewall is actually a type of gateway that can be a router, server, authentication server, or specialized hardware device. It monitors packets coming into and out of the network it is protecting. It filters out the packets that do not meet the requirements of the security policy; it can discard these packets, repackage them, or redirect them depending on the firewall configuration and security policy. Packets are filtered based on their source and destination addresses and ports, by service, packet type, protocol type, header information, sequence bits, and much more. Each vendor offers different functionality and different parameters that can be used for identification and access restriction.
Examples of Firewall Implementations
Firewall implementations can take advantage of the functionality available in a variety of firewall designs to provide a robust layered approach in protecting an organization’s information assets. Commonly used implementations available today include:
• Screened-host firewall: Utilizing a packet filtering router and a bastion host, this approach implements basic network layer security (packet filtering) and application server security (proxy services). An intruder in this configuration has to penetrate two separate systems before the security of the private network can be compromised. This firewall system is configured with the bastion host connected to the private network with a packet filtering router between the internet and the bastion host. Router filtering rules allow inbound traffic to access only the bastion host, which blocks access to internal systems. Since the inside hosts reside on the same network as the bastion host, the security policy of the organization determines whether inside systems are permitted direct access to the internet, or whether they are required to use the proxy services on the bastion host.
• Dual-homed firewall: A firewall system that has two or more network interfaces, each of which is connected to a different network. In a firewall configuration, a dual-homed firewall usually acts to block or filter some or all of the traffic trying to pass between the networks. A dual-homed firewall system is a more restrictive form of a screened-host firewall system, in which a dual-homed bastion host is configured with one interface established for information servers and another for private network host computers.
• DMZ or screened-subnet firewall: Utilizing two packet filtering routers and a bastion host, this approach creates the most secure firewall system of the three, since it supports both network- and application-level security while defining a separate DMZ network. The DMZ functions as a small, isolated network for an organization’s public servers, bastion host, information servers, and modem pools. Typically, DMZs are configured to limit access from both the internet and the organization’s private network. The outside router restricts incoming traffic to the DMZ network, which protects the organization against certain attacks by limiting the services available for use. Consequently, external systems can access only the bastion host (and its proxy service capabilities to internal systems) and possibly information servers in the DMZ. The inside router provides a second line of defense, managing DMZ access to the private network and accepting only traffic originating from the bastion host. For outbound traffic, the inside router manages private network access to the DMZ network; it permits internal systems to access only the bastion host and information servers in the DMZ. The filtering rules on the outside router enforce the use of proxy services by accepting only outbound traffic originating from the bastion host. The key benefits of this system are that an intruder must penetrate three separate devices, private network addresses are not disclosed to the internet, and internal systems do not have direct access to the internet.
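The division of labor between the two routers can be expressed as two rule bases. The sketch below models traffic by zone; the zone names and the first-match rules are illustrative assumptions distilled from the description above.

# Minimal sketch of the outside and inside router rule bases in a
# screened-subnet (DMZ) design. Rules are (action, direction, source
# zone, destination zone) and are evaluated first-match.
OUTSIDE_ROUTER_RULES = [
    ("permit", "inbound",  "internet", "dmz"),      # internet may reach the DMZ only
    ("deny",   "inbound",  "internet", "private"),  # never the private network
    ("permit", "outbound", "bastion",  "internet"), # only proxied traffic leaves
    ("deny",   "outbound", "private",  "internet"),
]

INSIDE_ROUTER_RULES = [
    ("permit", "inbound",  "bastion", "private"),   # only bastion-originated traffic
    ("deny",   "inbound",  "dmz",     "private"),
    ("permit", "outbound", "private", "dmz"),       # internal hosts reach the DMZ only
    ("deny",   "outbound", "private", "internet"),
]

def allowed(rules, direction, src_zone, dst_zone) -> bool:
    """First-match rule evaluation with a deny-all default."""
    for action, rule_dir, rule_src, rule_dst in rules:
        if (rule_dir, rule_src, rule_dst) == (direction, src_zone, dst_zone):
            return action == "permit"
    return False

# An intruder must get past both rule bases to reach the private network.
print(allowed(OUTSIDE_ROUTER_RULES, "inbound", "internet", "dmz"))      # True
print(allowed(INSIDE_ROUTER_RULES,  "inbound", "dmz",      "private"))  # False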
Note: In UNIX systems, the product TCP Wrappers can be used as a personal firewall or host-based IDS.
The assessor should review and test the following areas when conducting a comprehensive evaluation of the firewall and its technology:
• Scanning the firewall from the outside and the inside (a minimal port-scan sketch follows this list)
• Scanning with the firewall down to see the level of exposure if it goes offline
• Directional control:
- Incoming packets with an internal source address
- Outgoing packets with an external source address
- FTP allowed out but not in
• Making sure that access to the firewall is authorized:
- How are employees and nonemployees given access?
- Obtaining a list of users on the firewall
- Cross-checking with staff lists/organization chart
• Remote administration:
- One-time passwords
- Other secure methods
- Encrypted link
• How is access changed or revoked?
• How is access reviewed:
- Mechanics of authentication
- Frequency of review
- Password reset/changing passwords
- Root password control
• Need for the firewall to enforce security policy (encryption, viruses, URL blocks, proxy/packet filter types of traffic):
- Has the rule set been obtained?
- How are rule sets stored and maintained to ensure that they have not been tampered with?
- Are checksums regularly verified?
• Determining whether the effectiveness of the firewall has been tested
• Reviewing the processes running on the firewall to determine whether they are appropriate
• Does the firewall provide adequate notice when an exploit is attempted?
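As a starting point for the first item on this checklist, here is a minimal TCP connect-scan sketch. The target address is a placeholder; scan only systems you are authorized to test, and compare results taken from the internet side against those taken from the internal side.

import socket

# Minimal TCP connect scan of common service ports (illustrative list).
COMMON_PORTS = [21, 22, 23, 25, 53, 80, 110, 143, 443, 3389]

def scan(host: str, ports=COMMON_PORTS, timeout=1.0):
    """Return the ports on which a TCP connection could be completed."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Ports visible externally that do not appear in the rule base indicate a gap.
print(scan("192.0.2.1"))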
Audit and Accountability
Most, if not all, of the guidance for the audit and accountability family of controls can be found in NIST SP 800-92, Guide to Computer Security Log Management.
Log Management
A log is a record of the events occurring within an organization’s systems and networks. Logs are composed of log entries; each entry contains information related to a specific event that has occurred within a system or network. Many logs within an organization contain records related to computer security. These computer security logs are generated by many sources, including security software, such as antivirus software, firewalls, and intrusion detection and prevention systems; operating systems on servers, workstations, and networking equipment; and applications. Logs are emitted by network devices, operating systems, applications, and all manner of intelligent or programmable devices. A stream of messages in time sequence often comprises the entries in a log. Logs may be directed to files and stored on disk, or directed as a network stream to a log collector. Log messages must usually be interpreted with respect to the internal state of their source (e.g., an application) and announce security-relevant or operations-relevant events (e.g., a user log-on or a system error).
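As a small illustration of interpreting a log entry with respect to its source, the sketch below parses a syslog-style authentication message into structured fields. The line format is a common convention rather than a single fixed standard, and the sample line is invented.

import re

# Minimal sketch: extract timestamp, host, process, PID, and message
# from a syslog-style line emitted by an authentication daemon.
LINE = "Mar  3 10:15:42 host1 sshd[2412]: Failed password for admin from 203.0.113.9 port 52100 ssh2"

PATTERN = re.compile(
    r"^(?P<timestamp>\w{3}\s+\d+ [\d:]{8}) "
    r"(?P<host>\S+) "
    r"(?P<process>\w+)\[(?P<pid>\d+)\]: "
    r"(?P<message>.*)$"
)

match = PATTERN.match(LINE)
if match:
    entry = match.groupdict()
    print(entry["host"], entry["process"], entry["message"])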
A fundamental problem with log management that occurs in many organizations is effectively balancing a limited quantity of log management resources with a continuous supply of log data. Log generation and storage can be complicated by several factors, including a high number of log sources; inconsistent log content, formats, and timestamps among sources; and increasingly large volumes of log data. Log management also involves protecting the confidentiality, integrity, and availability of logs. Another problem with log management is ensuring that security, system, and network administrators regularly perform effective analysis of log data. SP 800-92 provides guidance for meeting these log management challenges.
Originally, logs were used primarily for troubleshooting problems, but logs now serve many functions within most organizations, such as optimizing system and network performance, recording the actions of users, and providing data useful for investigating malicious activity. Logs have evolved to contain information related to many different types of events occurring within networks and systems. Common examples of computer security logs are audit logs that track user authentication attempts and security device logs that record possible attacks.
Special Publication 800-92 defines criteria for logs, log management, and log maintenance in the following control areas:
• Auditable events
• Content of audit records
• Audit storage capacity
• Response to audit processing failures
• Audit review, analysis, and reporting
• Audit reduction and report generation
• Timestamps
• Audit record retention
• Audit generation
The SP defines the four parts of log management as follows:
1. Log management:
a. Log sources.
b. Analyze log data.
c. Respond to identified events.
d. Manage long-term log data storage.
2. Log sources:
a. Log generation.
b. Log storage and disposal.
c. Log security.
3. Analyzing log data:
a. Gain an understanding of logs.
b. Prioritize log entries.
c. Compare system-level and infrastructure-level analysis.
d. Respond to identified events.
4. Manage long-term log data storage:
a. Choose log format for data to be archived.
b. Archive the log data.
c. Verify integrity of transferred logs (a hashing sketch follows this list).
d. Store media securely.
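A minimal sketch of step 4c, verifying the integrity of transferred logs, is to hash the file before archiving and recompute the digest on the stored copy. The file paths here are placeholders.

import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

before = sha256_of("auth.log")         # taken before archiving (placeholder path)
# ... the log file is compressed, transferred, and stored ...
after = sha256_of("archive/auth.log")  # recomputed on the stored copy
if before != after:
    raise RuntimeError("archived log does not match the original")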
To address AU-10, nonrepudiation, the information system protects against an individual falsely denying having performed a particular action. Nonrepudiation protects individuals against later claims by an author of not having authored a particular document, a sender of not having transmitted a message, a receiver of not having received a message, or a signatory of not having signed a document. Nonrepudiation services are obtained by employing various techniques or mechanisms (e.g., digital signatures, digital message receipts).
The Digital Signature Standard defines methods for digital signature generation that can be used for the protection of binary data (commonly called a message), and for the verification and validation of those digital signatures.
Three techniques are approved for this process (a signing and verification sketch follows the list):
1. The Digital Signature Algorithm (DSA) is specified in the standard itself. The specification includes criteria for the generation of domain parameters, for the generation of public and private key pairs, and for the generation and verification of digital signatures.
2. The RSA digital signature algorithm is specified in American National Standard (ANS) X9.31 and Public Key Cryptography Standard (PKCS) #1. FIPS-186-3 approves the use of implementations of either or both of these standards, but specifies additional requirements.
3. The Elliptic Curve Digital Signature Algorithm (ECDSA) is specified in ANS X9.62. FIPS-186-3 approves the use of ECDSA, but specifies additional requirements.
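As a hedged illustration of signature generation and verification, the sketch below uses ECDSA (technique 3) via the third-party Python cryptography package; the curve choice and the message are illustrative.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Generate a key pair, sign a message, then verify the signature.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

message = b"transfer 100 units to account 42"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    # A valid signature supports nonrepudiation: the holder of the
    # private key cannot plausibly deny having signed this message.
    print("signature valid")
except InvalidSignature:
    print("signature invalid")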
When assessing logs, look at the following areas:
• Connections should be logged and monitored.
• What events are logged?
- Inbound services
- Outbound services
- Access attempts that violate policy (see the alerting sketch after this list)
• How frequently are logs monitored?
- Differentiate between automated and manual procedures.
• Alarming:
- Security breach response
- Are the responsible parties experienced?
• Monitoring of privileged accounts
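The alerting sketch referenced above: count policy-violating (denied) access attempts per source and raise an alert past a threshold. The event format and the threshold are illustrative assumptions.

from collections import Counter

# Sample normalized events; in practice these come from parsed logs.
events = [
    {"src": "203.0.113.9", "action": "deny"},
    {"src": "203.0.113.9", "action": "deny"},
    {"src": "203.0.113.9", "action": "deny"},
    {"src": "10.0.0.5",    "action": "permit"},
]

THRESHOLD = 3  # illustrative; tune to the environment
denials = Counter(e["src"] for e in events if e["action"] == "deny")
for src, count in denials.items():
    if count >= THRESHOLD:
        print(f"ALERT: {src} generated {count} denied access attempts")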
SIEM
Security information and event management (SIEM) is a term for software products and services combining security information management (SIM) and security event management (SEM). The segment of security management that deals with real-time monitoring, correlation of events, notifications, and console views is commonly known as SEM. The second area, which provides long-term storage, analysis, and reporting of log data, is known as SIM.
SIEM technology provides real-time analysis of security alerts generated by network hardware and applications. SIEM is sold as software, appliances, or managed services, and is also used to log security data and generate reports for compliance purposes. The term, coined by Mark Nicolett and Amrit Williams of Gartner in 2005, describes the product capabilities of gathering, analyzing, and presenting information from network and security devices; identity and access management applications; vulnerability management and policy compliance tools; operating system, database, and application logs; and external threat data. A key focus is to monitor and help manage user and service privileges, directory services, and other system configuration changes, as well as to provide log auditing and review and incident response (IR).
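As a rough sketch of the correlation a SIEM performs, the example below normalizes events from two sources, groups them by source address, and raises an alert when related events fall within a short time window. The log sources, field names, and correlation rule are illustrative assumptions.

from collections import defaultdict

# Sample normalized events from two sources (firewall and auth logs).
events = [
    {"t": 100, "source": "firewall", "src_ip": "203.0.113.9", "type": "port_scan"},
    {"t": 130, "source": "auth",     "src_ip": "203.0.113.9", "type": "failed_login"},
    {"t": 150, "source": "auth",     "src_ip": "203.0.113.9", "type": "failed_login"},
]

WINDOW = 300  # correlation window in seconds (illustrative)

by_ip = defaultdict(list)
for e in sorted(events, key=lambda e: e["t"]):
    by_ip[e["src_ip"]].append(e)

for ip, evts in by_ip.items():
    types = {e["type"] for e in evts}
    span = evts[-1]["t"] - evts[0]["t"]
    # Correlation rule: a scan followed by failed log-ons from the same
    # address within the window suggests an active attack.
    if {"port_scan", "failed_login"} <= types and span <= WINDOW:
        print(f"ALERT: correlated scan and failed log-ons from {ip}")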