Security+ technicians need to fully understand the fundamentals of system hardening (also described as “locking down” the system). This knowledge is needed not only to pass the Security+ exam, but also to work in the field of information security. You will learn that the skills needed to detect breaches and exploits are an essential part of the security technician’s repertoire.
The Security+ exam covers the general fundamentals of hardening.
Operating system (OS) hardening covers important concepts such as locking down file systems, controlling software installation and use, and methods for configuring file systems properly to limit access and reduce the possibility of a breach. Some other steps to take to harden the OS include installing only protocols that are used, enabling only services that are needed, installing only the software that is needed and approved, and granting the minimum rights to users as required.
Additional steps could be to limit the users’ ability to perform tasks they should not perform, such as installing unauthorized software or changing Windows settings. In some cases, it may also be necessary to encrypt files on disk to further restrict access to sensitive data.
Many OS default configurations do not provide an optimum level of security, because priority is given to ease of access rather than to security. Even so-called secure OSes may have been configured incorrectly to allow full access. Thus, it is important to modify OS settings to harden the system for access control. Other topics covered in the area of OS hardening are how to receive, test, and apply service packs and hotfixes to secure potential vulnerabilities in systems.
Depending on the environment, it may be necessary to disable external devices, such as USB interfaces and compact disc read-only memory (CD-ROM) drives, to prevent users from installing unauthorized software.
OS hardening involves making the OS less vulnerable to threats. In this chapter, we’ll cover many of the ways you can help to harden the OS.
You should follow a documented, step-by-step process to harden your OS. It is recommended you use standard approaches to securing your OSes across the board.
When considering ways to provide file and directory security, you must first look at how file security can be structured.
Start with everything accessible and lock down the things you want to restrict.
Start with everything locked down and open up the things you want to allow access to.
Of these two potential methods, the second, which is also referred to as the rule of least privilege, is the preferred method. Least privilege means starting with the most secure environment and then loosening the controls as needed. This method is as restrictive as possible with the authorizations provided to users, processes, or applications that access these resources. Accessibility and security are usually at opposite ends of the spectrum; this means that the more convenient it is for users to access data, the less secure the network. While looking at hardening security through permissions (for example, authentication, authorization, and accounting [AAA]), administrators should also consider updating the methods used to access the resources.
Hardening starts with the process of evaluating risk. Risk assessment is one of the key steps in the hardening process, because the question often arises: what is secure enough? As an example, your child’s piggy bank may be protected by no more than a small lock hidden on the bottom. Although that’s suitable for your child’s change, you have probably noticed that your bank has many more controls protecting you and its other customers’ assets. Risk assessment works the same way in that the value of the asset drives the process of access control and what type of authorization will be needed to access the protected resource.
It is important to look at the use and appropriateness of mandatory access control (MAC), discretionary access control (DAC), and role-based access control (RBAC) in controlling access appropriately and to coordinate this effort with the establishment of file system controls.
Let’s discuss each type of access control.
In discussing access control, MAC, DAC, and RBAC are terms that take on specific meanings.
MAC In this context, MAC does not refer to a network interface card (NIC) hardware address, but to mandatory access control.
DAC Is often implemented through the use of discretionary access control lists (DACLs).
RBAC Should not be confused with rule-based access control; it is an access control method based on the specific roles played by individuals or systems.
All three methods have varying uses when trying to define or limit access to resources, devices, or networks. The following sections explore and illustrate each of the three access control methods.
MAC is generally built into and implemented within the OS being used, although it may also be designed into applications. MAC components are present in UNIX, Linux, Microsoft’s Windows OSes, OpenBSD, and others. Mandatory controls are usually hard-coded and set on each object or resource individually. MAC can be applied to any object within an OS and allows a high level of granularity and function in the granting or denying of access to the objects. MAC can be applied to each object and can control access by processes, applications, and users to the object. It cannot be modified by the owner or creator of the object.
The following example illustrates the level of control possible. When using MAC, if a file has a certain level of sensitivity (or context) set, the system will not allow certain users, programs, or administrators to perform operations on that file. Think of setting the file’s sensitivity higher than that of an e-mail program. You can read, write, and copy the file as desired, but without an access level of root, superuser, or administrator, you cannot e-mail the file to another system, because the e-mail program lacks clearance to manipulate the file’s level of access control. This level of control is useful in the prevention of Trojan horse attacks, since you can set the access levels appropriately for each system process, thus severely limiting the Trojan horse’s ability to operate. The Trojan horse would need intimate knowledge of each of the access levels defined on the system to compromise it or remain viable within it.
To review briefly, MAC is
Nondiscretionary The control settings are hard-coded and not modifiable by the user or owner.
Multilevel Control of access privileges is definable at multiple access levels.
Label-based May be used to control access to objects in a database.
Universally Applied Applied to all objects.
DAC is the setting of access permissions on an object that a user or application has created or has control of. This includes setting permissions on files, folders, and shared resources. The “owner” of the object in most OS environments applies DACs. This ownership may be transferred or controlled by root or other superuser/administrator accounts. It is important to understand that DAC is assigned or controlled by the owner rather than being hard-coded into the system. DAC does not allow the fine level of control available with MAC, but requires less coding and administration of individual files and resources.
To summarize, DAC is
Discretionary Not hard-coded and not automatically applied by the OS/network operating system (NOS) or application.
Controllable Controlled by the owner of the object (file, folder, or other types).
Transferable The owner may give control away.
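To make the DAC concept concrete, the following sketch shows how the owner of a file might grant another user read-only access from the Windows command line. The file path and account name are hypothetical, and the icacls utility used here is the one included with Windows Vista/Server 2008 and later.
# Hypothetical example: the file's owner grants a specific user read-only
# access using the built-in icacls utility from PowerShell.
icacls "C:\Reports\budget.xlsx" /grant "CORP\jsmith:(R)"
# Display the resulting DACL to confirm the new access control entry.
icacls "C:\Reports\budget.xlsx"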
RBAC can be described in different ways. The most familiar process is a comparison or illustration using the “groups” concept. In Windows, UNIX/Linux, and NetWare systems, the concept of groups is used to simplify the administration of access control permissions and settings. When creating the appropriate groupings, you have the ability to centralize the function of setting the access levels for various resources within the system. We have been taught that this is the way to simplify the general administration of resources within networks and local machines.
However, although the concept of RBAC is similar, it is not the same structure. With the use of groups, a general level of access based on a user or machine object grouping is created for the convenience of the administrator. The group model, however, does not allow access to be defined as precisely as it should be: the entire membership of the group gets the same access, which can lead to unnecessary access being granted to some members of the group.
RBAC allows for a more granular and defined access level, without the generality that exists within the group environment. A role definition is developed and defined for each job in an organization, and access controls are based on that role. This allows for centralization of the access control function, with individuals or processes being classified into a role that is then allowed access to the network and to defined resources. This type of access control requires more development and cost but is superior to MAC in that it is flexible and able to be redefined more easily. RBAC can also be used to grant or deny access to a particular router or to File Transfer Protocol (FTP) or Telnet.
RBAC is easier to understand using an example. Assume that there is a user at a company whose role within the company requires access to specific shared resources on the network. Using groups, the user would be added to an existing group which has access to the resource, and access would be granted. RBAC, on the other hand, would have you define the role of the user and then allow that specific role access to whatever resources are required. If users get a promotion and change roles, changing their security permissions is as simple as assigning them to their new roles. If they leave the company and are replaced, assigning the appropriate role to the new employees grants them access to exactly what they need to do their jobs, without having to work out all of the individual group memberships that would otherwise be necessary.
In summary, RBAC is:
Job Based The role is based on the functions performed by the user.
Highly Configurable Roles can be created and assigned as needed or as job functions change.
More Flexible Than MAC MAC is based on very specific information, whereas RBAC is based on a user’s role in the company, which can vary greatly.
More Precise Than Groups RBAC allows the application of the principle of least privilege, granting the precise level of access required to perform a function.
An example of this would be using groups in Windows Active Directory to grant access to users, as opposed to granting access directly. For example, a file share used by accounting could be granted read-write access to the accounting group. The accounting group would contain all the accountants within an organization.
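As a hedged illustration of the group approach described here, the following PowerShell sketch creates an accounting group and adds a user to it; the file share’s ACL would then reference the group rather than individual accounts. It assumes the ActiveDirectory module available on newer Windows Server versions, and the group, user, and OU names are placeholders.
# Create an Accounting group and add a user to it (names are placeholders).
Import-Module ActiveDirectory
New-ADGroup -Name "Accounting" -GroupScope Global -Path "OU=Groups,DC=corp,DC=example,DC=com"
Add-ADGroupMember -Identity "Accounting" -Members "jsmith"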
EXAM WARNING Be careful! RBAC has two different definitions in the Security+ exam. The first is defined as Role-Based Access Control. A second definition of RBAC that applies to control of (and access to) network devices is defined as Rule-Based Access Control. This consists of creating access control lists (ACL) for those devices and configuring the rules for access to them.
The challenge is to secure the OS and provide what’s needed to allow the system to perform as desired without allowing anything that is unnecessary. For example, you should turn off any unused services or features that could be exploited.
Surface area refers to the area (services, ports, and so forth) available on a computer for attack. Reducing the surface area, which means reducing the area that’s available for a hacker to attack, is a good part of securing an OS. While this extends beyond the OS, we will focus on reducing surface area in the OS in this chapter.
More services, file shares, features, or programs running on a computer provide more opportunity for a hacker to attack. For example, if a file server has Internet Information Services (IIS) running on it, this would provide more avenues or surface exposed for attack. Exposing the least amount of surface area for attack will greatly enhance the security of your computer.
You can use tools such as port scanners to analyze what’s open and exposed from outside the computer. “Penetration testing” to see if you can get past the computer’s defenses is one method of testing.
The following sections discuss and explore the methods used to harden defenses and reduce vulnerabilities that exist in systems. To get things started, let’s review the general steps to follow for securing an OS:
1. Disable all unnecessary services
2. Restrict permissions on files and access to the Registry
3. Apply the latest patches and fixes
4. Remove unnecessary programs
Windows-based computers also have the capability to enable and disable services. Figure 2.1 shows the Services dialog on a workstation.
Services can be disabled through the properties for the service (see Figure 2.2).
Best practices would be to disable any services on a server or workstation that are not required.
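As a minimal sketch of that practice, the following PowerShell commands list the services currently running and then stop and disable one of them. The Telnet service (TlntSvr) is used purely as an illustration; verify that any service is genuinely unneeded in your environment before disabling it.
# List running services to review what is enabled on the machine.
Get-Service | Where-Object { $_.Status -eq 'Running' } | Sort-Object DisplayName
# Illustration only: stop and disable the Telnet service if it is not needed.
Stop-Service -Name "TlntSvr" -ErrorAction SilentlyContinue
Set-Service -Name "TlntSvr" -StartupType Disabled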
NOTE As you begin to evaluate the need to remove protocols and services, make sure that the items you are removing are within your area of control. Consult with your system administrator on the appropriate action to take and make sure you have prepared a plan to back out and recover if you make a mistake.
While considering removal of nonessential services, it is important to look at every area of the computer’s operation to determine what is actually occurring and running on the system. The appropriate tools are needed to do this, and the Internet contains a wealth of resources for tools and information to analyze and inspect systems.
FIGURE 2.1
Workstation Services
FIGURE 2.2
Workstation Services Properties
Controlling access is an important element in maintaining system security. The most secure environments follow the “least privileged” principle, as mentioned earlier. This principle states that users are granted the least amount of access possible that still enables them to complete their required work tasks. Expansions to that access are carefully considered before being implemented. Law enforcement officers and those in government agencies are familiar with this principle regarding noncomputerized information, where the concept is usually termed need to know. Generally, following this principle means that network administrators receive more complaints from users unable to access resources. However, receiving complaints from authorized users is better than suffering access violations that damage an organization’s profitability or ability to conduct business.
In practice, maintaining the least privileged principle directly increases the administrative, management, and auditing overhead required to implement and maintain the environment. One alternative, the use of user groups, is a great time saver. Instead of assigning individual access controls, groups of similar users are assigned the same access. In cases where all users in a group have exactly the same access needs, this method works. However, in many cases, individual users need more or less access than other group members. When security is important, the extra effort to fine-tune individual user access provides greater control over what each user can and cannot access.
Keeping individual user access as specific as possible limits some threats, such as the possibility that a single compromised user account could grant a hacker unrestricted access. It does not, however, prevent the compromise of more privileged accounts, such as those of administrators or specific service operators. It does force intruders to focus their efforts on the privileged accounts, where stronger controls and more diligent auditing should occur.
Head of the Class
How Should We Work with File System Access?
Despite the emphasis on group-based access permissions, a much higher level of security can be attained in all operating platforms by individually assigning access permissions. Administratively, however, it is difficult to justify the expense and time involved in tracking, creating, and verifying individual access permissions for thousands of users trying to access thousands of individual resources. RBAC is a method that can be used to accomplish the goal of achieving the status of least privileged access. It requires more design and effort to start the implementation, but develops a much higher level of control than does the use of groups.
Good practice indicates that the default permissions allowed in most OS environments are designed for convenience, not security. For this reason, it is important to be diligent in removing and restructuring these permissions.
The Encrypting File System (EFS) can be used on Windows machines to provide an additional layer of protection when it comes to securing the OS.
EFS is part of the Windows OS. It encrypts files on disk using symmetric and asymmetric keys, and the encryption and decryption occur at the OS level. If someone tries to access a file without the appropriate key, they get an “access denied” message, which helps ensure unauthorized users don’t get access to encrypted files. The keys are generally tied to an account—once they are imported to the account, access is relatively transparent to the user. This is why it’s important to protect the keys: if someone gains access to the keys, they would be able to access the files.
Using EFS is particularly useful for laptop users. With frequent reports of personal data such as credit card information and Social Security numbers being lost by large companies, EFS helps protect that data in the event that the hardware is lost or stolen. While the thief may have the files, he won’t be able to access them, since he does not have the key.
FIGURE 2.3
Encrypting a File
It’s also important to protect the keys from loss, as if the keys are lost, it’s nearly impossible to recover the data. There is a recovery agent that should have the keys imported to it.
By encrypting sensitive files, and preventing access to them by unauthorized users, this will contribute to the overall security of the machine being hardened.
Figure 2.3 shows how to encrypt files.
Clicking the details will show the information about encryption after the file has been encrypted (see Figure 2.4).
The main drawback to using EFS is performance. Even though encryption and decryption occur at the OS level, there is still some overhead involved in the process which will add to the CPU load on the machine performing the encryption and decryption.
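The same encryption can also be applied from the command line with the built-in cipher.exe utility rather than the Explorer dialog shown in Figure 2.3; the folder path below is a placeholder.
# Encrypt a folder and its contents, then display the encryption state.
cipher /e /s:"C:\Users\jsmith\Documents\Sensitive"
cipher "C:\Users\jsmith\Documents\Sensitive"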
EXAM WARNING The Security+ exam requires good knowledge of the hardening processes. It includes questions relating to hardening that you may not have thought about. For example, hardening can include concepts present in other security areas, such as locking doors, restricting physical access, and protecting the system from natural or unnatural disasters.
Updates for OSes are provided by the manufacturer of the specific component. Updates contain improvements to the OS, and new or improved components that the manufacturer believes will make the product more stable, usable, secure, or otherwise attractive to end users. For example, Microsoft updates are often specifically labeled Security Updates. If you have never taken a look at these, you can find the latest updates at www.microsoft.com/protect/default.mspx. These updates address security concerns recognized by Microsoft and should be evaluated and installed as needed. Updates should be thoroughly tested in nonproduction environments before implementation. It is possible that a “new and improved” function (especially one that enhances user convenience) may actually allow more potential for a security breach than the original component. Complete testing is a must.
FIGURE 2.4
Encryption Details
It’s a good idea to keep up with the hotfixes and patches for your respective OSes. Most vendors will provide regular patch releases and periodic hotfixes. Many of the hotfixes and patches will address security-related features.
Microsoft has a mailing list you can subscribe to for information about security updates.
To receive automatic notifications whenever Microsoft Security Bulletins and Microsoft Security Advisories are issued or revised, subscribe to Microsoft Technical Security Notifications on www.microsoft.com/technet/security/bulletin/notify.mspx.
Another good location would be to subscribe to the Computer Emergency Response Team (CERT) Web site, which may be found at www.cert.org. CERT is located at Carnegie Mellon University’s Software Engineering Institute.
One other really good resource is the SecurityFocus Web site at www.securityfocus.com. They have OS specific mailing lists you can join to receive regular updates on available patches as well as security flaws to beware of and discussions on current security topics and best practices.
Microsoft offers service packs and maintenance updates. Service packs are regular releases containing bug fixes and sometimes minor enhancements. They are usually a good idea to install; however, they should still be tested first.
Hotfixes are packages that can contain one or more patches for software. They generally fix a specific issue or group of issues with a particular piece of software or OS.
Hotfixes are generally created by the vendor when a number of clients indicate that there is a compatibility or functional problem with a manufacturer’s products used on particular hardware platforms. These are mainly fixes for known or reported problems that may be limited in scope. As with the implementation of updates, these should be thoroughly tested in a nonproduction environment for compatibility and functionality before being used in a production environment. Because these are generally limited in function, it is not a good practice to install them on every machine. Rather, they should only be installed as needed to correct a specific problem.
Service packs are accumulated sets of updates or hotfixes. Service packs are usually tested over a wide range of hardware and applications in an attempt to assure compatibility with existing patches and updates and to initiate much broader coverage than just hotfixes. The recommendations discussed previously also apply to service pack installation. Service packs must be fully tested and verified before being installed on live systems. Although most vendors of OS software attempt to test all the components of a service pack before distribution, it is impossible for them to test every possible system configuration that may be encountered in the field, so it is up to administrators to test it in their own environments. The purpose is to slow or deter compromise, provide security for resources, and assure availability.
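Before testing and applying new fixes, it helps to know what is already installed. A quick way to do that on a Windows machine is the PowerShell Get-HotFix cmdlet, a minimal sketch of which follows.
# List installed updates and hotfixes, most recent last.
Get-HotFix | Sort-Object InstalledOn | Format-Table HotFixID, Description, InstalledOn -AutoSize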
Damage & Defense
What Should I Do to Try to Minimize Problems with Updates, Service Packs, Patches, and Hotfixes?
1. Read the instructions. Most repair procedures include information about their applicability to systems, system requirements, removal of previous repairs, or other conditions.
2. Install and test in a nonproduction environment, not on live machines.
3. If offered, use the option to back up the existing components for repair if the update fails.
4. Verify that the condition that is supposed to be updated or repaired is actually repaired.
5. Document the repair.
Patches for OSes are available from the vendor supplying the product. These are available by way of the vendor’s Web site or from mirror sites around the world. They are often security related and may be grouped together into a cumulative patch to repair many problems at once. Except for Microsoft, most vendors issue patches at unpredictable intervals; it is therefore important to stay on top of their availability and install them after they have been tested and evaluated in a nonproduction environment. The exception to this is when preparing a new, clean install. In this case, it is wise to download and install all known patches prior to introducing the machines to the network.
Windows-based platforms allow the configuration of OS and network services from provided administrative tools. This can include a service applet in a control panel in Windows NT Server or a Microsoft Management Console (MMC) tool in Windows 2000 and above (XP/2003/Vista/2008). It may also be possible to check or modify configurations at the network adapter properties and configuration pages. In either case, it is important to restrict access and thus limit vulnerability due to unused or unnecessary services or protocols.
Scripts are a versatile way to manage patches. They can be used to perform custom installations, automatic installations, and pretty much anything a programmer is clever enough to write a script for.
Windows provides Windows Scripting Host, which enables you to create scripts or use predefined scripts to perform almost any task. You can add users to groups, set features, and so forth.
PowerShell is an extensible command-line shell with its own full-featured scripting language. It’s very powerful and integrates with the .NET Framework, allowing you to write more complex code.
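As a small, hedged example of the kind of task scripting lends itself to, the following PowerShell sketch checks a list of servers for a service that should not be running; the server names and the service checked are placeholders.
# Placeholder server list and service name; adjust for your environment.
$servers = @("FS01", "WEB01", "SQL01")
foreach ($server in $servers) {
    $svc = Get-Service -ComputerName $server -Name "Spooler" -ErrorAction SilentlyContinue
    if ($svc -and $svc.Status -eq 'Running') {
        Write-Output "$server : Print Spooler is running and may need review."
    }
}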
There are quite a few systems out there for managing patches, including homemade systems, Microsoft’s SMS/System Center, Microsoft’s Software Update Services, and so forth.
Altiris
Altiris is now part of Symantec, the company that produces Norton Antivirus and Norton utilities. Altiris allows for the management of a wide spectrum of clients from Windows to UNIX to Linux and MacOS machines—all from a single management platform. Altiris has the ability to discover, catalog, and inventory software on Windows, UNIX, Linux, and Mac machines, which can help determine the patch level of the computers in your organization.
SMS/System Center
Microsoft SMS 2003 and System Center 2007 products are designed to aid in monitoring system health and also can be used to distribute software and settings out to different groups of computers in your organization. SMS 2003 and System Center rely heavily on Active Directory and integrate with Windows Group Policy.
Windows Server Update Services
Windows Server Update Services (WSUS) is a freely available product that allows enterprise users to manage Microsoft updates on their computers running the Windows OS. WSUS in its simplest form gets the latest updates from Microsoft and allows administrators to determine whether to approve or decline individual updates, as well as to distribute them across their infrastructure. By distributing the updates from a local server, an administrator can not only control which updates are applied, but also help control the amount of Internet bandwidth needed for those updates as well as the time of day the updates are installed.
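Clients are normally pointed at a WSUS server through Group Policy, but as a rough sketch of what that policy sets, the following PowerShell commands write the standard Windows Update policy registry values on a client. The server URL is a placeholder, and the exact values should be confirmed against your WSUS documentation.
# Assumed WSUS server URL; normally configured via Group Policy instead.
$wu = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"
if (-not (Test-Path "$wu\AU")) { New-Item -Path "$wu\AU" -Force | Out-Null }
New-ItemProperty -Path $wu -Name WUServer -Value "http://wsus.corp.example.com" -PropertyType String -Force | Out-Null
New-ItemProperty -Path $wu -Name WUStatusServer -Value "http://wsus.corp.example.com" -PropertyType String -Force | Out-Null
New-ItemProperty -Path "$wu\AU" -Name UseWUServer -Value 1 -PropertyType DWord -Force | Out-Null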
Group Policy in Windows allows you to set security settings as well as install specific software (such as virus scanning) on a group of computers.
To understand Group Policy, we need to step back and take a look at Active Directory.
You can use Group Policy to manage all aspects of the client desktop environment for Windows clients (Windows servers and workstations), including Registry settings, software installation, scripts, security settings, and so forth. The possibilities of what can be done with Group Policy are almost limitless. With VBScript, JScript, or PowerShell you can write entire applications to execute via Group Policy. You can install software automatically across the network and apply patches to applications.
When deciding on the Group Policies you plan to enforce on your network, keep in mind that the more policies applied, the more network traffic, and hence the longer it could take for users to log onto the network.
Group Policies are stored in Active Directory as Group Policy Objects (GPOs). These objects are the instructions for the management task to perform.
Group Policy is implemented in four ways:
Local Group Policy Local Group Policy involves setting Group Policy on the local machine itself. This is not very useful for managing computers on a network.
Site Group Policy Site Group Policy is when the GPO is linked to the site. Site Group Policies can generate unwanted network traffic, so use these only when absolutely necessary.
Domain Group Policy Domain Group Policy is when the GPO is linked to the domain. This will apply the GPO to all computers and users within a domain. This is especially useful for enforcing company-wide settings. This is one of the two most commonly used applications of Group Policy.
Organizational Unit Group Policy Organizational Unit Group Policy is when the GPO is linked to the organizational unit (OU). OU Group Policy is especially useful for applying a GPO to a logical grouping (OU) of users or computers. This is particularly useful when placing computers for specific tasks in an OU container. For example, you can place all the Web servers in an OU to apply specific settings to those Web servers via Group Policy.
Group Policies provide administrators with the ability to control and configure users’ settings, manage users’ data, and perform remote software installation and maintenance. Group Policies require Active Directory and it is important to remember that the number and complexity of GPOs can adversely affect network performance and login times.
FIGURE 2.5
Three OUs
Ideally, you should segregate computers that will have the same settings applied into defined OUs. This can be done on a Windows machine using the Active Directory Users and Computers MMC snap-in. Figure 2.5 shows three OUs for computers. Once the OUs have been created, you can use the Group Policy editor and Group Policy Management Console to create a Group Policy. In Figure 2.6, we show the Group Policy Management Console. Most of the options in this console are self-explanatory, and help is built in.
This can be used to explore and create new GPOs.
For this exercise, we will create a policy that will audit login events for our Structured Query Language (SQL) Servers.
1. Right-click the policy and select Edit. This starts the Group Policy editor, allowing us to define the policy. There are quite a few options available for defining restrictions and hardening the computer. Take care when defining options so that the selections won’t inhibit users from getting their jobs done.
FIGURE 2.6
The Group Policy Management Console
2. Expand the nodes in the console tree and select Audit Policy.
3. Pick audit account login events on the right panel (see Figure 2.7).
4. Once you’ve made a selection from the right panel, a dialog box opens. In this example, we will enable auditing and click OK. If you were not sure what a setting did, you could view information about the setting on the “Explain This Setting” tab (see Figure 2.8).
5. Once you click OK, you’ll see the GPO edit screen. You should see the selection reflected in the settings in the right panel as shown in Figure 2.9.
6. Now close out the GPO editor. Your policy has been created. Note that it is possible to define as many settings as you’d like in the GPO editor, even though in this example we’ve only defined one (see Figure 2.10).
In Figure 2.10, we could check the settings tab and see more information about the settings defined in this GPO.
Microsoft provides an easy method to define Group Policy and apply it to groups of machines. This is one way we can harden the OSes and make them less vulnerable to attack.
FIGURE 2.7
Selecting Audit Account Login Events
FIGURE 2.8
Viewing Information about Settings
Security templates are basically a “starting point” for defining system settings in Windows. These templates contain hundreds of possible settings that can control a single computer or a whole network of computers and can be customized extensively. Some of the areas that security templates control include user rights, password policies, system policies, and user and system permissions. The base security templates provided by Microsoft are predefined settings to accomplish a specific task. For example, compatws in Windows is used to reduce the security level to allow older applications to run, while hisecdc is used to apply a high security level to a domain controller. Similarly, hisecws is used to apply stringent security controls to a workstation. Windows security templates can be found in C:\Windows\Security\templates in XP/Server 2003. The security templates for Windows Vista are available in the Vista Security Guide available at www.microsoft.com.
FIGURE 2.9
The GPO Edit Screen
FIGURE 2.10
A Group Policy Setting
Security templates are actually part of Group Policy in the Microsoft Windows OS. The security templates can be copied and modified. Windows comes with the following default security templates.
Compatws.inf This template will change the file and Registry permissions to make the security settings consistent with what would be needed to support older applications. It is generally used when older application support is needed. This should only be used when necessary.
DC security.inf This template is created when a server is promoted to a domain controller. It can be used to set a domain controller back to the default settings applied when it was promoted to a domain controller.
Hisecdc.inf This is used to increase the security on a domain controller.
Hisecws.inf This is used to increase the security on client computers and member servers.
Notssid.inf This removes Windows Terminal Server security identifiers (SIDs) from file system and Registry locations. It is used to allow older applications to run under Terminal Services.
Securedc.inf This is used to increase the security on a domain controller, but not to the level of the High Security DC security template.
Securews.inf This is used to increase the security on client computers and member servers, but not to the level of the high security workstation template (hisecws).
Setup security.inf This is created during installation and will be different between computers. This represents the default security settings that get applied during the installation.
Security templates can be managed through the security template snap-in (see Figure 2.11).
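Templates can also be applied from the command line with the secedit utility. The following sketch assumes an XP/Server 2003-style path and should be run against a nonproduction machine first, because these templates make sweeping changes.
# Apply the high security workstation template and log the results.
secedit /configure /db C:\Windows\security\Database\hisecws.sdb /cfg C:\Windows\security\templates\hisecws.inf /overwrite /log C:\hisecws.log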
Notes from the Field
New Templates
When making a new template, you can save a lot of time and aggravation by starting with one of the Windows templates that’s already created.
FIGURE 2.11
The Security Template Snap-in
Linux also has a number of different tools and many different templates which can be used to help harden the OS. While each version of Linux may have differences, there are also a lot of similarities.
The same principles that can be applied to Windows can also be applied to Linux. The principle of least access, disabling services and daemons that are not used, and so forth all should be considered.
For example, if you were using a Linux server for file storage, it wouldn’t make sense to have Apache installed and configured on that particular server. Ideally, only services that are used should be enabled.
Bastille is an automated security setup tool that provides a level of security based on the usage of the server. The administrator answers a series of questions and based on the answers the settings are determined and then applied. Bastille is freely available at http://bastille-linux.sourceforge.net and works not only on Linux but on UNIX and MacOS X as well.
Configuration baselines are standard setups used when configuring machines in organizations. They provide a starting point from which machines can then be customized with respect to their specific roles in the network. For example, a Windows domain controller may not require Windows Media Services to be installed, since its primary function is that of a directory service. A Web server would not necessarily require a database to be installed. Additionally, specific services would be installed, turned off, or even removed completely based on the final location of the system in the network architecture.
When considering baselines for an organization, it is important to always keep in mind the principle of least access. You should determine each of the “functions” that will be needed within your organization and create baseline configurations for each. This could apply to both people and machines; in this case, we’ll use machines as an example.
The following example describes the different baselines or types of computers used in a typical organization. Each category would have its own baseline “build” which would consist of set groups of services, programs, settings, and features on a particular machine.
In the following example, the consulting organization, Haverford Consultants, Inc. has seven categories of systems that are deployed on their network. These categories are:
Web server
File and print server
Database server
Domain controller
Normal workstation
Developer workstation
Domain name system (DNS), Dynamic Host Configuration Protocol (DHCP) server
Each category requires specific settings to be applied. The domain controllers may have the hisecdc security template applied since they contain user account information as well as directory services for the organization as a whole. The normal workstation may only need to have the compatws template applied as the end workstations will only be used by the regular users. The Web servers as well as the DNS servers will most likely have tight security requirements as they could be placed outside the corporate firewall in a demilitarized zone (DMZ) that is accessible from the Internet.
It is important to remember that the generic security templates provided by Microsoft or used in such hardening tools as Bastille will need to be further customized by an organization to meet their specific security requirements.
Microsoft Baseline Security Analyzer
The Microsoft Baseline Security Analyzer (MBSA) is a free tool for small and medium businesses that can be used to analyze the security state of a Windows network relative to Microsoft’s own security recommendations. In addition to identifying security issues, the tool offers specific remediation guidance. MBSA will detect common security misconfigurations and missing security updates on Windows systems.
FIGURE 2.12
The MBSA Startup Screen
The initial MBSA startup screen is shown in Figure 2.12.
MBSA can scan multiple computers or a single computer. In Exercise 2, we’ll scan one computer.
1. Start MBSA by either double-clicking the MBSA shortcut on the desktop or by selecting MBSA from the Programs menu. Once MBSA starts, you are presented with the “Tasks” screen. Select “Scan a computer.”
2. Enter the information of the computer to be scanned as shown in Figure 2.13. This could be either the computer name or its IP address. Click “Start Scan.”
3. When the scan completes, the results are available in a report shown in the MBSA tool (see Figure 2.14).
Each of the items on the report should be evaluated in detail to ensure all security issues are understood and resolved. The MBSA is an excellent tool that will provide insight into security vulnerabilities in your organization.
FIGURE 2.13
Scanning a Computer
FIGURE 2.14
Results from Scanning a Computer
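MBSA also installs a command-line interface, mbsacli.exe, which can be scripted for recurring scans. The installation path and target name below are placeholders, and the available switches vary by MBSA version, so check mbsacli /? before relying on this sketch.
# Scan a single computer by name from the command line.
& "C:\Program Files\Microsoft Baseline Security Analyzer 2\mbsacli.exe" /target SQL01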
Server OS hardening can be a very complex and daunting task. However, by following a standard set of procedures and using tools such as security templates and MBSA, this task can be made significantly easier and can result in improved security across your network. One of the first tasks to focus on is deciding which services and protocols need to be enabled and which should be disabled.
When you are considering which services and protocols to enable or disable in relation to network hardening, there are extra tasks that must be done to protect the network and its internal systems. As with the OSes and NOSes discussed earlier, it is important to evaluate the current needs and conditions of the network and infrastructure, and then begin to eliminate unnecessary services and protocols. This leads to a cleaner network structure with more capacity and less vulnerability to attack.
Eliminating unnecessary network protocols includes eliminating such protocols as Internetwork Packet Exchange (IPX), Sequenced Packet Exchange (SPX), AppleTalk, and/or NetBIOS Extended User Interface (NetBEUI). It is also important to look at the specific operational protocols used in a network such as Internet Control Messaging Protocol (ICMP), Internet Group Management Protocol (IGMP), Service Advertising Protocol (SAP), and the Network Basic Input/Output System (NetBIOS) functionality associated with Server Message Block (SMB) transmissions in Windows-based systems.
NOTE As you begin to evaluate the need to remove protocols and services, make sure that the items you are removing are within your area of control. Consult with your system administrator on the appropriate action to take, and make sure you have prepared a plan to back out and recover if you make a mistake.
While you are considering removal of nonessential protocols, it is important to look at every area of the network to determine what is actually occurring and running on the system. The appropriate tools are needed to do this, and the Internet contains a wealth of resources for tools and information to analyze and inspect systems.
A number of functional (and free) tools can be found at sites such as www.foundstone.com/us/resources-free-tools.asp. Among these, tools like SuperScan 4.0 are extremely useful in the evaluation process. If working in a mixed environment with Windows, UNIX, Linux, and/or NetWare machines, a tool such as Big Brother for monitoring may be downloaded and evaluated (or in some cases used without charge) by visiting www.bb4.com. Another useful tool is Nmap, which is available at http://insecure.org/nmap/. These tools can be used to scan, monitor, and report on multiple platforms, giving a better view of what is present in an environment.
In Linux-based systems, nonessential services can be controlled in different ways, depending on the distribution being worked with. This may include editing or making changes to xinetd.conf or inetd.conf or use of the graphical Linuxconf or ntsysv utilities or even the Webmin configuration tool. It may also include the use of ipchains or iptables in various versions to restrict the options available for connection at a firewall.
Let’s begin with a discussion about the concept of nonessential services. Nonessential services are the ones you do not use or have not used in some time. For many, the journey from desktops to desktop support to servers to entire systems support involves a myriad of new issues to work on. And as we progressed, we wanted to see what things could be done with the new hardware and its capabilities. In addition, we were also often working on a system that we were not comfortable with, had not studied, and had little information about. Along with having a superior press for using the latest and greatest information, we hurried and implemented new technologies without knowing the pitfalls and shortcomings.
Nonessential services may include network services, such as DNS or DHCP, Telnet, Web, or FTP services. They may include authentication services for the enterprise, if located on a nonenterprise device. They may also include anything that was installed by default that is not part of your needed services.
Systems without shared resources need not run file and print services. In a Linux environment, if the machine is not running as an e-mail server, then remove sendmail. If the system is not sharing files with a Windows-based network, then remove Samba. Likewise, if you are not using Network Information Service (NIS) for authentication, you should disable the service or remove it.
This is applicable for any type of OS. The Security+ exam is OS agnostic, meaning that the same general principles apply regardless of the OS that you use. Being familiar with the services that are unnecessary for the specific OS that you are working with is an important part of ensuring that the system is well secured. The basic premise is to disable the services that you do not need. The list of services that this covers varies by OS or even the specific version or release of the OS.
Nonessential protocols can provide the opportunity for an attacker to reach or compromise your system. These include network protocols such as IPX/SPX (in Windows OSes, NWLink) and NetBEUI. It also includes the removal of unnecessary protocols such as ICMP, IGMP, and vendor-supplied protocols such as the Cisco Discovery Protocol (CDP), which is used for communication between Cisco devices but may open a level of vulnerability in your system. Evaluation of protocols used for communication between network devices, applications, or systems that are proprietary or used by system device manufacturers, such as the protocols used by Cisco to indicate private interior gateways to their interoperating devices, should also be closely examined.
Evaluation of the protocols suggested for removal may show that they are needed in some parts of the system but not others. Many OS platforms allow the flexibility of binding protocols to certain adaptors and leaving them unbound on others, thus reducing the potential vulnerability level.
Processes running on your systems should be evaluated regarding their necessity to operations. Many processes are installed by default but are rarely or never used by the OS. In addition to disabling or removing these processes, you should regularly evaluate the running processes on the machine to make sure they are necessary. As with disabling unnecessary protocols and services and systems, you must be aware of the need for the processes and their potential for abuse that could lead to system downtime, crashes, or breach. UNIX, Linux, Windows server and workstation systems, and NetWare systems all have mechanisms for monitoring and tracking processes, which will give you a good idea of their level of priority and whether they are needed in the environments you are running.
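A quick way to take that inventory on a Windows machine is sketched below: list the busiest processes and then the ports the system is listening on, so anything unexpected can be investigated.
# Show the twenty busiest processes by CPU time.
Get-Process | Sort-Object CPU -Descending | Select-Object -First 20 Name, Id, CPU
# Show listening ports together with the owning process IDs.
netstat -ano | Select-String "LISTENING"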
As with the other areas we have discussed, it is appropriate to examine the process of disabling or removing unnecessary programs. Applications that run in the background are often undetected in normal machine checks and can be compromised or otherwise affect your systems negatively. An evaluation of installed programs is always appropriate. Aside from the benefit of more resources being available, removing unnecessary programs also reduces the potential for a breach.
FTP servers are potential security problems, as they are often exposed to outside interfaces, thereby inviting anyone to access them. The vast majority of FTP servers open to the Internet support anonymous access to public resources. Additionally, incorrect file system settings in a server acting as an FTP server allow unrestricted access to all resources stored on that server and could lead to a system breach. FTP servers exposed to the Internet are best operated in the DMZ rather than the internal network and should be hardened with all of the OS and NOS fixes available. All services other than FTP should be disabled or removed. Contact from the internal network to the FTP server through the firewall should be restricted and controlled through ACL entries to prevent possible traffic through the FTP server from returning to the internal network.
FTP servers providing service in an internal network are also susceptible to attack; therefore, administrators should consider establishing access controls including usernames and passwords, as well as the use of Secure Sockets Layer (SSL) for authentication.
Some of the hardening tasks that should be performed on FTP servers include:
Protection of the server file system
Isolation of the FTP directories
Positive creation of authorization and access control rules
Regular review of logs
Regular review of directory content to detect unauthorized files and usage
Hardening DNS servers consists of performing normal OS hardening and then considering the types of control that can be done with the DNS service itself. Older versions of Berkeley Internet Name Domain (BIND) DNS were not always easy to configure, but current versions running on Linux and UNIX platforms can be secured relatively easily. Microsoft’s initial offering of DNS on NT was plagued by integrity violations, making attacks on the internal network much easier to accomplish, since information about the internal network was easy to retrieve. With Windows 2003, Microsoft made significant strides to secure its DNS server. Among the many changes made was the addition of controls to prevent zone transfer operations to machines that are not approved to request such information, thus better protecting the resources in the zone files from unauthorized use.
With the release of BIND 9, a new capability was added to provide different functionality from the software based on the hosts accessing the server. This capability is invoked with the view clause in the named.conf. When hardening a DNS server, it is critical to restrict zone transfers.
Zone transfers should only be allowed to designated servers. Additionally, those users who may successfully query the zone records with utilities such as nslookup should be restricted via the ACL settings. Zone files contain all records of a zone that are entered; therefore, an unauthorized entity that retrieves the records has retrieved a record of what is generally the internal network, with host names and IP addresses.
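On a Windows DNS server, that restriction can be applied from the command line with the dnscmd utility, as in the hedged sketch below; the zone name and secondary server addresses are placeholders, and the same setting is available on the zone’s properties in the DNS console.
# Allow zone transfers only to the listed secondary servers on the local DNS server.
dnscmd . /ZoneResetSecondaries corp.example.com /SecureList 192.0.2.53 192.0.2.54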
There are records within a DNS server that can be set for individual machines. These include HINFO records, which generally contain descriptive information about the OS and features of a particular machine. HINFO records were used in the past to track machine configurations when all records were maintained statically, and were not as attractive a target as they are today. A best practice in this case is not to use HINFO records in the DNS server. Attackers attempt zone transfers using the following sequence of commands: first, nslookup is typed at the command line; next, the target’s DNS server address is entered with server <ipaddress>; then the set type=any command is entered; finally, ls -d target.com is entered to try to force the zone transfer. If successful, a list of zone records will follow.
There are a number of known exploits against DNS servers in general. For example, a major corporation placed all their DNS servers on a single segment. This made it relatively simple to mount a denial of service (DoS) attack using ICMP to block or flood traffic to that segment. Other attacks that administrators must harden against are attacks involving cache poisoning, in which a server is fed altered or spoofed records that are retained and then duplicated elsewhere. In this case, a basic step for slowing this type of attack is to configure the DNS server to not do recursive queries. It is also important to realize that BIND servers must run under the context of root and Windows DNS servers must run under the context of Local System to access the ports they need to work with. It is possible to run BIND under a chroot jail in UNIX and to run Windows DNS under a different, lower privilege, account as well. If the base NOS is not sufficiently hardened, a compromise can occur.
Network News Transfer Protocol (NNTP) servers are also vulnerable to some types of attacks, because they are often heavily used from a network resource perspective. NNTP servers that are used to carry high volumes of newsgroup traffic from Internet feeds are vulnerable to DoS attacks that can be mounted when “flame wars” occur. This vulnerability also exists in the case of listserv applications used for mailing lists. NNTP servers also have vulnerabilities similar to e-mail servers, because they are not always configured correctly to set storage parameters, purge newsgroup records, or limit attachments. It is important to be aware of malicious code and attachments that can be attached to the messages that are being accepted and stored. NNTP servers should be restricted to valid entities, which require that the network administrator correctly set the limits for access. It is also important to be aware of the platform being used for hosting an NNTP server. If Windows-based, it will be subject to the same hardening and file permission issues present in Windows IIS servers. Therefore, there are additional services and protocols that must be limited for throughput and defenses such as virus scanning that must be in place.
The ability to share files and printers with other members of a network can make many tasks simpler and, in fact, this was the original purpose for networking computers. However, this ability also has a dark side, especially when users are unaware that they are sharing resources. If a trusted user can gain access, the possibility exists that a malicious user can also obtain access. On systems linked by broadband connections, crackers have all the time they need to connect to shared resources and exploit them.
The service called file and print sharing in Windows allows others to access the system from across the network to view and retrieve files or use resources. Other OSes have similar services (and thus similar weaknesses). The Microsoft file- and print-sharing service uses NetBIOS with SMB traffic to advertise shared resources, but does not offer security to restrict who can see and access those resources.
This security is controlled by setting permissions on those resources. The problem is that when a resource is shared in a Windows-based system, its permissions are set by default to give full control over the resource to everyone who accesses that system. By default, the file- and print-sharing service is bound to all interfaces being used for communication.
Under Windows XP and 2000, when sharing is enabled for the purpose of sharing resources with a trusted internal network over a NIC, the system is also sharing those resources with the entire untrusted external network over the external interface connection. This is no longer the case with Windows Vista, where connecting to an untrusted network automatically turns off file sharing. Many users are unaware of these defaults and do not realize their resources are available to anyone who knows enough about Windows to find them. For example, users with access to port scanning software, or using the basic analysis provided through NetBIOS statistics (NBTSTAT) or the net view command in a Windows network, would have the capability to list shared resources if NetBIOS functionality exists.
Notes from the Field
Look at What Is Exposed
To look at the resources exposed in a Windows network, open a command window in any version of Windows that is networked. Click the Start button at the bottom left of the task bar. Click the “Run” option and in the dialog box type cmd. In the command window that opens up type net view and press the Return [Enter] key. You will see a display showing machines with shared resources in the network segment and the machines they are attached to.
The display will look something like this:
Server Name Remark
--------------------------------------
\\EXCELENTXP
\\EXC2003
The command completed successfully.
Next, type net view \\machine name at the prompt, and hit the Enter or Return key.
That display might look like this:
Shared resources at \\excnt4
Share name Type Used as Comment
------------------------------------------
public Disk
The command completed successfully.
As can be seen, it does not take much effort for attackers inside or outside a network to view vulnerabilities that are shown when NetBIOS functionality is present.
At the very least, the file- and print-sharing service should be unbound from the external network interface’s adapter. Another solution (or a further precaution to take in addition to unbinding the external adapter) is to use a different protocol on the internal network.
For example, computers could communicate over NetBEUI on a small local non-routed network. If file and print sharing is bound to NetBEUI and unbound from Transmission Control Protocol/Internet Protocol (TCP/IP), internal users can still share resources, but those resources will be unavailable to “outsiders” on the Internet.
If a user does not need to share resources with anyone on the internal (local) network, the file- and print-sharing service should be completely disabled. On most networks where security is important, this service is disabled on all clients. This action forces all shared resources to be stored on network servers, which typically have better security and access controls than end-user client systems.
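A hedged sketch of that lockdown on a Windows client follows: list what the machine is currently sharing, and where file and print sharing is not needed at all, stop and disable the Server service (LanmanServer), which provides SMB file and print sharing.
# Show the shares the machine currently exposes.
net share
# Stop and disable the Server service if sharing is not required at all.
# -Force also stops services that depend on it, such as Computer Browser.
Stop-Service -Name LanmanServer -Force
Set-Service -Name LanmanServer -StartupType Disabled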
DHCP servers add complexity to some areas of security, but they also offer the opportunity to control network addressing for client machines, which allows for a more secure environment if the clients are configured properly. On the client side, this means that administrators have to establish strong ACLs to limit users' ability to modify network settings, regardless of platform. Nearly all OSes and NOSes offer the capability to add DHCP server applications to their server versions.
As seen in each of the application server areas, administrators must also apply the necessary security patches, updates, service packs, and hotfixes to the DHCP servers they are configuring and protecting. DHCP servers with correct configuration information will deliver addressing information to the client machines. This allows administrators to set the node address, mask, and gateway information, and to distribute the load for other network services by creation of appropriate scopes (address pools).
Additional security concerns arise with DHCP. Among these, it is important to control the introduction of extra DHCP servers and their connections to the network. A rogue DHCP server can deliver addresses to clients, overriding the addressing settings administrators have put in place and undermining control of client connections. On most systems, administrators must monitor network traffic consistently to spot these additions and prevent a breach. Some OS and NOS manufacturers have implemented controls in their access and authentication systems that require a higher level of authority to authorize DHCP server operation. In the case of Windows, a DHCP server that belongs to an Active Directory domain will not service client requests unless it has been authorized in Active Directory. However, a stand-alone Windows DHCP server can still function as a rogue, and someone could also introduce a rogue server running a different OS or NOS, or a stand-alone server that does not belong to the domain. Administrators should also restrict access to remote administration tools to limit the number of individuals who can modify the settings on the DHCP server.
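On a Windows Server 2003 network, for example, the list of DHCP servers authorized in Active Directory can be reviewed and maintained from the command line with the netsh dhcp context (available where the DHCP administration tools are installed). The server name and address below are placeholders, so treat this as a sketch and confirm the syntax for your environment:
rem Display the DHCP servers currently authorized in Active Directory
netsh dhcp show server
rem Authorize a legitimate DHCP server
netsh dhcp add server dhcp01.example.com 192.168.1.10
rem De-authorize a server that should not be handing out addresses
netsh dhcp delete server dhcp01.example.com 192.168.1.10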
Data repositories include many types of storage systems that are interlinked to maintain and protect data. It is important to consider how to protect and harden each type of storage being maintained, including the combinations of storage media in use, the methods of connecting to the information, the access implications and configurations, and the maintenance of data integrity. When tightening and securing the data repository area, you must consider file services such as those detailed earlier in the file and print section, as well as the requirements of Network Attached Storage (NAS) and Storage Area Networks (SANs).
NAS and SAN configurations may present special challenges to hardening. For example, some NAS configurations used in a local area network (LAN) environment may have different file system access protections in place that will not interoperate with the host network’s OS and NOS. In this case, a server OS is not responsible for the permissions assigned to the data access, which may make configuration of access or integration of the access rules more complex. SAN configuration allows for intercommunication between the devices that are being used for the SAN, and thus freedom from much of the normal network traffic in the LAN, providing faster access. However, extra effort is initially required to create adequate access controls to limit unauthorized contact with the data it is processing.
Directory services information can be either very general in nature and publicly available or restricted in nature and subject to much tighter control. While looking at directory services in the application area, it is important to look at different types of directory service cases and what should be controlled within them.
Directory services data are maintained and stored in a hierarchical structure. One type of directory service is structured much like the white pages of a telephone book and may contain general information such as e-mail addresses, names, and so forth. These servers operate under the constraints of the Lightweight Directory Access Protocol (LDAP) and the X.500 standard. This type of service contains general information that is searchable. Typically, these directories are write-enabled for the administrator or the owner of the record involved and read-enabled for all other users. A second type of directory services operation includes systems such as Novell's eDirectory and Windows Server 2003's Active Directory. Both of these services are based on the X.500 standard, as is the conventional LDAP directory service. They are not pure LDAP directories, however; they can interoperate with LDAP directories but have been modified for use in their respective directory services. These types of directories usually follow the LDAP/X.500 naming convention to indicate the exact name of an object, with designations for common name, organization, country, and so on. This might appear as CN=Joe User, O=His Company, C=US, which would designate that the record was for Joe User, a member of His Company, in the United States.
It is important to impose and verify stringent control over what is allowed to be written to the directory database and who can write to it, because much of the information in the directory service is used to authenticate users, processes, services, and machines for access to other resources within the network. At the same time, administrators will want to control who can read information in specific areas of the database, because access to some parts of the directory information needs to be restricted.
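As an illustration of this naming convention in practice, the dsquery tool included with Windows Server 2003 returns the distinguished names of matching Active Directory objects. The user and domain names shown are hypothetical:
rem Find user objects whose name begins with "Joe" and display their distinguished names
dsquery user -name "Joe*"
rem Typical output:
rem "CN=Joe User,CN=Users,DC=example,DC=com"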
Hardening directory services systems requires evaluating not only the permissions to access information, but also the permissions on the objects contained in the database. Additionally, these systems require the use of LDAP on the network, which also requires evaluation and configuration for secure operation. This includes setting perimeter access controls to block outside access to LDAP directories on the internal network if they are not public information databases. Keeping up with security patches and updates from the NOS manufacturer is absolutely imperative in keeping these systems secure.
As seen in this chapter, hardening is an important process. Another way to harden the network is to use network access control (NAC). There are several different incarnations of NAC available. These include infrastructure-based NAC, endpoint-based NAC, and hardware-based NAC.
1. Infrastructure-based NAC requires an organization to be running the most current hardware and OSes. OS platforms such as Microsoft’s Windows Vista have the capability to participate in NAC.
2. Endpoint-based NAC requires the installation of software agents on each network client. These agents are then managed by a centralized management console.
3. Hardware-based NAC requires the installation of a network appliance. The appliance monitors devices for specific behavior and can limit connectivity should noncompliant activity be detected.
NAC offers administrators a way to verify that devices meet certain health standards before they’re allowed to connect to the network. Laptops, desktop computers, or any device that doesn’t comply with predefined requirements can be prevented from joining the network or can even be relegated to a controlled network where access is restricted until the device is brought up to the required security standards.
Database servers may include servers running Microsoft SQL Server or other databases such as Oracle. These types of databases present unique and challenging conditions when considering hardening the system. For example, in most SQL-based systems, there is both a server function and a client front end that must be considered. In most database systems, access to the database information, creation of new databases, and maintenance of the databases are controlled through accounts and permissions created by the application itself. Although some databases allow the integration of access permissions for authenticated users in the OS and NOS directory services system, they still depend on locally created permissions to control most access. This makes the operation and security of these types of servers more complicated than that of other server types.
Unique challenges exist in the hardening of database servers. Most require extra components on client machines, as well as forms designed to access the data structure and retrieve information from the tables constructed by the database administrator. Permissions can be extremely complex, as rules must be defined that give individuals query access to some records and no access to others. This process is much like setting file access permissions, but at a far more granular and complex level.
Forms designed for the query process must also be correctly formulated to allow access only to the appropriate data in the search process. Integrity of the data must be maintained, and the database itself must be secured on the platform on which it is running to protect against corruption.
Other vulnerabilities require attention when setting up specific versions of SQL Server in a network. For example, Microsoft's SQL Server 2000 and earlier versions set two default conditions that must be hardened in the enterprise environment. First, the "sa" account, SQL Server's built-in administrative login, is installed with a blank password. Second, the server is configured to use mixed mode authentication, which allows the creation of SQL-specific accounts that are not required to be authenticated by the Windows authentication subsystem. This can lead to serious compromise and allow control of the server or the enterprise's data. It is strongly recommended that administrators harden both conditions by setting a strong password on the sa account and using Windows authentication instead of mixed mode authentication.
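On SQL Server 2000, for instance, the sa password can be set from the command line with the osql utility included with the product; the password below is only a placeholder. Switching from mixed mode to Windows-only authentication is done through the server's security properties in Enterprise Manager, so treat this as a sketch and verify the steps for your version:
rem Connect with Windows authentication and assign a strong password to the sa login
osql -E -Q "EXEC sp_password NULL, 'Str0ng!P@ssw0rd', 'sa'"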
Network access concerns must also be addressed when hardening the database server. Database platforms require certain ports to be accessible over the network, depending on which product is in use. Oracle may use ports 1521, 1522, 1525, or 1529, among others, while Microsoft SQL Server uses ports 1433 and 1434 for communication. As can be seen, database servers demand extra consideration of network access, and the normal OS concerns must also be addressed.
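A quick way to confirm which of these ports a server is actually listening on is to run netstat at a command prompt on the server itself; the filter below uses the SQL Server defaults mentioned above:
rem Show all listening endpoints and filter for the default SQL Server ports
netstat -an | findstr "1433 1434"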
SQL Server security takes an ongoing and constant effort to try to protect databases and their content. An excellent discussion of the SQL Server security model by Vyas Kondreddi can be viewed at www.sql-server-performance.com/vk_sql_security.asp.
TEST DAY TIP Spend a few minutes reviewing port and protocol numbers for standard services provided in the network environment. This will help when you are analyzing questions that require configuration of ACLs and determining the appropriate blocks to put in place to secure a network.
EXAM WARNING The Security+ exam can ask specific questions about ports and what services they support. It’s advisable to learn common ports before attempting the exam.
21 FTP
22 Secure Shell (SSH)
23 Telnet
25 Simple Mail Transfer Protocol (SMTP)
53 DNS
80 HTTP
110 Post Office Protocol (POP)
161 Simple Network Management Protocol (SNMP)
443 HTTPS (SSL)
Memorizing these will help you with the Security+ exam.
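Beyond memorizing the list, it helps to be able to confirm what a host is actually offering. On a Windows machine, netstat shows local listening ports, and the telnet client (where installed) gives a rough check of whether a remote port answers; the host name below is a placeholder:
rem List listening ports and active connections on the local machine
netstat -an
rem Rough test of whether a remote SMTP service answers on port 25
telnet mail.example.com 25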
Workstations can present special challenges. Depending on their knowledge and capabilities, users may tinker with the steps IT takes to secure their workstations and violate company policy on best practices.
As laptops become more commonplace, they present specific challenges to the organization when it comes to securing OSes.
Because laptops are portable, it is very possible they could be stolen. Ideally, sensitive data should not be placed on laptop drives at all, but in some cases this cannot be avoided. In those cases, you should at a minimum encrypt the sensitive data.
A number of third-party encryption applications are available, such as Utimaco's SafeGuard (http://go.utimaco.com).
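Windows versions that support the Encrypting File System (EFS) on NTFS volumes also include the built-in cipher utility, which can encrypt a folder of sensitive data from the command line; the path below is only an example:
rem Encrypt the folder, its subfolders, and the files they contain
cipher /e /a /s:"C:\Sensitive Data"
Keep in mind that EFS keys are tied to the user's profile, so recovery agents and key backups should be planned before relying on it.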
Ideally, a person should be given only the minimum rights required to perform their job. Under older Windows OSes (most notably 2000 and XP), the user of a machine was often given administrative rights or added to the "Power Users" group to gain full functionality from the OS. However, if such a user account is compromised, the entire machine could be compromised, potentially leading to compromise of the entire domain. Under Vista and Windows 7, users no longer need administrative privileges to their systems to be fully functional. This allows the system administrator to reduce the rights assigned to regular users and follows the principle of least access.
Figure 2.15 shows the common workstation groups on a Windows XP computer.
You'll note the Users group and the Power Users group. In many cases, membership in the Users group or Power Users group provides enough rights for a person to perform their tasks.
FIGURE 2.15
Common Workstation Groups on a Windows XP Computer
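The same group information can be reviewed and adjusted from the command line; the account name in the last command is hypothetical:
rem List the local groups defined on this workstation
net localgroup
rem Show who belongs to the local Users group
net localgroup Users
rem Remove an account that does not need administrative rights from the local Administrators group
net localgroup Administrators jsmith /delete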
TEST DAY TIP Remember the principle of least access! In many cases, this will help you to make the correct choice.
This chapter looked at the broad concept of infrastructure security, and specifically at the concepts and processes for hardening various sections of systems and networks. OS security and configuration protections were discussed, as were file system permission procedures, access control requirements, and methods to protect core systems from attack. Security+ exam objectives were studied in relation to OS hardening, visiting potential problem areas such as configuration concerns, ACLs, and the elimination of unnecessary protocols and services from the computer. We also looked at how these hardening steps work together, and at ways to obtain, install, and test various fixes and software updates.
Harden following the principle of “least privilege” to limit access to any resource
Set file access permissions as tightly as possible
Track, evaluate, and install the appropriate OS patches, updates, service packs, and hotfixes in your system environment
Remember the principle of least access!
Eliminating unnecessary network protocols includes eliminating such protocols as Internetwork Packet Exchange (IPX), Sequenced Packet Exchange (SPX), AppleTalk, and/or NetBIOS Extended User Interface (NetBEUI).
As you begin to evaluate the need to remove protocols and services, make sure that the items you are removing are within your area of control. Consult with your system administrator on the appropriate action to take and make sure you have prepared a plan to back out and recover if you make a mistake.
While you are considering removal of nonessential protocols, it is important to look at every area of the network to determine what is actually occurring and running on the system.
Follow best practices for hardening specific application-type servers such as e-mail, FTP, and Web servers
Data repositories require more consideration, planning, and control of access than other application servers
Application-specific fixes, patches, and updates are used in addition to OS and NOS fixes.
Q: How should I determine how much access a person needs to a system?
A: By applying the principle of least privilege. Users should be granted the minimum level of access that will allow them to do their job effectively.
Q: Should I apply patches directly to my production machines?
A: As a general rule, as patches and updates become available, they should be tested as soon as possible in a nonproduction environment before applying to production.
Q: What protocols and services should I enable on my server?
A: You should enable only the protocols and services you are using. Do not enable services and protocols “just in case” you might need them at some future point. Enable them as they are used.
Q: What exactly is operating system hardening?
A: Operating system hardening consists of locking down file systems and controlling software installation and use and methods for configuring file systems properly to limit access and reduce the possibility of a breach. The idea is to reduce the likelihood of someone gaining access to or harming the operating system.
Q: What is Windows Group Policy?
A: Windows Group Policy uses Active Directory to apply settings to groups of computers. In this way it becomes easier to manage many computers (and users) by applying consistent operating system settings to the computers and/or users in a group.
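Two built-in commands are useful when working with Group Policy on a domain-joined Windows computer; switch behavior varies slightly between Windows versions, so check the built-in help on your system:
rem Force an immediate refresh of computer and user policy
gpupdate /force
rem Display the resulting set of policy applied to this computer and user (newer versions require gpresult /r)
gpresult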
1. You have a computer and through a port scan discover that port 25 is enabled. This computer is used for file and print services only. What should you do?
A. Disable SMTP
B. Disable POP
C. Disable IIS
D. Port 25 should be enabled
2. You have a computer and through a port scan discover that ports 25 and 80 are enabled. This computer is used for serving Web pages only. What should you do?
A. Disable SMTP
B. Disable POP
C. Disable IIS
D. Port 25 and 80 should be enabled.
3. You notice port scans on a Web server. The server processes both secure and insecure pages. What steps can you take to help secure the OS?
A. Enable port 80, disable all other ports
B. Enable port 443, disable all other ports
C. Enable port 25, disable all other ports
D. Enable port 80, 443, and 25, disable all other ports
E. Enable port 80 and 443, disable all other ports
4. What port does SNMP use?
A. Port 80
B. Port 25
C. Port 161
D. Port 443
5. As part of the overall OS hardening process, you are disabling services on a Windows server machine. How do you decide which services to disable?
A. Disable all services, and then re-enable them one by one
B. Research the services required and their dependencies, then disable the unneeded services
C. Leave all services enabled, since they may be required at some point in the future
D. Disable all workstation services
6. You are configuring a server to be used for IIS. You have disabled all unused services. All access to the server will be through secure pages using HTTPS. What ports should you enable?
A. Port 80
B. Port 25
C. Port 161
D. Port 443
7. Robby is preparing to evaluate the security on his Windows XP computer and would like to harden the OS. He is concerned as there have been reports of buffer overflows. What would you suggest he do to reduce this risk?
A. Remove sample files
B. Upgrade his OS
C. Set appropriate permissions on files
D. Install the latest patches
8. Marissa is planning to evaluate the permissions on a Windows 2003 server. When she checks the permissions, she realizes that the production server is still in its default configuration. She is worried that the file system is not secure. What would you recommend Marissa do to alleviate this problem?
A. Remove the anonymous access account from the permission on the root directory
B. Remove the system account permissions on the root of the C:\ drive directory
C. Remove the “everyone” group from the permissions on the root directory
D. Shut down the production server until it can be hardened
9. You have been asked to review the general steps used to secure an OS. You have already obtained permission to disable all unnecessary services. What should be your next step?
A. Remove unnecessary user accounts and implement password guidelines
B. Remove unnecessary programs
C. Apply the latest patches and fixes
D. Restrict permissions on files and access to the Registry
10. Yesterday, everything seemed to be running perfectly on the network. Today, the Windows 2003 production servers keep crashing and running erratically. The only events that have taken place are a scheduled backup, a CD/DVD upgrade on several machines, and an unscheduled patch install. What do you think has gone wrong?
A. The backup altered the archive bit on the backup systems
B. The CD/DVDs are not compatible with the systems in which they were installed
C. The patches were not tested before installation
D. The wrong patches were installed
11. Debbie is reviewing open ports on her Web server and has noticed that port 23 is open. She has asked you what the port is and if it presents a problem. What should you tell her?
A. Port 23 is no problem because it is just the Telnet client
B. Port 23 is a problem because it is used by the Subseven Trojan
C. Port 23 is open by default and is for system processes
D. Port 23 is a concern because it is a Telnet server and is active
12. Monday morning has brought news that your company’s e-mail has been blacklisted by many Internet service providers (ISPs). Somehow your e-mail servers were used to spread spam. What most likely went wrong?
A. An insecure e-mail account was hacked
B. Sendmail vulnerability
C. Open mail relay
D. Port 25 was left open
13. Management was rather upset to find out that someone has been hosting a music file transfer site on one of your servers. Internal employees have been ruled out as it appears it was an outsider. What most likely went wrong?
A. Anonymous access
B. No Web access control
C. No SSL
D. No bandwidth controls
14. You have been given the scan below and asked to review it.
Interesting ports on (18.2.1.88):
(The 1263 ports scanned but not shown below are in state: filtered)
Port      State   Service
22/tcp    open    ssh
53/tcp    open    dns
80/tcp    open    http
110/tcp   open    pop3
111/tcp   open    sunrpc
Your coworker believes it is a Linux computer. What open port led to that assumption?
A. Port 53
B. Port 80
C. Port 110
D. Port 111
15. During a routine check of a file server, you discover a hidden share someone created that contains 100 GB of music content. You discover the share was created on a drive that everyone has full control over. What steps should you take to ensure this doesn't happen again?
A. Define an acceptable use policy
B. Remove full control from the “everyone” group
C. Remove full control from the offending user
D. Remove the files and the directory
1. A
2. A
3. E
4. C
5. A
6. D
7. D
8. C
9. A
10. C
11. D
12. C
13. A
14. D
15. A, B, and D