Chapter 15. Host Security for Servers

Host security is the security of the computer on which your web server is running. Traditionally, host security has been a computer security topic unto itself. Whole books (including a couple of our own) have been written on it.

Host security was in its heyday in the 1980s and early 1990s, when dozens or even hundreds of people shared the same computer. Many of these systems were at universities, where one of the goals of the system operators was to prevent students from seeing each other’s coursework. Other systems were at government installations, where the systems needed to store and segregate “Secret” from “Top Secret” information. As a result, host security was traditionally concerned with questions of protecting the operating system from users, protecting users from each other, and performing auditing measures.

The 1990s saw a dramatic shift in the emphasis and importance of host security. It seems that many organizations placed less emphasis on host security once each person had exclusive use of a computer. This perspective is misguided because, as we have seen, distributed systems can be as vulnerable (if not more so) to the security problems that affect large time-sharing systems. One explanation for the decreased attention to host security is that assuring host security in a distributed environment is significantly more complicated and more expensive, and in fact has proven to be beyond the capabilities of many organizations. Another explanation is that too many people are more concerned with cost and ease of deployment, and end up fielding systems that are impossible to secure.[154]

The Web has reignited interest in host security. The measures that were developed in the 1980s and 1990s for protecting a computer system against its users and protecting the users against each other work equally well for protecting a computer system against an external attacker—especially if that attacker is able to gain some sort of foothold in your computer system to start running his own programs. After all, the computer on which your web server is running has access to all of the web server’s files; it can monitor all of the web server’s communications and it can even modify the web server itself. If an attacker has control of your computer’s operating system, it is impossible to use that computer to provide secure services.

Because of size and time constraints, this book cannot provide you with a step-by-step guide to building a secure Internet host. Instead, this chapter discusses some of the most common security problems that affect computers being used to offer web services and then describes how to build a web server that minimizes these problems. Appendix E includes references to other books that provide more detailed host security information.

Most of the problems that Robert Metcalfe identified in RFC 602 back in 1973 remain today. Many organizations that run servers on the Internet simply do not secure their servers against external attack. Other problems have gotten worse: people still pick easy-to-guess passwords, and many passwords are simply “sniffed” out of the Internet using a variety of readily available packet sniffers. And people still break into computers for the thrill, except that now many of them also steal information for financial gain or to make some ideological point.

Perhaps the only problem that Metcalfe identified in 1973 that has been solved is the problem of unauthorized people accessing the Internet through unrestricted dialups—that is, dialup lines that do not require entering a username and password. But this problem has been solved in a strange way. Thanks to the commercialization of the Internet, the number of unrestricted dialups is quite small. On the other hand, today it is so easy to procure a “trial” account from an Internet service provider (frequently without providing anything in the way of real identification) that the real threat is no longer unauthorized users; it’s the “authorized” ones.

Back in 1973, two of the biggest vulnerabilities Metcalfe had to confront were dialup servers that didn’t require passwords and username/password combinations that were freely shared among users. Both are still problems nearly thirty years later: a study by computer consultant Peter Shipley found more than 50,000 dialup modems in the San Francisco Bay Area, of which more than 2%—more than 1000—allowed unrestricted access to any caller, without the need to enter a username and password. Among the vulnerable systems were the dispatch system for the Oakland Fire Department, the order-entry system for a popular bookstore, and records systems belonging to several medical practices. Shipley found these dialups by dialing every single phone number in the San Francisco Bay Area; it is reasonable to suspect that others are engaged in a similar vulnerability assessment project (perhaps with a less academic goal).

But while unsecured dialups remain a significant problem, they are now simply one of many venues for an attacker to gain access and control over a target computer system. Many of these techniques give the attacker the ability to run code on the target machine. These techniques include:

Remote exploits

Vulnerabilities exist in many computers that make it possible for an attacker to compromise, penetrate, or simply disable the system over the network without actually logging into the system. For example, Microsoft Windows NT 4.0 was vulnerable to the ping of death, which allowed anybody on the Internet to crash a Windows NT 4.0 system simply by sending the computer a specially crafted “ping” packet. As another example, versions 8.2 through 8.2.2 of the Internet Software Consortium’s BIND domain name server were vulnerable to the DNS remote root exploit. This exploit allowed a remote user to gain “root” (superuser administrative) privileges on any computer running the vulnerable versions of BIND.

Many remote exploits are based on the buffer overflow technique. This technique relies on the way that the C programming language lays out information inside the computer’s memory. An attacker might send 100 bytes of data to be stored in a buffer that is only set up to hold 30 or 40 bytes. The excess data overwrites the C program’s stack frame and causes machine code specified by the attacker to be executed, as sketched below.[155]
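
To make the mechanism concrete, here is a minimal sketch of the kind of C code that creates a stack-based buffer overflow. The function and buffer names are invented for this illustration; they do not correspond to any particular server.

    #include <string.h>

    /* A classic stack overflow: handle_request() (a hypothetical routine)
     * copies attacker-controlled data into a fixed-size buffer on the
     * stack with no length check. */
    void handle_request(const char *request)
    {
        char buffer[32];            /* room for 32 bytes, including the NUL */

        strcpy(buffer, request);    /* if request is longer than 31 bytes, the
                                       copy runs past the end of buffer and can
                                       overwrite the saved return address */
    }

    /* A safer version bounds the copy to the size of the destination. */
    void handle_request_safely(const char *request)
    {
        char buffer[32];

        strncpy(buffer, request, sizeof(buffer) - 1);
        buffer[sizeof(buffer) - 1] = '\0';
    }

In an actual exploit, the attacker arranges for the overwritten return address to point at machine code carried in the oversized request itself; that is why a single unchecked strcpy() can hand control of the entire process to the attacker.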

Malicious programs

Another way for an attacker to compromise a system is to provide the system’s users with a hostile program and wait for them to run it. Some programs, when run, install hidden services that give attackers remote access to the compromised machine; these programs are called back doors because they offer attackers a way into the system that bypasses conventional security measures. Trojan horses are programs that appear to have one function but actually have another, malicious purpose, similar to the great wooden horse that the Greeks allegedly used to trick the Trojans and end the siege of Troy.

Viruses and worms are self-replicating programs that can travel between computers as attachments on email or independently over a network. Viruses modify programs on your computer, inserting their viral payload into them. Worms don’t modify existing programs, but they can install back doors or drop viruses on the systems they visit.

Stolen usernames and passwords and social engineering

On many computer systems it is possible to exploit bugs or other vulnerabilities to parlay the ordinary access granted to normal users into the “superuser” or “administrative” access granted to system operators. Thus, armed with an ordinary username and password, a moderately skilled attacker can gain the full run of many systems. Because these exploits can frequently be traced back to the particular username that was used, attackers commonly work from a stolen username and password.

One of the most common ways for an attacker to get a username and password is social engineering. Social engineering is one of the simplest and most effective means of gaining unauthorized access to a computer system. In a typical social engineering attack, the attacker telephones the target organization and tries to talk information out of whoever answers. For example, the attacker might pretend to be a new employee who has forgotten the password for his or her account and needs to have the password “reset.” Or the attacker might pretend to be a service representative, claiming that the Administrator account needs to have its password changed so that routine maintenance can be performed. Social engineering attacks are effective because people generally want to be helpful.

Phishing

Social engineering can also be automated. There are many so-called phishing programs that will send social engineering emails to thousands or tens of thousands of users at a time. Some solicit usernames and passwords. Others try to obtain valid credit card numbers. For example, one scam is to send email to the users of an online service telling them that their credit cards have expired and that they need to enter a new number at the URL provided. Of course, the URL goes to the attacker’s web server, not to the ISP’s.

Scale is another important difference between the security landscape that Bob Metcalfe faced in 1973 and the one we are facing today. In 1981, there were only 231 computers on the Arpanet, and those computers were mostly used for research purposes. Today there are millions of computers in constant connection to the Internet—with tens of millions more that are connected at some point during the day. These computers are being used for all manner of communications, commerce, and government activities.

As businesses, governments, and individuals have used the Internet to communicate faster and more efficiently than ever before, so too have the bad guys. Just as the Internet has made it possible to archive and easily distribute a wealth of information about science, technology, business, and art, it has also made it possible for attackers to discover, exchange, distribute, and archive more information about computer vulnerabilities. Back in the 1970s it was relatively rare for attackers outside of national agencies to work in groups of more than a few people. There were some people who were very good at gaining unauthorized access, to be sure, but on the whole, the knowledge of how to compromise computer systems was confined to relatively small groups. Today there are literally thousands of organized and semi-organized groups of attackers—all exchanging information about computer vulnerabilities and exploits. Techniques, and in many cases complete programs for penetrating system security, are now widely distributed by email, through newsgroups, on web pages, and over Internet Relay Chat (IRC). Tools for compromising security—password sniffers, denial-of-service exploits, and prepackaged Trojan horses—are distributed as well.

Attackers now use automated tools to search out vulnerable computers and, in some cases, to automatically break in, plant back doors, and hide the damage. High-speed Internet connections have made it possible for attackers to rapidly scan and attack millions of computers within a very short period of time.

This increased scale has profound implications for anyone attempting to maintain a secure computer. In years past, many computers with known vulnerabilities could stay on the network for months or even years without somebody breaking into them. This is no longer the case. These days, if your computer has a known vulnerability, there is a very good chance that somebody will find that vulnerability and exploit it.

In fact, the widespread use of automated tools has resulted in many documented cases of systems that were placed into service and connected to the Internet, only to be discovered, probed, attacked, and fitted with back doors before their owners could download and install all of the needed vendor patches.

The Honeynet Project (http://project.honeynet.org/) is an open Internet research project that is attempting to gauge the scale of the attacker community by setting up vulnerable computers on the Internet and seeing how long it takes before the computers are compromised. The results are not encouraging. In June 2001, for instance, the Honeynet Project announced that it took only 72 hours, on average, before somebody broke into a newly installed Red Hat 6.2 system using one of the well-known exploits. A typical system on the Internet is scanned dozens of times a day. Windows 98 computers with file sharing enabled—a typical configuration for many home users—are scanned almost once an hour and typically broken into in less than a day. In one case, a server was hacked only 15 minutes after it was put on the network.

Who is breaking into networked computers with the most sophisticated of attacks? It almost doesn’t matter—no matter who the attackers may be, they all need to be guarded against.

As clichéd as it may sound, in many cases the attackers are children and teenagers—people who sadly have not (yet) developed the morals or sense of responsibility sufficient to keep their technical skills in check.

It is common to refer to young people who use sophisticated attack tools as script kiddies.[156] The term is quite derisive. The word “script” implies that the attackers use readily available attack scripts that can be downloaded from the Internet to do their bidding, rather than creating their own attacks. And, of course, the attackers are called “kiddies” because so many of them turn out to be underage when they are apprehended.

Script kiddies should be considered a serious threat and feared for the same reason that teenagers with guns should be respected and feared.[157] We don’t call youthful gang members “gun kiddies” simply because they lack the technical acumen to design a Colt .45 revolver or cast the steel. Instead, most people realize that teenagers with handguns should be feared even more than adults, because a teenager is less likely to understand the consequences of his actions should he pull the trigger, and thus more likely to pull it.

The same is true of script kiddies. In May 2001, for instance, Gibson Research Corporation was the subject of a devastating distributed denial-of-service attack that shut down its web site for more than 17 hours. The attack was orchestrated by more than 400 Windows computers around the Internet that had been compromised by an automated attack. Steve Gibson was able to get a copy of the attack program, reverse-engineer it, and trace it back; his attacker turned out to be a 13-year-old girl.

Likewise, when authorities in Canada arrested “Mafiaboy” on April 19, 2000, for the February 2000 attacks on Yahoo, E*TRADE, CNN, and many other high-profile sites—attacks that caused more than $1.7 billion in damages—they couldn’t release the suspect’s name to the public because the 16-year-old was shielded by Canada’s laws protecting the privacy of minors.[158]

Script kiddies may not have the technical skills necessary to write their own attack scripts and Trojan horses, but it hardly matters. They have the tools and increasingly they show few reservations about using them. Either they do not understand the grave damage they cause, or they do not care.

What does a script kiddie do when he grows up? Nobody is really sure—to date, there are no reliable studies.

Anecdotal reports suggest that many script kiddies go straight. Some lose interest in computers; some become system operators and network administrators; and some even go into the field of computer security. (The wisdom of hiring one of these individuals to watch over your network is a matter of debate within the computer security community.) But it is unquestionably clear that some individuals continue their lives of crime past their 18th birthdays. (Most stop at 18 because they are no longer “shielded” by laws that provide more lenient sentencing for juveniles.)

There is a small but growing population of “hacktivists” who break into sites for ideological or political reasons. Often, the intent of these people is to deface web pages to make a statement of some kind. We have seen defacements of law enforcement agency sites, destruction of web sites by environmental groups, and destruction of research computing sites involved in animal studies. Sometimes, the protesters are making a political statement; they may be advancing an ideological cause, or they may merely be anarchists striking a blow against technology or business.

Sometimes, these incidents may be carried out against national interests. For instance, a guerrilla movement may deface sites belonging to the government it opposes. In other cases, individuals in one jurisdiction attempt to make a point by attacking sites in another, as in the Israeli-Palestinian conflict, the ongoing tension between Pakistan and India, and the aftermath of the accidental bombing of the Chinese embassy by U.S. forces. Many of these attacks may be spontaneous, but some may be coordinated or financed by the governments themselves.

These incidents can also affect third parties. For instance, during a Chinese crackdown, many ISPs around the world hosting web pages of adherents of Falun Gong found their servers under attack from sites inside China. Because of the coordination and replication of the attacks, authorities believed they were actually state-sponsored. ISPs have been attacked by vigilantes because they sell service to spammers, provide web service for hate groups, or seem to be associated with child pornographers—even if the ISP owners and operators were unaware of these activities!

Compromising a computer system is usually not an end in itself. Instead, most attackers seek to use compromised systems as stepping stones for further attacks and vandalism. After an attacker compromises a system, it can be put to many nefarious purposes, from launching attacks against other sites to serving as a relay that hides the attacker’s identity.

There are many reasons that compromised systems make excellent platforms for these kinds of illegal activities. If a compromised system has a high-speed Internet connection, it may be able to cause far more damage and mayhem than other systems the attacker controls. Compromised systems can also be used to make it more difficult for authorities to trace an attacker’s actions back to the person behind the keyboard. If an attacker hops through many computers in different jurisdictions—for example, from a compromised Unix account in France to a Windows proxy server in South Korea to an academic computer center in Mexico to a backbone router in New York—it may be effectively impossible to trace the attacker backward to the source.

Tools that are commonly used by attackers include:

nc (a.k.a. netcat)

Originally written by “Hobbit,” netcat is the Swiss Army knife of IP-based networking. It is a valuable diagnostic and administrative tool, but it is equally useful to attackers. You can use netcat to send arbitrary data to arbitrary TCP/IP ports on remote computers, to set up local TCP/IP servers, and to perform rudimentary port scans (a sketch of what such a scan involves appears below).
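
For readers who have never seen one, here is a minimal TCP “connect” scan written in C. It is a sketch of the technique that netcat’s port-scanning mode automates, not netcat itself; the host name and port range are supplied on the command line by whoever runs it.

    /* portscan.c -- a rudimentary TCP "connect" scan, for illustration only.
     * Usage: portscan <host> <first-port> <last-port>
     * Reports each port on which connect() completes a three-way handshake. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netdb.h>

    int main(int argc, char *argv[])
    {
        struct hostent *he;
        int port, first, last;

        if (argc != 4) {
            fprintf(stderr, "usage: %s host first-port last-port\n", argv[0]);
            return 1;
        }
        if ((he = gethostbyname(argv[1])) == NULL) {
            fprintf(stderr, "unknown host: %s\n", argv[1]);
            return 1;
        }
        first = atoi(argv[2]);
        last = atoi(argv[3]);

        for (port = first; port <= last; port++) {
            struct sockaddr_in sin;
            int s = socket(AF_INET, SOCK_STREAM, 0);

            if (s < 0) {
                perror("socket");
                return 1;
            }
            memset(&sin, 0, sizeof(sin));
            sin.sin_family = AF_INET;
            sin.sin_port = htons(port);
            memcpy(&sin.sin_addr, he->h_addr_list[0], he->h_length);

            /* A successful connect() means something is listening on the port. */
            if (connect(s, (struct sockaddr *)&sin, sizeof(sin)) == 0)
                printf("port %d: open\n", port);
            close(s);
        }
        return 0;
    }

Real scanners (and netcat) add timeouts, parallelism, and banner grabbing, but the essential step of trying to open a connection to every port in a range and noting which ones answer is no more complicated than this.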

trinoo (a.k.a. trin00)

trinoo is an attack server originally written by the “DoS Project.” trinoo waits for a message from a remote system and, upon receiving the message, launches a denial-of-service attack against a third party. Versions of trinoo are available for most Unix operating systems, including Solaris and Red Hat Linux. The presence of trinoo is usually hidden. A detailed analysis of trinoo can be found at http://staff.washington.edu/dittrich/misc/trinoo.analysis.

Back Orifice and Netbus

These Windows-based programs are Trojan horses that allow an attacker to remotely monitor keystrokes, access files, upload and download programs, and run programs on compromised systems.

bots

Short for robots, bots are small programs that are typically planted by an attacker on a collection of computers scattered around the Internet. Bots are one of the primary tools for conducting distributed denial-of-service attacks and for maintaining control of Internet Relay Chat channels. Bots can be distributed by viruses or Trojan horses. They can remain dormant for days, weeks, or even years until they are activated, and they can even take actions autonomously.

root kits

A root kit is a program or collection of programs that simultaneously gives the attacker superuser privileges on a computer, plants back doors on the computer, and erases any trace that the attacker has been present. Originally, root kits were designed for Unix systems (hence the name “root”), but root kits have been developed for Windows systems as well. A typical root kit might attempt a dozen or so different exploits to obtain superuser privileges. Once superuser privileges are achieved, the root kit might patch the login program to add a back door, then modify the computer’s kernel so that any attempt to read the login program returns the original, unmodified program, rather than the modified one. The netstat command might be modified so that network connections from the attacker’s machine are not displayed. Finally, the root kit might then erase the last five minutes of the computer’s log files.



[154] This is especially true of government systems. Sadly, cost-containment pressures have led even the military to build safety-critical systems—systems absolutely vital for national and theater defense—on commercial platforms with defective or weak security features and horrendous records of exploitable flaws in released products.

[155] This form of attack is at least 35 years old and well known. It is astonishing that vendors are still building software that can be exploited this way.

[156] We have also heard them referred to as ankle-biters.

[157] In this sentence, we use the word “respected” to mean “taken seriously,” and not “treated with honor because of skill or accomplishment.”