One of the more vigorous public policy debates in the security field centers on the publication of information about security vulnerabilities. Some argue that vulnerability publication should be restricted in order to limit the number of people with the knowledge and tools needed to attack computer systems. Proponents of restriction are particularly concerned with information sufficient to enable others to breach security, especially exploit or proof-of-concept code.
The benefits of publication restrictions theoretically include denying script kiddies attack tools, reducing the window of vulnerability before a patch is available, and managing public overreaction to a perception of widespread critical insecurity.
Opponents of publication restrictions argue that the public has a right to be aware of security risks, and that publication enables system administrator remediation while motivating vendors to patch. They also question whether restricting white hat researchers actually deprives black hats of tools needed to attack, under the theory that attackers are actively developing vulnerability information on par with legitimate researchers.
Today many, if not most, security researchers have voluntarily adopted a delayed publication policy. While these policies differ in detail, they come under the rubric of responsible disclosure. The term has come to mean that the vulnerability is disclosed, but proof-of-concept code is not distributed until the vendor issues a patch.[18] Once the patch is issued, the patch itself can be reverse engineered to reveal the security problem, so there is little point in restricting publication after that time. In return, responsible vendors will work quickly to fix the problem and credit the researcher with the find.
Various businesses that buy and sell vulnerabilities threaten this uneasy balance, as do researchers and vendors that refuse to comply. For example, in January 2007, two researchers published a new flaw in Apple's operating system every day of the month without giving the company advance notice.
Can we regulate security information? In the U.S., the dissemination of pure information is protected by the First Amendment. Many cases have recognized that source code, and even object code, are speech protected by the First Amendment, and as a general principle, courts have been loath to impose civil or criminal liability for truthful speech, even speech that instructs on how to commit a crime. (The infrequent tendency of speech to encourage unlawful acts does not justify banning it.)
On the other hand, information about computer security differs from information in other fields of human endeavor because of its reliance on code to express ideas.[19] Code has a dual nature: it is both expressive and functional. Legislatures have tried to regulate the functionality of code much as they regulate physical tools that can be used to commit criminal acts.[20] But the law cannot regulate code's function without affecting its expression, because the two are intertwined.
While current case law holds that laws regulating the functionality of code are acceptable under the First Amendment if they are content-neutral, lawmakers have advocated, and in some cases passed, laws that regulate publication itself. For example, the Council of Europe's Cybercrime Treaty requires signatories to criminalize the production, sale, procurement for use, import, and distribution of a device or program designed or adapted primarily for the purpose of committing unauthorized access or data interception. Signatories may exempt tools possessed for the authorized testing or protection of a computer system. The United States is a signatory.
As previously discussed, the U.S. government and various American companies have used Section 1201 (which regulates the distribution of software primarily designed to circumvent technological protection measures that control access to a work protected under copyright law) to squelch publication of information about security vulnerabilities. But where no particular statute applies, security tools, including exploit code, are probably legal to possess and to distribute.
Nevertheless, companies and the government have tried to target people for disseminating information, using the negligence tort, conspiracy law, or aiding-and-abetting liability.
To prove negligence, the plaintiff has to establish:
Duty of care
Breach of that duty
Causation
Harm
Duty of care means that a court finds either that the general public has a responsibility not to publish exploit code simply because it is harmful, or that the particular defendants have a responsibility not to publish because of something specific about their relationship with the company or its customers. Yet the First Amendment protects the publication of truthful information, even in code format. Code is a bit different, though, because code works; it doesn't just communicate information. Still, no case has ever held that someone has a legal duty to refrain from publishing information to the general public when the publisher has no illegal intent. Given the general practice of the community and prevailing free speech law, I think that duty would be hard to get a court to establish. I can imagine, however, a situation in which a court would impose a duty of care on a particular researcher with a prior relationship with a vendor. This hasn't happened yet.
With regard to conspiracy, the charge requires proof of an agreement. If you publish code as part of an agreement to illegally access computers, that is a crime. The government recently proved conspiracy against animal rights activists (Stop Huntingdon Animal Cruelty) using evidence of web site language supporting illegal acts committed in protest of inhumane treatment. The convictions were decried as a violation of the First Amendment, but there were underlying illegal activities, and while the web site operators were not directly tied to those activities, the web site discussed, lauded, and claimed joint responsibility for them (by using the word "we" with regard to the illegal acts).
Aiding and abetting requires the government to show an intent to further someone else's illegal activity. Intent, as always, is inferred from circumstances.
Courts rarely infer illegal intent from mere publication to the general public, but it has happened. For example, some courts have inferred a speaker's criminal intent from publication to a general audience, as opposed to a coconspirator or known criminal, where the publisher merely knows that the information will be used as part of a lawless act (United States v. Buttorff, 572 F.2d 619 [8th Cir.], cert. denied, 437 U.S. 906 [1978] [information aiding tax protestors]; United States v. Barnett, 667 F.2d 835 [9th Cir. 1982] [instructions for making PCP]). Both Buttorff and Barnett suggest that the usefulness of the defendant's information, even if distributed to people with whom the defendant had no prior relationship or agreement, is a potential basis for aiding-and-abetting liability, despite free speech considerations.
In contrast, in Herceg v. Hustler Magazine, 814 F.2d 1017 (5th Cir. 1987), a magazine was not liable for publishing an article describing autoerotic asphyxiation after a reader followed the instructions and suffocated. The article included details about how the act is performed and the kind of physical pleasure those who engage in it seek to achieve, along with 10 different warnings that the practice is dangerous. The court held that the article neither incited nor encouraged imminent illegal action, so it was protected by the First Amendment.
Legitimate researchers are not comforted by this lack of legal clarity. Security researchers frequently share vulnerability information on web pages or on security mailing lists. These communities are open to the public and include both white-hat and black-hat hackers. The publishers know that some of the recipients may use the information for crimes. Nonetheless, the web sites properly advise that the information is disseminated for informational purposes and to promote security and knowledge in the field, rather than as a repository of tools for attackers.
A serious problem is that prosecutors and courts may weigh the perceived legitimacy of the publisher's "hacker" audience, or the respectability of the publisher himself, in deciding whether the researcher published with criminal intent.
In one example, in 2001 a Los Angeles-based Internet messaging company convinced the U.S. Department of Justice to prosecute a former employee who informed the company's customers of a security flaw in its webmail service. The company claimed that the defendant was responsible for its lost business. Security researcher Bret McDanel was convicted of violating 18 U.S.C. § 1030(a)(5)(A), which prohibits the transmission of code, programs, or information with the intent to cause damage to a protected computer, for sending email to customers of his former employer informing them that the company's web messaging service was insecure. The government's theory at trial was that McDanel impaired the integrity of his former employer's messaging system by informing customers about the security flaw. I represented Mr. McDanel on appeal.
On appeal, the government disavowed this view and agreed with the defendant that a conviction could be based only on evidence that the "defendant intended his messages to aid others in accessing or changing the system or data."[21] McDanel's conviction was overturned on appeal, but not before he served 16 months in prison. Nothing in the text of Section 1030 requires proof of intent to aid illegal access, but because McDanel's actions were speech, the government had to read that requirement into the statute to maintain its constitutionality.
In late 2006, Chris Soghoian published an airline "boarding pass generator" on his web site. The generator took a Northwest boarding pass, which the airline distributes in a modifiable format, and allowed users to type their own name on the document. Though the Transportation Security Administration (TSA) had long been aware of the ease of forging boarding passes, it had done nothing, and the problem was not widely known. After Soghoian's publication, there was something of a public outcry, and Congress called for improved security. The Department of Homeland Security paid Soghoian a visit, investigating whether he was aiding and abetting others in fraudulently entering the secured area of an airport. Because Soghoian had never used a fake boarding pass, nor provided one to anyone, and because the language on his web site made clear that his purpose was to critique the security of the boarding pass checkpoint, the Department of Homeland Security recognized that the publication was not criminal. Nonetheless, it sent a cease-and-desist letter to his ISP, which promptly removed the page.
The blunt lesson from these cases is that it's risky to be a smart ass. You have a right to embarrass the TSA or to show how a company is hurting its customers, but being a gadfly garners attention, and not all attention is positive. The powers that be do not like being messed with, and if the laws are unclear or confusing, they'll have even more to work with if they want to teach you a lesson. This isn't to say there is no place for being clever, contrary, or even downright ornery. Some of the most important discoveries in network security and other fields have been made by people whose motivation was to outsmart and humiliate others. If this is your approach, be aware you are inviting more risk than someone who works within the established parameters. You may also get more done. Talk to a lawyer. A good one will point out the ways in which what you are doing is risky. A great one will help you weigh various courses of action, so you can decide for yourself.
Be aware there may be statutes in your state that apply to publications that are beyond the scope of this chapter, that have arisen since this book was last printed, or that apply to your special circumstance. In general:
Publish only what you have reason to believe is true.
Publish to the vendor or system administrator first, if possible.
Don't ask for money in exchange for keeping the information quiet. I've had clients accused of extortion after saying they would reveal the vulnerability unless the company paid a finder's fee or entered into a contract to fix the problem.
Do not publish to people you know intend to break the law. Publish to a general audience, even though some people who receive the information might intend to break the law.
If you are thinking about publishing in a manner that is not commonly done today, consult a lawyer.
[18] Paul Roberts, Expert Weighs Code Release In Wake Of Slammer Worm, IDG News Service, Jan. 30, 2003, available at http://www.computerworld.com/securitytopics/security/story/0,10801,78020,00.html; Kevin Poulsen, Exploit Code on Trial, SecurityFocus, Nov. 23, 2003, at http://www.securityfocus.com/news/7511.
[19] See 49 U.C.L.A. L. Rev. 871, 887–903.
[20] See, e.g., 18 U.S.C. § 2512(1)(b) (illegal to possess eavesdropping devices); Cal. Penal Code § 466 (burglary tools).
[21] Government's Motion for Reversal of Conviction, United States v. McDanel, No. 03-50135 (9th Cir. 2003), available at http://cyberlaw.stanford.edu/about/cases/001625.shtml.