As they commit to this activity, the Federal Government can and will help them, in the spirit of a true public-private partnership. The Government will not dictate solutions and will eschew regulation.
—NATIONAL PLAN FOR INFORMATION SYSTEMS PROTECTION, THE WHITE HOUSE, 2000
By the start of the Obama administration, it was clear to many in government and on the Hill that market forces alone were not driving the private sector to protect the nation’s vital systems against sophisticated criminal groups and foreign nations. So, with support from Democrats in the Senate, in May of 2011, the White House delivered to Congress a comprehensive legislative proposal that would have granted the Department of Homeland Security authority to regulate critical infrastructure for cybersecurity, from oil and gas companies to stadiums.
Industry was not supportive. The U.S. Chamber of Commerce quickly made it its mission to kill the bill, and succeeded, arguing that with the economy still fragile after the financial crisis a few years earlier, regulating cybersecurity would hurt one of the few sectors that was thriving: information technology. The Chamber espoused the mistaken view that cybersecurity regulation would stifle innovation and kill jobs. In case the message wasn't clear, it hung giant banners between the pillars of its headquarters, which faced the White House across Lafayette Square, that spelled out J-O-B-S. That message alienated many of the tech executives who had donated both money and technical talent to elect the President. With the legislative proposal failing to gain support on the Hill, by the summer of 2012 the administration quietly began to plot a different approach.
At that time, cybersecurity nominally fell under John Brennan, the deputy national security adviser for Counterterrorism and Homeland Security (and later CIA director). Brennan, by his own admission, focused mostly on the first half of his title, serving as the administration's Catholic conscience on the targeted killing of terrorists. He delegated most of the day-to-day Homeland Security mission to his deputy, Heidi Avery. Avery is not a household name by any measure. A career intelligence official, she managed to coordinate the Obama administration's response to Deepwater Horizon and a series of major natural disasters without raising her profile above a single blog post. In that same stealthy way, so as not to undermine what was by then a long-shot effort to get the legislative package through Congress, Avery quietly assembled a small team to see which elements of the proposal the President could accomplish through executive action.
Executive orders allow the President to direct agencies to take actions that align with existing laws. The CSIS commission report, "Securing Cyberspace for the 44th Presidency," had suggested that the President draw on existing regulatory authority, such as the EPA's over water systems or DHS's over chemical plants, to regulate for cybersecurity. Avery asked staffers in the NSC cyber office to pursue this strategy. What they didn't expect was that the idea of regulating for cybersecurity would meet resistance within the administration itself.
Any regulation issued by a federal agency gets reviewed by the small but powerful Office of Information and Regulatory Affairs. OIRA is housed inside the Office of Management and Budget in the ugly, 1960s-era New Executive Office Building, directly north of the Eisenhower Executive Office Building that houses the NSC staff. Pennsylvania Avenue was seemingly a dividing line, with the national security team on one side advocating regulation and the Chamber of Commerce and OMB on the other, dead set against it.
When the NSC team went to meet with OIRA to get them on board with the idea of issuing an executive order to expand regulation on cybersecurity, the response that they got was lukewarm.
Jasmeet Seehra, a career official and experienced Washington hand, was surprised that, after failing to get a bill passed because it had been labeled a job killer, the President would sign on to using his authority to unilaterally expand regulation. Unemployment was still hovering around 8 percent. "I know it's none of my business," Seehra said, "but you guys remember it's an election year, right?" Instead, she suggested, might they think about a "nudge"? No one in the room had any idea what she was talking about. She pulled out a copy of a book called Nudge: Improving Decisions About Health, Wealth, and Happiness, and suggested they do some reading. The book was coauthored by her boss, Cass Sunstein.
Sunstein, a Democrat, was not necessarily a fan of regulation. A former colleague of President Obama's at the University of Chicago Law School, Sunstein had advocated simple but not always popular ideas, such as subjecting regulation to cost-benefit analysis. With the economist Richard Thaler, he had written Nudge, arguing that government may be more effective when it shapes voluntary action than when it sets mandatory requirements. On his first date with his future wife, former UN ambassador Samantha Power, he told her his dream job was to run OIRA and implement these ideas.
The nudge the NSC team came up with was the NIST Cybersecurity Framework, discussed in chapter 3. It was meant to spur voluntary efforts by private-sector companies to defend their own infrastructure and, most agree, was largely effective at doing that. The National Strategy for Trusted Identities in Cyberspace, which we will discuss in chapter 8, was a nudge to get industry to solve the long-standing problems that keep things like health records from being brought online and make it burdensome to carry out certain financial transactions. Urging industries to create their own mutual information sharing and analysis organizations was also a nudge.
All of these nudges contributed to some corporations improving their cybersecurity, and there are many more nudges that government should consider. But to get companies that perform critical roles in our economy to achieve the level of defense that is now possible, they may need to be pushed a bit harder. Let's call it a shove. We think it's time to reopen the regulation debate and to think anew about how the government should interact with the private sector to further erode the offense's remaining advantages.
Evan Wolff is one of the leading cybersecurity attorneys in Washington. What that means is that by day he helps his clients respond to cyber incidents, including directing investigations and advising on notifications under state, federal, and international requirements. By night, as a former MITRE data scientist and Global Fellow at the Wilson Center, he thinks and writes about how his clients can mitigate the threat of cyber incidents in the first place, including what can be done to build an effective collective defense. From experience, Wolff recognizes that it is rare for the teams behind security incidents to be so determined that they would stop at nothing to reach their targets. In fact, the forensics usually show that the basic blocking and tackling of cybersecurity was not done, making it easy for the attacker. Fundamentally, what Wolff sees as missing isn't any one security control or group of controls; it's an economic model that would force companies to take on the full societal cost of poor cybersecurity, along with better coordination with federal and industry partners. In the language of economics, we need to make companies "internalize the externalities" of protecting data, protecting networks, and establishing secure communications, says Wolff. "Until we internalize those externalities," he believes, "we aren't going to begin to get the industry part of the collective defense down."
Contrary to the current political dogma, regulation doesn't kill innovation; it can create it. When markets are not valuing what we as a society want them to value, regulation can create whole new markets. The common refrain from industry is that regulation can't possibly move fast enough to keep up with innovation. Yet we can find many examples of twenty-year-old security standards that are still applicable and have yet to be broadly implemented. We also see a growing reluctance among consumers and businesses to use digital products and services because of security risks.
The tide seems to be turning on this issue. Twenty years ago, when President Clinton released the first national strategy on cybersecurity, it "eschewed" regulation. Three years later, the Bush strategy took largely the same stance. The Obama administration, as we've seen, pulled a regulatory proposal after the Chamber of Commerce went on a jihad to stop it. Even as the losses mounted in cyberspace, the Trump administration's cyber strategy was silent on the topic of regulation. Surprisingly, though, the Department of Homeland Security's 2018 cybersecurity strategy said it would use its regulatory authority: "DHS must, therefore, smartly leverage its regulatory authorities in tailored ways, and engage with other agencies to ensure that their policies and efforts are informed by cybersecurity risks and aligned to national objectives to address critical cybersecurity gaps."
Smartly leveraging existing regulatory authorities in tailored ways is exactly what government should be doing. What doomed the Obama legislative proposal was its bid to centralize all regulation of cybersecurity at the Department of Homeland Security, instead of taking the sector-by-sector approach advocated by the bipartisan group behind the CSIS commission. In that approach, DHS would regulate only the sectors it already has responsibility for, such as the chemical, pipeline, and maritime industries, leaving the other sector-specific agencies that understand their industries to regulate them.
Although Clinton, Bush, and Obama eschewed, rejected, or declined to establish a federal cybersecurity regulatory regime, there is a mountain of cybersecurity regulation created by federal agencies. Banks, nuclear power plants, self-driving cars, hospitals, insurance companies, defense contractors, passenger aircraft, chemical plants, and dozens of other private-sector entities are all subject to cybersecurity regulation by a nearly indecipherable array of agencies including the FTC, FAA, DHS, DoD, FERC, DOE, HHS, DOT, OCC, and on and on. Variation in federal regulations should be the result of conscious policy choices, not the incremental accretion of rules written at different times with little central guidance. It is time to step back and assess which of these agencies and regulations have been effective.
What we would like to see is either a comprehensive law passed by Congress or an executive order issued by a President. Where regulations are purposefully different to address specific industries, that is the intelligent, nuanced approach. Where regulations differ because regulators have not consulted with one another, that is mismanagement. Eating up ever greater percentages of security spending with duplicative regulatory compliance is in nobody's interest. For companies with multiple regulators, a streamlined process for verifying compliance, not just for eliminating duplicate requirements, is necessary. The financial sector, through the Federal Financial Institutions Examination Council, has been developing a good model for coordinating such oversight.
The law or order would establish basic cybersecurity regulatory principles and best practices for federal regulators, as well as enumerate what regulations already exist. Among the best practices could be guidelines for such policy issues as accountability, audits, incident reporting, government information sharing, bug bounties, identity and access management, third-party code security reviews, continuous monitoring, public notifications, supply chain security, and personnel certifications. Above all, the key principle is that regulations must be outcome-based, telling regulated entities what they need to achieve, not how to achieve it.
In other areas, we have done this before. After 9/11, when experts worried about other ways that terrorists could use our infrastructure against us as weapons, as al-Qaeda had done with airplanes, many people focused on the chemical industry. Massive volumes of highly toxic chemicals were stored all over the country, often in close proximity to population centers. Through the Chemical Facility Anti-Terrorism Standards (CFATS), the Department of Homeland Security worked in partnership with industry to develop a program that fundamentally reduced the risk to the American people. It did not mandate that facilities relocate or switch to safer production methods, but it had the effect of causing companies to make those decisions. If you had to store toxic chemicals in downtown Boston, the program mandated that you implement security at a level sufficient to thwart an attack on the facility by trained terrorists. Instead of hiring paramilitary forces and hardening their complexes, most companies chose (wisely) to relocate or switch processes.
In the nuclear reactor domain, outcome-based security is achieved through the use of a design basis threat (DBT). While the details of the DBT used by the Nuclear Regulatory Commission are classified, the basic idea is that nuclear facilities need to be able to detect and delay an adversary composed of a certain number of actors with certain skills bearing a certain set of weapons and tools. Replicating this model would be straightforward for cybersecurity, where red teaming is a well-established practice. One effective regulatory approach might be for government agencies to certify the capabilities of red teams and then for those red teams to conduct tests of companies.
One of the best examples of outcome-based regulation is already being intelligently applied to combating cyber threats. Regulation E of the Electronic Funds Transfer Act requires that banks reimburse consumers for fraud losses. Contrary to popular belief, it is not the Federal Deposit Insurance Corporation's coverage of your checking account that keeps you from being liable for fraudulent charges. Instead, the banks must absorb those costs. Originally established when check fraud was the dominant way criminals took money out of the financial system, the rule works equally well now that the threat has largely morphed into online account compromise.
We think mandatory breach disclosure is a starting point. In some version of a truth and reconciliation commission process, companies should be required to disclose all losses of intellectual property going back ten years. Moving forward, the Securities and Exchange Commission should establish a process to adjudicate public disclosures in a timely manner, so that investors are made aware when the intellectual property they are banking on may be used by foreign competitors.
Breach disclosure alone has not had the hoped-for effect of preventing personally identifiable information (PII) from being stolen. Equifax, after disclosing the largest data breach in U.S. history in 2017, recovered its stock price in less than a year. We think it is necessary that the fines for losing PII be significant enough to make companies think twice about storing that data.
The Ponemon Institute, a cybersecurity think tank, puts the cost of a data breach at $141 per lost record. Those costs, incurred through disclosure, class-action lawsuits, state-level fines, and improved security, have not been sufficient to get companies to make the necessary changes in operations and investments to prevent such losses. If, instead, companies knew with certainty that they would be paying, say, $1,000 for every record they lost, that would begin to align incentives.
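To make the incentive gap concrete, here is a back-of-the-envelope sketch. The ten-million-record breach is hypothetical; the two dollar figures are simply the Ponemon estimate and our illustrative fine.

```python
# A back-of-the-envelope comparison of today's costs with a per-record fine.
# The breach size is hypothetical; the per-record figures come from the text.
RECORDS_LOST = 10_000_000            # hypothetical breach
CURRENT_COST_PER_RECORD = 141        # Ponemon Institute estimate
PROPOSED_FINE_PER_RECORD = 1_000     # the illustrative fine suggested above

print(f"Status quo exposure:   ${RECORDS_LOST * CURRENT_COST_PER_RECORD:,}")
print(f"With per-record fine:  ${RECORDS_LOST * PROPOSED_FINE_PER_RECORD:,}")
# Status quo exposure:   $1,410,000,000
# With per-record fine:  $10,000,000,000
```

A roughly sevenfold increase in certain, predictable liability is the kind of number that changes boardroom decisions about whether to store the data at all.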
The prospect of high fines may still mean that many companies will simply choose to take their chances and accept that if they are breached they will lose it all. Trying to force companies with bad cybersecurity out of business should not be the goal. Nonetheless, we think companies should have to prove that they have the resources to pay those fines.
Congress should steal an idea from environmental policy by requiring companies that store PII to purchase bonds that would cover the full societal costs of the loss. Oil tankers operating in U.S. waters must have a Certificate of Financial Responsibility issued by the U.S. Coast Guard National Pollution Funds Center showing that the vessel’s owner has the financial resources to cover the cost of cleaning up a spill.
Such massive policies mean that the underwriters are going to want near certainty that they will never have to pay out. Thus, we now have double-hulled ships and other improvements that have made spills like the Exxon Valdez disaster a thing of the past. If we treat data losses like oil spills and require companies to cover the full societal costs, the market will do the rest by ensuring that spills of data become exceedingly rare.
These diverse regulatory models could be applied differently depending on the sector and the harm that government is trying to prevent. We are certain, however, that the blanket conclusion that regulation is anathema to innovation is wrong, that voluntary collaboration can be enhanced by a regulatory baseline, and that a lack of security is holding us back from fully benefiting from the digital revolution.
While successive federal administrations have shied away from coherent and coordinated cyber regulations, while various federal agencies and departments have developed their own regulations covering the cybersecurity of specific industries, while Congress has remained largely in gridlock on cyber regulations, state governments have acted.
California has required since 2012 that businesses have cybersecurity programs, and in 2016 its attorney general, Kamala Harris, concluded that failing to implement the Center for Internet Security's Critical Security Controls "constitutes a lack of reasonable security." (The CIS recommends twenty specific areas for controls.) In September 2018, Governor Jerry Brown signed SB-327, requiring that by January 1, 2020, manufacturers of devices sold in California implement "reasonable" security features. Given that California's economy is the fifth largest in the world, most device makers will effectively be compelled to comply with the new law.
Not to be outdone, other states have pursued cyber regulations. Ohio enacted legislation in 2018 that protects businesses against tort suits following a successful cyberattack that obtained personally identifiable information, provided the corporation had a cybersecurity program based on the CIS Critical Security Controls and the NIST Cybersecurity Framework (so-called safe harbor). New York's Department of Financial Services (DFS) Regulation 500 has since 2017 applied to foreign banks, state-chartered banks, insurance companies, and other financial entities. It requires a cybersecurity program, a qualified chief information security officer, multifactor authentication, encryption, application security, supply-chain vendor review, asset inventory, and continuous monitoring or annual penetration tests. It also recommends the NIST Cybersecurity Framework and requires an executive to sign off on the efficacy of the company's cybersecurity plan.
Some of these state regulations are commendable ways of guiding recalcitrant corporations to the minimum essential steps to protect themselves, their customers, and the general health of the cyberspace on which we all rely. Unfortunately, many of the laws vary significantly on important issues such as when there is a legal requirement to notify the customer or the government about a possible hack. The multiplicity of varying state regulations combined with the numerous federal rules does provide corporations and their lobbyists with grounds for complaint that it is all just too hard and expensive to track, understand, and implement.
If ever there was an example of interstate commerce, it is the modern corporation's information technology network. With the exception of small businesses, corporate cyber activity is almost always multistate, involving distributed offices, data centers, IT vendors, and customers. Thus, there is a good case for creating superseding and uniform federal regulation. There are only two problems: Congress, which is hyperpartisan and concerned about committee jurisdictional fiefdoms within each house; and lobbying groups such as the U.S. Chamber of Commerce, which blindly and in a knee-jerk manner oppose any new cyber regulation with the erroneous mantra about stifling innovation.
Until Congress and the lobbyists can begin to act in the national interest, there is the risk that any new uniform federal regulations that Congress might pass would actually water down some state laws and be less effective. Thus, we may, for now, be better off with progressive states such as California and New York writing rules that end up being de facto national standards, because most major companies fall under their jurisdiction.
We have been arguing in this book that corporations can now achieve a fairly high level of cybersecurity if they spend enough, deploy state-of-the-art IT and cyber solutions, and adopt the right policies and procedures. Even the power grid, government agencies, and the military could achieve enhanced security. However, in the unlikely event that all of that happened, we would still have a problem.
A foreign nation-state could still cause a high degree of chaos by attacking the backbone of the internet itself, rendering it useless or at the very least only sporadically available. Without the internet, few other pieces of the economy would work. The three remaining problems are, naturally, described by three acronyms: DDoS, DNS, and BGP.
In the attacks against the banks, the Iranians used distributed denial-of-service (DDoS) attacks. More recently, variants of the Mirai botnet used thousands of surveillance cameras as platforms to flood the Domain Name System (DNS), disrupting large parts of the internet. A worse DDoS can easily be imagined and executed, effectively knocking key infrastructure off-line by flooding its internet connections. At a certain level of flood, even the companies that specialize in stopping DDoS attacks, such as Akamai and Cloudflare, will be overwhelmed.
If, as in the case of the Mirai bot, the flood is directed at specific DNS-related IP addresses and at certain companies that specialize in outsourcing Domain Name System look-up services, then that key part of the internet backbone will not work and it will be impossible for some users to find their way to web pages, for some email to reach its intended recipient, or for some of the countless networked devices in our homes and offices to function. Mirai was a relatively small attack compared to what could happen.
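The dependency is easy to demonstrate. Below is a minimal sketch of the DNS step every connection by name relies on; the hostname is just an example.

```python
# Every connection by name starts with a DNS look-up. If the authoritative
# servers (or an outsourced DNS provider like the one Mirai flooded) cannot
# answer, the look-up fails: the web server may be perfectly healthy, but
# you cannot find it.
import socket

def reach_by_name(hostname: str) -> str:
    try:
        ip = socket.gethostbyname(hostname)  # the DNS step
        return f"{hostname} -> {ip}; the connection can now proceed"
    except socket.gaierror:
        return f"DNS look-up failed for {hostname}; the site is effectively off-line for you"

print(reach_by_name("example.com"))
```

Knock out the look-up and you have knocked out the site, without ever touching it.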
While the DNS tells your email message or the server supporting your browser where to go on the internet (or on your corporate intranet) to find the address you are looking for, there is a completely different system used by the internet service providers who own and operate the big fiber-optic pipes that are the backbone of the internet. That system, the one that tells Verizon or AT&T how to route a message to get to a server that is not on their own network, is called BGP (it means Border Gateway Protocol, but just say BGP like everyone else). BGP is still the biggest security flaw in the internet, even twenty years after Mudge Zatko testified that he and the other members of the L0pht could take it down in thirty minutes.
Think of BGP as the internet's version of the Waze app. It tells internet traffic the route from where it is to where it wants to go. If you get onto the internet by connecting to Verizon in the United States and you want to read a web page of the Australian Football League that sits on a server in Australia connected to the internet through the local ISP Telstra, Verizon will consult the BGP tables. There it will find the route to that server, perhaps from Verizon to AT&T to Telstra.
The problem is that essentially any internet service provider in the world can contribute to the BGP. So, what if China Telecom posted on the BGP table that the Aussie football server was actually on its network? Then your computer would connect to China Telecom while looking for Australia. And that is what has been happening.
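Why do routers believe the lie? Because BGP has no built-in way to verify who really owns an address block, and the most specific matching announcement wins. Here is a toy model of that selection rule; the prefixes and origin names are illustrative, not real announcements.

```python
# A toy model of BGP's core flaw: routers accept whatever prefixes their
# peers announce, and the longest (most specific) matching prefix wins.
# Prefixes and origin names are illustrative only.
from ipaddress import ip_network, ip_address

routes = [
    (ip_network("203.0.113.0/24"), "Telstra (legitimate origin)"),
]

def best_route(dest: str):
    addr = ip_address(dest)
    matches = [(prefix, origin) for prefix, origin in routes if addr in prefix]
    return max(matches, key=lambda m: m[0].prefixlen)  # longest-prefix match

print(best_route("203.0.113.10"))   # -> Telstra

# A hijacker announces a more specific slice of the same address space...
routes.append((ip_network("203.0.113.0/25"), "hijacker"))

print(best_route("203.0.113.10"))   # -> hijacker; the traffic now flows there
```

Fixes exist, such as cryptographically signed route origins (RPKI), but deployment has been slow and voluntary.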
According to Chris Demchak of the U.S. Naval War College and Yuval Shavitt of Tel Aviv University, China Telecom has been messing around with the BGP tables. For instance, in February 2016 and for six months after, all traffic from Canada to South Korea was misrouted through China. In October 2016, some traffic from the United States to Italy went through China. In April 2017, the pathway from Norway to Japan was altered to go through China. In July 2017, it was the connection from Italy to Thailand. You get the picture.
While traffic is going through China, that traffic (emails, for example) can be copied or sent into a black hole. The Chinese are not alone. Russia and other countries have also been regularly redirecting internet traffic that was not supposed to go through them. Iran has been grabbing traffic for secure messaging apps such as Signal. While such messages are encrypted, the “To” and “From” metadata could be interesting to the Iranian secret police.
China Telecom’s attempts to redirect traffic through China is greatly aided by the fact that it operates eight internet points of presence (PoPs) inside the United States and has its own West Coast fiber running from Hillsboro, Oregon, to Morro Bay, California. Needless to say, AT&T does not have PoPs in China, nor cables running from Shanghai to Dalian. China would never agree to that, because in a crisis the United States could really throw the Chinese internet into chaos by playing with the BGP tables and grabbing their traffic.
Whatever you think about the potentially beneficial or pernicious effects of cybersecurity regulation of U.S. companies, the very backbone of the internet, made up of the long-haul fiber-optic cables and the routing systems, should be secured so that bad guys trying to mess around with the DNS and BGP would have a harder time. One way to do that would be for the Federal Communications Commission (FCC) to regulate the internet backbone, the DNS, and the BGP systems to require some basic security functionality. The FCC refuses to, once again out of an ideological antipathy toward regulation, thinly disguised as a fear that regulation would impede innovation.
Do you chew Trident or Dentyne gum? Do you like Philadelphia Cream Cheese and sometimes put it on Ritz crackers? When no one is looking, do you eat all of the Oreo cookies in the roll? Or when you have a cough, do you pop in a Halls lozenge? Then you already know some of the products of a big, multinational food company you've probably never heard of: Mondelēz. Production of all of those necessities of life came to a crashing halt because of the NotPetya attack we discussed in chapter 2. Mondelēz, however, had insurance and therefore assumed that it had transferred its risk and that its claim to cover some of its $100 million in losses would be honored. Not so fast.
Zurich, the big Swiss insurance company, refused to pay out. The Swiss said that NotPetya was an act of war, an attack carried out by the Russian military and, therefore, excluded from the insurance policy. Thus, the debate about what is and what is not cyber war has gone into a courthouse and the future role of cyber insurance may be decided over who pays for the losses on the day the Oreos stopped.
The outcome of the case is important because many corporations rely on insurance as part of their overall risk strategy. Moreover, cyber insurance could play an even bigger role if it were combined in some creative ways with regulation, as we will discuss later. The last ten years have seen a burgeoning cyber risk insurance market. Corporations and even city governments have attempted to transfer risk to insurance companies by buying up new cyber insurance policies. There was a lot of hope among cybersecurity experts that this trend could nudge corporations into improving their security by linking insurance coverage to meaningful security measures. Alas, that has not happened. Instead, insurance companies have found a new source of profit, and that income has actually worked against meaningful security improvements.
At first, staid old insurance companies were fearful of covering cyber risk. They put riders in their existing casualty and loss policies that exempted damage from cyberattack. They did so because there was no actuarial data: no reliable record of how often attacks occur, what losses from attacks typically amount to, or how attackers actually execute their breaches. To insurance companies used to predicting with more than 90 percent accuracy when you will get into a car crash and how much it will cost them, cyber was a scary place, a terra incognita.
Gradually, however, some companies dipped a toe, not five or ten toes, in the water. They covered a few kinds of costs and only up to fairly low dollar amounts. They would pay for a cyber incident response team, maybe for the cost of giving customers some relatively useless credit monitoring service postbreach. Maybe they would pay out on some legal costs and cover some business continuity losses. What the insurance policies would not cover were the two most expensive effects of cyber breaches: reputational damage and intellectual property theft. If China steals your research-and-development secrets, you are on your own.
Most of the carriers began to require some assurance from their clients that the insured party had taken some minimal cybersecurity measures. What they almost never did, however, was to bother to check on whether the corporations were actually doing what they claimed to be doing. As one insurance company official told us, “We can always check after they file a claim and if they weren’t living up to the minimum practices they said they were, we can just deny the claim.”
Insurance companies could, of course, require continuous monitoring software to report on a company’s state of security and compliance in real time. Doing this would be the cyber insurance equivalent of the Progressive automobile insurance policies that involve installing driver behavior monitors in cars. Why wouldn’t insurance companies want that kind of information? Money.
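What such telemetry might look like is not mysterious. Below is a minimal sketch of a daily compliance snapshot an insurer could require; the fields, numbers, and the idea of posting them to an insurer are all hypothetical, and any real product would differ.

```python
# A hypothetical daily compliance snapshot, the cyber equivalent of a
# driving monitor. In a real agent these values would be measured by
# scanning the network, not hard-coded.
import json
from datetime import datetime, timezone

def compliance_snapshot() -> dict:
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "assets_inventoried": 1250,        # asset discovery
        "critical_patches_missing": 3,     # patch state
        "open_high_severity_vulns": 7,     # vulnerability assessment
        "mfa_enabled_everywhere": True,
    }

# An agent would send this to the insurer every day; here we just print it.
print(json.dumps(compliance_snapshot(), indent=2))
```

An insurer receiving that feed could price the premium to the actual, current state of the network rather than to a questionnaire filled out once a year.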
After years now of selling limited cyber coverage, most insurance companies have found that doing so is profitable. While they do not sell anywhere near as much cyber insurance as life, casualty, home, or auto insurance, they keep a much higher percentage of the premiums they collect when they sell cyber insurance. The payouts to the insured are a smaller percentage of the revenue than in most other forms of insurance. So why rock the boat when you are making money?
Most insurance, somewhat oddly, is not regulated at the federal level. Health insurance is, of course, or was until the Trump administration made a hash of the Affordable Care Act. Almost all other kinds of insurance are supervised by insurance commissioners in the fifty states. Some states, such as California, elect their insurance commissioners; in others, governors appoint them. In New York, the Department of Financial Services doubles as the insurance regulator.
What worries some of the state insurance commissioners we have talked with is the prospect of the often-discussed "cyber Pearl Harbor" or "cyber 9/11." What they mean is a large-scale, catastrophic cyber event that does not fall under one of the many outs and exclusions the insurance companies have written into their policies. The commissioners worry that such an event could force insurers to pay out so much that they might become insolvent, unable to pay claims for other kinds of damage. Thus, the commissioners are beginning to think about a new law similar to the Terrorism Risk Insurance Act, TRIA.
Enacted after 9/11, TRIA provides a partial federal-government financial backstop to the insurance industry in the event of a major terrorist attack that exceeds the financial ability of the insurance industry to respond to claims. A “Cyber War Risk Insurance Act” is one example of a possible useful new regulation. It could provide a partial federal financial backstop to the industry in case of a national cyber event. It would also be an opportunity to create some meaningful compliance standards for insured entities.
We would suggest that corporations would be eligible for such CWRIA-backed insurance only if they had installed such features as an approved continuous monitoring system to perform asset discovery, assess the state of critical patches, and conduct vulnerability assessments. Companies that went out of compliance for more than thirty consecutive days would lose coverage until they remedied their deficiencies. Now that would not be a federal mandate, but it would be one hell of a nudge, maybe even a shove.
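The thirty-day rule is simple enough to state in code. Here is a sketch, assuming a daily compliance feed like the monitoring snapshot above; the data and threshold handling are illustrative.

```python
# A sketch of the proposed CWRIA eligibility rule: coverage lapses only
# after more than thirty consecutive days out of compliance, and resumes
# once the deficiencies are remedied. The compliance feed is hypothetical.
from datetime import date, timedelta

MAX_CONSECUTIVE_DAYS_OUT = 30

def coverage_active(daily_compliance: dict[date, bool], today: date) -> bool:
    """daily_compliance maps each day to True (compliant) or False."""
    streak = 0
    day = today
    while daily_compliance.get(day) is False:  # count the current run of bad days
        streak += 1
        day -= timedelta(days=1)
    return streak <= MAX_CONSECUTIVE_DAYS_OUT

# Example: thirty-one straight noncompliant days, and coverage lapses.
today = date(2020, 3, 1)
feed = {today - timedelta(days=i): False for i in range(31)}
print(coverage_active(feed, today))  # False
```

The enforcement mechanism is the policy itself: fall out of compliance for too long, and the backstopped coverage simply is not there.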
The employees were looking forward to the holiday party, but then one of them, assisted by his wife, started shooting everybody. It was December 2, 2015, in San Bernardino, California. Syed Farook and his wife, Tashfeen Malik, killed fourteen and wounded twenty-two, before being chased and killed by local police. The FBI was immediately called in to lead the investigation.
By February, the FBI was saying publicly that it could not open one of the iPhones used by the terrorists. The devices were set so that after a few failed attempts to open them with a PIN, the phones would wipe all data. FBI Director James Comey called upon Apple to develop software that could be used to bypass the PIN and unlock the devices. Apple CEO Tim Cook, correctly in our view, refused, saying that Apple could not be compelled to weaken the security of its own products. Comey took Cook to court.
The backstory was that Comey had been campaigning inside the Obama administration for new legislation that would require companies that make encryption software to build in ways that the government could decrypt the code. It was an idea that had gone down in flames twenty years earlier, when Congress rejected a proposal for a so-called Clipper chip that would permit a court to unlock encryption upon petition by the government. After months of lobbying inside the administration, Comey had lost. The Obama administration would not undercut encryption. Then San Bernardino happened, and Comey saw his chance to get a court to give him what the White House would not.
How could anyone possibly deny the FBI’s request to help it in an ongoing investigation of such a heinous terrorist attack? Comey and his supporters at the Justice Department claimed they were not seeking a legal precedent, they were just worried about this one case. There were, however, hundreds of other iPhones involved in other cases that the FBI or local police could not open. What happened next tells you a lot about the value of encryption to cybersecurity. Far from supporting Comey and the FBI, former high-level national security officials came out of the woodwork to support Apple, including former CIA and NSA directors. We were part of that chorus. What we were all saying was that encryption is essential to secure private-sector networks and databases. If Apple created a way to break the encryption on its devices, malicious actors would find a way to use it too.
In a heated debate at the RSA security conference in 2016, Dick Clarke asserted that the government was looking to create a bad precedent and that, in fact, it already had classified means to open the phone. All that the FBI needed to do was to “drive up the Baltimore-Washington Parkway to Fort Meade,” the home of the NSA. John Carlin, then the assistant attorney general for national security, strenuously denied it was about precedent and asserted that there was no existing method of opening the device available to the government. Comey told the same story in congressional hearings.
Then, while Comey’s case against Cook worked its way through the courts, the FBI announced that it had opened the iPhones with the help of an Israeli security firm. The court case ceased. Much later, the Justice Department Inspector General reviewed what had happened and concluded that while Comey and Carlin were denying that the government could open the iPhones, the FBI actually had the capability all along. The IG declined to investigate whether Comey had knowingly misled Congress or, alternatively, that no one in the FBI had bothered to tell their leader that he was wrong as he went around the Capitol for weeks saying no capability existed. The latter possibility seems unlikely.
The takeaway from this tempest in a Capitol teacup is that even national security officials, or maybe especially national security officials, think that encryption is a sine qua non for corporate cybersecurity, and do not think government should have a hand in weakening it. Many national security officials were even willing to break ranks with the FBI to stress this point. Of course, no discussion of the virtues of encrypting everything would be complete these days without mentioning that a lot of companies are having their networks encrypted involuntarily, and not by the government.
“I have a friend whose company just got hit. All of their data got encrypted. Do you think they should pay the ransom?”
We have had more than a few calls like that. We usually say that the answer is probably yes, you should pay, unless you have multiple, reliable backup databases. Then our callers often respond, "Okay, then do you know where I can buy some Bitcoin?"
In 2017 and 2018, there was a near pandemic of ransomware in North America and Europe. According to the Royal Canadian Mounted Police, sixteen hundred ransomware attacks were occurring each day in Canada in 2015. By the fall of 2016, that number had nearly doubled. As we said, a pandemic.
Hackers could easily buy attack kits that would find vulnerabilities allowing them to go from public-facing web pages or email servers into an entire corporate network. There they could deploy something else easily procured on the dark web: software that finds and encrypts all data stored on a network, including emails, Word and Excel documents, Salesforce, Oracle, and SAP files, everything. Then comes the ransom offer.
Want the key to unlock everything we encrypted? Then send us one hundred thousand dollars' worth of Bitcoin. Although Bitcoin was supposed to be a safe way of doing business because it involves a publicly viewable blockchain record, it has actually turned out to be easy to use to hide money flows. Bitcoin is the coin of the realm when it comes to ransomware, allegedly very difficult to trace.
Faramarz Savandi and Mohammad Mansouri knew how to do it. The two Iranians wrote their own ransomware, which became known as the SamSam kit. Over two years they hit about two hundred networks in the United States and collected more than $6 million in Bitcoin. The damage their ransomware did to those networks was estimated at $30 million. Among their victims were numerous hospitals and medical facilities (MedStar Georgetown, Kansas Heart Hospital, Hollywood Presbyterian, LabCorp) and city governments and agencies (Atlanta, Newark, the Port of San Diego).
In Atlanta, Mayor Keisha Lance Bottoms declined to pay the fifty-thousand-dollar demand. Most of her city's services, including some police functions, were down for a week. One estimate put the cost of coming back online at $17 million. The two Iranians were also active in attacking networks in Canada and the United Kingdom. They remain at large and are believed to be living well in Tehran. There are numerous others in many countries engaged in the same profitable trade, which is estimated to have produced more than a billion dollars in revenue in the last few years, from thousands of ransoms around the world carried out by dozens of attack groups.
So, back to our caller. Why do we often tell them to pay up? There is honor among thieves, and if you pay, you usually get back to business pretty quickly. If the ransomware thieves did not free up your network when you paid up, then word would get around and no one would pay. After all, they have their reputation to maintain. You can, however, get around them sometimes.
It’s all about how good your backup is and how long you can afford to have your network down. If you back up your data every day, you may well have backed up the malicious software that later infected your network. Hackers are waiting a week or so after they get on your network before activating their encryption software. By so doing, they get in your backup. If you simply mount your backup after your data is involuntarily encrypted, it will just happen again, only this time you will have lost your backup as well.
The solution is to keep multiple backups of varying ages, to keep the backups segregated into discrete modules so that everything is not in one master file, and to keep so-called golden disks, the clean originals of key applications, web pages, etc. Then you can experiment with gradually restoring your network, assuming your corporation can be off-line for forty-eight or seventy-two hours. If you can’t be, you may have to pay up. As we have been saying for years, cybercrime pays, at least if you are willing to live in Tehran or someplace similar and never use your ill-gotten gains to vacation somewhere nice that has an extradition treaty with the United States.
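One common way to get backups of varying ages is a tiered retention scheme. Here is a minimal sketch; the tier lengths and dates are illustrative, not a standard.

```python
# A sketch of tiered backup retention: keep dailies, weeklies, and
# monthlies so that at least one snapshot predates the attackers' dwell
# time on the network. The tier lengths here are illustrative.
from datetime import date, timedelta

def snapshots_to_keep(today: date) -> list[date]:
    keep = [today - timedelta(days=d) for d in range(7)]            # daily, past week
    keep += [today - timedelta(weeks=w) for w in range(1, 5)]       # weekly, past month
    keep += [today - timedelta(days=30 * m) for m in range(1, 7)]   # ~monthly, six months
    return sorted(set(keep))

# If the intruders dwelled for ten days before encrypting, the recent
# snapshots are tainted, but the older tiers are still clean.
today = date(2019, 6, 1)
dwell = timedelta(days=10)
clean = [s for s in snapshots_to_keep(today) if today - s > dwell]
print(f"{len(clean)} retained snapshots predate the intrusion")
```

Segregating those snapshots from the production network, so the attackers cannot encrypt the backups too, matters as much as the schedule itself.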
Andy Ozment, a former White House and Homeland Security official, has provocatively proposed that ransomware may be one of the more useful regulatory mechanisms we’ve got, essentially imposing fines on companies that have not invested in basic cybersecurity. It is a compelling argument, but we think it is time to remove the incentive for cyber criminals to use ransomware by having a government law or regulation that bans paying the ransom or institutes a fine in addition to whatever ransom is paid.
Ransomware is funneling billions of dollars to the underground economy. As DEF CON cofounder Jeff Moss has pointed out, even if most of those billions of dollars go to buying Maseratis and leather jackets in Moscow suburbs, the remaining millions are going to buying more and better capabilities, expanding teams, and attracting more criminal groups to the business. We need to stop funding the development of our adversaries.
In the next three chapters we will look at how smart government intervention in the markets could solve the problem of identity theft and the workforce crisis, and secure the power grid. We will also look at how the government can do a better job of regulating its own security.