You should never allow anyone from outside your province to come close to the guardhouse, even if he or she is a relative.
—Yoshimori Hyakushu #93
In feudal Japan, it was typical for traveling merchants, monks, priests, performers, entertainers, beggars, and other outsiders to operate in or near an active military camp or castle, as the encamped soldiers made frequent use of their services.1 However, some of these outsiders were secret operatives paid to collect information for the soldiers’ enemies. Some were even disguised shinobi who took advantage of being near the castle to study or engage their targets, gather intelligence, and even infiltrate or attack the camp.2
Bansenshūkai describes how military commanders can block such threats. The most effective approach is to disallow suspicious activities and fraternization near the camp. Discussing a policy “strictly brought home to everyone by repetition,” the scroll warns that anybody who looks suspicious should not be allowed into the castle or camp at any time, mitigating the opportunity for suspicious activity to become a malicious threat.3 Trained, disciplined troops allowed only trusted merchants to operate in or near their encampment, and they actively blocked unknown or untrusted merchants from offering services in the area. Shinobi themselves operated under the broader philosophy of distrusting anyone they did not know.4 Furthermore, Bansenshūkai recommends that shinobi help trusted merchants and vendors fortify their huts and shops against fire to mitigate the risk that fire would spread from those shops to the encampment, whether by accident or arson.5
In this chapter, we will review the “block malicious only” mode, an approach that can devolve into an endless chase after each new domain, IP, URL, and file shown to be malicious. We will explore why many organizations (and much of the security industry) choose to pursue this never-ending threat feed rather than adopt a “block all suspicious” mode of operation. We’ll also outline strategies and guidance for handling the technical challenges of inverting that default, that is, blocking everything not explicitly trusted. Finally, in this chapter’s Castle Theory Thought Exercise, we’ll explore the ways internal staff may attempt to bypass this “block all suspicious” security control.
In terms of cybersecurity, imagine the encampment is your organization and the merchants, entertainers, and everyone else beyond your perimeter are the many external services and applications available on the internet. All the legitimate business interconnections to external sites that help your staff do their jobs—not to mention the news, social media, and entertainment sites that your employees check during their breaks—allow suspicious entities to connect to your organization and operate under the guise of normal business. Threat actors seeking to perform initial access, delivery, and exploitation often require these external communication capabilities to go unchallenged, uninspected, and unfiltered. Their ensuing offensive tactics include perpetrating drive-by compromises on websites your staff visits, sending spear-phishing emails with links and attachments to your employees, performing network scans of your environment from untrusted IPs, and using command and control (C2) sites to obtain information and send instructions to malware implants on compromised machines, to name just a few.
To combat these attacks, the cybersecurity industry has established functional security controls, policies, and systems that whitelist appropriate communications to known and trusted associates, partners, and other verified, third-party business entities. Organizations can create whitelists of domain names, IP blocks, name servers, email addresses, websites, and certificate authorities that allow staff to communicate only with trusted partners and vice versa. Under these strict whitelisting conditions, before attempting to breach an organization, threat actors must first devote the time, resources, and focus to infiltrating trusted partners.
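To make that allowlisting logic concrete, here is a minimal sketch of a default-deny domain check in Python. The domain names and the domain_allowed() helper are illustrative placeholders rather than any particular product’s API; a production gateway would enforce the same decision at the proxy, firewall, or DNS resolver.

```python
"""Minimal sketch of a default-deny domain check. All names here are
hypothetical placeholders, not a specific product's API."""
ALLOWED_DOMAINS = {
    "partner.example.com",  # verified business partner (hypothetical)
    "mail.example.org",     # trusted mail provider (hypothetical)
}

def domain_allowed(domain: str) -> bool:
    """Permit a connection only if the name, or one of its parent
    domains, is explicitly whitelisted; everything else is suspicious."""
    parts = domain.lower().rstrip(".").split(".")
    # Check the exact name and each parent domain, e.g. for
    # cdn.partner.example.com also check partner.example.com.
    for i in range(len(parts) - 1):
        if ".".join(parts[i:]) in ALLOWED_DOMAINS:
            return True
    return False  # default deny: unknown means blocked

for name in ("partner.example.com", "cdn.partner.example.com", "evil.test"):
    print(name, "->", "allow" if domain_allowed(name) else "block")
```

Note that the check returns False unless a match is found: the burden of proof sits with the connection, not the defender, which is the inversion this chapter advocates.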
However, while the technical problem has been solved, the human problem remains. It is part of the human condition to seek stimulation through outside relationships as well as entertainment and news. Consequently, enforcing a “block suspicious” policy can be challenging for management, as it requires the willpower to lead significant cultural and behavioral change across all parts of an organization.
For example, suppose you notice that most of your organization’s internet traffic comes from employees streaming videos on entertainment sites. You note that this activity is not in line with their job duties, and you decide to block traffic to and from all the major entertainment sites using layer-7 inspection tools.
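As a concrete illustration, the following is a minimal sketch of such a layer-7 block, written as a mitmproxy addon in Python, assuming a recent mitmproxy release. The blocked suffixes are hypothetical stand-ins for an entertainment category list; a commercial next-generation firewall or secure web gateway would enforce the same decision with built-in category policies.

```python
"""Sketch of a layer-7 domain block as a mitmproxy addon.
Run with: mitmdump -s block_entertainment.py
The suffixes below are hypothetical stand-ins for a category list."""
from mitmproxy import http

# Hypothetical entertainment/streaming category (placeholders).
BLOCKED_SUFFIXES = ("video.example", "stream.example", "tube.example")

class BlockEntertainment:
    def request(self, flow: http.HTTPFlow) -> None:
        host = flow.request.pretty_host.lower()
        if host.endswith(BLOCKED_SUFFIXES):
            # Answer from the proxy with a 403 so the request
            # never leaves the network.
            flow.response = http.Response.make(
                403, b"Blocked by acceptable-use policy",
                {"Content-Type": "text/plain"},
            )

addons = [BlockEntertainment()]
```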
While this reasonable measure is in line with your business needs, and perhaps even with documented IT policy, many organizations that have gone through this process have come to regret it. Employees will likely complain or put social pressure on you to unblock the offending traffic, and some will surely attempt to circumvent the policy via encryption or tunneling technology, proxy avoidance, or entertainment sites that carry similar content but evade your filters, putting your network and systems at greater risk.
One popular solution is to provide a non-business internet connection, or a bring your own device (BYOD) network, on which employees can stream videos on their personal devices. You could even set up separate machines that employees use for internet research and on breaks, but not for business functions. The US Department of Defense (DoD) uses this approach, providing employees with a separate, dedicated system for access to the Non-classified Internet Protocol Router Network (NIPRNet); network guards physically and logically segregate this system for information flow control.6 The DoD takes further measures on NIPRNet to whitelist all known non-malicious internet resources and to deny large IP blocks and autonomous system numbers (ASNs) it deems suspicious, or at least unnecessary.
For the past decade or more, organizations have constantly consumed threat feeds of known malicious IPs, domains, and URLs, so blocking known malicious traffic (blacklisting) is easy enough. Blocking suspicious traffic prevents unknown malicious traffic from infiltrating, but it is considerably harder for organizations, often for valid reasons: it can be extremely difficult to create a master whitelist of every safe internet resource, site, and IP your staff will use. Once again, the DoD is an ideal practitioner, as it proactively establishes policy to block these threat scenarios before they occur. It also constantly reminds staff, through OPSEC posters, required training, terms of use on systems, and clear system warning labels, not to circumvent its policies or controls, as doing so could compromise network, system, and information security.
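The asymmetry between the two modes reduces to a few lines of logic. The sketch below, which assumes a plaintext feed listing one malicious IP per line at a placeholder URL, contrasts a blacklist lookup fed by threat intelligence with a default-deny allowlist check: the former misses anything not yet reported, while the latter blocks the unknown automatically.

```python
"""Sketch contrasting 'block malicious' with 'block suspicious'.
The feed URL is a placeholder, and the allowlist entries are
illustrative; assumes a plaintext feed of one malicious IP per line."""
import urllib.request

FEED_URL = "https://threatfeed.example/malicious-ips.txt"  # hypothetical
ALLOWLIST = {"198.51.100.7", "203.0.113.10"}  # peers you actually trust

def load_blacklist(url: str) -> set:
    """Pull the latest feed of known-bad IPs (the never-ending chase)."""
    with urllib.request.urlopen(url) as resp:
        return {line.strip() for line in resp.read().decode().splitlines()
                if line.strip()}

def block_malicious(ip: str, blacklist: set) -> bool:
    # Blacklisting: blocks only what has already been reported.
    return ip in blacklist

def block_suspicious(ip: str) -> bool:
    # Default deny: anything not explicitly trusted is blocked,
    # including malicious IPs no feed has seen yet.
    return ip not in ALLOWLIST
```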
“Stranger danger” is a simple concept many children learn at a young age: a child averts potential threats by refusing to engage with any stranger. Stranger danger is not a perfect strategy, but it can be effective, provided that known entities (non-strangers) are verified as trustworthy. An advantage of this strategy is that it does not depend on additional security layers to respond to a threat after it has been flagged as suspicious. Because children, like many organizations, are defenseless once a malicious actor is permitted to interact with them, a “block all suspicious” security policy may be the first and only defense they get. Listed below is guidance on how to apply these concepts in your environment.
Once you have evidence of something that should be blocked, your organization can block that single IP, or possibly the netblock (/24) it belongs to, by requesting a firewall change from your security team. Note, however, that more than 14.3 million IPv4 /24 subnets would need to be evaluated and blocked this way, and your organization may not have the time, will, or resources to enforce a block-suspicious list that comprehensively covers the internet. In lieu of that approach, start documenting a whitelist, with the understanding that it will produce false positives but will also block known malicious, suspicious, and future or unknown malicious traffic.
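Such a whitelist can start as small as the following Python sketch, which uses the standard ipaddress module to implement the default-deny check; the netblocks shown are illustrative documentation ranges, not a recommendation.

```python
"""A first pass at a documented netblock whitelist, using the standard
ipaddress module. The networks shown are illustrative documentation
ranges (RFC 5737), not real partners."""
import ipaddress

ALLOWED_NETS = [
    ipaddress.ip_network("203.0.113.0/24"),   # hypothetical partner block
    ipaddress.ip_network("198.51.100.0/24"),  # hypothetical SaaS provider
]

def is_allowed(ip: str) -> bool:
    """Default deny: permit only addresses inside a documented netblock.
    Expect false positives at first; log them and grow the whitelist."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in ALLOWED_NETS)

print(is_allowed("203.0.113.50"))  # True: inside a documented block
print(is_allowed("192.0.2.10"))    # False: unknown, blocked by default
```

Logging the false positives that is_allowed() rejects gives you the evidence needed to expand the whitelist deliberately, rather than reacting to each new threat feed entry.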
Where relevant, recommendations are presented with applicable security controls from the NIST 800-53 standard. Each should be evaluated with the concept of blocking suspicious in mind.
In this chapter, we looked at how shinobi commanders of fortifications adopted security policies that made the jobs of enemy shinobi much harder. We also discussed how difficult it can be for modern organizations to adopt a similar strategy, including the challenges they would need to overcome to apply such an approach to network security, and we explored several pieces of guidance for putting the “block suspicious” concept into practice.
In the next chapter, we will bring together concepts from previous chapters and apply them to threat intelligence. This final chapter is the capstone of the book, tying together everything you’ve learned about shinobi with the real cyber threats you’ve encountered in the previous chapters.