Even if there is a din outside, be aware that the guardhouse should not be left completely empty. Also, you should listen for any sounds.
—Yoshimori Hyakushu #66
While shinobi were sometimes hired to guard castles and other fortifications (as described in Shōninki 1), non-shinobi soldiers or mercenaries—warriors trained to repel common invaders—occupied most guard stations in feudal Japan. But Bansenshūkai and the Gunpo Jiyoshu manual advise commanders defending against shinobi to hire their own, as these warriors could train ordinary guards to identify secret shinobi tactics, techniques, and procedures (TTPs).2 Though many are described in the scrolls, TTPs were constantly being developed and refined, and different clans had their own secret techniques that other shinobi did not know.
Shinobi TTPs were clever and elegant, and they often served multiple purposes. For example, a shinobi might covertly stick a common umbrella in the ground and open it within sight of a castle’s guards. Not only could the shinobi hide things from the guards’ view by placing them under the umbrella, but also the obvious sign of human activity might draw guards away from their posts.3 This technique also leveraged a prevailing superstition of the time: the idea that forgotten or lost umbrellas became possessed and haunted their previous owner, a phenomenon variously referred to as tsukumogami, kasa-obake, and yokai.4
Shinobi hired to instruct guards faced a unique pedagogical challenge: it was taboo or outright forbidden to write down TTPs or share them with outsiders, as doing so would compromise the integrity of the skill and put other shinobi’s lives at risk. Some passages in the scrolls even advise shinobi to kill any observer or victim who discovered a TTP.5
So, instead of giving away specific techniques, shinobi stressed adopting the proper mindset, having a high level of awareness, and exercising the degree of scrutiny necessary to catch a shinobi.6 This mental stance was bolstered by a risk assessment performed on the guards’ camp, knowledge of the enemy, and the most probable and impactful threat scenarios the enemy could deploy. It seems shinobi did provide guards with a general sense of a shinobi’s operating capabilities and examples of different threat scenarios, but probably described them in a manner that did not fully disclose trade secrets.7 They taught guards the indicators of potential shinobi activity—the sights, sounds, and other observables to look for on watch—and established rules to avoid mistakes.8
Because there were innumerable techniques to learn, and because many of the guards in training lacked formal education, shinobi transmitted knowledge via poems to make the information easier to remember. (This explains the high number of poems about guard awareness found in the “100 Ninja Poems” of Yoshimori Hyakushu. These were not for shinobi per se but rather for shinobi to relay to guards.) Again, the poems provided just enough detail to describe shinobi tactics: realistic guidance, but not so much information as to overwhelm the guards. For instance, poem 66 (cited at the beginning of this chapter) offers straightforward advice: do not leave your post empty, and listen for any sounds, including but not limited to the din that initially drew attention, such as footsteps approaching from the rear.9 The poems were grouped thematically. Poems 64–67, 78, 79, 91, 93, and 94 all deal with maintaining awareness and avoiding blunders. Examples include how to keep watch at night when tired; which direction to face; and why drinking, singing, and soliciting prostitutes on duty are bad ideas.
Of course, if an adversary shinobi observed that guards were actively searching for shinobi TTPs, the adversary deployed countermeasures. A clear example that appears in all three major scrolls involves a shinobi hiding in bushes or tall grass or crawling through a field. The shinobi’s movement disturbs the surrounding insects, which freeze and fall silent to hide themselves. For a trained guard, the sudden absence of buzzing, humming, or chirping indicates that a hidden person is approaching. Typically, if the guard becomes alert and starts searching for the intruder, and the shinobi knows they are exposed, the shinobi quietly withdraws.10 That’s where the countermeasure comes in. Before the next attempt, the shinobi captures some crickets in a box. The crickets, untroubled by the presence of the shinobi carrying them, chirp away freely, filling any silence around the shinobi as they approach the guard post. The guards now have no reason to suspect the approach.11
Poem 68 vividly illustrates the challenges of TTP detection and countermeasures: “You should conduct a thorough search, following behind the party of a night patrol. This is called kamaritsuke [ambush detection].”12 At night, the commander would dispatch a primary search party to patrol the perimeter with the usual lanterns and other equipment, but the leader would also send a covert search party in the main group’s wake.13 Guards on perimeter patrol were told by their shinobi advisers to look for anything out of place—especially sounds, movement, and people.14 Of course, enemy shinobi were aware of this guidance. The scrolls describe how attacking shinobi could hide in bushes, ditches, or other dark places while a patrol passed, then continue working.15 However, in some cases, the enemy might follow the patrol party, their own movements covered by the sound and light of the defenders; the infiltrators might even attack the patrol from behind.16 Thus, having a second, covert patrol behind the first could catch hidden enemy shinobi. This group of heavily armed troops searched likely hiding spots and stayed alert for an enemy also following the main patrol party.17 However, as a counter to the counter, an attacking shinobi who was aware of the kamaritsuke technique might wait in the shadows for the second, hidden patrol to pass or move to a place the second patrol would not search. To combat this—the counter to the counter to the counter—poems 69 and 70 were added.18
This guidance encouraged an inconsistent cadence of kamaritsuke patrols to interfere with the adversary’s ability to operate freely. Frequent, semiunpredictable patrols with one or more following kamaritsuke parties left little opportunity for enemy shinobi to execute actions confidently against either the fortification or the defenders’ patrol parties.19
This tit-for-tat of TTP detection and countermeasures can escalate quickly, but eventually it becomes too dangerous or impractical to attack the fortification. That outcome is often the best defenders can hope for against enemy shinobi.
In this chapter, we discuss how the philosophy behind shinobi tradecraft applies to understanding cyber threat actor TTPs. We touch on how cyber threat intelligence can guide incident responders, security engineers, and threat hunters to better defend their organizations, much as the shinobi improved the effectiveness of common castle guards and soldiers with intelligence-driven defense. We explore several of the prevailing frameworks used to describe cyber threat TTPs; blending those frameworks with the knowledge of the shinobi makes it easier to understand both the threats themselves and why TTPs are useful. We also discuss why the P in TTPs is often a mystery and will likely remain unknown, and we theorize about the procedures a rational adversary would likely follow, based on how we know the shinobi acted. Lastly, we explore guidance for incorporating cyber threat intelligence into your organization’s defense strategy and touch on why this can be so difficult to do.
In cybersecurity, TTPs describe approaches for analyzing a specific threat actor’s or group’s patterns of behavior, activities, and methods. Tactics describe the adversary’s operational maneuvers, such as reconnaissance, lateral movement, and backdoor deployment. Techniques are the detailed technical methods the adversary uses to accomplish tasks, such as using a specific tool or software to execute weaponization or exploitation. Procedures detail standard policies and courses of action to perform, such as confirming whether an exploited target system has any active users logged in before conducting additional tasks, running malware through string analysis for tradecraft errors before deploying, or implementing precautionary self-cleanup after verifying connectivity on a target box.
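The tactic/technique/procedure hierarchy described above can be sketched as simple data records. This is a minimal, hypothetical model for illustration only; the actor name, field names, and example values are invented, not drawn from any standard or real threat report.

```python
from dataclasses import dataclass, field

@dataclass
class Technique:
    """A specific technical method used to accomplish a tactical goal."""
    name: str
    description: str

@dataclass
class Tactic:
    """An operational maneuver (e.g., lateral movement) and its techniques."""
    name: str
    techniques: list = field(default_factory=list)

@dataclass
class ThreatActorProfile:
    """Observed TTPs for one actor. Procedures are stored as free text
    because they are rarely directly observable by defenders."""
    actor: str
    tactics: list = field(default_factory=list)
    procedures: list = field(default_factory=list)

profile = ThreatActorProfile(
    actor="ExampleGroup",  # hypothetical actor name
    tactics=[Tactic("lateral-movement",
                    [Technique("pass-the-hash",
                               "Reuse captured credential hashes")])],
    procedures=["Check for logged-in users before acting on an exploited host"],
)
assert profile.tactics[0].techniques[0].name == "pass-the-hash"
```

Structuring observations this way makes it easier to compare what you know about an actor against what your defenses can actually detect.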
Once TTPs are identified and defined, defenders can search for indicators of them in their environment. They can even predict which TTPs could be used against them, supporting the planning and implementation of preemptive mitigations or countermeasures. To establish and communicate a common definition of cyber adversary TTPs, the industry has developed multiple concepts, models, analyses, and sharing methods, including the Pyramid of Pain, Lockheed Martin’s Cyber Kill Chain, and MITRE’s ATT&CK framework.
The Pyramid of Pain (see Figure 26-1) is an excellent model for visualizing how awareness of an adversary’s indicators, tools, and TTPs can affect the defender’s security posture. It also shows how the difficulty of implementing measures and countermeasures increases for both defenders and attackers.
The name Pyramid of Pain refers to the idea that, while there is no way to guarantee absolute security or prevent all attacks, an adversary is less likely to target your organization if you make it extremely painful for them to expend the time, resources, and effort to do so.
Figure 26-1: Fortification of indicators of compromise (adapted from David Bianco’s Pyramid of Pain26)
At the bottom of the pyramid are indicators of compromise (IoCs)—atomic values such as domains, IPs, file hashes, and URLs—that positively identify known malicious activity. Defenders can block these indicators or raise alerts around them, but the adversary can also change them easily.
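A minimal sketch of how this bottom layer works in practice: match each observed event against sets of known-bad atomic indicators. The indicator values and event fields here are placeholders, not real threat data.

```python
# Known-bad atomic indicators; placeholder values for illustration.
ioc_blocklist = {
    "domains": {"bad.example.com"},
    "ips": {"203.0.113.7"},  # documentation-range (TEST-NET) address
    "file_hashes": {"d41d8cd98f00b204e9800998ecf8427e"},
}

def matches_ioc(event: dict) -> bool:
    """Return True if any field of the event matches a known indicator."""
    return (
        event.get("domain") in ioc_blocklist["domains"]
        or event.get("src_ip") in ioc_blocklist["ips"]
        or event.get("file_hash") in ioc_blocklist["file_hashes"]
    )

assert matches_ioc({"domain": "bad.example.com"})
assert not matches_ioc({"domain": "good.example.org"})
```

This kind of matching is cheap to automate, which is exactly why it sits at the bottom of the pyramid: the adversary can invalidate the entire blocklist by registering a new domain or recompiling a binary.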
Above the atomic indicators are host-based indicators, such as registry keys, dropped files, and artifacts. These can be detected or responded to, but threat detection or mitigation may not be automatic, and the adversary can alter them based on the target or operation.
The next level is tools—software or devices with which the adversary conducts or supports offensive actions. By searching for, removing access to, or disabling the functionality of known malicious tools in an environment, defenders may be able to detect and prevent the adversary from operating effectively.
At the top of the pyramid are the adversary’s tactics, techniques, and procedures. If you can identify and mitigate these methods, you force the adversary to create or learn new TTPs to use against you, which is difficult and costly for them—though of course, it is also painful for you as the defender to develop safeguards or countermeasures at this level.
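As a rough summary, the four layers described above can be encoded as ordered data, with a helper reflecting the pyramid’s core idea: the higher the layer, the more it costs the adversary to change. The descriptions are paraphrases of the preceding discussion, not quotations from Bianco’s original model.

```python
# Pyramid of Pain layers, bottom to top: higher layers are harder for the
# adversary to change and harder for the defender to act on.
PYRAMID = [
    ("atomic indicators", "IPs, domains, hashes, URLs; trivial to change"),
    ("host-based indicators", "registry keys, dropped files, artifacts"),
    ("tools", "software the adversary uses to conduct operations"),
    ("TTPs", "the behavior itself; hardest for the adversary to change"),
]

def pain_rank(layer: str) -> int:
    """Higher rank = more adversary pain when the layer is detected or denied."""
    return [name for name, _ in PYRAMID].index(layer)

assert pain_rank("TTPs") > pain_rank("atomic indicators")
```

The ordering is the whole point: a program that only ever acts at rank 0 never imposes meaningful cost on the adversary.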
MITRE’s Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) framework derives many tactics from Lockheed Martin’s Cyber Kill Chain framework (see Figure 26-2). The Cyber Kill Chain framework outlines seven stages of the attack lifecycle: reconnaissance, weaponization, delivery, exploitation, installation, command and control, and actions on objectives. Each tactic identified in the ATT&CK framework lists techniques and methods, with examples, to detect or mitigate against.
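The seven Cyber Kill Chain stages listed above form an ordered lifecycle, which can be encoded directly. The `earlier_stage` helper is a hypothetical convenience for reasoning about how early in the lifecycle a given detection fires; earlier detections generally leave the adversary less room to act.

```python
# The seven Cyber Kill Chain stages, in attack-lifecycle order.
KILL_CHAIN = [
    "reconnaissance",
    "weaponization",
    "delivery",
    "exploitation",
    "installation",
    "command-and-control",
    "actions-on-objectives",
]

def earlier_stage(a: str, b: str) -> bool:
    """True if stage a occurs before stage b in the attack lifecycle."""
    return KILL_CHAIN.index(a) < KILL_CHAIN.index(b)

# Catching a payload at delivery beats catching it after installation.
assert earlier_stage("delivery", "installation")
```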
Note that “Procedures” is missing from the ATT&CK framework. This is understandable, as identifying these would likely require stealing and analyzing a nation-state’s or military’s book of offensive cyber operations. This is why the Bansenshūkai, Ninpiden, and Shōninki texts, which describe the procedures of a sophisticated espionage threat group, so greatly enrich the discussion.
When your security team understands these tactics and techniques—and has identified your attack surfaces, evaluated your current security controls, and performed analysis of previous incidents to determine your organization’s defensive effectiveness—it is possible to start predicting where adversaries are likely to target your environment. With a good set of threat predictions, you can then start threat hunting—looking for indicators and evidence threat actors left behind that may indicate compromise.
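A threat hunt guided by TTP-level predictions looks for behavior, not just atomic indicators. As a hedged sketch, the hypothetical hunt below flags a password-spray pattern (many failed logins from one source followed by a success) in authentication events; the event format and threshold are assumptions, not a standard.

```python
from collections import defaultdict

def hunt_password_spray(events, threshold=5):
    """Flag sources with many failed logins followed by a success:
    a behavior-level (TTP) hunt rather than an indicator match.
    Events are hypothetical (src_ip, user, outcome) tuples."""
    failures = defaultdict(int)
    suspects = []
    for src, user, outcome in events:
        if outcome == "fail":
            failures[src] += 1
        elif outcome == "success" and failures[src] >= threshold:
            suspects.append((src, user))
    return suspects

# Six failures from one source, then a successful login as "admin".
events = [("198.51.100.9", f"user{i}", "fail") for i in range(6)]
events.append(("198.51.100.9", "admin", "success"))
assert hunt_password_spray(events) == [("198.51.100.9", "admin")]
```

Note that no blocklist would catch this: every individual event is a routine log entry, and only the pattern, predicted from knowledge of the TTP, is suspicious.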
Figure 26-2: Ninja attack chain (adapted from MITRE’s ATT&CK framework27)
However, without a deep understanding of exactly how threat actors operate, it is difficult to effectively hunt or detect their presence.
Here, threat intelligence shows its value. Threat intelligence does not simply mean feeds such as lists of new IPs, domains, URLs, and file hashes associated with malware, hacking infrastructure, or threat groups. Rather, cyber threat intelligence (CTI) refers to traditional intelligence collection and analysis applied to cyber threats: malware, hacktivists, nation-states, criminals, DDoS attacks, and more. When consumed correctly, CTI provides actionable intelligence on, and assessment of, what the threat is doing, its motivations, and its TTPs. Simply put, CTI is one of the best ways to understand and defend against threats effectively, provided decision makers use it to inform themselves and take defensive action.
Unfortunately, many CTI consumers pay attention only to the IoCs, because those can easily be ingested into SIEMs, firewalls, and other security devices to block or detect threats. Operating this way forfeits CTI’s real value: the detailed observations and assessments by CTI analysts that describe behaviors, patterns, methods, attribution, and context. While CTI producers may not always reveal how they collect intelligence on a threat, they often strive to be transparent in their assessments of what they know and why they believe the threat conducts certain actions.
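The IoC-only consumption pattern can be seen concretely in how feeds get parsed. The sample below uses a STIX-like shape (real STIX 2.x bundles use object types such as `indicator` and `attack-pattern`, but this fragment is illustrative, not a compliant bundle). Pulling out only the indicator patterns discards exactly the behavioral context the analysts worked to produce.

```python
import json

# Hypothetical CTI feed fragment in a STIX-like shape.
feed = json.loads("""
{"objects": [
  {"type": "indicator",
   "pattern": "[domain-name:value = 'bad.example.com']"},
  {"type": "attack-pattern",
   "name": "Spearphishing Link",
   "description": "Actor sends shortened links to targets."}
]}
""")

# The easy part: machine-ingestible indicators.
iocs = [o["pattern"] for o in feed["objects"] if o["type"] == "indicator"]

# The valuable part: behavioral context that requires human decisions.
behaviors = [o["name"] for o in feed["objects"] if o["type"] == "attack-pattern"]

assert behaviors == ["Spearphishing Link"]
```

A pipeline that forwards `iocs` to a firewall and drops `behaviors` on the floor is the programmatic equivalent of reading only the appendix of an intelligence report.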
Of course, consuming intelligence reports with the intention of understanding a threat while also developing a deep understanding of your environment can be demanding. This process requires a broad skill set—one that includes the ability to quickly learn and make strategic decisions. But if a CTI consumer can dedicate the time to understand a threat’s every step, code, tactic, and technique, they can make decisions that allow them to successfully mitigate, detect, respond to, and even predict future threat movements.
Organizations that have already bought dozens of security solutions and employed full-time security staff to handle numerous threat vectors may regard CTI as the final cost layer topping off an already “expense-in-depth” security model. CTI, though, can improve your security strategies and amplify the effectiveness of every other security layer, thus justifying its cost. Unfortunately, in many cases, effectively using CTI is akin to reading reports of the newest scientific discoveries in a field: you must understand the implications of each discovery and then rapidly change culture, business strategies, and technologies in response. While possible, this intensity of consumption, synthesis, and action is demanding. This is the biggest challenge of CTI: it is not a crystal ball offering easy answers that are easy to act on. Review the guidance below to make sound decisions about your CTI program.
Train your security and intelligence staff to threat hunt. Because not every threat can be engineered away or blocked, a dedicated hunt team must constantly search for traces of threats in your network, guided by intelligence from your CTI partner, vendor, or team. Threat hunting can be augmented with purple team exercises, in which a red team performs adversarial activity on your network while your blue team attempts to hunt them, learning how to counter real threats in the process.
First, attempt to hunt for evidence of the malicious link (in this example, a goo.gl-shortened phishing link) in your current email. Many organizations hit an early barrier here: their email administrator is uncooperative or lacks the visibility or resources to search incoming email for goo.gl hyperlinks. Additional steps, each with its own potential barriers, include quarantining potential phishing emails, alerting non-IT and non-security staff to the presence of the threat, and training them to detect and avoid it.
Just as the adversary has different tools, tactics, and techniques to target your organization, your own tools require contemplation, understanding, ingenuity, and engineering to block and respond to threats in a holistic, effective way. For example, your email administrator may create a rule to detect the goo.gl link shortener, but what about the others? Ideally, your CTI team will identify the broader threat of phishing with shortened links and recommend methods to detect, block, or mitigate such links; in other words, they should be looking not only for goo.gl but for all link shorteners. In addition, the team should keep people in your organization aware of this TTP. Finally, decision makers have to address this threat strategically with new architecture, policies, or controls.
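Generalizing from one shortener to all shorteners can be sketched as a detection rule. The domain list below is illustrative and deliberately incomplete; a real deployment would maintain and update it from CTI reporting.

```python
import re

# Known public link-shortener domains; an illustrative, incomplete list.
SHORTENER_DOMAINS = ["goo.gl", "bit.ly", "tinyurl.com", "t.co", "ow.ly"]

# Match http(s) URLs whose host is one of the shortener domains.
pattern = re.compile(
    r"https?://(?:www\.)?("
    + "|".join(re.escape(d) for d in SHORTENER_DOMAINS)
    + r")/\S*",
    re.IGNORECASE,
)

def find_shortened_links(email_body: str):
    """Return every shortened link found in an email body,
    not just goo.gl links."""
    return [m.group(0) for m in pattern.finditer(email_body)]

body = "Please review https://goo.gl/abc123 and http://bit.ly/xyz today."
assert find_shortened_links(body) == ["https://goo.gl/abc123",
                                      "http://bit.ly/xyz"]
```

This is the TTP-level version of the email rule: it targets the technique (link shortening as obfuscation) rather than one indicator (the goo.gl domain), so the adversary cannot evade it simply by switching shorteners.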
Going through this process, however painful it may be, is necessary to identify where your organization needs to improve in detecting, mitigating, and responding to threats.
These recommendations are mapped to NIST 800-53 standards and should be evaluated with security awareness, TTPs, and CTI in mind.
In this chapter, we reviewed several shinobi TTPs, in particular how kamaritsuke co-evolved with countertactics on both sides until a fault-tolerant security system emerged. We explored how cyber threat tactics may develop the same way, each side countering the other until resiliency emerges. We discussed cyber threat intelligence and why merely knowing what the adversary is doing, how they are doing it, and what they are going to do is not enough: to be useful, CTI must be consumed with an eye toward addressing the threat in some way. In the Castle Theory Thought Exercise, we looked at a clear example of defenders discovering an observable, followed by the threat’s shift in tactics; a modern equivalent might be spoofing system or network logs to deceive threat hunters, anomaly detectors, and even machine learning systems. The most important lesson of this chapter, and perhaps of the book, is that it is critical to consume threat intelligence and respond to dynamic threats in innovative ways.