If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.
Sun Tzu, The Art of War1
Purple teaming may be the single most valuable thing an organization can do to mature its security posture. It allows your defensive security team (the blue team) and your offensive security team (the red team) to collaborate. This attack-and-defense collaboration creates a powerful cycle of continuous improvement. Purple teaming is like sparring with a partner instead of shadowboxing. The refinement of skills and processes that occurs during purple teaming is rivaled only by the experience gained during actual high-severity events. Purple teaming combines your red team’s and blue team’s efforts into a single story, with the end goal of maturing an organization’s security posture.
In this chapter we discuss purple teaming from different perspectives. First, we cover the basics of purple teaming. Next, we discuss blue team operations. Then we explore purple team operations in more detail. Finally, we discuss how the blue team can optimize its efforts during purple team exercises.
In this chapter, we discuss the following topics:
• Introduction to purple teaming
• Blue team operations
• Purple team operations
• Purple team optimization and automation
Collaboration is at the heart of purple teaming. The goal of purple teaming is to improve the skills and processes of both the red and blue teams by allowing them to work closely together during an exercise to respectively attack and defend a particular target. This is vastly different from red teaming, where communication between the red and blue teams is restricted or prohibited during most of the exercise and where the red team typically has little knowledge of the target. During a purple teaming exercise, the red team will attack a specific target (a device, application, business or operational process, security control, and so on) and will work with the blue team to understand and help refine security controls until the attack can be detected and prevented, or perhaps just detected and resolved with efficacy. It’s vital that you read Chapter 7 before reading this chapter because this chapter builds on Chapter 7’s content.
I’ve seen some people confuse the concept of purple teaming with the role of a white cell or white team. As described in the previous chapter, the white team facilitates communications between the red and blue teams and provides oversight and guidance. The white team usually consists of key stakeholders and those who facilitate the project. The white team isn’t a technical team and does not attack or defend the target. A purple team is not a white team. A purple team is a technical team of attackers and defenders who work together based on predefined rules of engagement to attack and defend their target. However, they do work with a white team (their project managers, business liaisons, and key stakeholders).
Purple teaming doesn’t have to be a huge, complex operation. It can start simple with a single member of the blue team working with a single member of the red team to test and harden a specific product or application. Although we will discuss how purple teaming can be used to better secure the enterprise, it’s okay to start small. There is no need to boil the ocean. Purple teaming doesn’t require a large team, but it does require a team with a mature skill set. If you task your best blue team member to work with your best red team member, you can sit back and watch the magic happen.
Many organizations begin purple teaming efforts by focusing on a specific type of attack (for example, a phish). It is most important to start with an attainable goal. For example, the goal could be to specifically test and improve a blue team skill set or to improve the ability to respond to a specific type of attack, such as a denial-of-service (DoS) attack or a ransomware attack. Then, for each goal, the purple team exercise will focus on improving and refining the process or control until it meets the criteria for success outlined for that particular effort.
One of the beautiful things about purple teaming is the ability to take past attacks into consideration and allow the security team to practice “alternate endings.” Purple teaming exercises that reenact different responses to past attacks have a “choose your own adventure” look and feel and can be very effective at helping to decide the best course of action in the future. Purple teaming exercises should encourage blue and red teams to use current standard operating procedures (SOPs) as guides but should allow responders the flexibility to be creative. Much of the value provided by purple teaming exercises is in requiring your defenders to practice making improvised decisions. The goal is to run simulations that let your team put into practice the “lessons learned” so often cited during an incident’s postmortem phase, encouraging further reflection and more mature decision making.
We discussed red teaming in Chapter 7. Most of the topics in Chapter 7 also apply to purple team exercises. There are, of course, a few differences, but many of the same considerations apply. For example, setting objectives, discussing the frequency of communication and deliverables, planning meetings, defining measurable events, understanding threats, using attack frameworks, taking an adaptive approach to your testing, and capturing lessons learned all apply to purple team exercises. The fact that during a purple team exercise the red team collaborates and interacts with the blue team will have an impact on how efforts are planned and executed. This chapter begins by discussing the basics of blue teaming and then progresses to discuss ways that both the red and blue teams can optimize their efforts when working together on a purple team exercise.
The best cyberdefenders in the world have accepted the challenge of outthinking every aggressor.2 Operating an enterprise securely is no small task. As we’ve seen in the news, there are a variety of ways in which protective and detective security controls fail. There are also a variety of ways to refine how you respond to and recover from a cyber incident. The balance between protecting an organization from cyberthreats and from mistakes its team members can make, all while ensuring that it can meet its business objectives, is achieved when strategic security planning aligns with well-defined operational security practices. Before we begin discussing purple teaming and advanced techniques for protecting an environment from cyberthreats, we’ll first discuss the basics of defense.
As exciting and glamorous as hunting down bad guys may be, there are many aspects of cyberdefense that are far less glamorous. The planning, preparation, and hardening efforts that go into defending an environment from cyberthreats are some of the most unappreciated and overlooked aspects of security, but they are necessary and important. My intent is to provide an overview of the important foundational aspects of a security program, giving you a base of blue teaming knowledge onto which you can overlay information about purple team exercises, along with resources on frameworks, tools, and methodologies, so that your purple teaming efforts have the appropriate context.
Having relevant information about who has attacked you in the past will help you prioritize your efforts. It goes without saying that some of the most relevant information will be internal information on past attacks and attackers. There are also external information sources like threat intelligence feeds that are free. In addition, many commercial products are supplemented with threat intelligence feeds. Past indicators of compromise (IOCs) and information from threat intelligence gathering can be collected and stored for analysis of attack trends against an environment. These can, in turn, inform strategies for defense, including playbooks, controls selection and implementation, and testing.
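Collected IOCs lend themselves to simple, automatable trend analysis. The following Python sketch shows one way stored indicators might be summarized to inform defensive priorities; the record fields and values are hypothetical, not any product's schema:

```python
from collections import Counter
from datetime import date

# Hypothetical IOC records gathered from past incidents and threat
# intelligence feeds. Field names and values are illustrative only.
iocs = [
    {"type": "ip",     "value": "203.0.113.7",  "first_seen": date(2023, 1, 4)},
    {"type": "domain", "value": "bad.example",  "first_seen": date(2023, 2, 9)},
    {"type": "hash",   "value": "d41d8cd9...",  "first_seen": date(2023, 2, 11)},
    {"type": "ip",     "value": "198.51.100.2", "first_seen": date(2023, 3, 1)},
]

def trend_by_type(records):
    """Count IOCs by type to show which indicators dominate past attacks."""
    return Counter(r["type"] for r in records)

print(trend_by_type(iocs))  # e.g. Counter({'ip': 2, 'domain': 1, 'hash': 1})
```

Even a summary this simple can feed strategy: if IP-based indicators dominate, network egress controls and playbooks for network-based attacks deserve earlier attention.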
Many incidents will stem from within an organization. As long as humans are involved in operating companies, human error will always account for some security incidents. Then there’s always the insider threat, where data exfiltration happens using valid credentials. An insider threat can take the form of a disgruntled employee or one who has been blackmailed or paid to act maliciously. Supplementing your security program with an insider threat program will help you prepare to protect yourself against an insider threat. The best preparation is a purple team effort focused on insider threats. Organizations exist that investigate the human factor surrounding insider threat security incidents, whether the causes are rooted in human error, human compromise, or human malcontent.
Controlling the environment means knowing it better than your adversary does. Controlling your technical environment starts with granular inventory information about your hardware, software, and data, especially your sensitive/protected/proprietary data and data flows. It means having an accurate, point-in-time understanding of the processes, data flows, and technical components of a system or environment. In addition to having detailed information about your environment, the ability to control it means preventing unauthorized changes and additions, or at least detecting and resolving them quickly. Good inventory and configuration controls may even highlight where practices deviate from expectations. These are familiar concepts in the security world. Having an approved secure build and preventing unauthorized changes to it should be standard practice for most organizations.
Another consideration for maintaining a higher level of control of an environment is trying to limit or prohibit humans/users from interacting with it. This works especially well in cloud environments. Consider using tools to create a headless build, using a command-line interface instead of a graphical user interface (GUI), and scripting and automating activities so that users are not normally interacting with the environment. Terraform, an open source project, applies the concept of Infrastructure as Code (IaC): you define your infrastructure in configuration files that can be shared, edited, and versioned like any other code.
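As a small illustration of the IaC idea, Terraform can also consume JSON-formatted configuration files (*.tf.json), which means an approved build definition can be generated, diffed, and versioned programmatically. The resource name, image ID, and tag in this Python sketch are hypothetical placeholders:

```python
import json

# A minimal, hypothetical Infrastructure as Code definition. Terraform
# accepts JSON-formatted configuration (*.tf.json), so infrastructure can
# be generated and version-controlled like any other code artifact.
config = {
    "resource": {
        "aws_instance": {
            "hardened_build": {           # hypothetical resource name
                "ami": "ami-12345678",    # placeholder image ID
                "instance_type": "t3.micro",
                "tags": {"approved_build": "true"},
            }
        }
    }
}

with open("main.tf.json", "w") as f:
    json.dump(config, f, indent=2)
```

Because the definition lives in source control, any divergence between the running environment and the approved build becomes detectable as an unauthorized change.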
Preparing for purple team exercises can somewhat differ from red team exercises in that in some instances more information is shared with the red team during a purple team exercise. This is especially true when scoping a purple team engagement. Often those people familiar with the testing target are interviewed, and system documentation and data flows are shared with the red team. This allows the red team to fine-tune its testing efforts and identify administrative roles, threat models, or other information that needs to be considered to scope the engagement.
Organizing the many important functions that a security team has to fulfill is best done when aligned to a security framework. There’s no reason to reinvent the wheel; in fact, I’d discourage any organization from developing a framework that’s completely different from a tried-and-true framework like the National Institute of Standards and Technology (NIST) Cybersecurity Framework or the International Organization for Standardization (ISO) 27001 and 27002 standards. These frameworks were developed over time with the input of many experts.
Now, I’m not saying that these frameworks can’t be adapted and expanded on. In fact, I’ve often adapted them to create custom versions for an organization. Just be wary of removing entire sections, or subcategories, of a framework. I’m often very concerned when I see a security program assessment where an entire area has been marked “not applicable” (N/A). It’s often prudent to supplement the basic content of a framework to add information that will allow an organization to define priorities and maturity level. I like to overlay a Capability Maturity Model (CMM) over a framework. This allows you to identify, at a minimum, the current state and target state of each aspect of the security program. Purple team exercises can help assess the effectiveness of the controls required by the security program and also help identify gaps and oversights in it.
A mature incident response (IR) program is the necessary foundation for a purple team program to be built on. A mature process ensures that attacks are detected and promptly and efficiently responded to. Purple teaming can aid in maturing your IR program by focusing on specific areas of incident response until detection, response, and ultimately recovery time improve. For a good IR process, like many other areas of security, it’s best to use an industry standard like NIST’s Computer Security Incident Handling Guide (SP 800-61r2). When reading each section of the document, try to understand how you could apply its information to your environment. The NIST Computer Security Incident Handling Guide defines four phases of an IR life cycle:
• Preparation
• Detection and Analysis
• Containment, Eradication, and Recovery
• Post-Incident Activity
Using this guide as the basis of an IR plan is highly recommended. If you were to base your IR plan on the NIST Computer Security Incident Handling Guide, you’d cover asset management, detection tools, event categorization criteria, the structure of the IR team, key vendors and service-level agreements (SLAs), response tools, out-of-band communication methods, alternate meeting sites, roles and responsibilities, IR workflow, containment strategies, and many other topics.
An IR plan should always be supplemented with IR playbooks, which are step-by-step procedures for each role involved in a certain type of incident. It’s prudent for an organization to develop playbooks for a wide variety of incidents, including phishing attacks, distributed denial of service (DDoS) attacks, web defacements, and ransomware, to name a few. Later in this chapter we discuss the use of automated playbooks. These playbooks should be refined as lessons are learned via purple teaming efforts and improvements are made to the IR process.
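A playbook's step-by-step, role-based structure can be captured as data, which makes it easy to version, review, and eventually automate. The roles and actions in this Python sketch are illustrative, not prescriptive:

```python
# A hedged sketch of a ransomware response playbook expressed as data.
# Each entry assigns one ordered action to one responder role; the roles
# and steps shown are hypothetical examples.
ransomware_playbook = [
    {"step": 1, "role": "analyst",  "action": "Isolate affected hosts from the network"},
    {"step": 2, "role": "analyst",  "action": "Capture volatile memory and disk images"},
    {"step": 3, "role": "engineer", "action": "Block known C2 indicators at the perimeter"},
    {"step": 4, "role": "manager",  "action": "Notify stakeholders per the IR plan"},
]

def steps_for(role, playbook):
    """Return the ordered actions assigned to a given responder role."""
    return [s["action"] for s in sorted(playbook, key=lambda s: s["step"])
            if s["role"] == role]

print(steps_for("analyst", ransomware_playbook))
```

Keeping playbooks in a structured form like this also eases the transition to the automated playbooks discussed later in the chapter.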
Passive monitoring is not effective enough. Countering today’s and tomorrow’s aggressors requires more active and aggressive tactics, such as threat hunting. During a threat hunting exercise, you are looking to identify and counteract adversaries that may have already gotten past your security controls and are currently in your environment. The goal is to find these attackers early, before they have completed their objectives. You need to consider three factors when determining whether an adversary is a threat to your organization: capability, intent, and opportunity to do harm. Many organizations are already performing some form of threat hunting, but it may not be formalized so that the hunting aligns with the organization’s strategic goals.
Most organizations’ threat hunting capabilities begin with some security tools that provide automated alerting and little to no regular data collection. Typically, you start off by using standard procedures that haven’t been customized that much yet. Usually the next step is to add threat feeds and increase data collection. You begin really customizing your procedures once you start routine threat hunting. As your threat hunting program matures, you’ll collect more and more data that you’ll correlate with your threat feeds, and this provides you with real threat intelligence. In turn, this results in targeted hunts based on threat intelligence specific to your environment.
Logs, system events, NetFlows, alerts, digital images, memory dumps, and other data gathered from your environment are critical to the threat hunting process. If you do not have data to analyze, it doesn’t matter if your team has an advanced skill set and best-of-breed tools because they’ll have a limited perspective based on the data they can analyze. Once the proper data is available, the threat hunting team will benefit most from ensuring that they have good analytics tools that use machine learning and have good reporting capabilities. Thus, once you have established procedures and have the proper tools and information available to you for threat hunting, the blue team can effectively hunt for the red team during red team and purple team exercises.
A mature threat hunting capability requires that large data sets be mined for abnormalities and patterns. This is where data science comes into play. Large data sets are a result of the different types of alerts, logs, images, and other data that can provide valuable security information about your environment. You should be collecting security logs from all devices and software that generate them—workstations, servers, networking devices, security devices, applications, operating systems, and so on. Large data sets also result from the storage of NetFlow or full packet capture and the storage of digital images and memory dumps. The security tools that are deployed in the environment will also generate a lot of data. Valuable information can be gathered from the following security solutions: antivirus, data loss protection, user behavior analytics, file integrity monitoring, identity and access management, authentication, web application firewalls, proxies, remote access tools, vendor monitoring, data management, compliance, enterprise password vaults, host- and network-based intrusion detection/prevention systems, DNS, inventory, mobile security, physical security, and other security solutions. You’ll use this data to identify attack campaigns against your organization. Ensuring that your data sources are sending the right data, with sufficient detail, to a central repository, when possible, is vital. Central repositories used for this purpose often have greater protections in place than the data sources sending data to them. It’s also important to ensure that data is sent promptly and frequently in order to better enable your blue team to respond quickly.
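When many heterogeneous sources feed one central repository, normalizing events into a common schema is what makes cross-tool correlation possible. This Python sketch shows one hypothetical normalization approach; the field names are assumptions, not any product's schema:

```python
import json
from datetime import datetime, timezone

def normalize(source, raw):
    """Map a raw event from any data source into one common schema so a
    central repository can correlate across tools. Field names here are
    illustrative, not a real product's schema."""
    return {
        "source": source,
        "received_at": datetime.now(timezone.utc).isoformat(),
        "host": raw.get("host", "unknown"),
        # Different tools name the message field differently; accept both.
        "message": raw.get("msg") or raw.get("message", ""),
    }

event = normalize("hids", {"host": "web01",
                           "msg": "File integrity change: /etc/passwd"})
print(json.dumps(event))
```

In practice this mapping layer lives in a log shipper or SIEM ingestion pipeline, but the principle is the same: one schema, many sources.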
You’ll need tools to help collect, correlate, analyze, and organize the vast amount of data you’ll have. This is where you have to do a little strategic planning. Once you understand the data and data sources you’ll be working with, then selecting tools to help with the analysis of those systems and data becomes easier. Most organizations begin with a strategy based on what data they have to log for compliance purposes and what data they are prohibited from logging. You may want to also consider “right to be forgotten” laws like those required by the European Union’s (EU) General Data Protection Regulation (GDPR). Then consider the data and data sources mentioned in the previous section and any other data source that would facilitate an investigation.
It’s important to understand how the tools you select for IR can work together. Especially important is the ability to integrate with other tools to facilitate the automation and correlation of data. Of course, the size of the environment and the budget will have an impact on your overall tool strategy. Take, for instance, the need to aggregate and correlate a large amount of security data. Large enterprises may end up relying on highly customized solutions for storing and parsing large data sets, like data lakes. Medium-size organizations may opt for commercial products like a security information event management (SIEM) system that integrates with the types of data warehouses already in use by a large number of organizations. Smaller organizations, home networks, and lab environments may opt for some of the great free or open source tools available to act as a correlation engine and data repository.
When you’re selecting IR tools, it’s important to ensure that your analysis tools used during investigations can be easily removed without leaving artifacts. The ability to easily remove a tool is an important factor in allowing you the flexibility to take an adaptive approach to your investigations. There are a lot of tried-and-true commercial products, but there are also a ton of open source or free tools that can be used. I’d encourage you to experiment with a combination of commercial and free tools until you know what works best in your environment and in what situation. For example, an organization that has invested in Carbon Black Response may want to experiment with Google Rapid Response (GRR) as well and really compare and contrast the two. Purple team exercises give the blue team an opportunity to use different tools when responding to an incident. This allows an organization to gain a better understanding of which tools work best in its environment and which tools work best in specific scenarios.
Like all aspects of technology, blue teaming has its challenges. Signature-based tools may lead to a false sense of security when they are not able to detect sophisticated attacks. Many organizations are hesitant to replace signature-based tools with machine-learning-based tools, often planning to upgrade only after their current signature-based tools’ licenses expire. Those same organizations often fall prey to attacks, including ransomware, that could have been prevented had they performed red or purple team exercises, which would have highlighted the importance of replacing less effective signature-based tools and revealed the false sense of security that many of these tools provide.
Some organizations undervalue threat hunting and are hesitant to mature their threat hunting program, fearing that it will detract from other important efforts. Organizations that find themselves understaffed and underfunded often benefit the most from maturing their blue (and purple) team operations in order to ensure they are making the best decisions with their limited resources. Taking a passive approach to cybersecurity is extraordinarily risky and a bit outdated. We now understand how to better prepare for cyberattacks with threat hunting and purple teaming efforts. Since free tools exist to support red, blue, and purple teaming efforts, it is important that investments in staffing and training be made and that the value of hunting the threat be demonstrated and understood across the organization.
Demonstrating the value of “hunting the threat” and getting organizational buy-in are difficult in organizations that are very risk tolerant. This tends to happen when an organization relies too much on risk transference mechanisms, such as using service providers but not monitoring them closely, or relying heavily on insurance and choosing to forgo implementing certain security controls or functions. As with most aspects of security, you must always focus your arguments on what is important to the business. Root your argument for good security in something the company already cares about, like human safety or maximizing profits; for example, demonstrate how a cyberattack could put human life at risk or how the loss of operations from a cyberattack could affect profitability and the overall valuation of the company.
Now that we have covered the basics of red teaming in Chapter 7 and blue teaming in this chapter, let’s get into more detail about purple teaming operations. We start by discussing some core concepts that guide our purple teaming efforts—decision frameworks and methodologies for disrupting an attack. Once we’ve covered those core principles, we discuss measuring improvements in your security posture and purple teaming communications.
United States Air Force Colonel John Boyd created the OODA Loop, a decision framework with four phases that form a cycle. The OODA Loop’s four phases—Observe, Orient, Decide, and Act—are designed to describe a single decision maker, not a group. Real life is a bit more challenging because it usually requires collaborating with others and reaching a consensus. Here’s a brief description of the OODA Loop’s phases:
• Observe Our observations are the raw input into our decision process. The raw input must be processed in order to make decisions.
• Orient We orient ourselves when we consider our previous experiences, personal biases, cultural traditions, and the information we have at hand. This is the most important part of the OODA Loop, the intentional processing of information where we are filtering information with an awareness of our tendencies and biases. The orientation phase will result in decision options.
• Decide We must then decide on an option. This option is really a hypothesis that we must test.
• Act Take the action that we decided on. Test our hypothesis.
Since the OODA Loop repeats itself, the process begins over again with observing the results of the action taken. This decision-making framework is critical to guiding the decisions made by both the attacking and defending team during a purple team engagement. Both teams have many decision points during a purple team exercise. It is beneficial to discuss the decisions made by both teams, and the OODA Loop provides a framework for those discussions.
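To make the cycle concrete, here is a minimal Python sketch of the OODA Loop applied to alert triage. The handler functions, alert data, and severity threshold are placeholders for real observation, analysis, and response logic:

```python
# A minimal sketch of the OODA Loop as a repeating decision cycle.
# Each phase is a placeholder for much richer real-world logic.
def observe(environment):
    return environment["alerts"]                    # gather raw input

def orient(observations):
    # Filter with context (here, a hypothetical severity threshold);
    # this is where experience and bias-aware analysis apply.
    return [a for a in observations if a["severity"] >= 7]

def decide(options):
    return options[0] if options else None          # pick a hypothesis

def act(decision):
    # Acting tests the hypothesis chosen in the Decide phase.
    return f"contain host {decision['host']}" if decision else "keep monitoring"

environment = {"alerts": [{"host": "db02", "severity": 9},
                          {"host": "ws17", "severity": 3}]}

for _ in range(2):  # the loop repeats; in reality each action yields new observations
    print(act(decide(orient(observe(environment)))))
```

During a purple team debrief, walking each decision point through these four phases gives both teams a shared vocabulary for discussing why a particular action was taken.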
One of the goals of using a decision framework is to better understand how we make decisions so that we can improve the results of those decisions. A better understanding of ourselves helps us obscure our intentions and appear more unpredictable to an adversary. The OODA Loop can also be used to clarify your adversary’s intentions and to create confusion and disorder for your adversary. If your OODA Loop is operating at a faster cadence than your adversary’s, it puts you in an offensive mode and can put your adversary in a defensive posture.
Let’s look at the Lockheed Martin Cyber Kill Chain framework from a purple teaming or an attack-and-defense perspective. After all, the goal of the framework is the identification and prevention of cyberintrusions. We will look at the framework from the attack-and-defense perspective for each of the framework’s phases: reconnaissance, weaponization, delivery, exploitation, installation, command and control (C2), and actions on objectives.
Purple team efforts differ from red team exercises in several ways, including the amount of information shared between teams. Some purple team exercises begin with a reconnaissance phase during which the red team will perform open source intelligence (OSINT) gathering and will harvest e-mail addresses and gather information from a variety of sources. Many purple team efforts have less of a focus on the reconnaissance phase and instead rely more on interviews and technical documentation to gather information about the target. There is still value in understanding what type of information is available to the public. The red team may still opt to perform research using social media and will focus on the organization’s current events and press releases. The red team may also gather technical information from the target’s external facing assets to check for information disclosure issues.
Disrupting the reconnaissance phase is a challenge because most of the red team’s activities are passive in this phase. The blue team can collect information about browser behaviors that are unique to the reconnaissance phase and work with other IT teams to understand more information about website visitors and queries. Any information that the blue team learns will go into prioritizing defenses around reconnaissance activities.
During the weaponization phase, the red team prepares the attack. It prepares a command and control (C2) infrastructure, selects an exploit to use, customizes malware, and weaponizes the payload in general. The blue team can’t detect weaponization as it happens but can learn from what it sees after the fact. The blue team will conduct malware analysis on the payload, gathering information, including the malware’s timeline. Old malware is typically not as concerning as new malware, which may have been customized to target the organization. Files and metadata will be collected for future analysis, and the blue team will identify whether artifacts are aligned with any known campaigns. Some purple team exercises can focus solely on generating a piece of custom malware to ensure that the blue team is capable of reversing it in order to stage an appropriate response.
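Hash-based indicators are among the simplest artifacts the blue team can extract from a captured payload for comparison against threat intelligence on known campaigns. A minimal Python sketch, using a placeholder byte string in place of a real sample:

```python
import hashlib

def file_iocs(payload: bytes):
    """Compute common hash-based indicators of compromise for a captured
    payload. In practice these would be checked against threat intelligence
    to see whether the malware matches a known campaign or is new/custom."""
    return {
        "md5": hashlib.md5(payload).hexdigest(),
        "sha1": hashlib.sha1(payload).hexdigest(),
        "sha256": hashlib.sha256(payload).hexdigest(),
        "size": len(payload),
    }

sample = b"placeholder payload bytes"   # stand-in for a real captured sample
print(file_iocs(sample)["sha256"][:16])
```

A hash seen in prior campaigns suggests commodity malware; a hash with no history is one signal that the payload may have been customized for the target.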
The attack is launched during the delivery phase. The red team will send a phishing e-mail, introduce malware via USB, or deliver the payload via social media or watering hole attacks. During the delivery phase, the blue team finally has the opportunity to detect and block the attack. The blue team will analyze the delivery mechanism to understand upstream functions. The blue team will use weaponized artifacts to create indicators of compromise in order to detect new payloads during its delivery phase, and will collect all relevant logs for analysis, including e-mail, device, operating system, application, and web logs.
The red team gains access to the victim during the exploitation phase. A software, hardware, physical security, human vulnerability, or configuration error must be taken advantage of for exploitation to occur. The red team will either trigger exploitation itself by taking advantage of, for example, a server vulnerability, or a user will trigger the exploit by clicking a link in an e-mail. The blue team protects the organization from exploitation by hardening the environment, training users on security topics such as phishing attacks, training developers on security coding techniques, and deploying security controls to protect the environment in a variety of ways. Forensic investigations are performed by the blue team to understand everything that can be learned from the attack.
The installation phase is when the red team establishes persistent access to the target’s environment. Persistent access can be established on a variety of devices, including servers or workstations, by installing services or configuring Auto-Run keys. The blue team performs defensive actions like installing host-based intrusion prevention systems (HIPSs), antivirus, or monitoring processes on systems prior to this phase in order to mitigate the impact of an attack. Once the malware is detected and extracted, the blue team may extract the malware’s certificates and perform an analysis to understand if the malware requires administrative privileges. Again, try to determine if the malware used is old or new to help determine if the malware was customized to the environment.
In the command and control (C2) phase, the red team or attacker establishes two-way communication with a C2 infrastructure. This is typically done via protocols that can freely travel from inside a protected network to an attacker. E-mail, web, or DNS protocols are used because they are not typically blocked outbound. However, C2 can be achieved via many mechanisms, including wireless or cellular technology, so it’s important to have a broad perspective when identifying C2 traffic and mechanisms. The C2 phase is the blue team’s last opportunity to block the attack by blocking C2 communication. The blue team can discover information about the C2 infrastructure via malware analysis. Most network traffic may be controlled if all ingress and egress traffic goes through a proxy or if the traffic is sinkholed.
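One common heuristic for spotting algorithmically generated C2 domains in DNS traffic is character entropy: machine-generated labels tend to score higher than human-chosen names. The Python sketch below uses an illustrative threshold that any real deployment would need to tune against baseline traffic:

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Bits of entropy per character of a DNS label; algorithmically
    generated C2 domains tend to score higher than human-chosen names."""
    counts = Counter(label)
    n = len(label)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

THRESHOLD = 3.5  # illustrative only; tune against your own DNS baseline

for domain in ["mail.example.com", "xj9k2qpl0vbt7z4m.example.com"]:
    label = domain.split(".")[0]
    flagged = shannon_entropy(label) > THRESHOLD
    print(domain, "suspicious" if flagged else "ok")
```

Entropy alone produces false positives (think content-delivery hostnames), so in practice it is one feature among several, alongside query volume, domain age, and threat intelligence.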
During the “actions on objectives” phase of the kill chain, the attacker, or red team, completes their objective. Credentials are gathered, privilege escalation occurs, lateral movement is achieved throughout the environment, and data is collected, modified, destroyed, or exfiltrated. The blue team aims to detect and respond to the attack. This is where “alternate endings” can be played out. The blue team can practice different approaches and use different tools when responding to an attack. Often the IR process is fully implemented, including the involvement of the executive and legal teams, key business stakeholders, and anyone else identified in the organization’s IR plan. In a real-world attack, this is when the involvement of the communications and public relations teams, law enforcement, banks, vendors, partners, parent companies, and customers may be necessary. During a purple team exercise, this is where an organization may opt to perform tabletop exercises, allowing for full attack simulation. The blue team will aim to detect lateral movement, privilege escalation, account creation, data exfiltration, and other attacker activity. The predeployment of incident response and digital forensics tools will allow rapid response procedures to occur. In a purple team exercise, the blue team will also aim to contain, eradicate, and fully recover from the incident, often working with the red team to optimize its efforts.
The Kill Chain Countermeasure framework focuses on detecting, denying, disrupting, degrading, deceiving, and containing an attacker in order to break the kill chain. Ideally, you catch an attack early, in the detect or deny countermeasure phase, rather than later in the attack, during the disrupt or degrade phase. The concept is simple: for each phase in the Lockheed Martin Kill Chain, discussed in the preceding section, ask yourself what, if anything, you can do to detect, deny, disrupt, degrade, deceive, or contain this attack or attacker. In fact, purple team exercises can focus on a single phase of the countermeasure framework. For example, a purple team exercise can focus on detection mechanisms until they are refined.
Let’s focus on the detect portion of the Kill Chain Countermeasure framework. We’ll walk through some examples of detecting an adversary’s activities in each phase of the kill chain. Detecting reconnaissance is challenging, but web analytics may provide some information.
Detecting weaponization isn’t really possible since the preparation of the attack often doesn’t happen inside the target environment, but network intrusion detection and prevention systems (NIDSs and NIPSs) can alert you to some of the payload’s characteristics. A well-trained user can detect when a phishing attack is delivered, as may proxy solutions. Endpoint security solutions, including host-based intrusion detection systems (HIDSs) and antimalware solutions, may detect an attack in the exploitation and installation phases. Command and control (C2) traffic may be detected and blocked by an NIDS/NIPS. Logs or user behavior analytics (UBA) may be used to detect attacker (or red team) activity during the “actions on objectives” phase. These are only a few examples of how the Kill Chain Countermeasure framework can be applied. Each environment is different, and each organization will have different countermeasures.
Now let’s take a different approach and focus on the C2 phase of the kill chain and discuss examples of how each countermeasure phase—detect, deny, disrupt, degrade, deceive, and contain—can counteract it. A network intrusion detection system may detect C2 traffic. Firewalls can be configured to deny C2 traffic. A network intrusion prevention system can be used to disrupt C2 traffic. A tarpit or sinkhole can be used to degrade C2 traffic, DNS redirects can be used to deceive the attacker’s C2 channel, and network access control can contain a compromised host by quarantining it. I’ve seen organizations use these frameworks to create matrices to organize their purple teaming efforts. It’s a great way of ensuring that you have the big picture in mind when organizing your efforts.
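A matrix like the ones just mentioned is easy to prototype. The following sketch builds an empty kill chain × countermeasure grid and fills in the C2 row with the examples discussed in the text; the NAC entry in the contain cell is an assumed example, and cells left empty after a review become candidate purple team exercises:

```python
KILL_CHAIN_PHASES = [
    "Reconnaissance", "Weaponization", "Delivery", "Exploitation",
    "Installation", "Command and Control", "Actions on Objectives",
]
COUNTERMEASURES = ["Detect", "Deny", "Disrupt", "Degrade", "Deceive", "Contain"]

# One (initially empty) cell per phase/countermeasure pair.
matrix = {phase: {cm: [] for cm in COUNTERMEASURES}
          for phase in KILL_CHAIN_PHASES}

# The C2 examples from the text.
c2 = matrix["Command and Control"]
c2["Detect"].append("NIDS alert on C2 traffic")
c2["Deny"].append("Firewall egress rules")
c2["Disrupt"].append("NIPS inline blocking")
c2["Degrade"].append("Tarpit or sinkhole")
c2["Deceive"].append("DNS redirect")
c2["Contain"].append("NAC quarantine of the beaconing host")  # assumed example

# Empty cells are gaps worth targeting in a future exercise.
gaps = [(p, cm) for p in KILL_CHAIN_PHASES
        for cm in COUNTERMEASURES if not matrix[p][cm]]
```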
Purple teaming involves detailed and frequent communication between the blue and red teams. Some purple teaming projects are short term and don’t produce a vast amount of data (for example, a purple team effort to test the security controls on a single device that is being manufactured). However, purple teaming efforts that are ongoing and intended to protect an enterprise can produce a vast amount of data, especially when you take into consideration guides like the Mitre ATT&CK Matrix and the Lockheed Martin Cyber Kill Chain and Countermeasure framework.
A communication plan should be created for each purple team effort before testing and response activities begin. Communication during a purple team exercise can take the form of meetings, collaborative work, and a variety of reports, including status reports, reports of testing results, and after-action reports (AARs). Some deliverables will be evidence based. The blue team will incorporate indicators of compromise (IOCs) into the current security environment whenever they are discovered. The red team will have to record when and how all of its testing activities were performed. The blue team will have to record when and how attacks were detected and resolved. Many forensic images, memory dumps, and packet captures will be created and stored for future reference. The goal is to ensure that no lesson is lost and no opportunity for improvement is missed.
Purple teaming can fast-track improvements in measures such as mean time to detection, mean time to response, and mean time to remediation. Measuring improvements in detection or response times and communicating improvements to the organization’s security posture will help foster support for the purple teaming efforts. Many of the communication considerations in Chapter 7 also apply to purple teaming, especially the need for an AAR that captures input from different perspectives. Feedback from a variety of sources is critical and can lead to significant improvements in the ability to respond to cyberthreats. AARs have led organizations to purchase better equipment, refine their processes, invest in more training, change their work schedules so there are no personnel gaps during meal times, refine their contact procedures, invest more in certain tools, or remove ineffective tools. At the end of the day, the blue and red teams should feel like their obstacles have been addressed.
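Those metrics fall straight out of the timestamps the red and blue teams are already recording. Here is a minimal sketch of computing mean time to detection from paired attack/detection times; the event pairs are hypothetical:

```python
from datetime import datetime, timedelta
from statistics import mean

def mean_time_to_detection(events):
    """events: (attack_time, detection_time) pairs recorded during
    the exercise; returns the average gap as a timedelta."""
    gaps = [(detected - attacked).total_seconds()
            for attacked, detected in events]
    return timedelta(seconds=mean(gaps))

# Hypothetical event pairs from one exercise day.
exercise_log = [
    (datetime(2018, 6, 1, 9, 0),  datetime(2018, 6, 1, 9, 45)),   # phish delivered/detected
    (datetime(2018, 6, 1, 10, 0), datetime(2018, 6, 1, 10, 20)),  # lateral movement
]
mttd = mean_time_to_detection(exercise_log)  # → 0:32:30
```

Tracking the same figure exercise over exercise is what turns purple teaming results into a trend you can show leadership.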
The most mature organizations have security automation and orchestration configured in their environment to greatly expedite their attack-and-defense efforts. Security automation involves the use of automatic systems to detect and prevent cyberthreats. Security orchestration occurs when you connect and integrate your security applications and processes together. When you combine security automation and orchestration, you can automate tasks as playbooks and integrate your security tools so they work together across your entire environment. Many security tasks can be automated and orchestrated, including attack, response, and other operational processes such as reporting.
Security automation and orchestration can eliminate repetitive, mundane tasks and streamline processes. They can also greatly speed up response times, in some cases reducing the triage process to a few minutes. Many organizations begin working with security automation and orchestration on simple tasks. The repetitive steps of a phishing investigation, or the blocking of indicators, are a good start. Automation and orchestration of malware analysis is another great place to start experimenting with process optimization.
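As a concrete example of the kind of repetitive phishing task worth automating first, the following sketch extracts linked domains from a reported message and checks them against a local set of known-bad indicators. The indicator set, regex, and sample message are all illustrative:

```python
import re

# Known-bad domains from prior investigations (illustrative).
BLOCKED_INDICATORS = {"evil.example", "bad.example"}

URL_RE = re.compile(r"https?://([\w.-]+)")

def triage_phish(email_body: str) -> dict:
    """Extract the domains linked in a reported message and check
    them against local indicators -- the repetitive lookup step an
    analyst would otherwise do by hand."""
    domains = set(URL_RE.findall(email_body))
    return {"domains": sorted(domains),
            "known_bad": sorted(domains & BLOCKED_INDICATORS)}

report = triage_phish("Please verify at http://evil.example/login today")
# → {'domains': ['evil.example'], 'known_bad': ['evil.example']}
```

A real deployment would feed this from a reported-phish mailbox and enrich each domain via reputation services before deciding to block.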
Optimizing your purple teaming efforts can lead to significant advancements in the security program. Using a tool like AttackIQ’s FireDrill for attack automation and combining it with a framework like the Mitre ATT&CK Matrix can quickly lead to improvements in your purple teaming capabilities and security posture.
After optimizing your attacks, it’s important to see how your defensive activities can be automated and orchestrated. Nuanced workflows can be orchestrated. Phantom has a free community edition that can be used to experiment with IR playbooks. Playbooks can be written without the need for extensive coding knowledge or can be customized using Python. Consider applying the following playbook logic to an environment and orchestrating interactions between disparate tools:
Malware detected by antivirus (AV), IDS, or endpoint security → snapshot taken of virtual machine → device quarantined using Network Access Control (NAC) → memory analyzed → file reputation analyzed → file detonated in sandbox → geolocation looked up → file on endpoints hunted for → hash blocked → URL blocked
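That chain of actions can be prototyped as a simple playbook engine: each step is a function standing in for a different tool’s API, run in order over a shared context, the way a SOAR platform chains actions across products. All function names and context keys here are illustrative stand-ins, not any vendor’s actual API:

```python
# Each step stands in for a call to a different security tool.
def snapshot_vm(ctx):
    ctx["snapshot"] = f"snap-{ctx['host']}"          # hypervisor API

def quarantine_host(ctx):
    ctx["quarantined"] = True                        # NAC quarantine

def analyze_file(ctx):
    ctx["reputation"] = "malicious"                  # file reputation lookup

def detonate_in_sandbox(ctx):
    ctx["sandbox_verdict"] = "malware"               # sandbox detonation

def block_indicators(ctx):
    ctx["hash_blocked"] = ctx["url_blocked"] = True  # endpoint/proxy blocks

PLAYBOOK = [snapshot_vm, quarantine_host, analyze_file,
            detonate_in_sandbox, block_indicators]

def run_playbook(alert):
    """Run each step in order over a shared context, accumulating an
    audit trail of what the orchestration did."""
    ctx = dict(alert)
    for step in PLAYBOOK:
        step(ctx)
    return ctx

result = run_playbook({"host": "wkstn-042"})
```

In a real platform each function would call out to a different product’s API, with branching on the results (for example, only quarantining when the reputation comes back malicious).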
Process optimization for purple teaming is also possible. There are many great open source IR collaboration tools. Some of my favorites are from TheHive Project. TheHive is an analysis and security operations center (SOC) orchestration platform, and it has SOC workflow and collaboration functions built in. All investigations are grouped into cases, and cases are broken down into tasks. TheHive has a Python API that allows an analyst to send alerts and create cases out of different sources such as a SIEM system or e-mail. TheHive Project has also made some supplementary tools such as Cortex, an automation tool for bulk data analysis. Cortex can pull IOCs from TheHive’s repositories. Cortex has analyzers for popular services such as VirusTotal, DomainTools, PassiveTotal, and Google Safe Browsing, to name just a few. TheHive Project also created Hippocampe, a threat-feed-aggregation tool that lets you query it through a REST API or a web UI.
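As a sketch of driving TheHive from Python, the following builds an alert payload and shows how it could be posted to the REST API using only the standard library. The field names and the /api/alert endpoint are assumptions based on TheHive 3’s alert model; verify them against your deployed version (TheHive Project’s own thehive4py client wraps the same API):

```python
import json
from urllib import request

def build_alert(title, description, source, source_ref, ips):
    """Assemble a TheHive alert payload; field names follow TheHive 3's
    alert model as I understand it (verify against your version)."""
    return {
        "title": title,
        "description": description,
        "type": "external",
        "source": source,
        "sourceRef": source_ref,
        "artifacts": [{"dataType": "ip", "data": ip} for ip in ips],
    }

def send_alert(api_url, api_key, alert):
    """POST the alert to TheHive's REST API (endpoint path assumed)."""
    req = request.Request(
        f"{api_url}/api/alert",
        data=json.dumps(alert).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    return request.urlopen(req)

# Build (but don't send) a sample alert from a SIEM detection.
alert = build_alert("Suspected C2 beacon",
                    "Flagged during purple team exercise",
                    "siem", "case-2018-042", ["203.0.113.77"])
```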
Organizations that have healthy budgets, or that prohibit the use of open source tools, have many commercial products available to assist them with the automation and orchestration of their processes and attack-and-defense activities. Tools like Phantom’s commercial version, Verodin, ServiceNow, and a wide variety of commercial SIEMs and log aggregators can be integrated to optimize processes.
Becoming a master in any skill will always take passion and repetitive practice. Purple teaming allows for cyber-sparring between your offensive and defensive security teams. The result is that both teams refine their skill sets, and the organization is much better off for it. Purple team efforts combine red teaming attacks and blue team responses into a single effort where collaboration breeds improvement. No organization should assume that its defenses are impregnable. Testing the effectiveness of both your attack and defense capabilities protects your investment in cybersecurity controls and helps set a path forward toward maturation.
A Symbiotic Relationship: The OODA Loop, Intuition, and Strategic Thought (Jeffrey N. Rule) www.dtic.mil/dtic/tr/fulltext/u2/a590672.pdf
AttackIQ FireDrill https://attackiq.com/
Carbon Black Response https://www.carbonblack.com/products/cb-response/
Cyber Kill Chain https://www.lockheedmartin.com/us/what-we-do/aerospace-defense/cyber/cyber-kill-chain.html
Google Rapid Response https://github.com/google/grr
International Organization for Standardization (ISO), ISO 27001 and 27002 https://www.iso.org/isoiec-27001-information-security.html
National Institute of Standards and Technology’s Computer Security Incident Handling Guide (NIST SP 800-61r2) https://csrc.nist.gov/publications/detail/sp/800-61/archive/2004-01-16
National Institute of Standards and Technology (NIST) Cybersecurity Framework https://www.nist.gov/cyberframework
Terraform https://www.terraform.io
TheHive, Cortex, and Hippocampe https://thehive-project.org/
1. Lionel Giles, Sun Tzu On The Art of War, Abington, Oxon: Routledge, 2013.
2. William Langewiesche, “Welcome to the Dark Net, a Wilderness Where Invisible World Wars Are Fought and Hackers Roam Free,” Vanity Fair, September 11, 2016.