“I think the most important three things you should do if you want to get a red team job are to decide you can, decide you belong, and decide you will chew through any and all gnarly obstacles encountered along the way and emerge, undaunted, as a member of a red team on the other side.”
Twitter: @briangenz
Brian Genz leads the red team at Splunk. He has information security experience spanning multiple sectors, including defense intelligence, manufacturing, finance, and insurance. Brian has worked in the areas of security assessments, vulnerability management, security architecture, and DFIR/threat hunting. He also serves as an intelligence officer in the U.S. Army Reserve with a focus on cybersecurity and is an instructor with GTK Cyber for the Black Hat training called “Applied Data Science and Machine Learning for Cybersecurity.” He holds two graduate degrees, an MBA and an MS in information technology, and multiple industry certifications.
How did you get your start on a red team?
I took a very nontraditional path to offensive security. I had just returned from a deployment to the Nineveh Province in Iraq in 2010, where I’d had the honor and privilege of serving as a long-range surveillance (LRS) company commander. Shortly after coming home, I went to work in the IT infrastructure group at a global manufacturing company.
I quickly realized that I had walked into a tough situation. Navigating the persistent gaps, pervasive blind spots, and knowledge drain left in the wake of IT outsourcing felt like an unending chain of insurmountable challenges, and the knowledge and experience I gained there were painful and hard-won. I frequently thought about how mentally and emotionally defeating that environment was, and I often compared it (in my head) to my low points during my time at the U.S. Army Ranger School a few years earlier.
Although I wouldn’t have had the foresight to recognize what was happening at the time, it’s clear now that I was following the mental playbook from Mountain Phase in Dahlonega, Georgia: “Just keep moving. Don’t quit. Solve one problem at a time, and put one foot in front of the other. Just keep going.”
I also followed my natural instincts to solve hard problems, to keep learning, and to continually look for ways to add a “particular set of unique skills” to my toolkit.
It takes a certain degree of measured, controlled insanity to follow the path of incident-driven problem-solving right up to the sketchy ledge of decision points like “Should I figure out how to get access to ACF2 and the mainframe environment to solve some sticky identity and access management (IAM) issues that no one seems willing or able to fix?” (“And should I mention in my request that I don’t know anything about ACF2 or mainframes?”) Sleeves up, heads down, solve the problem…. The forced fearlessness may have been irrational, but I always figured I could go back to hanging drywall if I accidentally deleted all the things and got fired. IBM Redbooks and Google were my friends for a while.
We don’t always know where a path will lead or who we’ll encounter along that journey. I began having more and more interactions with the global security team as a natural extension of the IAM troubleshooting I’d been doing across legacy mainframe systems and a new enterprise IAM system. That led to my first full-time role in information security, and it happened only because I’d been freakishly obsessed with solving sticky problems.
Because the global security team was relatively small, I had the opportunity to start from the ground up, focusing on vulnerability assessments and web application testing and even pitching in with forensics when the situation called for it. I suddenly had a new mission, a new set of professional development targets on the horizon, and a maniacal obsession with understanding how it all worked. Most important for me, though, was the natural alignment between my being hardwired to protect and the broader sense of being part of a community of professionals who, like me, are intrinsically motivated and driven to do whatever is necessary to protect the organizations we serve and the people who depend on our organizations.
What is the best way to get a red team job?
I think the most important three things you should do if you want to get a red team job are to decide you can, decide you belong, and decide you will chew through any and all gnarly obstacles encountered along the way and emerge, undaunted, as a member of a red team on the other side.
The first person you’ll need to convince that you’re capable, worthy, and willing to settle in for the long-haul, behind-the-scenes grind in your pursuit of a red team job is you.
Once you’ve negotiated those terms with your harshest critic, you’re ready to go about the task of identifying one or more hops between where you are and where you’re headed. I’m a fan of What Color Is Your Parachute? by Richard N. Bolles, and if you haven’t had a chance to read it, I’d recommend taking a look. The mental models we often bring to bear on a new and scary challenge can be over-fit based on past experience, bias, and fear. This artificially constrains the set of options that appear to be available. It’s as if our brains and biases conspire to perform a subconscious “zoom in 10x” operation without telling us and then present the reduced set of options as “Here’s what we’ve got, deal with it.” Refreshing our inventory of available mental models, as Richard does in his book, could help you identify paths from point A to point B that might not have been visible when viewed through the same old lenses of experience and unconscious bias. Even if you don’t yet have a high-resolution, detailed image of the desired end state, the process of thinking deliberately about how to find potential paths to your objective may at least bring a rough outline or silhouette into view.
With that foundation in place, you’ll be in an excellent position to start exploring the information security field and red team specialization, enumerating the list of skills and capabilities common among red team professionals, and evaluating options for acquiring those skills.
How can someone gain red team skills without getting in trouble with the law?
Andy Hunt wrote a book titled Pragmatic Thinking & Learning: Refactor Your Wetware, and the concepts from his book helped me develop a road map of professional development objectives.
While there is undoubtedly a wealth of other useful knowledge in Pragmatic Thinking & Learning, the aspect of the book that is most relevant to your quest to become a red team professional is the concept of the “journey from novice to expert.” Hunt describes the Dreyfus model of skill acquisition, developed in the 1980s by Stuart and Hubert Dreyfus. The researchers were working to improve the state of artificial intelligence by building software capable of learning and acquiring skills in the same way that humans do. Hunt outlines the five stages of the Dreyfus model: Novice > Advanced Beginner > Competent > Proficient > Expert.
An important note about the model: you will likely land at different levels in different knowledge domains, depending on the particular set of skills each domain requires. For example, you may be a Proficient Linux sysadmin, a Competent Python developer, and a Novice home chef. If you apply this model to your objective of securing a position on a red team, foundational technology skills such as networking, system administration, and scripting proficiency can accelerate your progress.
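To make that self-assessment concrete, here is a minimal sketch in Python (the skill names and levels are hypothetical placeholders of my own, not a prescribed inventory) showing how you might map your current skills against the Dreyfus stages and surface the biggest gaps first:

    # Minimal self-assessment sketch: map skills to Dreyfus stages, weakest first.
    # The skill names and levels below are hypothetical; substitute your own.
    DREYFUS_STAGES = ["Novice", "Advanced Beginner", "Competent", "Proficient", "Expert"]

    skills = {
        "Linux system administration": "Proficient",
        "Python scripting": "Competent",
        "Active Directory internals": "Novice",
        "Network protocol analysis": "Advanced Beginner",
    }

    # Sort weakest first so the largest skill gaps surface at the top.
    for skill, level in sorted(skills.items(), key=lambda kv: DREYFUS_STAGES.index(kv[1])):
        print(f"{level:>17}: {skill}")

Ranking your own inventory this way turns the abstract model into a prioritized practice plan.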
Equipped with a structured approach to mapping out the skill acquisition you’ll need, you’re ready to identify a curated set of information and training resources related to offensive security. Here are a few helpful resources you might want to explore:
The key to moving from Novice toward Competent is consistent, deliberate, hands-on practice. The resources will provide a treasure trove of guidance that will enable you to chart a course based on your available resources.
A few guiding principles will assist in avoiding criminal and legal issues while developing your offensive security skills:
Why can’t we agree on what a red team is?
Different strokes for different folks. It will sort itself out. Information security is a relatively nascent field compared with many of the more established professional disciplines, and there continues to be a range of strongly held opinions about which bespoke approach is best.
What is one thing the rest of information security doesn’t understand about being on a red team? What is the most toxic falsehood you have heard related to red, blue, or purple teams?
I try to avoid broad generalizations, in general. One thing I would want people to know about the red teams I’m familiar with is this: they are made up of highly skilled, quiet professionals who are deeply committed to helping their organizations reduce their attack surface, improve their security posture, and increase the organization’s ability to achieve its mission in a manner that protects the people it serves.
When should you introduce a formal red team into an organization’s security program?
I can see where there would be multiple valid approaches to answering this question. Some people might recommend that an organization satisfy a series of progressive “gates” or milestones before adding a red team into the mix. For example, I would generally agree with the perspective that it’s essential to focus on the fundamentals, the blocking and tackling components such as asset inventory, patching, IAM, backups, and the like. However, the question implies a sequential, upward progression from CMM Level 0, to CMM Level 1, and beyond. The logical consequence of that argument might be a conclusion resembling this: “And therefore, we can see (as the laser pointer dot jumps confidently to the top-right quadrant of slide 42) here that it would be imprudent to deploy a sophisticated red team capability until we reach this level of maturity.”
If I were the Minister of Important Decisions for a day, I’d take an alternate approach. First, I would humbly submit for your consideration the idea that there is an inherent false dilemma built into the argument that we have to choose between option A, which is “Deploy the last few pennies of limited budget to finally solving asset inventory,” and option B, “Unleash the red team, full scope.” I think it’s reasonable to suggest that this inaccurately reduces the spectrum of available options to these two extremes.
Second, I’m interested in Ground Truth. I want—no, I need—to know what the situation on the ground is, no matter how desperate. And, if the on-the-ground reality is bad, I especially want to know about that. We need to maintain a common operating picture of when and where there are weaknesses, exposures, misconfigurations, emerging flawed practices, and so on. If we don’t deploy a red team capability that proactively seeks out gaps in our defenses by leveraging offensive security tradecraft, we are allowing preventable blind spots to exist, fester, and potentially get popped.
So, building or bringing in a red team capability should be done as early as possible so that leadership is informed about security exposures by trusted advisors rather than strangers. It’s okay to use the red team to identify weaknesses before all of the basics are “solved” because those blocking and tackling aspects of managing infrastructure are unlikely to be “fix it and forget it” activities.
How do you explain the value of red teaming to a reluctant or nontechnical client or organization?
I’ve found that it’s helpful to describe the portfolio of services that the team provides and to frame those services in the context of the organization’s mission and objectives, as potentially impacted by a significant security incident.
If the services stop at the point of “Here’s your 400-page PDF, good luck, and we’ll see you next year (with the same stuff we just used),” a knowledgeable client or executive will not see the value regardless of how the narrative unfolds. In this case, the team is essentially offering a service that keeps them employed rather than one that effects positive change for the client. That approach has a limited shelf life, given the options available to forward-thinking consumers of red team services.
What is the least bang-for-your-buck security control that you see implemented?
Commodity AV.
What’s the most important or easiest-to-implement control that can prevent you from compromising a system or network?
I think one approach would be using a combination of the following:
Why do you feel it is critical to stay within the rules of engagement?
Adhering to the rules of engagement (ROE) is essential because this is the contract—the authorization and agreement that distinguishes the red team from the unauthorized adversary. It describes lines that trusted advisors cannot cross if we intend to achieve our objective of enabling leadership to make informed decisions based on identified risks.
To frame this in a kinetic context for emphasis, imagine a different scenario in which the red team is an infantry squad moving into a military operations in urban terrain (MOUT) site at Ft. Benning, Georgia. Let’s suppose the ROE defining and governing the behavior by both offense and defense in the scenario includes a sentence, highlighted in bold, that says, “All participants in this evaluation will use blank ammunition. No participants will use live ammunition or other explosives. Live (and holy) hand grenades are right out.”
If some inexperienced but eager soldier on the red team decides to go above and beyond the usual routine involving blank ammunition in the interest of providing even more value by simulating the actual behavior of adversaries attacking the MOUT site, that would be somewhat problematic for all involved. It violates the “first do no harm” principle, it may erode trust among the members of the offense and defense, and it will certainly fail to meet the established training objectives for the evaluation.
If you were ever busted on a penetration test or other engagement, how did you handle it?
I remember one situation a while back in which a member of the blue team contacted me to ask if a specific activity that triggered an alert was related to a red team engagement. Because I’ve been on both sides of this scenario, two things came to mind immediately.
First, I “fessed up” and confirmed that the activity was related to an ongoing operation we were running. If you have ever scrambled a team and worked all night to investigate potentially malicious activity, only to find out the next morning that the attack was coming from the team 15 feet from your corner of the cube farm, you can probably relate to this approach. As leaders, we need to know where the point of diminishing returns lies in a situation like this. We need to find the right balance between two competing perspectives: the desire to maximize realism and training value versus the practical and humane considerations around taking care of people and not burning scarce cycles overnight (times the total number of analysts working instead of sleeping) in the name of “letting it play out.”
Second, I notified several people who had likely been or would soon be looped in, via the grapevine, that there was a problem. The goal at this point was twofold: I wanted to make sure I spread the word at least as far as the probable blast radius of message traffic for the same reasons outlined earlier. (Think “snowball effect,” but along the paths of an informal phone tree.) I also took the opportunity to give credit to the folks on the blue team for their excellent work in detecting the malicious activity, and I echoed that praise and kudos to multiple people in leadership. That was a genuine sentiment, and also, it’s helpful from a relationship-building perspective to demonstrate our unwavering support for and solidarity with our brothers and sisters on the other side of the problem set.
How does the red team work together to get the job done?
One way I think about the required competencies and the division of labor for the red team is the concept of training an infantry unit. There are individual tasks and collective tasks. These are derived from the Mission Essential Task List (METL), and they are the building blocks of developing and maintaining the capability to perform the assigned missions. I think there’s value in considering this approach as we build, sustain, and improve our red teams.
Here’s an example: let’s say we are members of an Infantry Rifle Company. One task we must be proficient in performing, as a group, is “Conduct an attack.” There are collective tasks that the entire group must be able to perform together competently to complete the mission successfully. I’ll list a few supporting collective tasks here to provide more context:
Without drilling into all of the individual competencies or common tasks that each soldier must demonstrate, we’ll consider just one example that is nested under “Conduct a movement to contact,” and that is “Camouflage self and individual equipment.”
Let’s tie this back to the question of “How do you work together to get the job done?” on a red team. We’ll call the mission Deliver Red Team Assessment to simplify. A random sampling of the supporting collective tasks might include some of the following (but with shorter names):
Now let’s deconstruct the last collective task, “Compose and deliver the red team assessment report.” There are specific competencies that individuals need to possess to be able to contribute to the collective task that the red team has to be able to perform.
I believe that the written and verbal communication skills required to compose and deliver a red team assessment report are essential for each professional in this role, especially senior-level professionals.
So, following that pattern, we can call out one individual competency that might be nested under “Compose and deliver the red team assessment report,” and that is “Perform translation of technical security exposures into risk-based observations to executive stakeholders.”
When thinking about how to divide the work and coordinate tasks across team members, it can be useful to consider the training and skill-building value of partnering individuals who are skilled in the competency of “Perform translation of technical security exposures into risk-based observations to executive stakeholders” with other team members with less proficiency in a particular set of skills.
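If it helps to visualize that nesting, here is a tiny illustrative sketch in Python; the task and competency names are hypothetical stand-ins, not an actual METL:

    # Illustrative only: collective tasks nested over the individual
    # competencies that support them. All names are hypothetical stand-ins.
    red_team_metl = {
        "Deliver Red Team Assessment": {
            "Plan and scope the engagement": [
                "Draft rules of engagement",
                "Profile relevant adversary tradecraft",
            ],
            "Compose and deliver the red team assessment report": [
                "Write clear, reproducible technical findings",
                "Translate technical exposures into risk-based observations",
            ],
        },
    }

    # Walking the structure shows which individual skills each mission depends on,
    # which is useful when pairing senior and junior operators for an engagement.
    for mission, collective_tasks in red_team_metl.items():
        for task, competencies in collective_tasks.items():
            for competency in competencies:
                print(f"{mission} -> {task} -> {competency}")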
Some engagements might warrant more crosstalk and pitching in across a wide range of tasks than others. It’s important to find a balance that gets the job done while simultaneously enabling individuals to spend a decent amount of their time on the more traditionally exciting activities (breaking stuff).
What is your approach to debriefing and supporting blue teams after an operation is completed?
I build this into the requirements for the red team assessment. For example, in addition to the typical scope documents, I outline a specific approach for the red team to record and maintain records of all attack steps, with detailed host and network data that we use for a multi-session debrief with the blue team. Think of this in terms of having the red team maintain a detailed timeline of which hosts were compromised, when the activity occurred, and so on.
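As a sketch of what that record-keeping might look like in practice (the field names are hypothetical illustrations of my own, not a prescribed schema), the attack-step timeline can be as simple as an append-only CSV that operators update as they work:

    import csv
    from datetime import datetime, timezone

    # Hypothetical attack-step timeline record; adjust the fields to whatever
    # your blue-team debrief actually needs.
    FIELDS = ["timestamp_utc", "operator", "source_host", "target_host",
              "technique", "tooling", "outcome", "notes"]

    def log_attack_step(path, **step):
        """Append one attack step to the engagement timeline CSV."""
        step.setdefault("timestamp_utc",
                        datetime.now(timezone.utc).isoformat(timespec="seconds"))
        with open(path, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if f.tell() == 0:  # new file: write the header row first
                writer.writeheader()
            writer.writerow(step)

    log_attack_step("engagement_timeline.csv",
                    operator="op1", source_host="10.0.5.14",
                    target_host="fileserver01", technique="T1021.002 (SMB)",
                    tooling="impacket", outcome="success",
                    notes="Used recovered service account credentials.")

A flat, timestamped record like this makes the multi-session debrief much easier because the blue team can line it up directly against their own alert and log timelines.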
I’ve framed this as a way to build a tighter feedback loop between offense and defense. For example, in addition to providing the detailed list of impacted hosts, I’m interested in discussing the following with the blue team after the completion of the engagement:
These focus areas for the post-assessment review with the blue team can help to highlight visibility gaps. They can also add concrete examples to the business case the blue team needs to make when working to close those gaps in the environment.
If you were to switch to the blue team, what would be your first step to better defend against attacks?
I think this question will prompt some people to think about which defensive technique/config/“secret GPO sauce” will provide the maximum protection against the attack techniques known today for a reasonable level of effort and resources. Don’t get me wrong: this is absolutely a valid approach in the sense that we should continually strive to identify structural, pervasive security weaknesses and remediate those to reduce our attack surface. I’ll defer to my colleagues for those recommendations in this case and focus instead on answering the question I’m hearing:
Prepare to be underwhelmed as I throw down a notional challenge coin with the longest and least sexy motto (that will never be on an actual challenge coin):
I would do the following, in this order:
I think each organization has its own unique challenges and maturity levels at a point in time, and my anecdotal sense is that it would be challenging for many organizations to allocate resources to “practitioner-defined strategy,” given the volume and pace of tactical and operational workloads in play.
What is some practical advice on writing a good report?
I’m on a 10-hour flight back to the United States on a Boeing 787 as I consider this question, and I keep coming back to this thought about writing testing reports: why do aviation maintenance inspections occur, and how do the inspectors collect, evaluate, package, and deliver their results to the people who are responsible for making sure this aircraft is free of severe defects? (Also, I’m wondering now whether aircraft maintenance inspectors complain about writing reports. If you currently work in that field and you do complain about that, please know that I, for one, really appreciate your time, expertise, and diligence, especially when there are people depending on you to protect them from preventable risks. Also, a quick shout-out to the people out there who write testing reports for bridges, elevators, and airbags.)
Full disclosure up front: I believe that the era of rolling in, hacking all the things, getting domain admin by lunchtime, and then dropping a several-hundred-page PDF on someone’s desk on your way out the door is over. Realistically, it’s probably been over for a while, but old habits die hard (habits like “rinse and repeat” pentests year after year and also like watching Die Hard around the holidays because it is clearly a holiday movie).
It’s essential to have a framework that guides the red team’s reporting activities. The report is the delivery mechanism that, if configured thoughtfully, will establish top-of-mind persistence in leadership conversations and priorities.
One approach you might find useful is developing a customized “package” or bundle of work products related to a red team engagement. Here’s how I think about that:
Who do we need to tell about what was identified as needing remediation, and what level of detail does each person need to take the required steps at their level to facilitate remediation? Every organization will be slightly different, but here are a couple of examples:
These are just a few examples of how you might take the “recipe” found in any of the available penetration testing frameworks and then customize the final product or report template based on your organization’s requirements. I’d recommend considering the approach of designing the baseline report and then augmenting that with additional sections or appendixes as needed, based on the particular stakeholder group.
One quick tip I’d recommend: begin with the end in mind when collecting and presenting system details in the report. Think about the “last mile” of delivery when it is handed off from, for example, a member of the red team to the system owners. Keep in mind that “system owners” probably means something a bit different, in a practical sense, than it did before virtualization, and I’d argue the same dynamic is in play in terms of “cloud.”
The system owners you’re working with may need to track down and divvy up some of the remediation goodness with other system owners. So, please consider providing a separate spreadsheet or CSV file that lends itself to being analyzed, parsed, and chunked into smaller data sets in addition to dropping IP addresses and host names into a Word doc and then exporting to PDF.
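For example, here is a minimal sketch (assuming a master findings CSV with an “owner” column, a hypothetical schema of my own) that chunks the master file into one smaller CSV per system owner:

    import csv
    from collections import defaultdict
    from pathlib import Path

    # Hypothetical sketch: split a master findings CSV into one file per system
    # owner so each team receives only the rows it is responsible for fixing.
    def split_findings_by_owner(master_csv, out_dir="findings_by_owner"):
        rows_by_owner = defaultdict(list)
        with open(master_csv, newline="") as f:
            reader = csv.DictReader(f)
            fields = reader.fieldnames
            for row in reader:
                rows_by_owner[row.get("owner", "unassigned")].append(row)

        Path(out_dir).mkdir(exist_ok=True)
        for owner, rows in rows_by_owner.items():
            with open(Path(out_dir) / f"{owner}.csv", "w", newline="") as f:
                writer = csv.DictWriter(f, fieldnames=fields)
                writer.writeheader()
                writer.writerows(rows)

    # Example usage, assuming the master file exists with an "owner" column:
    split_findings_by_owner("red_team_findings.csv")

Handing each owner a machine-readable slice like this spares them the copy-and-paste archaeology of digging host lists out of a PDF.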
How do you ensure your program results are valuable to people who need a full narrative and context?
One way to identify and close any gaps in perceived value in reports related to red team operations is to solicit feedback. One caveat—we are leveraging the collective experience and expertise of the members of our red team to provide intelligence to leadership about the organization’s security posture. Therefore, we are not in a position to do any less than that, regardless of what consumers of our reports might prefer. Said a different way, the consumers of our reports should not be allowed to “downvote” items identified under the guise of “the customer is always right.”
There is a practical, proven model of soliciting feedback for research or other types of assessments that you can adapt to whatever scale makes sense for your situation. You could have a verbal discussion with the stakeholders and ask questions like, “What about this report, and this process as a whole, is working well in your opinion?” “What isn’t working as well, and why?” It would also be useful to ask them questions about how the information flows downstream from the report review meetings. Do some recon, understand their operational processes and workflows, and take time to view the most recent report review process through their lens just long enough to understand some of the challenges and context that might be contributing to any criticism they share about the reporting process or related work products.
How do you recommend security improvements other than pointing out where it’s insufficient?
It’s always challenging to find the right balance between delivering the insights about identified risks and calling someone’s baby ugly.
One approach that can be a reasonable middle ground is to build relationships with the key teams and individuals who will be the consumers of the recommendations you make. If possible, it’s helpful to develop those relationships early and often to avoid having a first conversation that necessarily revolves around a significant finding and the corresponding unsolicited recommendations.
As we build relationships with people (notice the distinct absence of the sterile term stakeholders here) who have different roles, accountabilities, and priorities than we do, I think it’s helpful to try to walk a kilometer in their shoes. Consider this to be a mental exercise in seeking first to understand and then making a genuine effort to move from knowledge to empathy.
If you can actively look for opportunities to recognize and give kudos to the people you’re advising as a steady-state, ongoing approach, you effectively accomplish two things: you establish a pattern of providing advisory feedback, and you end up providing both positive feedback and recommendations for improvement over time.
What nontechnical skills or attitudes do you look for when recruiting and interviewing red team members?
We need people who are humble, driven, resilient, teachable, and fearless, but in a first-do-no-harm manner. We need people who understand what servant leadership means in the context of information security. We need people who are hardwired to protect. We need people who are willing to do the less glamorous, behind-the-scenes work required to keep the lights on and answer the mail. We need quiet professionals who stand shoulder to shoulder with colleagues in our common cause of applying technical and tradecraft expertise to the mission of protecting the organizations we serve. We need people who can translate technical details into a narrative that frames security weaknesses in a risk-based context.
What differentiates good red teamers from the pack as far as approaching a problem differently?
I think one characteristic that differentiates good red teamers from the pack is a combination of resilience, tenacity, and tradecraft expertise that is laser-focused on the objective of delivering risk intelligence surfaced via the red team assessment. ■