Georgia Weidman

“Asset management is something we as an industry don’t have down pat yet.”


Twitter: @georgiaweidman

Georgia Weidman is a serial entrepreneur, penetration tester, security researcher, speaker, trainer, and author. Her work in the field of smartphone exploitation has been featured internationally in print and on television. Georgia has presented or conducted training around the world, including at venues such as the NSA, West Point, and Black Hat. She was awarded a DARPA Cyber Fast Track grant to continue her work in mobile device security and is a Cybersecurity Policy Fellow at New America. Georgia is also the author of Penetration Testing: A Hands-On Introduction to Hacking from No Starch Press.

How did you get your start on a red team?

In college I competed in the Mid-Atlantic Collegiate Cyber Defense Competition. As part of the competition there was a red team whose job was seemingly just to make the students cry and vomit. By the end of the competition I knew I wanted to do what they did!

From there I started doing research, giving talks, and conducting training classes at security conferences and meetups. That eventually got me in front of the right people to get opportunities to participate in red team engagements.

What is the best way to get a red team job?

Unfortunately, for people like me, who look for the nearest dustbin to dive into whenever faced with having to talk to someone one-on-one, a lot of getting a job in any industry comes down to networking. At least for me, I was able to make up for what I lacked in social skills by being active in security research, presenting at conferences, and volunteering to give training classes at security meetups. This led to people with hiring power offering me jobs.

How can someone gain red team skills without getting in trouble with the law?

My book Penetration Testing: A Hands-On Introduction to Hacking is one resource for new people to learn about hacking in a controlled environment. The exercises are hands-on, but you complete them in a lab environment.

There are also competitions such as capture the flag (CTF) where you can hone your skills with permission to attack the targets. Many CTFs leave their problems up for you to work through later. For the physical side, there are lockpick villages at many hacker conferences. In general, as long as you are practicing on systems, applications, etc., that you own or have express permission to attack, you are learning ethically.

Why can’t we agree on what a red team is?

I think it’s mostly due to elitism in certain pockets of the hacker community. Some people brag that they only take engagements that are “no holds barred” and that anyone who does not is not a “real” red teamer. In the real world, that is just not realistic. Organizations that allow you to work without any rules of engagement are few and far between.

“Some people brag that they only take engagements that are “no holds barred” and that anyone who does not is not a “real” red teamer. In the real world, that is just not realistic.”

What is one thing the rest of information security doesn’t understand about being on a red team? What is the most toxic falsehood you have heard related to red, blue, or purple teams?

I often hear, particularly from people who are in the business of making defensive security products, that security testing doesn’t provide value. It’s true that it can be a difficult sell because security testing doesn’t produce the kinds of metrics defensive products do, such as how many suspicious links were blocked or how many instances of potential malware were found. But attackers continue to get past as many security controls as we put in front of them. It’s true that the goal of security testing is to secure the organization so that nothing happens, but, without real security testing, all the defense in the world will never be enough.

When should you introduce a formal red team into an organization’s security program?

In my experience, many organizations bring in red teaming too soon and end up wasting their money. You shouldn’t be paying red team prices to find missing patches, default passwords, and similar low-hanging fruit. Sign up to have your organization scanned for vulnerabilities first or, better yet, purchase a vulnerability scanner and scan your organization’s IT assets regularly yourself as part of your security program.

You should engage a red team when you believe your organization’s security posture is robust. Anyone can use a prepackaged tool to exploit a known remote code execution vulnerability. It takes a more sophisticated attacker to gain access to a more robust organization, and thus it takes more skill, time, and effort on the part of the security testers.

How do you explain the value of red teaming to a reluctant or nontechnical client or organization?

I’ve never been much of a salesman. My security testing clients are almost entirely inbound. Now that I have a mobile security testing product startup, I’m doubly faced with the reluctance of many clients to do security testing, combined with anxiety about bringing BYOD assets into a testing engagement. In general, testing is a more difficult sell since, ideally, after an engagement, nothing happens, and attackers do not break in. That’s naturally a harder sell than “Our product stops 100 percent of security attacks,” even though that’s a logical fallacy.

What is the least bang-for-your-buck security control that you see implemented?

While things are finally picking up steam with some of the more in-depth mobile threat defense products, some security companies have seemingly done little more than put the word mobile in front of the name of their Windows desktop antivirus products. As mobile devices have matured from the likes of iPhone OS 1, where everything including the browser ran as root, to platforms with some of the most complex security models available, even basic tasks such as scanning the filesystem for malicious file signatures just don’t work on mobile devices; nowadays, mobile antivirus applications are typically restricted to their own sandboxes. In the extreme case, some mobile security apps simply wake up periodically, check whether they are themselves a virus, and go back to sleep!

“While things are finally picking up steam with some of the more in-depth mobile threat defense products, some security companies have seemingly done little more than put the word mobile in front of the name of their Windows desktop antivirus products.”

Have you ever recommended not doing a red team engagement?

This happens a lot. Many customers reach out to me looking for red teaming or penetration testing when really what they need to start with is vulnerability scanning or help developing a basic security program. Even though it means less revenue for me, I always steer potential clients in the right direction for their needs. That honesty often makes them repeat customers down the road when they are ready for more rigorous (and better-paying) testing.

What’s the most important or easiest-to-implement control that can prevent you from compromising a system or network?

There is so much an organization can do with only a little budget. Certainly buying a vulnerability scanner can do wonders for finding the easy wins for attackers. Many of these are very affordable. They will help sort out missing patches, default passwords, and known vulnerabilities in commercial off-the-shelf software, etc., that are so often used as the initial foothold by attackers.

Additionally, an attack that gets me in nearly all the time is LLMNR poisoning, where I passively gather hashed credentials on the network. For this attack to work, I have to be able to crack the captured password hashes and turn them back into their plaintext values so I can authenticate with systems. Certainly password complexity is not a problem we have managed to solve yet, but the IT department in an organization can download and run the same password-cracking software and wordlists that security testers and attackers use. Feed the tool your domain credentials and require any user whose password is cracked within X hours to change it to something more robust.
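As a rough illustration of that in-house audit, here is a minimal sketch in Python. It assumes you already have a dump of domain password hashes (for example, in the user:rid:lmhash:nthash::: format that tools like Impacket’s secretsdump produce) and a wordlist such as rockyou.txt; the file names are placeholders. In practice, most teams would simply run hashcat in NTLM mode (-m 1000) over the same data, but the sketch shows the underlying idea.

```python
# A sketch only: audit domain password hashes against a wordlist in-house.
# Assumes a dump in the "user:rid:lmhash:nthash:::" format (as produced by
# tools like Impacket's secretsdump) and a wordlist such as rockyou.txt.
# Most teams would just run hashcat in NTLM mode (-m 1000); this shows the idea.
import hashlib


def nt_hash(password: str) -> str:
    # NT hashes are MD4 over the UTF-16LE-encoded password.
    # Note: MD4 may require OpenSSL's legacy provider on newer systems.
    return hashlib.new("md4", password.encode("utf-16-le")).hexdigest()


def audit(dump_path: str, wordlist_path: str) -> None:
    # Map each NT hash to the accounts that use it so weak accounts can be flagged.
    hashes: dict[str, list[str]] = {}
    with open(dump_path) as dump:
        for line in dump:
            parts = line.strip().split(":")
            if len(parts) >= 4:
                hashes.setdefault(parts[3].lower(), []).append(parts[0])

    # Stream the wordlist and flag any account whose password appears in it.
    with open(wordlist_path, encoding="latin-1", errors="ignore") as wordlist:
        for candidate in wordlist:
            digest = nt_hash(candidate.rstrip("\n"))
            for user in hashes.get(digest, []):
                print(f"[!] {user} is using a password from the wordlist")


if __name__ == "__main__":
    audit("ntds_dump.txt", "rockyou.txt")  # hypothetical file names
```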

Why do you feel it is critical to stay within the rules of engagement?

Naturally, given how so much of our society sees hackers—as criminal masterminds dead set on destroying the world just to show their rivals they can—many organizations have an understandable reluctance to allow security testers to attack their organization. The rules of engagement that are decided before the testing begins can be as rigorous or liberal as the client is comfortable with.

“The rules of engagement that are decided before the testing begins can be as rigorous or liberal as the client is comfortable with.”

Breaking the rules of engagement, even if you think it makes the testing more authentic to real-world attacks, only feeds into the notion that ethical hackers are just malicious attackers with a cover job. We need organizations and society as a whole to be more comfortable with ethical hacking, not less. On a slightly less important note, if you want repeat business, you are unlikely to get it if you break scope, and you may even find yourself in breach of contract and not receiving payment for the work you did.

If you were ever busted on a penetration test or other engagement, how did you handle it?

I’m always happy when an organization catches me in the act. Of course, I use industry-standard methods to avoid detection and to bypass defensive technologies the organization has installed, but so many testers seem to be in it simply to show how elite they are. If an organization catches me, that means their security program is where it needs to be to catch real attackers.

It doesn’t happen often, but, on one such occasion, I was performing an email-based phishing attack. The phishing site’s domain was just one letter off from a cloud service the organization used, it had an SSL certificate, and the email looked like the ones the organization’s users were used to receiving regularly. However, before all the phishing emails had even been sent out, a target had reported the email through the proper channels to be investigated as a phishing attack. Once it was verified, IT sent out a notification to everyone that it was a phishing attack. I simply praised them for their mature security posture around phishing.

What is the biggest ethical quandary you experienced while on an assigned objective?

While I’ve fortunately not had ethical quandaries during an engagement, I have been faced with ethical quandaries when choosing whom I want to work with, whether as a subcontractor, in a partnership, or even in an acquisition of my mobile security testing product company.

Of course, the same skill sets that we use in our ethical hacking engagements can be used for evil, and the products we build to help with testing can be used by malicious attackers. For example, perhaps an entity hacks mobile devices without permission, but they only do it for governments, including foreign ones whose human rights policies don’t match my own morals. In these situations, I have to make judgment calls about what makes sense for me.

How does the red team work together to get the job done?

The most valuable part of security testing is not getting domain admin but rather leaving the customer with a clear understanding of their security shortcomings and an actionable plan for how to fix them. Many automated tools will spit out remediation recommendations like “Disable the offending service altogether,” but that may not be possible for business reasons. Getting remediation and mitigation advice that is detailed and feasible is more valuable to a customer than the output of a tool they could have bought themselves, likely for less than what they are paying you.

What is your approach to debriefing and supporting blue teams after an operation is completed?

I recognize that using testing as a way to feel elite for breaking in and accessing the keys to the kingdom doesn’t provide value to the customer. The most important part of security testing is clearly explaining what I did and what the organization can do to fix it. There will always be some customers who just want to check the box that they did the bare minimum necessary, but many organizations genuinely are invested in improving their security posture. For me it’s important not only to clearly explain my results but also to keep an open dialogue with the client as they work through remediating the issues in case they have any questions. Additionally, I always offer a remediation validation to make sure the problems I found have been successfully mitigated.

If you were to switch to the blue team, what would be your first step to better defend against attacks?

I have so much respect for blue teams. On the offensive side we just have to find one weakness to break in. On the defensive side they have to secure everything.

“I have so much respect for blue teams. On the offensive side we just have to find one weakness to break in. On the defensive side they have to secure everything.”

I worked on an internal security team of a government agency at the beginning of my career, and aside from the sheer volume of devices, vulnerabilities, etc., in play, the most difficult part had to be getting buy-in from higher up the food chain to make employees fix a remote code execution vulnerability when the patch might affect the functionality of an application.

On the offensive side, I’m called in when companies have a budget and are on board with what I am doing. On the defensive side, it was a constant battle even to be able to fix the printers that had security issues. Security was just a line item in the expense column on the budget, and the security team was seen as annoying people you hoped didn’t show up in your office. We definitely need to see a cultural change around this for the sake of the defenders who are doing the hardest job in security.

If everything went bust for me and I found myself back in a blue teaming role, the first thing I would do is tackle the none-too-simple task of identifying all the IT assets in the organization. Asset management is something we as an industry don’t have down pat yet. I can’t count the number of times I’ve been on security testing engagements where I could tell a lot of time, money, and effort had been put into building a robust security posture. But it all came crashing down because of an old box that was decommissioned ages ago but never actually turned off, so it has been running in the back of a storage room for years. That system hasn’t been under the scrutiny of the security team in a while and has the vulnerabilities to match. It’s particularly helpful if the box is an old domain controller and the domain administrator password on it is the same as the one on the updated, fully patched domain controller.
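If you were starting that asset inventory from scratch, a first pass might look something like the sketch below: sweep the ranges you believe you own, note which hosts respond, and compare the results against the official inventory to surface forgotten boxes. The address range and port list are assumptions for illustration, and you should only scan networks you own or have explicit permission to test.

```python
# A sketch only: sweep an internal range for live hosts so the results can be
# compared against the official asset inventory. The range and ports below are
# placeholders; only scan networks you own or have explicit permission to test.
import asyncio
import ipaddress

COMMON_PORTS = [22, 80, 135, 443, 445, 3389]  # services likely to answer


async def responds(ip: str, port: int, timeout: float = 1.0) -> bool:
    # A completed TCP connection is enough to mark the host as live.
    try:
        _, writer = await asyncio.wait_for(
            asyncio.open_connection(ip, port), timeout
        )
        writer.close()
        await writer.wait_closed()
        return True
    except (OSError, asyncio.TimeoutError):
        return False


async def sweep(cidr: str) -> list[str]:
    live = []
    for host in ipaddress.ip_network(cidr).hosts():
        ip = str(host)
        checks = await asyncio.gather(*(responds(ip, p) for p in COMMON_PORTS))
        if any(checks):
            live.append(ip)
            print(f"[+] live host: {ip}")
    return live


if __name__ == "__main__":
    # Diff this list against your asset inventory to surface boxes nobody remembers owning.
    asyncio.run(sweep("10.0.0.0/28"))  # hypothetical internal range
```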

What nontechnical skills or attitudes do you look for when recruiting and interviewing red team members?

I don’t shy away from social awkwardness, no pun intended, since I have the social skills of a rock myself. But I do look for the ability to communicate effectively, both verbally and in writing, to both a technical and a nontechnical audience. I also look for passion for the field. I’m not looking for people who work their 9 to 5 and go home and play video games all night. I’m looking for the people who are doing security research and presenting it at conferences or via white papers or blog posts. I’m looking for the people who, if they know nothing about web application security despite their deep knowledge of network security, will seek out the resources and crackme apps they need to become proficient in whatever they’re missing.

What differentiates good red teamers from the pack as far as approaching a problem differently?

I believe the most important thing is insatiable curiosity and thinking outside the box. If you spend time in red teaming, you will eventually run into technology you aren’t familiar with, that no one has done a security conference presentation or released a white paper about, and that there isn’t a public exploit or even a CVE for.

Anybody can read the manual, watch a few videos, and click Go on an automated tool, but it takes dedication and passion to home in on ways a previously unknown technology might be used to undermine security and work out a successful attack on the fly while on an engagement. ■