Chapter 12. Exploratory Testing


Exploratory testing combines test design with test execution and focuses on learning about the application under test. It requires a mixture of thinking processes: logical, calculating, and conscious, along with fast and instinctive. In Chapter 4, “Thinking Skills for Testing,” we discussed how different styles of thinking are needed for different types of testing. Exploratory testing is one area that particularly benefits from applying different ways of thinking. In this chapter we will share how exploratory testing has developed and how you can practice it in your agile teams.

Exploratory testers must consider many aspects of the product, including its users, how the feature they’re testing relates to the company’s business goals, what ripple effects implementing the feature might have on other parts of the system, and what the competition is doing. In our experience, testers who are skilled at exploratory testing bring enormous value to their teams.


Lisa’s Story

My current team hired its first full-time tester less than two years before I joined as the third tester. Working with testers was new to the company culture. The team developed everything test-first and did a great job of test automation at all levels. After we three testers had been working together with the team for more than a year, I was surprised and gratified when one of the programmers noted on a company forum that he thought that no amount of automated tests could replace skilled, dedicated testers who know how to do exploratory testing well.


Exploratory testers do not enter into a test session with predefined, expected results. Instead, they compare the behavior of the system against what they might expect, based on experience, heuristics, and perhaps oracles. The difference is subtle, but meaningful.

James Lyndsay explains the subtle difference between scripted testing and exploratory testing in his paper “Why Exploration Has a Place in Any Strategy” (Lyndsay, 2006). He notes:

An automated test won’t tell you that the system’s slow, unless you tell it to look in advance. It won’t tell you that the window leaves a persistent shadow, that every other record in the database has been trashed, that even the false are returning true, unless it knows where to look, and what to look for. Sure, you may notice a problem as you dig through the reams of data you’ve asked it to gather, but then we’re back to exploratory techniques again.

In another paper, “Testing in an Agile Environment” (Lyndsay, 2007), James suggests that one of the roles that testers can play is the “bad customer.” A bad customer goes off the happy path and may even try to break the system. Using your knowledge of potential issues, you can play the part of the “bad customer,” whether that’s someone who’s malicious, in too much of a hurry, or simply incompetent. Here are some examples of actions you might try as this persona:

• Invalid parts of input—characters, values, combinations

• Time changes

• Unusual uses

• Too much—long strings, large numbers, many instances

• Stop halfway/jump in halfway

• Wrong assumptions

• Making lots of mistakes, compounding mistakes

• Using the same information for different entities

• Triggering error messages

• Going too fast

You may not consider yourself to be a skilled exploratory tester, but you may have tried typing in all the special characters on the keyboard or blowing out the buffer of an input field. You can use these instincts in a more mindful way. Elisabeth Hendrickson’s “Test Heuristics Cheat Sheet” (Hendrickson, 2011) is a great place to start looking for ideas. Exploratory testing gives you the opportunity to use your ability to critique, assess, and challenge your understanding of the product in a purposeful manner, in order to provide critical information to your stakeholders. Later in this chapter we will suggest ways to give constructive feedback to stakeholders.
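To make these instincts repeatable, some testers keep a small catalog of hostile inputs at hand during sessions. The sketch below is our own illustration (not from Hendrickson's cheat sheet); the categories mirror the "bad customer" list above, and the specific values are examples, not an exhaustive set:

```python
# A hedged sketch: a small catalog of "bad customer" inputs to keep
# handy during an exploratory session. Categories mirror the list
# above; values are illustrative only.

def bad_customer_inputs(max_length=10000):
    """Return a dict mapping input categories to sample hostile values."""
    return {
        "invalid_characters": [
            "!@#$%^&*()",
            "<script>alert(1)</script>",
            "'; DROP TABLE users;--",
        ],
        "too_much": [
            "x" * max_length,      # long string to blow out a buffer
            "9" * 100,             # very large number as text
            "item," * 1000,        # many instances in one field
        ],
        "unusual_values": ["", " ", "\t\n", "0", "-1", "NaN", "null"],
        "time_changes": ["1900-01-01", "2038-01-19", "0000-00-00"],
    }

if __name__ == "__main__":
    for category, samples in bad_customer_inputs().items():
        print(category, "->", len(samples), "samples")
```

Pasting values like these into input fields, one category at a time, turns "typing in all the special characters" into a mindful, repeatable habit.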


Exploratory testing and automation aren’t mutually exclusive but rather work in conjunction. Automation handles the day-to-day repetitive regression testing (checking), which enables the exploratory testers to test all the things the team didn’t think about before coding. As you explore, you may find additional tests that need more investigation or should be automated. You will likely use automation to set up exploratory scenarios, to monitor log files, or perhaps to explore scenarios that are not accessible through manual means. For complex new features where there are many “unknown unknowns,” testers and programmers can explore together as they spike potential solutions to learn enough about the feature to start writing stories and tests to guide development.

There are multiple ways to explore. You may explore alone, or in pairs or groups. In this chapter, we’ll discuss a few techniques that have worked well for us, and we’ve included stories from those who have had other experiences. We encourage you to experiment with these approaches as well, using lightweight strategies to manage them. The bibliography for Part V, “Investigative Testing,” contains many books, articles, and other resources for learning more about this powerful testing approach. Make exploratory testing an important part of your toolkit when testing stories, features, and the system as a whole.

Creating Test Charters

A test charter outlines a goal or mission for your exploratory session. There is no one right way to create a test charter, but we'll look at a few different methods. As always, it's best to experiment and find the one that works for you. You may find that one style works best for session-based test management (SBTM), while another works better with thread-based test management (TBTM). Charters may range from happy path functional validation (although this is perhaps less valuable) to exploring failure modes, performance, and scalability.

Elisabeth Hendrickson's Explore It! (Hendrickson, 2013) has a section on creating good charters. It takes practice to find the right level of detail for your purposes. A charter that is too specific leaves you no room to wander off the beaten path and make unexpected discoveries; one that is too vague or broad doesn't give enough focus and may cause you to waste time.

Experiment with how you word charters, as well as with the number of charters you create. Elisabeth notes that “a good charter is a prompt: it suggests sources of inspiration without dictating precise actions or outcomes” (Hendrickson, 2013, p. 16). One template Elisabeth has found that works well and gives enough guidance and focus is

Explore . . . <target>

With . . . <resources>

To discover . . . <information>

It’s better to have multiple charters, each of which is concise and focuses on a single area, for example:

Explore editing profiles

With real usernames

To discover if there are instances where username constraints are not enforced

Another way to create a test charter is a mission statement and areas to be tested, for example:

Analyze the edit menu functionality of Product X

And report on areas of potential risk in Operating System Y

The simpler you keep a charter, the easier it is to stick to it. However, James Lyndsay reminds us that the broader you make a charter, the easier it is to consider distractions within the charter (Lyndsay, 2014). The breadth and specificity of the charter are tools to guide exploration. For example, a charter that says “Performance Test X” really gives no guidance.
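If your team tracks charters in a tool or script, Elisabeth's Explore/With/To discover template maps naturally onto a small data structure. This is our own illustrative sketch; the field and status names are assumptions, not part of the template itself:

```python
from dataclasses import dataclass

@dataclass
class Charter:
    """One exploratory test charter, following the
    Explore <target> / With <resources> / To discover <information> template."""
    target: str        # Explore ...
    resources: str     # With ...
    information: str   # To discover ...
    status: str = "not started"   # e.g., "in progress", "done"

    def __str__(self):
        return (f"Explore {self.target}\n"
                f"With {self.resources}\n"
                f"To discover {self.information}")

charter = Charter(
    target="editing profiles",
    resources="real usernames",
    information="if there are instances where username constraints are not enforced",
)
print(charter)
```

Keeping charters this lightweight makes it easy to create many concise, single-area charters rather than one broad one.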

Let’s use an example of a web-based toy store to write a charter for the end game during a delivery cycle. Note: The feature would have been explored for the workflow as soon as it was completed.

Explore shopping for a new toy

With a real live user

To identify potential bottlenecks and unexpected disadvantages

Again, we’re not suggesting that one way is better than another. On agile teams, we move at a fast pace, so we want to keep focused on the stories and features we’re currently developing. At the same time, we have to keep the big picture in mind and make sure new code doesn’t cause unintended effects elsewhere in the application.

Start with the style of charter that seems most workable for your team. Trying different formats is a nice way to shake things up and help you see your software with a fresh perspective. You can think about the templates we’ve provided as training wheels until you find the one that works for you.

Generating Test Charter Ideas

In this next section we'll share a few techniques that can help generate test charter ideas. It is not an exhaustive list, but we hope it will trigger some of your own ideas.

Exploring with Personas

Personas are a way of understanding your end users, and many companies create personas as part of their marketing strategy.

Jeff Patton (Patton, 2010) and David Hussman (Hussman, 2011) are among the many practitioners who create pragmatic personas to identify who actually uses a product. Personas are a good way to imagine different ways people will use an application. We explained how we use personas for usability testing in Agile Testing (pp. 202–4), but we think the concept can be taken beyond usability. If you currently don’t have defined personas, conduct a quick workshop with your team to discover at least some of them to give you a start. We’ve mentioned James Lyndsay’s “bad customer,” and we’ve shared two personas (see Figures 12-1 and 12-2) that Mike Talks has used for testing login account functionality and security.


Figure 12-1 Security Worried User persona


Figure 12-2 Help Desk Admin persona

The Security Worried User: This user has concerns about using the login service and wants to be protected from anything bad happening. After all, you hear such terrible things in the news about people's identities being stolen.

Concentrate as a user on the following chunks of functionality:

• Suspend your account.

• Deactivate the RSA token linked with an account.

• Delete your account.

Potentially this user will need help from the help desk.

The Help Desk Admin: The administrator can give permissions to other users to make them Help Desk Users or Help Desk Admins. The Help Desk Admin goes beyond the Help Desk User in what the person can support and change in people's accounts.

Try to support the Help Desk admin with the following:

• Search user.

• Authenticate identity.

• Deactivate RSA token.

• Delete account.

Flows to follow:

• Create new account.

• Set account to be Help Desk User or Help Desk Admin.

• Delete account.

• Get reports of activity.

By its nature you will also be touching upon

• Monitoring activity

• Validation

Personas are a great way to look at your application from different angles.


Lisa’s Story

My team experimented with creating a separate project for system testing a complete rewrite of our agile project-tracking product. A tester, a designer, and a marketing expert teamed up to identify various personas representing our users. Developer Denise and Product Paul were two representatives of our product’s users. We wrote charters as user stories so we could track them in our online tracking tool. This was a good way to track the testing we felt was needed, but not a good way to capture the results of our exploratory testing sessions. See the section “Recording Results for Exploratory Test Sessions” later in this chapter on how we did that.

Persona: Paul is the project manager for Agile Toys. In his weekly iteration planning meetings (IPMs) with the team, he goes through stories in the upcoming iteration, answers questions, updates the stories with point estimates, and rearranges stories in the backlog according to priority. He typically uses his iPad in the IPM to make the changes. At his desk, he uses Chrome on a MacBook Air. Paul’s main usage of the tracking tool is keeping the backlog organized and prioritized.

Charter: Explore as Paul, the project manager, in an IPM to discover any issues with concurrent updates to stories from different devices.

Scenarios:

• Pre-IPM backlog prioritizing

• Iteration planning meeting—updating stories

Each of us set aside an hour a day for exploratory system testing in what we called "mini group hugs": each tester chose a persona and a browser and tested concurrent usage of the system. We recorded our test results on our team wiki.

Using this process, we’ve found important bugs that weren’t found while doing exploratory testing on the individual user stories. For example, an exploratory testing session with a charter of updating stories as Paul would during an IPM turned up several new bugs related to concurrent usage.


If you use personas, make sure your whole team understands them and how they can help with testing. Make them visible, perhaps by pinning their pictures and profiles on the wall. It is a good way for the programmers to get a better understanding of the users, and it also helps raise awareness of the value of exploratory testing.

Exploring with Tours

Tours can be another useful tool to generate ideas for exploratory charters or to get familiar with a new product or capability. This technique uses a metaphor of tourism and can add useful variation to your explorations, uncovering issues you may not see otherwise. James Whittaker has described some unique exploratory testing tours (Whittaker, 2012).

For example, as a tourist, perhaps you want a strategy for seeing the most important sights in London. Who you are, or what your goal is, will determine your strategy. Whittaker suggests that visiting students would approach this situation much differently from a group of flight attendants who are there for a weekend. A similar approach can help you explore your software features. Check out Whittaker’s defined tours, such as the Guidebook Tour (looking for bargains, shortcuts) or the Landmark Tour (hopping through an application’s landmarks). In the Landmark Tour, you would identify a set of software capabilities (the landmarks) and then visit those landmarks, perhaps randomizing the order. Changing the sequence of events may cause an unexpected error to occur. These are good places to start, but make them your own. For example, try combining personas with a tour to visit your most important landmarks.

Tours can be done at any time, but Janet has had great success defining tours for the end game, when you think your new release is ready to ship. Often this will give your team extra confidence that the most important features in your product work as expected. Be creative in how you document these touring sessions. Perhaps you can create a visual map to show where you’ve been. See the bibliography for Part V for links to explore some of the possibilities.

Markus Gärtner (Gärtner, 2014) recommends debriefing after completing each tour. As you debrief, you’ll identify more charters, which will take you deeper into areas you briefly touched on in the tours.

Other Ideas

Some teams base their charters on their story acceptance criteria. For example, you may want to explore error handling, perceived response time, and complementary features.

If you have identified risks during story elaboration, you may create risk-based charters that will highlight likely problem areas or areas of uncertainty. A conversation with your programmer is an excellent source for identifying architectural risks and makes a great driver for test charters.

David Hussman (Hussman, 2013) suggests creating journeys to take the personas you’ve identified someplace interesting. Once you’ve learned more about the personas through story mapping, imagine where they might like to go. These journeys might also be a useful way to explore your system after it is built. For example, if we refer back to the story map in Chapter 9, “Are We Building the Right Thing?” (refer to Figure 9-2), one way to test it might be to describe a charter as a possible journey:

Journey: Search by keyword, select an email, add sender to contacts, and reply.

Charter:

Explore journey

With different folders, different senders

To discover if flow hangs together

In Chapter 13, “Other Types of Testing,” we will look at some ways to use exploratory testing for several quality attributes beyond the scope of functional tests.

Managing Test Charters

By now you should have some great ideas for creating your exploratory test charters. The question now is, How do you keep them straight? There are a few different ways, and we’ve shared a couple of stories from exploratory testing practitioners about how they manage their charters.

Session-Based Test Management

Session-based test management (SBTM) is based on the idea of creating test charters or missions for a testing idea, exploring uninterrupted for a specific time period, recording the results, and following up with a debriefing session. We mentioned this technique in Agile Testing (p. 243) but will share a few other ideas we’ve gathered. For example, Bernice Niel Ruhland told us she uses SBTM to help train testers. The debrief session is a great way for her to provide immediate feedback to the new testers.

Try SBTM with your team. As with any testing technique we mention, there’s no one perfectly correct way to do it. Conduct a session, see how it works for you, inspect, and adapt. Check the Part V bibliography for resources to learn more, including James Lyndsay’s “Adventures in Session-Based Testing” (Lyndsay, 2003).

Thread-Based Test Management

Thread-based test management (TBTM) is less rigid in terms of time-boxing a session than SBTM. It works on the idea of organizing tests around threads of activity, rather than test sessions or artifacts. A thread does not imply a commitment of a specific time box as SBTM does. Its flexibility may lead the tester in different directions. TBTM may work better in some situations with rapidly changing priorities or frequent interruptions.

Christin’s team grouped threads by functionality or type of testing, but threads can also be based around a feature or a story. They can be organized based on common resources—for example, small-scale functional data threads versus large-scale performance threads, or threads that focus on failure modes versus threads that focus on happy path workflow.

Adam Knight (Knight, 2011 and 2013) tackles his organization’s large-scale data testing using thread-based testing, which allows testers to work on threads in parallel defined by test charters. Figure 12-6 is a representation of how he explains the benefits of an exploratory testing approach using threads to new testers in his company. They start with a feature area, idea, or risk in the center. As a flaw is discovered, they expand a new set of tests on that discovery. For most discoveries, this is done under the scope of the thread. If discoveries are made that are too large to be considered within the scope of the thread, a new thread is created to explore that area.


Figure 12-6 Fractal representation of TBTM


That mini exploration will result in a more targeted testing exploration around that feature area and can be represented as a circle off the original. In this way, the testing effort within each thread naturally focuses on the problem and risk areas as they are discovered.

Markus Gärtner (Gärtner, 2011) uses a slightly different tactic, which he calls Pomodoro Testing. He uses shorter sessions of 25 minutes and continues developing his testing mind map during the debriefings. You can find links to more about Adam Knight’s experiences using TBTM and Markus’s Pomodoro Testing in the bibliography for Part V.

Exploring in Groups

Generally, teams think about exploring as an individual activity, or maybe an activity done by a pair. However, group exploring provides unique opportunities to discover issues or missing features in a new product or major update. We have facilitated exercises in shared exploration, and the results demonstrate the same thing—diversity creates different ideas.

Bernice Niel Ruhland (Ruhland, 2013b) uses this approach once in a while for more complex, riskier areas of a product, especially when time is working against the team. She had the testing team, programmers, and business analysts (BAs) participate in the testing. She recounts:

We used an Excel spreadsheet to define the tests as we had specific test paths based upon coding risks to explore. In some cases critical bugs were fixed before we even finished the testing session. I received positive feedback from this approach. And of course how much or little documentation we provided changed based upon what we were testing and the testers’ experience level with the functionality.


Lisa’s Story

Spread the Testing Love: Group Hugs!

When our team is preparing a major new release, we sometimes organize “group hugs,” where the whole team, or a subset of it, joins in for testing. Sometimes it’s only testing team members, sometimes the entire product team, or something in between, but it’s always useful. Some people refer to this type of activity as a “bug hunt,” but we feel it’s a positive activity that demonstrates our passion for building quality into our product; our focus isn’t about finding bugs, but about consistency and confidence.

The iOS Group Hug

Recently we released some new features to our iOS app. We asked the team for volunteers for a group hug. Programmers, testers, and marketing folks joined in. We used a shared Google doc for the session, where we noted which device and version each participant was using, as well as already-known problems.

Our team is geographically distributed, so we used a videoconference meeting to communicate during the testing. People in each of our office locations gathered in one room, and remote people joined individually. We find it’s helpful to be able to physically talk to each other. For example, if more than one person finds the same bug, we can avoid duplication in reporting it. Also, talking through what we’ve tried gives other team members ideas for interesting tests.

During the group hug, we found some new bugs, as well as some usability issues. Generally, the feedback was positive, and we were able to release within a few days.

Involving multiple team members in one testing session is expensive. However, we’ve needed to do it only for major, risky new features where concurrency is critical, and generally one group hug is enough to provide necessary feedback.

The Place for Group Hugs

We build quality into our product with test-driven development (TDD), acceptance-test-driven development (ATDD), and constant pairing and collaboration. We have multiple suites of automated regression tests providing continual feedback in our continuous integration system. We do have automated tests for concurrent changes, but they don’t cover every possibility. In addition to the extensive exploratory testing we do on each new feature, the group hugs provide a quick way to get information that we can’t get in our normal process.


Consider group exploring (see Figure 12-7) if concurrency is a vital feature of your product, and your automated tests and exploratory testing by individual or paired testers can’t cover every scenario. Use these sessions judiciously, and add only as much structure as you need. Sharing a document where everyone records the results is helpful, but in some cases it’s too heavyweight. The same goes for preparing charters in advance.


Figure 12-7 Exploring in groups

Another option is exploring with a trio (programmer, tester, BA) aimed for fast feedback. Julian Harty calls this “Trinity Testing” (Harty, 2010). The programmer can give feedback on the charters or testing paths based on his or her knowledge of the code, the BA on the business risk. When issues arise from testing, the BA can explain any business impact while the programmer assesses coding risks and time to fix the problem.

However you choose to run your group session, be prepared to learn from each one, so you can improve your next group testing session and learn even more about the software you’re testing.

Recording Results for Exploratory Test Sessions

It is important to record your exploratory test results and coverage, and to share your thoughts with others. Recording results or making notes:

• Gives you an opportunity to review your results with a peer and have a meaningful conversation about your findings

• Allows you to track progress and issues, as Christin showed with her TBTM story

• Provides the opportunity to review your own results for later testing or when you want to revisit charters

• Last (and our least favorite reason), demonstrates to management what you've done if problems occur

Recording can be as simple as taking notes on paper, on a wiki page, as a mind map, in a text document, or on a session sheet. Record issues, unexpected behavior, or features that seemed to be lacking. Hold a debriefing session to go over what you discovered. You’re likely to think of ideas for future sessions as you discuss your most recent one with peers.

Keep your notes brief and simple. If multiple team members are exploring individually or as a group, agree on one format for note taking so you can easily understand each other’s findings. Post results in a place where your whole team can see them for reference, and use these notes to improve your testing.


Lisa’s Story

When we wrote charters based on personas, we created a chore (a task) in our team’s online tracking tool for each charter. This was a good way to track the testing we felt was needed, and we could easily assign ourselves charters and see at a glance who was working on which charter. We could also track which charters were complete, in progress, or not yet started. We tried putting the results of our sessions in these chores, but we didn’t find it convenient to refer back to the results this way. Next, we experimented with writing up our test session results on our wiki. Here’s an example:

Explore being Paul, the project manager, in an iteration planning meeting.

Browsers: Chrome and Firefox

Initial setup: On Test01, used project 101 populated by the test data fixture.

Observations:

• Deleting stories required a reload.

• The velocity shown for iterations in the backlog didn’t change as I reprioritized stories until I reloaded.

• Need to do more exploring in projects with custom point scales.


Spreadsheets are a simple way to record exploratory test sessions. Adam Knight uses Excel for the top-level charters and exploratory note taking because it provides flexibility to present test results in many forms: text, graphs, tables, and external links. He then has a set of small macros that provide input boxes to allow very fast input of test notes and tracking of status. He can record status but also can indicate if an issue is “off the charter” and merits the creation of another charter to follow up.

Bernice Niel Ruhland stores her threads in one Excel tab with additional columns for testing notes. It’s like reading a story about the testing. At a glance she knows how many threads are done, in progress, and not started. It also allows her to review any issues quickly.
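An at-a-glance status summary like Bernice's can also be produced with a few lines of scripting against a spreadsheet export. This sketch is our own illustration; the column names ("thread", "status", "notes") and sample rows are assumptions, not her actual spreadsheet layout:

```python
import csv
import io
from collections import Counter

# Illustrative CSV export of a thread-tracking sheet. The column names
# and rows are hypothetical examples, not a real team's data.
sheet = io.StringIO("""thread,status,notes
Login failure modes,done,Two bugs filed
Large-data import,in progress,Slow at 1M rows
Concurrent edits,not started,
""")

# Tally how many threads are done, in progress, and not started.
status_counts = Counter(row["status"] for row in csv.DictReader(sheet))
for status in ("done", "in progress", "not started"):
    print(f"{status}: {status_counts[status]}")
```

The point isn't the tooling: whatever holds the notes, being able to see progress and open issues at a glance is what makes the record useful.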

There are products available that record notes as you do exploratory testing. See the “Tools” section of the bibliography for a link to one of these. Some session-recording tools let you record your keystrokes and the pages you’ve visited so that you can go back and reproduce your steps if you find something worth investigating further. They even let you specify how many pages you want to keep in memory or what type you want to save.


Janet’s Story

I decided to try a tool I heard about at a conference called qTrace from QASymphony. (Note: I am not recommending this one over any other.) It is a recording tool for exploratory testing and captures screen shots, notes, and other details. I created a charter for exploring my own website, www.janetgregory.ca:

Explore the blog page

With the search function

To discover if it returns what is expected

I set a time limit of 30 minutes, but with the focus narrowed, I immediately found two issues that I had not noticed when I first checked the search function. The screen prints allowed me to check where I had been without thinking twice. The interesting thing was that I found one issue that bothered me enough that I put in a new request (story)—to search only in blog posts rather than the whole site—and one defect—showing posts authored by Mark (strange, since my name isn’t Mark, and nobody else posts) that weren’t posts, as highlighted in Figure 12-8.


Figure 12-8 Exploratory test session using qTrace


Some recording tools record extensive additional information, such as data about the environment. This helps when you need a reproducible bug so that programmers can write a failing automated test and a fix. You may also use this information to write new user stories and new tests for features you found lacking or to tweak existing features you’d like to improve.

Whatever method you choose to record your results, remember to make them accessible to the whole team. Share the valuable knowledge you gained by exploratory testing.

Where Exploratory Testing Fits into Agile Testing

We think that exploratory testing is an integral part of agile testing, and it’s important to see where it fits in an agile context.

Part IV, “Testing Business Value,” was all about testing for business value. Another way to think of it is testing ideas and assumptions early—before we start coding. That type of testing is about building the right thing. There is another kind of testing designed to answer the question, “Are we building it right?” We included exploratory testing in Quadrant 3 of the agile testing quadrants because we are usually exploring the workflow to see if we delivered the business value we anticipated. We challenge our assumptions to see what we did not think about earlier when building.

Consider our levels of precision from Chapter 7, “Levels of Precision for Planning”—the product release, the feature, the story, and the task—and what type of exploratory testing might be useful at each level.

• Product release level: This is where you would test an integrated product delivery to the integration team, or a release candidate during the end game. This would be a good time to explore dependencies between teams and high-risk workflows and perhaps do tours.

• Features: Once all associated stories are "done," you can explore the complete feature. At this level, good candidates for exploration are feature workflow and interaction with other applications. This might be a good place to try what Lisa calls "group hugs"—more than one person exploring on a charter.

• Stories: Once the story meets the expected results—initial coding has been completed and all the automated tests specified before or during coding pass—you can start exploring. Think about the development risks, boundary conditions, more detailed functionality issues, and different variations of formats or states.

• Tasks: Exploring at this level would happen during coding. Examples might be programmer exploration on an API, pairing on performance issues with a tester, or maybe even exploring some of the boundary conditions on strings. Consider programmer-tester pairing to create exploratory charters for the code being developed.

Agile teams benefit from continual exploring. When you’re brainstorming new features, you can explore how your different personas might use them. You can explore released software to identify what features may still be missing. Focus your exploratory testing by using charters, tours, heuristics, or session- or thread-based test management. What you learn will help your team improve quality, not only in the impending release, but as the product evolves in the future. Use the resources in the bibliography for Part V to grow your team’s exploratory testing skills.

Summary

Whichever techniques you use, exploratory testing is a powerful way to test new stories and features as they are incrementally delivered.

• Use exploratory testing approaches:

  • In conjunction with your automation strategy

  • To give feedback quickly to stakeholders to see if you are building the right thing

  • To prepare for releases with manual regression testing

• Experiment with different methods to create your test charters.

• Experiment with new test charter ideas, such as using personas, tours, or journeys.

• Manage your test charters with SBTM or TBTM, or your own approach that meets your needs.

• Practice to build your exploratory testing capabilities.

• Exploratory testing adds value in most, if not all, agile development activities.