Traditional software development directed us to hand off a code-complete application to testers at the end of a lengthy development cycle. As our craft improved, we sought shorter cycles, but we still handed off code to testers at the end of the development cycle. As we’ve embraced Scrum, we’ve removed the handoff by forming a cross-functional team able to perform all the required activities. Handoffs are still done, but at least they are to a person on the team with a shared commitment and focus. We’re getting better, but there is still room for improvement.
The physics of software development tells us that testers must have a stable, finished piece of software to click, poke, and test. On the surface, this sounds like an honest appraisal. I mean, why bother wasting time testing software that hasn’t been written yet, right? Isn’t Scrum about identifying and removing waste? Yes, it is, and delaying testing efforts creates exactly the kind of waste that should be removed. This is the underlying motivation behind the “shift left” principle in DevOps.
As mentioned in the previous chapter, one of the outputs of Sprint Planning is the plan. While most Scrum Teams use tasks to formulate the plan, the Scrum Guide is actually silent on the subject. This means that the Developers on those Scrum Teams can experiment with different approaches. Some Developers use the columns of a Kanban board. Others use diagrams on whiteboards. Others just use impromptu conversations as the “plan.” Personally, that last approach sounds like it would make it difficult for Developers to inspect progress and estimate remaining work.
Another option is for the Developers to formulate their plan using acceptance tests. These tests can be manual or—preferably—automated. They are written by the Developers and run by the Developers. At any point during the Sprint, the count of failing acceptance tests can be compared against the count of passing acceptance tests, either per PBI or for the entire forecast, as a way to inspect progress. The reality is that Developers are going to have tests anyway, so why not start the Sprint by creating those acceptance tests and let them drive the development?
Starting work by creating failing tests can feel very counterintuitive and weird, but it works. Test-driven development (TDD) proves this every day. In fact, this approach to planning and driving work by creating failing tests is a part of acceptance test-driven development (ATDD). Whereas TDD is from a Developer’s perspective, ATDD is from a stakeholder’s or a user’s perspective.
ATDD is still a relatively unknown practice that shifts testing activities to an earlier time in the Sprint—all the way to the beginning. In fact, ATDD encourages the Developers to collaboratively discuss the acceptance criteria with the right people. These conversations yield practical examples that build understanding of the features and scenarios, which in turn become the basis for the acceptance tests and even for the code design. All of this can be accomplished prior to any application coding. A benefit of ATDD is that it provides the Developers with a shared understanding of what they are developing and what Done looks like at each step of the process.
This chapter introduces ATDD and shows how to implement it using Azure Boards and Azure Test Plans.
Note In this chapter, I am using the customized Professional Scrum process, not the out-of-the-box Scrum process. Please refer to Chapter 3, “Azure Boards,” for information on this custom process and how to create it for yourself.
Before I jump into creating and executing a plan based on acceptance tests, I’ll spend some time acquainting you with Azure Test Plans and its related artifacts and functionality.
Azure Test Plans is the Azure DevOps service that helps teams and organizations plan, manage, and execute their testing efforts. It supports teams of all sizes and types, whether they practice manual, automated, or exploratory testing, to drive collaboration and quality. The browser-based experience enables the team to define, execute, and chart the results of their tests within the same integrated experience as the other Azure DevOps services, like Azure Boards.
Many teams think that Azure Test Plans is simply about creating and running tests. This is how Microsoft promotes the product and how it tends to be demoed. Although it can do those things, it’s also ideally suited for Scrum Teams to use as a way to represent their Sprint plan.
Each Sprint, the Developers create a new test plan for that Sprint. The name of the plan is simply the name of the Sprint (such as “Sprint 2”). This plan contains all the acceptance tests that prove the forecasted PBIs are done, according to their acceptance criteria. For example, if a Sprint’s forecast includes eight PBIs, each having five acceptance criteria, then the test plan will contain 40 acceptance tests, give or take. These acceptance tests can be created during Sprint Planning—even if only test names are provided. The design of those tests may even have started long before Sprint Planning, perhaps during refinement.
Azure Test Plans offers many ways to organize your tests. By creating and configuring a Test Plan work item, Developers can then organize their tests for the entire product, for a specific area, and/or for a specific Sprint. I recommend keeping it simple and having one test plan per Sprint. In Figure 7-1, I’m using the Test Plans page to create a new test plan named “Sprint 1” that maps to the Sprint 1 iteration.
FIGURE 7-1 We’re creating a new test plan for Sprint 1.
Developers who create and manage test plans, as well as test suites, require the Basic + Test Plans license. This license is included with Visual Studio Enterprise, Visual Studio Test Professional, and MSDN Platforms subscriptions. It can also be purchased separately. If you don’t have the ability to create a new test plan, then you probably don’t have the right license. Check with your Azure DevOps administrator.
Fabrikam Fiber Case Study
As part of creating the Sprint plan, during Sprint Planning a Developer also creates a respective test plan with the same name as the Sprint.
Once the test plan is created, Developers can create test suites. Test suites are essentially folders within the test plan that hold the acceptance tests. Test suites are optional but recommended. There’s nothing stopping a Developer from putting all of the acceptance tests in the “root” of the test plan and using a naming convention to identify the tests in question. With a couple of dozen tests, this gets messy, which is why test suites are a good idea.
You can create three types of test suites:
■ Static A simple suite that has only a name.
■ Requirement-based A suite that maps to a specific PBI work item and is meant to contain acceptance tests for just that PBI. The work item ID and title form the test suite’s name (such as “7476 : Twitter feed”). As test cases are added, they will automatically be linked back to the PBI work item.
■ Query-based A dynamic test suite that returns Test Case work items meeting specific criteria.
When defining the test plan, Developers should use Requirement-based suites to contain their acceptance tests. Requirement-based suites can be generated in bulk with a naming convention that relates back to the PBI in the Sprint Backlog. The creation of all Requirement-based suites—for all PBIs in the Sprint Backlog—can be performed easily in one step, as I’m doing in Figure 7-2.
FIGURE 7-2 You can create Requirement-based test suites for each PBI in the Sprint Backlog.
Behind the scenes, test plans and test suites are persisted as work items. Unlike the Test Case work item, the Test Plan and Test Suite work items are special, hidden work item types. As I mentioned in Chapter 3, hidden work item types are ones that users can’t create manually, nor do they typically want to. For example, it wouldn’t make sense to create a standalone test suite without being in the context of a test plan. Instead, Developers will use the dedicated tooling in Azure Test Plans to create a test plan and test suites.
Note For Developers using the Kanban board to track their Sprint work, each PBI card has the ability to add an associated test case. Behind the scenes this will create a default test plan and Requirement-based test suite in which it places the newly created Test Case work item. Creating tests from the Kanban board does not require those users to have a Basic + Test Plans license—which means small teams can get started with Azure Test Plans easily. If your team is using the Kanban board to plan and track its Sprint work, then it might make sense to experiment with this feature.
Tip The default sorting order of the test suites is not very interesting. Use drag and drop to reorder the test suites so that they are arranged in a more logical order, such as by Backlog Priority. Unfortunately, you will have to do this manually.
Fabrikam Fiber Case Study
By the end of Sprint Planning, the Developers have created one Requirement-based suite per forecasted PBI. Someone also drags those suites so that they are listed in the same order the PBIs are listed in the Sprint Backlog.
After the test plan and test suites are created, it’s time to create the acceptance tests themselves. This can be done in Sprint Planning or at any time during development. In Azure Test Plans, acceptance tests are persisted using Test Case work items. The Test Case work item type allows Developers to further specify acceptance criteria of a PBI in the form of a test, either manual or automated. Manual test cases will have verifiable test steps. Automated test cases are ones that will eventually be associated with an automated test—such as an MSTest, xUnit, or NUnit test—and run in a pipeline in Azure Pipelines.
For manual tests, the Test Case work item may be initially defined at a high level—perhaps containing only a description. Later, more detailed steps may emerge. Once the test cases are executed, the test runs will contain the test outcomes and any test attachments.
Test Case work items, like other work items, can be created on the Boards hub. That said, it makes more sense to create them directly in the Test Plans hub, within their respective test suite. That way, it’s easy to organize the test cases logically within the test plan. Figure 7-3 shows a single Test Case work item being created, and as you can see, it’s just another work item type.
FIGURE 7-3 Test cases in Azure DevOps are just another type of work item.
Rather than just having one test case per PBI (such as “Validate Twitter feed”), the Developers should create Test Case work items that relate to a PBI’s acceptance criteria. It could be that a single test case can cover a single criterion. It could also be that multiple test cases might be required to cover a single criterion. In other situations, the opposite may be true and multiple criteria could be covered by a single test case. As a general rule, however, Developers should plan on creating a separate Test Case work item for each acceptance criterion.
Why is this important? Consider my guidance in the previous chapter about creating enough small tasks per PBI so that the Developers can effectively swarm on the work. This applies to test cases as well. For example, if the Developers are able to create three distinct acceptance tests (test cases)—each for a distinct acceptance criterion—they can then develop, test, and deliver that PBI in a more asynchronous manner, thus reducing risk.
Smell It’s a smell when I see only one Test Case work item per PBI. It's a stench when I don’t see any. It could be that the Developers have self-managed around another method to test and verify the acceptability of the forecasted items; if so, that’s fine. I might question the transparency of their actions, but at the end of the day Professional Scrum Developers can use whatever tools and practices that they determine bring them value and reduce waste.
It’s also a smell when I see a Test Case work item (or Task work item) assigned to the Product Owner. These types of work items exist in the Sprint Backlog and, as such, are owned by the Developers doing the work. If the Product Owner wants to change acceptance criteria or suggest new features, they would need to discuss this with the Developers, and after collaborating on the impact and trade-offs, the Developers change the respective work items. If the Product Owner is also a Developer, then this smell goes away, but another one shows up that I’ve previously mentioned. Having a Product Owner also be a Developer can be problematic in its own right.
Azure Test Plans has a great feature that enables Developers to quickly create one Test Case work item per acceptance criterion. This is the grid view, and it allows Developers to quickly add or edit test cases in a two-dimensional grid, similar to Microsoft Excel.
Here are the high-level steps to quickly create test cases from acceptance criteria:
Open a PBI work item in the Sprint Backlog.
Select and copy (Ctrl+C) the acceptance criteria to the clipboard.
Return to the test plan and click the option to add new test cases using the grid.
Paste (Ctrl+V) the copied acceptance criteria into the Title column, as I’m doing in Figure 7-4.
FIGURE 7-4 You can quickly create test cases by pasting in a PBI’s acceptance criteria.
Clean up the titles and save the test cases.
Repeat for the other PBI work items in the Sprint Backlog.
This copy/paste approach works best when a PBI’s acceptance criteria are enumerated as a list, whether it’s bulleted, numbered, or simply delimited by line breaks. Unstructured acceptance criteria, such as a paragraph with rambling sentences, won’t paste into the grid view correctly and you’ll end up with a single test case.
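To see why list-style criteria paste so cleanly, consider what the grid view is effectively doing: each line becomes one test case title. Here is a minimal sketch of that transformation in Python; the criteria text and the Verify naming convention are illustrative, and this is not an Azure DevOps API.

```python
def criteria_to_test_titles(acceptance_criteria: str) -> list[str]:
    """Turn a list-style block of acceptance criteria into one
    'Verify ...' test case title per criterion, mimicking what ends
    up in the grid view's Title column after a paste and cleanup."""
    titles = []
    for line in acceptance_criteria.splitlines():
        # Strip bullet or number prefixes and surrounding whitespace
        text = line.strip().lstrip("-*•0123456789.) ").strip()
        if text:
            titles.append(f"Verify {text[0].lower()}{text[1:]}")
    return titles

# Hypothetical acceptance criteria for a Twitter feed PBI
criteria = """\
- Feed shows the 10 most recent tweets
- Feed refreshes every 60 seconds
- An invalid handle shows an error message"""

for title in criteria_to_test_titles(criteria):
    print(title)  # e.g. "Verify feed shows the 10 most recent tweets"
```

A paragraph of unstructured prose has no line breaks to split on, which is exactly why it lands in the grid as a single test case.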
Tip Knowing ahead of time that your team will be creating Test Case work items from acceptance criteria will start to change the way that PBIs and acceptance criteria are specified. This may be an evolution where your team progresses from crafting wordy paragraphs to simple bullets, and finally to given-when-then expressions with sample data. How a Scrum Team captures PBI details, such as acceptance criteria, is perfect for discussion at a Sprint Retrospective. New approaches and experiments can be considered and planned.
As you create or edit Test Case work items, consider the following Professional Scrum guidance while entering data into the pertinent fields:
■ Title (required) Enter a short phrase that describes the criterion to test. One naming convention to consider is Verify [criteria]. You might also prefix the PBI’s ID and/or short title to further identify the test.
■ Assigned To Select the Developer who is responsible for defining the test and ensuring that it is run. Just as with a task, leave it blank until someone starts working on it.
■ State Select the state of the test case. States are covered later in this section.
■ Area Select the best area for this test case. Typically, the area will be the same as the associated PBI.
■ Iteration This is the Sprint in which the test case will be defined and run. This should be the current Sprint and the same as the Test Plan and the associated PBI work item.
■ Steps For manual tests, these are the individual test step actions and expected results. Each step can include an attached file that provides more details, such as a screenshot. You can also use a Shared Steps work item to simplify the creation and management of test cases.
■ Parameter values For manual tests, any parameters defined in the test steps are listed here. You can then provide one or more sets of values for these parameters.
■ Discussion Add or curate rich text comments relating to the test case. You can mention someone, a group, a work item, or a pull request as you add a comment. Professional Scrum Teams prefer higher-fidelity, in-person communication instead.
■ Automation Status For automated tests, change this to Planned. Later, when you associate an automated test to the Test Case work item, this field will automatically change to Automated and the details will appear on the Associated Automation tab. This field should remain in the Not Automated state for manual tests. I will cover associating automation with test cases later in this chapter.
■ Description (Summary tab) Provide as much detail as necessary so that another Developer can understand the purpose of the test case.
■ History Every time a Developer updates the work item, Azure Boards tracks who made the change and the fields that were changed. This tab displays a history of all those changes. The contents are read-only.
■ Links Add a link to one or more work items or resources (build artifacts, code branches, commits, pull requests, tags, GitHub commits, GitHub issues, GitHub pull requests, test artifacts, wiki pages, hyperlinks, documents, and version-controlled items). You should have one—possibly more—Tests links to a PBI work item. If you’re using Requirement-based suites, this is done automatically for you.
■ Attachments Attach one or more files to provide additional details about the test case. Some Developers like to attach notes, whiteboard photos, or even audio/video recordings of the Product Backlog refinement sessions and Sprint Planning.
A Test Case work item can be in one of three states: Design, Ready, or Closed. The typical workflow progression would be Design ⇒ Ready ⇒ Closed. While a Test Case work item is being created, it is in the Design state. After the test case details have emerged—associated automation or manual test steps—the test case is ready to be run, and its state should be changed to Ready. When a test case is no longer required, its state should be changed to Closed. Test Case work items do not have a Removed state, like the other work item types in Azure Boards. Deleting the test case is always an option too.
Note Although test artifacts like test plans, test suites, and test cases are types of work items, the method for deleting them differs from deleting non-test work items. Deleting a test artifact removes it from the test case management (TCM) data store and also deletes the underlying work item. A job runs to delete any child items from both the TCM data store and the work item store. This can include all child items such as child test suites, test points across all configurations, testers, test runs, and other associated history. Azure Test Plans prompts you before deleting child artifacts, as you can see in Figure 7-5.
FIGURE 7-5 Confirmation is required before deleting a test plan and related child artifacts.
The final result is the same—all information in both stores is deleted and cannot be restored. Microsoft only supports the permanent deletion of test artifacts. In other words, deleted test artifacts won’t appear in the Recycle bin and cannot be restored. You also can’t bulk-delete test artifacts. If test artifacts are part of a bulk selection to be deleted, all other work items except the test artifact(s) will be deleted.
Observing acceptance tests is a great way to assess progress. Assuming that all forecasted PBIs are expressed as failing acceptance tests, then the more passing tests a team has, the more progress they have made. At the beginning of the Sprint, right after Sprint Planning, there should be zero passing tests. By the end of the Sprint—hopefully—all tests will be passing. At any point along the way, the number of passing tests divided by the total number of tests will roughly equate to progress. This assumes that the work to complete each acceptance criterion, and thus pass each acceptance test, is roughly the same size. It won’t be, but it will be close enough to measure progress.
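The arithmetic is simple enough to sketch. This hypothetical Python snippet computes that rough progress measure from the latest outcome of each acceptance test; the test names and outcome values are illustrative.

```python
from collections import Counter

def sprint_progress(test_outcomes: dict) -> float:
    """Estimate Sprint progress as the fraction of passing acceptance tests.

    test_outcomes maps a test case name to its latest outcome
    ("Passed", "Failed", or "Active" for a not-yet-run test).
    """
    total = len(test_outcomes)
    if total == 0:
        return 0.0
    counts = Counter(test_outcomes.values())
    return counts["Passed"] / total

# Hypothetical outcomes partway through a Sprint
outcomes = {
    "Verify Twitter feed renders": "Passed",
    "Verify feed refresh interval": "Passed",
    "Verify invalid handle shows error": "Failed",
    "Verify feed empty state": "Active",
}
print(f"Progress: {sprint_progress(outcomes):.0%}")  # Progress: 50%
```

Right after Sprint Planning the measure reads 0%, and it climbs toward 100% as acceptance tests start passing.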
Unfortunately, Azure Test Plans does not provide a first-class way to inspect progress across the whole test plan. In other words, there’s no dashboard that shows the outcome of all test cases across all Requirement-based test suites. There is a decent visualization for a single Requirement-based suite, showing test progress for an individual PBI, as you can see in Figure 7-6. Unfortunately, you will have to click through each suite to get an overall assessment of progress.
FIGURE 7-6 It’s easy to inspect the progress of a single PBI in Test Plans.
There are a few ways to measure progress across all test cases:
■ Show test points from child suites This setting allows you to view all the test points for the given suite and its children in one view without having to navigate to individual suites one at a time. This option is only visible when you are on the Execute page. You can see an example of this in Figure 7-7. You will need to adopt a test case naming convention to be able to easily identify which test case is related to which Requirement-based suite (PBI). Also, you can’t set the order to follow the original Backlog Priority, but Microsoft is considering adding an option that would let you follow the order of the Requirement-based suites.
FIGURE 7-7 View all test points across all test suites in the test plan.
■ Query-based suite By creating a Query-based suite that returns all Test Case work items for the current Sprint, the Developers are also able to see all acceptance tests across all forecasted PBIs. You can see an example in Figure 7-8. A test case naming convention would need to be adopted to identify which test case is related to which PBI. Unfortunately, the order cannot be set to follow the original Backlog Priority. Having a query drive the list of test cases does provide more control, however.
FIGURE 7-8 Use a Query-based suite to list all test cases for the Sprint.
■ Progress report Tracks the progress of testing within one or more test plans. The report indicates how much testing is complete by showing how many tests have passed or failed, and how many are blocked. The report also renders a daily snapshot to provide an execution and status trendline. This helps you anticipate whether the testing is likely to complete by the end of the Sprint. The Progress Report is also filterable in many ways. For more information on the Progress report, visit https://aka.ms/track-test-status.
■ Charts Create and use test results charts to track testing progress. Select the test plan and then view test execution progress. You can choose from a fixed set of prepopulated fields related to results. You can select pie, bar, column, stacked bar, or pivot table chart types. For more information, including specific examples, visit https://aka.ms/track-test-status.
■ OData feed OData queries are the recommended approach for pulling data out of Azure DevOps and creating custom analytics. Microsoft Power BI can consume OData queries, which return filtered or aggregated sets of data. By querying and reporting on the TestSuites, Tests, TestPoints, TestRuns, and TestResults entities, a team can create many custom reports and visualizations to inspect progress. For more information on using OData, visit https://aka.ms/extend-analytics.
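As an illustration of that last option, here is a sketch that aggregates passing test points per Requirement-based suite from an OData-style response. The payload shape and field names are assumptions loosely based on the Analytics TestPoints entity and may differ in your schema version; the sample is parsed locally rather than fetched from a live endpoint.

```python
import json
from collections import defaultdict

# A hypothetical, abbreviated Analytics OData response; real entity
# and field names may differ from this sketch.
sample_response = json.loads("""
{
  "value": [
    {"TestSuite": {"Title": "7476 : Twitter feed"}, "LastResultOutcome": "Passed"},
    {"TestSuite": {"Title": "7476 : Twitter feed"}, "LastResultOutcome": "Failed"},
    {"TestSuite": {"Title": "7477 : Service ticket"}, "LastResultOutcome": "None"}
  ]
}
""")

# Aggregate passing test points per Requirement-based suite (per PBI)
progress = defaultdict(lambda: {"Passed": 0, "Total": 0})
for point in sample_response["value"]:
    suite = point["TestSuite"]["Title"]
    progress[suite]["Total"] += 1
    if point["LastResultOutcome"] == "Passed":
        progress[suite]["Passed"] += 1

for suite, p in progress.items():
    print(f"{suite}: {p['Passed']}/{p['Total']} passing")
```

The same aggregation could feed a Power BI visual or a dashboard widget, giving the whole-plan view that Azure Test Plans doesn’t provide out of the box.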
Note Test cases by themselves are not executable. When you add a test case to a test suite, one or more test points are generated. A test point is a unique combination of test case, test suite, configuration, and tester. For example, if you have a test case titled “Verify login functionality” and you add two configurations to it for Chrome and Edge, then this results in two test points: “Verify login functionality” for Chrome and “Verify login functionality” for Edge. When tests are executed, test results are generated and visible per test point. The latest execution outcome for a test point is visible on the Execute page.
Think of test cases as reusable entities—across suites and plans. By including them in a test plan or suite, test points are generated. By executing test points, you determine the quality and progress of the product being developed.
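The test point generation described in the note is just a cross product, which can be sketched as follows (the case, configuration, and tester names are illustrative):

```python
from itertools import product

# A test point is a unique combination of test case, configuration,
# and tester within a suite.
test_cases = ["Verify login functionality"]
configurations = ["Chrome", "Edge"]
testers = ["Dave"]

test_points = [
    {"case": c, "config": cfg, "tester": t}
    for c, cfg, t in product(test_cases, configurations, testers)
]

for tp in test_points:
    print(f"{tp['case']} on {tp['config']} (tester: {tp['tester']})")
# One test case with two configurations yields two test points
```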
An approach to practicing acceptance test-driven development is to have the Developers collaboratively discuss a PBI’s acceptance criteria, compose failing acceptance tests, and then use those tests as a guide to developing the PBI. When examples are discussed, they are from the viewpoint of the user. These conversations and examples are further refined into one or more acceptance tests. This process may have started during refinement, but it definitely finishes in the Sprint in which the PBI is forecasted, and prior to development. ATDD helps ensure that the whole Scrum Team has the same shared understanding of what it is they are developing and what Done looks like.
During the Sprint, the Developers iterate through each PBI, developing and testing and developing and testing until that PBI is done. Depending on the complexity of the PBI and the number of acceptance criteria, the Developers will probably need to create multiple acceptance tests. If the Developers include additional sad path and bad path tests, then a moderately complex PBI could contain a dozen or more acceptance tests.
Smell It’s a smell if the Developers do not have any sad path or bad path acceptance tests. These tests, sometimes collectively called unhappy path tests, are ones that pass invalid input in an attempt to root out problems caused by untrained, inattentive, or malicious users. This issue is something the Scrum Team should discuss at a Sprint Retrospective in order to ratchet up their acceptance-testing practices as well as product quality.
Acceptance tests should be created before coding begins. As I’ve previously mentioned, they can simply be Test Case work items with simple names, used as placeholders for future automated tests. When it comes time to code the automated test, you can achieve this by pairing a Developer who has strong coding skills with another Developer who has a background in testing—in my experience, those two Developers can work together to craft high-value automated acceptance tests. Remember, all of these will be failing tests until that facet of the PBI is properly coded. For example, if a forecasted PBI has six acceptance criteria and the Definition of Done includes creating both happy and unhappy path tests, then at least 12 failing acceptance tests should exist before any coding on the PBI begins.
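To make that starting state concrete, a sketch like the following generates placeholder test names, one happy-path and one unhappy-path per criterion. The naming scheme is purely illustrative, not an Azure DevOps feature:

```python
def placeholder_tests(criteria: list[str]) -> list[str]:
    """Generate one happy-path and one unhappy-path placeholder
    test name per acceptance criterion. All of these represent
    failing tests until the PBI is coded."""
    tests = []
    for criterion in criteria:
        tests.append(f"Verify {criterion}")
        tests.append(f"Verify rejection when {criterion} receives invalid input")
    return tests

# A hypothetical PBI with six acceptance criteria
criteria = [f"criterion {i}" for i in range(1, 7)]
names = placeholder_tests(criteria)
print(len(names))  # 12 failing placeholders before any coding begins
```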
As development progresses, more and more acceptance tests will start passing. When the last test passes and the Definition of Done is met, the Developers are finished with that PBI. The work can be inspected at the Sprint Review, and more importantly, the release of the Increment will include this new PBI. Remember, it’s the Product Owner’s decision if and when to release the Increment.
I’m sometimes asked how ATDD is different from behavior-driven development (BDD), specification by example (SBE), test-driven requirements, example-driven development, executable requirements, functional test-driven development, story test-driven development, or flavor-of-the-month-driven development. I tell people that each of these practices, regardless of their nuanced differences, has the same goal: to enable better stakeholder collaboration and express abstract business needs in a more understandable and testable format.
ATDD can also provide added value for distributed teams. Those Developers collocated with the Product Owner and stakeholders can have the critical, high-fidelity conversations in order to refine the acceptance criteria into acceptance tests. These tests, written in a natural language, provide the remote Developers with clarity so that they can focus on passing the tests. Compared to traditional requirements, these executable specifications provide substantially more value and reduced waste. Furthermore, having a simple yet concrete goal of “make our tests pass” helps Developers who struggle to self-manage find focus in their day.
The approach I have outlined so far is just a partial explanation of an ATDD implementation. If the Developers want a true executable specification—one where the framework actually passes the specification data to the test runner to execute—they should implement a proper acceptance-testing framework. With these frameworks, if the specification is changed then it will automatically affect the test. In the approach using Test Case work items that I have outlined in this chapter, the link is only a logical one. If the specification is changed, the underlying test will not be changed automatically.
Tip It can be difficult to find a Product Owner, domain expert, or other stakeholder who is interested in learning and using an acceptance-testing framework. Unfortunately, I’ve found that it’s more common that the Product Owner or stakeholders just tell the Developers what they want and leave the decision of selecting the testing framework up to the Developers. If this were a straight “how” decision, I would agree, but acceptance testing is also about understanding the criteria, features, scenarios, samples, and so forth. These are all “what” items that must involve the Product Owner and possibly the stakeholders. As with any tool or practice, the Developers should experiment and try out a framework for a few Sprints and then embrace, enhance, or abandon it after discussing its value during Sprint Retrospective.
Fabrikam Fiber Case Study
The Developers have just started practicing ATDD. Shifting the testing activities left, prior to coding, was difficult at first, but they quickly realized the benefit of not having to delay testing or refactoring their tests. Some tests are still manual, but the Developers continue to improve their technical excellence and will be writing more and more automated acceptance tests in the future.
The Developers are also evaluating SpecFlow, a popular .NET acceptance testing framework. Compared to the other frameworks, it is far easier to integrate testing with the entire team. SpecFlow stories are written in plain language, which is appreciated by Paula and the domain experts. The fact that SpecFlow integrates with Visual Studio is also a plus. Visit https://specflow.org for more information.
Within ATDD, the Developers can employ any development practices that they choose to. Any practice should strive to minimize waste while allowing the Developers to develop something fit for the desired purpose. Beyond those basic rules, Developers should encourage one another to try new approaches to designing, coding, and testing. The usefulness of these experiments can be discussed at the Sprint Retrospective and then related practices can be embraced, enhanced, or abandoned in future Sprints.
The most popular ATDD “inner loop” practice is test-driven development (TDD). TDD suggests coding in short, repeatable cycles where the Developers (or pair, or mob) first write a failing unit test. The failing test specifies a small unit of desired functionality in the PBI. Next, the test is made to pass by adding the minimum amount of code required. Finally, the code is refactored to patterns or to meet any standards, such as the Definition of Done. The cycle repeats for the next unit of functionality—by starting with another failing unit test.
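A minimal illustration of one red-green-refactor cycle, sketched in Python rather than the .NET frameworks mentioned earlier (the function and its truncation rule are invented for the example):

```python
# Red: write a failing test first for a small unit of desired functionality.
def test_title_is_truncated_to_twenty_chars():
    assert truncate_title("A very long tweet title indeed") == "A very long tweet..."

# Green: add the minimum amount of code required to make the test pass.
def truncate_title(title: str, limit: int = 20) -> str:
    if len(title) <= limit:
        return title
    return title[: limit - 3] + "..."

# Refactor: clean up the code, then repeat with the next failing unit test.
test_title_is_truncated_to_twenty_chars()
```

Before `truncate_title` existed, the test failed; once the minimal implementation passes it, the code can be refactored safely and the next failing test written.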
Tip ATDD can sometimes be confused with TDD. It’s akin to confusing ADHD with ADD, but I digress. One way to keep the two sorted in your mind is that unit tests (TDD) ensure that the Developers build the thing right, whereas acceptance tests (ATDD) ensure that the Developers build the right thing.
One of the tenets of TDD is that you do not write a single line of application code until you have written a test that fails in the absence of that code. Advocates of TDD explain that this practice will force requirements to be made clearer and misunderstandings and mistakes to be caught earlier. Developers will also gravitate toward architectures and design patterns that are more testable and easier to refactor. Another nice side effect of adopting TDD is that the Developers will typically end up with more code coverage than before!
The strongest argument in favor of TDD is that it uses tests as technical product requirements. Because the Developer must write a test before writing the code under the test, they are forced to understand the requirements and filter out any ambiguity in order to define the test. This process, in turn, directs Developers to think in small increments and in terms of reuse. As a result, unnecessary code is identified and removed as a clear design emerges.
TDD enables continual refactoring in order to keep the code clean and also to keep technical debt at bay. Having high-quality, fast, repeatable unit tests also provides a safety net—much like having high-performance brakes on a car. Both enable the operator to go fast and take risks. For example, assume Developers have high-quality unit tests that cover a high percentage of their code. When refactoring or experimenting, Developers can immediately see failing test results caused by any side effects. A nice safety net like this provides confidence, as well as the ability to code faster, by reducing the number of side effects and bugs that can be introduced accidentally.
Fabrikam Fiber Case Study
The Developers know TDD, understand its value, and are comfortable practicing it. They have decided as a team that they don’t see value in using it for all coding. If a specific scenario involves a lot of design work or involves working on a critical or highly complex area of the application, the Developers will pair up and use TDD to design their way through it.
Professional Scrum Developers agree that automated testing is awesome and is a must-have for software development. Even one of the Agile Manifesto’s principles demands “continuous attention to technical excellence…” This applies to automated acceptance testing as well. Short of the Product Owner or a stakeholder manually inspecting the work—assuming that is in the Definition of Done—almost every scenario that requires human verification could be covered through an automated acceptance test. It may not be easy, but by adopting an automated acceptance-testing practice, the Developers will be able to use these tests throughout the Sprint for ATDD, as well as later, for regression testing. More on that in a bit.
Note Some Developers feel that, though possible, there would be a diminishing return on investment for automating all acceptance tests. An example would be a situation where the Developers want to automate the acceptance of user interface (UI) controls being lined up, font types and sizes being consistent, and so on. Manual or exploratory testing would be better suited for this. My guidance is that if you don’t have an automated test and have instead opted for a manual acceptance test, it had better be for a good reason—and not just because it’s “easier.” Let my guidance soak in and then discuss it in an upcoming Sprint Retrospective.
As I’ve already mentioned, a Test Case is just another work item type. Test cases can be very lightweight, such as having only a title and a description. These work items would merely serve as extra points of documentation. Some Test Case work items might morph to become manual tests, including the actual test steps and expectations. Other Test Case work items can be associated with an automated test, such as a unit test. These are the ones that ATDD practitioners should be using.
Azure Test Plans does not support an end-to-end, automated ATDD solution out of the box. It does, however, provide the foundation for building one yourself. By using Test Case work items, automated tests, and Azure Pipelines, Developers can practice ATDD by using automated acceptance tests.
Next, I’ll show you how that’s done. You’ll follow these high-level steps:
1. Create a Test Case work item in the current Sprint’s test plan. Make sure it’s associated with the correct test suite (and thus the correct PBI). You can set the Automation Status field to Planned to help identify these tests.
2. Use Visual Studio to create an automated acceptance test and associate it to the Test Case work item.
3. Check in/push the test project code into Azure Repos.
4. Create a build pipeline in Azure Pipelines to generate a build that contains the test binaries that support the acceptance test.
5. Create a release pipeline in Azure Pipelines to run the automated test.
6. Configure test plan settings and select the respective build and release pipelines.
7. Create a build.
8. Run the test case.
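The first step above, setting a Test Case’s Automation Status to Planned, can also be scripted in bulk. The sketch below only builds the request pieces that the Azure DevOps work item REST API expects: a JSON Patch body and a PATCH URL. The Microsoft.VSTS.TCM.AutomationStatus field and the endpoint shape are standard Azure DevOps, but treat this as an illustration rather than a supported script; the organization and project names are placeholders.

```python
import json

def automation_status_patch(status: str = "Planned") -> list:
    """Build the JSON Patch body for updating a Test Case work item's
    Automation Status field via the Azure DevOps REST API."""
    return [
        {
            "op": "add",
            "path": "/fields/Microsoft.VSTS.TCM.AutomationStatus",
            "value": status,
        }
    ]

def patch_url(organization: str, project: str, work_item_id: int) -> str:
    """Build the PATCH URL for a work item update."""
    return (
        f"https://dev.azure.com/{organization}/{project}"
        f"/_apis/wit/workitems/{work_item_id}?api-version=7.0"
    )

# Example: the request body and URL for a hypothetical Test Case #42.
# Send with Content-Type: application/json-patch+json and a PAT for auth.
body = json.dumps(automation_status_patch())
url = patch_url("fabrikam", "fiber", 42)
```

The same two helpers work for any work item field, which makes it easy to flip a whole Sprint’s worth of test cases to Planned in one loop.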
After creating the Test Case work item, you will then associate it to the automated test. This is done in Visual Studio and assumes, of course, that you have an automated acceptance test. Supported testing frameworks include MSTest, xUnit, and NUnit. Other test types that use these frameworks, such as Selenium and SpecFlow, should also work. For more information, read the FAQ at https://aka.ms/test-case-automation-faq.
With the Visual Studio test project open and a connection to the Azure DevOps project established, you can associate the automated test to the Test Case work item. This is performed in the Test Explorer window, as you can see in Figure 7-9. You will need to know the identifier (work item ID) of the test case.
FIGURE 7-9 Use Test Explorer to associate an automated test to a Test Case work item.
After associating the automated test, you will see that the Test Case work item’s Automation Status field entry changes to Automated. At this point, the automated test name, storage (assembly filename), and test type become visible on the Associated Automation page of the Test Case work item as well. Clicking the Clear button on that page will remove the associated automation, should you need to reset things. Only one automated test may be associated with each test case. For more information, visit https://aka.ms/test-case-automation.
Tip When it comes to a naming convention for your automated tests, my recommendation is to have one! There are many ways to name your test projects, assemblies, namespaces, test classes, and test methods. Some Developers like to follow a strict BDD format, whereas others are fine just using clear names that describe the context and expected behaviors. For help getting started, visit www.stackoverflow.com to view the latest conversations on the subject.
Next, ensure that the test project exists in Azure Repos and that the latest changes, including the passing test code, have been pushed. A pipeline must also be created that builds the test project containing the acceptance test. This pipeline does not need to run the automated acceptance tests—it only has to generate the test binaries. More than likely, the build pipeline is not the right environment to run an acceptance test anyway.
For continuous integration (CI) builds to be effective, they should provide fast feedback to the Developers—who tend to be impatient. Having a CI build run lengthy integration and acceptance tests is counterproductive to the goal of fast feedback. Refer back to Chapter 2, “Azure DevOps,” for overviews of Azure Repos and Azure Pipelines.
In the context of continuous delivery, however, continuous integration must also mandate running and passing all integration and acceptance tests in production-like environments. I distinguish these two types of CI builds by referring to them as CI (unit tests only) and CI+ (all automated testing).
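One lightweight way to realize the CI/CI+ split is to gate the slower suites behind an environment flag, so the same test project serves both kinds of build. The sketch below is a generic Python illustration; the CI_PLUS variable is an invented convention, not an Azure Pipelines built-in, and a .NET team would express the same idea with test categories or traits instead.

```python
import os
import unittest

def ci_plus() -> bool:
    """True when the pipeline opts in to the full (CI+) test pass."""
    return os.environ.get("CI_PLUS", "").lower() in ("1", "true", "yes")

class UnitTests(unittest.TestCase):
    # Fast tests: always run, keeping the CI feedback loop short.
    def test_arithmetic(self):
        self.assertEqual(2 + 2, 4)

class AcceptanceTests(unittest.TestCase):
    # Slow tests: skipped unless the CI+ pipeline sets CI_PLUS=true.
    @unittest.skipUnless(ci_plus(), "acceptance tests run only in CI+ builds")
    def test_end_to_end_scenario(self):
        self.assertTrue(True)  # placeholder for a real acceptance test
```

The CI pipeline leaves the variable unset and gets fast feedback; the CI+ pipeline sets it and pays the full cost in a production-like environment.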
You’ll also need to create a release pipeline, which will support the actual running of the acceptance test. This pipeline will use the build pipeline’s artifacts as its source and will need at least one stage to represent the testing environment where the acceptance test is run. That computer will need the correct version of Visual Studio installed in order to run the automated test. As an alternative to installing Visual Studio, the pipeline can leverage the Visual Studio Test Platform Installer task to install the prerequisite software as part of the workflow.
The release pipeline definition must include a Visual Studio Test task, configured to select tests using a test run. This setting instructs Azure Test Plans to pass the list of tests selected for execution to Azure Pipelines. The Visual Studio Test task will look up the test run identifier, extract the test execution information such as the container and test method names, run the tests, update the test run results, and set the test points associated with the test results in the test run.
Tip The default pipeline names can be less than clear. Make sure to use simple, meaningful names for the pipeline and pipeline artifacts, such as the stages. This will help other Developers quickly identify the build, release, and deployment environment.
If the acceptance test is a UI test, such as a Selenium test that runs on a physical browser, you must ensure that the agent is set to run as an interactive process with auto-logon enabled. Setting up an agent to run interactively must be done before deploying the release to a stage. If you are running UI tests on a headless browser, the interactive process configuration is not required. For more information on configuring interactive agents, visit https://aka.ms/pipeline-agents.
With the pipelines created and a build generated, you’ll need to return to the Test Plans hub and configure the Sprint’s test plan settings. Select the build pipeline that generates the build that contains the test binaries along with a specific build number to test. You can also leave it set to <latest build> to let the system automatically use the most recent build when tests are run. Also, select the release pipeline and stage within which to run the test. You can see my selections in Figure 7-10.
FIGURE 7-10 Configure a test plan to run automated tests in a pipeline.
To run the automated acceptance test, select the test suite that contains the automated test case and go to the Execute page (as opposed to the Design page). Choose the test(s) you want to run and click one of the Run options. Selecting Run With Options provides the most control, allowing you to override the defaults and select a different build pipeline, release pipeline, or release stage. You can see the notification that is displayed after running an automated test case in Figure 7-11.
FIGURE 7-11 Run an automated test case in Azure Test Plans.
Assuming the pipelines were configured correctly and the test binaries were built and deployed to that stage, the system will create a release for the selected release pipeline, create a test run, and then trigger the deployment of that release to the selected stage. The Visual Studio Test task will execute and, when it has completed, provide Developers with a pass or fail outcome for that acceptance test.
After triggering the test run, you can go to the Runs page to view the test progress and analyze any failed tests. Test results have the relevant information for debugging failed tests such as the error message, stack trace, console logs, and any attachments. You will notice that the test run’s title contains the release name (for example, TestRun_Fabrikam_Release-42). The summary includes a link to the release that was created to run the tests, which helps in finding the release that ran the tests if you need to come back later and analyze the results. You can also use this link if you want to open the release and view the release logs.
Fabrikam Fiber Case Study
Paula wants to move to a continuous delivery (CD) model in the near future. She wants each PBI to be released to the production servers as they meet the Definition of Done. The Developers know that this is only possible through automated acceptance testing, and they are investing in tooling and training to be able to do just that.
As I visit with software development teams, I realize that there is confusion about the concept of acceptance. For example, a common misconception I hear is that acceptance is performed by the users (known as user acceptance testing). In Scrum, this is never true—only the Developers do the work, and this means all work, including testing. Another common misconception is that having passing acceptance tests is equivalent to the PBI being Done. This is not necessarily true. Having passing acceptance tests only proves that the acceptance criteria have been satisfied. It does not necessarily mean that all aspects of the Definition of Done have been completely satisfied. Other items might have to be completed, such as creating documentation or a release note.
If one of the items in a Definition of Done relates to the Product Owner “accepting” or “liking” or “loving” the work created by the Developers, then acceptance testing and Product Owner acceptance will be two distinct activities. Table 7-1 lists some common misconceptions about acceptance.
TABLE 7-1 Common misconceptions about acceptance.
Misconception | Why it’s a misconception
Passing acceptance tests is equivalent to the PBI being Done. | The Developers are done when the Definition of Done has been met, which hopefully includes acceptance testing and Product Owner acceptance or delight.
Passing acceptance tests is equivalent to the PBI being accepted. | Assuming the Definition of Done includes some form of Product Owner acceptance, only the Product Owner can accept a PBI, whereas any Developer can run or pass acceptance tests.
Acceptance tests must be run by the stakeholders (users). | In Scrum, only the Developers do the work, which includes testing. If the stakeholders want an opportunity to provide feedback, they should be given that chance, especially at the Sprint Review. They don’t, however, get access to the “red button” indicating that a particular PBI is not Done.
Acceptance tests must be manual tests. | Almost all scenarios can be verified using automated tests. For the sake of regression testing, having fast, high-quality, automated acceptance tests is highly recommended. Continuous delivery demands this.
Acceptance can occur only at Sprint Review. | Assuming the Definition of Done includes some form of Product Owner acceptance, this can occur at any time during the Sprint and should take place at the earliest opportunity. In fact, a PBI can even be released to production at any time during the Sprint. Sprint Reviews are about stakeholder feedback; it’s a smell if a Product Owner is inspecting PBIs for the first time during a Sprint Review.
I believe that a Professional Scrum Team is one where the Product Owner is regularly involved with the Developers. I like seeing a Product Owner have regular engagement with the Developers and inspecting the PBIs and Increment throughout the Sprint. Unfortunately, I still meet teams where the Product Owner is just another stakeholder at Sprint Review, seeing the functionality for the first time along with everyone else. This pains me.
Even having a strong Definition of Done where the Product Owner must “accept” an item is no guarantee that they will. Putting it off until Sprint Review is even riskier. Professional Scrum Developers know this and will pursue a more collaborative style of working to ensure that the Product Owner provides their feedback earlier in the Sprint. This is especially important if the Definition of Done uses verbiage like “Product Owner likes…” or “Product Owner is delighted by…” This kind of subjective testing is difficult to capture in an executable specification and impossible to automate. This means that Product Owner acceptance, in this form, will always be a carbon-based test, meaning that the Product Owner themselves will need to put eyes and fingers on it.
Tip Product Owners are just people, and people have a hard time specifying their wants and desires. This is especially true with something as abstract (and invisible) as software. This means that you can only count on people telling you what they don’t like after inspecting it. Scrum embraces this fact, and so should you. For example, if you and your colleagues are working on a PBI and have just finished designing the user interface, have the Product Owner—and even some stakeholders—look at it and give a nod before any additional work is spent wiring it up. What’s the harm? Be open and have some courage!
Fabrikam Fiber Case Study
Although Paula is a busy Product Owner, the Developers are fortunate that she works in the same building and makes herself regularly available for questions and feedback. In order to help secure her availability, their Definition of Done includes an item about Paula liking their work. As the Developers make progress on a PBI, they make sure Paula likes what they are doing. The same is true when the Developers are brainstorming complex plumbing or UX designs. Paula wants the product to be the best for her users, and she knows that her regular involvement will produce better results. When it comes time to “accept” the work, the odds are Paula has already accepted it, just not in so many words. It’s because of this collaborative ethic and mindset that the Developers believe that a continuous delivery model is within their reach.
As I’ve already discussed, when the Developers set up their testing for a Sprint, they need to create a test plan and then add the appropriate test suites and test cases. Generally speaking, each forecasted PBI should have as many test cases as it has acceptance criteria, multiplied by the types of path testing and configurations the Developers plan to perform. By design, a single Test Case work item can be associated with multiple PBIs. For example, you might create a generic test case that verifies that a page request returns a response in 5 seconds or less. Since that is such a common acceptance criterion, you might want to reuse this test case for other PBIs in this and later Sprints. You can easily do so by simply adding an existing Test Case work item to a test suite.
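The generic response-time criterion lends itself to a reusable automated check. This is a hedged sketch, not tied to Azure Test Plans: it times an arbitrary request callable against a configurable threshold, with the 5-second default mirroring the example above. The callable is supplied by the caller, so the same check can be reused across PBIs with different limits.

```python
import time

DEFAULT_LIMIT_SECONDS = 5.0  # the generic acceptance criterion from the text

def response_within_limit(request_fn, limit: float = DEFAULT_LIMIT_SECONDS) -> bool:
    """Invoke a page-request callable and report whether it completed
    within the limit. Pass a tighter limit (for example, 3.0) for a PBI
    with stricter acceptance criteria."""
    start = time.perf_counter()
    request_fn()  # in a real test, something like an HTTP GET of the page
    elapsed = time.perf_counter() - start
    return elapsed <= limit

# Usage: a fake "page request" that sleeps briefly stands in for HTTP here.
assert response_within_limit(lambda: time.sleep(0.01))
```

Because the limit is a parameter rather than baked into the test, a PBI needing a 3-second response gets its own call site instead of a tweak that silently affects every suite referencing the shared test case.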
When reusing a test case like this, be aware that if you tweak the test case to better support the current Sprint’s PBI (such as making it 3 seconds instead of 5), those changes will affect all instances of that test case. This is the nature of having test plans simply reference a test case. Microsoft also recognized the need for a true copy and has included the ability to copy test cases and even entire test plans.
When copying a test case, you can specify the destination project, destination test plan (such as the current Sprint), and destination test suite in which to copy the test case(s). An option to include existing links and attachments is also available. You can also copy an entire test plan—test suites, test cases, and all. When doing so, you can choose to reference the existing test cases or make deep copies of them. Figure 7-12 shows the options available for copying a test plan.
FIGURE 7-12 Azure Test Plans provides the ability to copy an entire test plan.
Smell It’s a smell when I see Developers repeatedly using the Copy Test Plan feature. Perhaps their testing efforts are very similar from Sprint to Sprint and PBI to PBI, but it could also be that the Developers are not completing their work and simply carrying over PBIs to the next Sprint. Ideally there would be no need to copy the last Sprint’s test plan, because the new/current Sprint would be all new PBIs with all new acceptance criteria requiring all new test cases. Exceptions will exist, of course.
Regression testing is the rerunning of acceptance tests for done PBIs to ensure that the Increment still meets the Definition of Done and is releasable. These regression tests could be from the current Sprint as well as previous Sprints. Having a solid foundation of valuable unit tests will help, but because any change to the codebase could potentially cause instability in the Increment, it’s important for Developers to continuously perform acceptance regression testing as well. This means that test cases from previous Sprints should be executed in the current Sprint to ensure the integrity of the Increment.
Deciding which of the test cases to run during regression testing is the difficult part. A team may have hundreds of test cases. The team should consider this in Sprint Planning, but also throughout the Sprint as more is learned. As far as how to organize the test cases into a regression suite, that’s actually quite straightforward, as you will see.
Tip Some Professional Scrum teams will actually add the item “Select applicable regression tests” to their Definition of Done. This will ensure that a done PBI will already have regression tests selected. It could be that some PBIs don’t require any regression tests, whereas others may require all of their acceptance tests to be used for regression. Test cases that cover brittle areas, high technical debt areas, and areas of core/critical functionality are good candidates for regression testing. For Sprints with a high number of regression tests, making sure they are fast and reliable automated acceptance tests is important.
Once a team has discussed and identified the test cases that should be used for regression, they should edit each Test Case work item and add a “Regression” tag. If there are multiple sets or types of regression tests, different tags could be used (such as “Regression-A,” “Regression-UI,” “Regression-Financial”). Figure 7-13 shows that three of the five test cases for the Twitter feed PBI have been tagged for regression.
FIGURE 7-13 Use tags to identify existing test cases as regression tests.
The next step is to make those regression tests visible in the new Sprint’s test plan. You can easily do so by creating a Query-based suite configured to show all Test Case work items that contain a “Regression” tag. You could further restrict the test cases returned by adding additional criteria (area, iteration, state, automation status, etc.) to the query. Figure 7-14 shows the creation of a Query-based suite for regression tests.
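The query behind such a suite can also be expressed in WIQL, the work item query language Azure DevOps uses. The sketch below only builds the query text; System.WorkItemType, System.Tags, and System.IterationPath are standard Azure DevOps fields, but the helper function itself is invented for illustration.

```python
from typing import Optional

def regression_suite_wiql(tag: str = "Regression",
                          iteration: Optional[str] = None) -> str:
    """Build a WIQL query returning Test Case work items that carry the
    given tag, optionally narrowed to one iteration (Sprint)."""
    clauses = [
        "[System.WorkItemType] = 'Test Case'",
        f"[System.Tags] CONTAINS '{tag}'",
    ]
    if iteration:
        clauses.append(f"[System.IterationPath] UNDER '{iteration}'")
    return "SELECT [System.Id] FROM WorkItems WHERE " + " AND ".join(clauses)

# Example: the query for a UI-specific regression suite in a given Sprint.
query = regression_suite_wiql(tag="Regression-UI", iteration="Fabrikam\\Sprint 7")
```

The same helper covers the multiple-tag scheme mentioned above (“Regression-A,” “Regression-UI,” and so on) by passing a different tag per suite.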
FIGURE 7-14 Use a Query-based suite to list those test cases tagged for regression.
Going forward, this Query-based test suite will always show an up-to-date list of test cases tagged as regression tests. To add or remove a test from the regression suite, a Developer adds or removes the “Regression” tag from the respective Test Case work item. The Developers may want to adopt a test case naming convention that prefixes the test case name with the PBI name or abbreviation to make them easier to identify in a flat, alphabetical list.
One downside to this approach is that the regression test suites are not static. There won’t be a history of which specific regression tests applied to which Sprint. For example, if the Developers are currently on Sprint 7 and its regression test suite contains 20 test cases, when they open Sprint 3’s regression test suite, it will show the same 20 test cases. If the Developers want to keep a static, historical record of each Sprint’s regression tests, they should create a Static test suite and then manually add all of those regression Test Case work items before closing each Sprint. This way, the membership will remain static, even though tags continue to change in future Sprints.
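The snapshot idea is easy to model: copy the dynamic query’s current membership into a frozen collection at Sprint close, so later tag changes don’t rewrite history. A minimal sketch, with test cases modeled as (id, tags) pairs purely for illustration:

```python
def regression_members(test_cases, tag="Regression"):
    """The dynamic view: whichever test cases carry the tag right now."""
    return sorted(tc_id for tc_id, tags in test_cases if tag in tags)

def snapshot_suite(test_cases, tag="Regression"):
    """The static view: freeze the current membership (e.g. at Sprint close)."""
    return tuple(regression_members(test_cases, tag))

cases = [(1, {"Regression"}), (2, set()), (3, {"Regression", "UI"})]
sprint3 = snapshot_suite(cases)         # frozen at Sprint close: (1, 3)
cases[1] = (2, {"Regression"})          # tag added in a later Sprint
assert regression_members(cases) == [1, 2, 3]  # dynamic view changed
assert sprint3 == (1, 3)                # the snapshot did not
```

This is exactly the difference between a Query-based suite (recomputed on every open) and a Static suite (membership fixed when the Developers add the items).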
Fabrikam Fiber Case Study
Until continuous delivery becomes a reality, one or more Sprints’ worth of finished, tested work can get stacked up, waiting for release. Given the amount of technical debt in the codebase—and the fear it generates—it’s important for the Developers to ensure that they don’t compromise the integrity of their Done functionality. For this reason, they are adopting a rigorous regression testing approach. They will tag Test Case work items so that they can have a dynamic list of regression tests each Sprint. They will also create a static test suite of regression tests each Sprint so that they won’t miss any important regression tests prior to releasing the Increment.
Every Scrum Team tests their work in some way to ensure that the results are acceptable. These tests can be manual or automated. They can be created before or during development. They can be created by one Developer and run by another. In other words, there are many ways Developers can approach testing. One thing is common across all Scrum Teams—in order to ensure that the Increment is constantly releasable, the Definition of Done should include some form of validation, verification, testing, or other quality checks.
Starting with this assumption, this chapter has proposed shifting the authoring of those acceptance tests earlier and then using them as the Sprint plan, which also provides a way to inspect progress during the Sprint. This is a completely optional proposition. Although Professional Scrum Developers don’t need to plan a Sprint using acceptance tests, I hope by now you can see the benefits. In doing so, I also hope that you see the benefits in using Azure Test Plans with its tight integration with Azure Boards and Azure Pipelines as the framework for supporting acceptance test-driven development.
Here is a checklist for using acceptance tests to represent the Sprint plan, to provide focus, and to inspect progress:
Scrum
❒ Capture acceptance criteria in PBIs.
❒ Identify which PBIs are forecasted for the Sprint. This is an output of Sprint Planning.
❒ Identify acceptance criteria that are common across the forecasted PBIs.
Azure DevOps
❒ Create a new test plan for the current Sprint. Ensure the start and finish dates match the Sprint.
❒ Create a Requirement-based suite for each forecasted PBI, reordering them according to the Backlog Priority.
❒ Create a Query-based suite that includes all Test Case work items containing a “Regression” tag. This assumes your team has previously tagged some Test Case work items as regression tests. Ensure that the list of regression test cases is accurate.
❒ Create/add/import any test cases to support acceptance criteria testing for each Requirement-based suite (PBI). The grid view is your friend for quickly pasting in test case titles and even steps. Consider a naming convention to be able to quickly identify the PBI when looking at the test case title.
❒ Ensure all Test Case work item Iteration fields are set to the current Sprint.
❒ For those Test Case work items that will be automated, set their Automation Status field to Planned. This will help identify them in lists and queries.
❒ For those Test Case work items that will not be automated, specify test steps and expected results. Consider a minimal amount of detail here—just what is sufficient to assure the tests are repeatable and deliver consistent results. Pairing two Developers is preferred over one Developer writing detailed testing instructions for the other.
❒ Use Visual Studio to connect automated acceptance tests to their respective Test Case work items.
❒ Use Azure Pipelines to create a build that compiles the test binaries.
❒ Use Azure Pipelines to create a release pipeline to run the test in a specific stage (environment).
❒ As a team, work down the list of Requirement-based suites (PBIs)—in Backlog Priority order—collaborating/swarming, test by test, to turn those lights from red to green.
❒ Use one of the visualization methods to gain insight into the quality of the Increment and the progress of the team.
❒ Tag the appropriate Test Case work items as “Regression” tests as late as responsible—preferably during the Sprint, while the context is fresh.
Here are the key concepts I covered in this chapter:
■ Acceptance criteria The Product Owner’s or stakeholders’ definition of success for a given PBI.
■ Acceptance test A manual or automated test, created and run by the Developers to verify that an aspect of the PBI is done. Acceptance tests typically map to individual acceptance criteria.
■ Test plan A work item type that Developers can use to organize their tests. For Scrum Teams, a test plan typically maps to a Sprint.
■ Static suite A simple test suite that only has a name.
■ Requirement-based suite A test suite that maps to a specific PBI work item and is meant to contain acceptance tests for just that PBI.
■ Query-based suite A dynamic test suite that returns Test Case work items that meet specific criteria, such as Tag = “Regression.”
■ Test case A work item type that Developers can use to further specify a PBI’s acceptance criteria in the form of an acceptance test. Test cases can be either manual or automated. Professional Scrum Developers prefer automated acceptance tests.
■ Grid view A two-dimensional data-entry view, similar to Microsoft Excel, that a Developer can use to quickly create or edit Test Case work items.
■ Test point A unique combination of a test case, a test suite, a test configuration, and a tester. Test points are associated with a test run.
■ OData Open Data Protocol is a standard for building and consuming RESTful APIs, such as the data from Azure DevOps Analytics. Tools like Power BI can consume OData via an OData feed.
■ Acceptance test-driven development (ATDD) The practice of defining executable specifications in the form of failing automated tests prior to writing any application code. ATDD helps Developers build the right thing.
■ Test-driven development (TDD) The practice of writing unit tests prior to writing a functional piece of code. When the code is designed to pass the test, refactoring occurs and then another TDD cycle. TDD helps Developers build the thing right.
■ Associated automation Test Case work items can be associated with an automated acceptance test supported by MSTest, xUnit, or NUnit. The association is performed within Visual Studio and the test is executed within a release pipeline.
■ SpecFlow The most popular third-party acceptance-testing framework for .NET.
■ Acceptance Acceptance testing and Product Owner acceptance are two separate activities. A Definition of Done should include some form of Product Owner acceptance or delight. The Product Owner should never accept work that hasn’t passed acceptance tests.
■ Regression testing The preferred practice of running test cases from prior Sprints to ensure the integrity of the Increment remains even as the codebase changes.