Chapter 16. Test Automation Design Patterns and Approaches

Since writing our first book, we’ve met more and more teams that have achieved their test automation goals. This is great news. However, it is still hard to learn how to automate tests and probably always will be. Current automation frameworks enable teams to automate tests at all levels in a syntax that business experts can understand. Yet difficulties such as sporadic test failures due to timing issues in user interface (UI) tests continue to plague us. The test automation “hump of pain,” as described in Agile Testing (p. 266), still looms as an imposing barrier to teams that are new to test automation.

Teams that adopted agile found existing functional test automation frameworks and drivers wanting, so they created their own. Then, many of them open-sourced those tools. Today we have many sophisticated functional test tools designed for use in agile projects. Testers and programmers doing acceptance-test-driven development (ATDD)/specification by example (SBE)/behavior-driven development (BDD) no longer have cause to envy the cool tool sets available to test-driven development (TDD) practitioners.

We need to evolve our tests and processes to take advantage of the tool changes. Our products will change, which likely means changes to existing test structures. Adam Knight (Knight, 2014) suggests asking ourselves some questions, such as, “Is our test structure extensible if we need to support new interfaces?” Over the years, and through the ability to develop and extend the test harnesses, Adam has been able to add in support for parallel execution and include iterative scaling of test execution and multiple server runs. We hear about experiences like Adam’s more frequently with each year that passes. Each team must be prepared to take on similar challenges.

Involve the Whole Team

The whole-team approach to testing and quality is possibly most critical when it comes to automating tests. Automated tests are code. Not only do they protect us against regression failures, but they help us to document our production code, telling us exactly what our system does. As we’ve mentioned before, they deserve the same care and feeding as our production code.

When testers own test automation, they must spend large portions of time writing test scripts for stories in the current iteration, investigating test failures, and maintaining the existing automated tests so they continue to work as the production code is updated. There’s often little time left for crucial activities such as exploratory testing. Programmers who aren’t automating functional tests have no incentive to create testable code because they don’t feel the pain of code that’s not automation friendly.

When the whole team is involved in test automation, the programmers recognize how they can make their code testable. For example, they can design the code with different layers, each of which can be tested independently. For a web-based application, simply using unique identifiers for HTML elements rather than using dynamic naming makes automating UI tests easier.

It is important for the whole team to own, and see the value of, automation, so that the work can be shared where it makes the most sense. It makes sense for the people who are best at writing code to write the test code. We do know people who self-identify as testers who are also excellent coders and do a great job of designing automated tests. However, on most teams, the people with the most coding experience are the programmers, and their skills can be used for the test code. Creating tests and writing the code to make them run are two different skill sets.

Testers are good at knowing which tests to specify and which tests to change if existing functionality is being changed. Collaborating with each other to implement test automation makes sense (see the section later in this chapter on “Testing through the UI”). As we said in Agile Testing (pp. 300–01), team members in other roles, such as system administrators and database administrators, also contribute to good automation solutions.

Starting Off Right

As more test automation frameworks and drivers dazzle us, it’s tempting to go the, “Oooh! Shiny! And it would look good on my résumé!” route. However, as Markus Gärtner advises (Gärtner, 2012), each team must first decide what their tests should look like. This takes lots of thought and experimentation. You need to answer many questions, such as, “Who needs to be able to read the tests? Who will specify them? Who will automate them? Into what continuous integration (CI) tool will they be integrated? Who will maintain them?”

Liz Keogh (Keogh, 2013a) suggests that teams get certain capabilities in place before “heading down the tools path”: an eye on the big picture, the ability to question and explore, the ability to spot and embrace uncertainty, as well as having great relationships between people. Learn how to have conversations to elicit examples of desired behavior before deciding how to encapsulate them into a particular tool. This keeps the focus on collaborating with business experts and helps the team creatively find what works best for their situation.

Create a domain-specific language (DSL) that will let your team guide development with customer-facing tests. A good place to start is by getting examples from your domain experts (see Chapter 11, “Getting Examples,” for more about that). Then, and only then, look for the right tools to help you create tests that business users can read; these in turn become executable specifications. By starting with Quadrant 2 tests (business-facing tests that guide and support development), the programmers will continue to use the same language throughout their unit tests and production code. This continuity creates a common language that helps to provide living documentation—documentation that is guaranteed to be up-to-date, as long as the tests are passing. As a bonus, these tests are incorporated into test suites run by your CI process, where they also protect your end users from regression failures. See the bibliography for Part VI, “Test Automation,” for resources on learning more about DSLs.
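As an illustration, here is a minimal sketch of such a DSL in Python. The OrderDsl class, its fluent methods, and the free-shipping threshold are all hypothetical examples, not any particular framework's API:

```python
# A minimal sketch of a domain-specific language for customer-facing
# tests, using a hypothetical order/free-shipping domain.

class OrderDsl:
    """Expresses a test in the language of the business domain."""
    FREE_SHIPPING_THRESHOLD = 75.00  # hypothetical business rule

    def __init__(self):
        self.items = []

    def with_item(self, name, price):
        self.items.append((name, price))
        return self  # fluent style keeps the test readable

    def order_total(self):
        return sum(price for _, price in self.items)

    def qualifies_for_free_shipping(self):
        return self.order_total() >= self.FREE_SHIPPING_THRESHOLD

# The test below reads close to the business expert's own words:
order = OrderDsl().with_item("headphones", 60.00).with_item("cable", 20.00)
assert order.qualifies_for_free_shipping()
```

Because the test speaks in domain terms (items, totals, free shipping) rather than implementation details, a business expert can read it and confirm it captures the rule.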

Design Principles and Patterns

In Agile Testing we noted that principles of good code design apply to test code as well as to production code. Principles such as Don’t Repeat Yourself (DRY) help avoid duplication and ensure that when something changes in the system under test (SUT), only one test component needs to be updated. The Arrange-Act-Assert pattern (Ottinger and Langr, 2009a) is commonly used in unit tests but applies to higher-level acceptance tests as well. In this pattern, you arrange the context by creating an object and setting its values, act by executing some method, and assert that the expected result was returned. The software community has continued to evolve design principles and patterns that reduce the cost of writing and maintaining automated test scripts.
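For instance, a unit test following Arrange-Act-Assert might look like this Python sketch, where the Account class is a hypothetical stand-in for the system under test:

```python
# A minimal sketch of the Arrange-Act-Assert pattern in a unit test.
# The Account class is a hypothetical system under test.

class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

def test_deposit_increases_balance():
    # Arrange: create the object and set its values
    account = Account(balance=100)
    # Act: execute the method under test
    account.deposit(50)
    # Assert: verify the expected result was returned
    assert account.balance == 150

test_deposit_increases_balance()
```

The same three-part shape scales up to acceptance tests: arrange the preconditions, act through the API or UI, and assert on the observable outcome.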

Whether you are automating at the unit, API, or UI level, look for ways to improve your test design to keep long-term maintenance costs to a minimum while getting fast and useful feedback. Simple steps such as documenting your test design patterns for your team or development organization can help ensure consistency and maintainability, as well as enable others to understand the structure of the tests.

Table 16-1 shows some basic design rules we think are important to keep tests maintainable. It is by no means exhaustive, but it can give you a good start on experimenting to see what works best for your team.

Table 16-1 Simple Rules to Live By for Automating Tests

Testing through the API (at the Service Level)

When you start out to automate business-facing tests that guide development, spend time with the stakeholders and the delivery team deciding how you want the tests to look. The tests should be useful and understandable to all who need to use them. Remember that these tests will provide valuable living documentation about how the system behaves, if they continue to pass when changes are made.

Figure 16-1 shows how API-level testing frameworks generally work. The diagram doesn’t reflect the overhead needed to create test libraries and abstractions. Rather, we want to draw attention to the magic in the middle piece—the “glue,” or test method. When testers and programmers collaborate to determine what the tests should look like, amazing things happen to enhance the shared understanding of the story. Once you have that, writing the automation code is a smaller effort.

Figure 16-1 API test structure

Figure 16-2 is an example of a test for a simple login. There are two tests: one for the happy path, a valid username and password; and one with an invalid username. The input data that is passed in the first test to the TestLogIn test method would be <JanetGregory, Validpwd1>. It is then passed through as input variables to the production code. The expected result <Access System as Janet Gregory> is what will be compared to the actual results coming back from the production code via the test framework. Of course, how the data is passed back and forth is something testers and coders must discuss. This collaboration on what is being passed, and how to represent it, is where the magic happens.

Figure 16-2 API test example

Chapter 11, “Getting Examples,” has more on testing below the UI and discusses how to guide development with examples. There are multiple patterns that can be used to work with this structure. One example is the Ports and Adapters pattern (Cockburn, 2005) discussed in Agile Testing (p. 112).

Testing through the UI

In recent years, the agile community has come up with more effective patterns and practices for designing automated tests that deliver a great ROI. This is true for all levels of automation, including the regression tests that run through the application’s UI.

Automating tests that exercise the application’s UI continues to pose the most difficult challenges. Many teams have invested time and money to automate UI tests, only to find the long-term maintenance cost overwhelming. Over the years, better approaches have evolved. Gojko Adzic (Adzic, 2010b) recommends thinking about UI automation at three levels:

- Business rule or functionality level: what the test is demonstrating or exercising—for example, get free shipping within the continental United States with an order totaling a certain amount

- UI workflow level: what high-level activities the user has to do in the UI to exercise the functionality being tested—for example, create an order whose total amount qualifies for free shipping

- Technical activity: the technical steps to exercise the functionality in the test—for example, open a page, type text in input fields, click buttons

This approach can be implemented via several popular testing frameworks, using step definitions and scenarios, keywords, and, with some additions at the business level, Page Objects. Alister Scott also shares his experience with the three-level split approach, with examples of executable specifications and living documentation using Cucumber (Scott, 2011a).
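One way to picture the three-level split is the following Python sketch. Every name in it is hypothetical; in a real project the business-rule level might be a Cucumber scenario, and the technical level real Selenium calls against a browser:

```python
# A sketch of the three-level split for UI automation. All names are
# illustrative; the FakeBrowser simulates the technical layer.

# Level 3 -- technical activity: open pages, type text, click buttons.
class FakeBrowser:
    def __init__(self):
        self.cart_total = 0.0
    def open_page(self, url): pass
    def type_text(self, field, text): pass
    def click(self, button): pass

# Level 2 -- UI workflow: high-level activities the user performs.
def create_order_with_total(browser, total):
    browser.open_page("/new-order")
    browser.type_text("amount", str(total))
    browser.click("submit")
    browser.cart_total = total

def shipping_cost_shown(browser):
    # Hypothetical rule: free shipping on orders of $75 or more.
    return 0.0 if browser.cart_total >= 75.0 else 9.99

# Level 1 -- business rule: what the test demonstrates.
def test_large_orders_ship_free():
    browser = FakeBrowser()
    create_order_with_total(browser, total=80.00)
    assert shipping_cost_shown(browser) == 0.0

test_large_orders_ship_free()
```

The business-rule level never mentions pages or buttons, so when the UI changes, only the lower layers need updating.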

Page Objects, and page resources for non-object-oriented frameworks, can be used to encapsulate all the things that are testable on each page of a UI. The Page Object (see Figure 16-3) includes all the functionality to interact with the SUT via third-party test libraries such as Selenium. It is most applicable where pages and activities are well aligned. When most activities span multiple screens or pages, or where a single screen does multiple activities, the Page Object may not be a good fit, and that may lead to maintenance problems.

Figure 16-3 Page Object pattern
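To make the pattern concrete, here is a minimal Python sketch of a Page Object. The FakeDriver is a stand-in; with a real browser it would be a third-party library such as Selenium WebDriver, and the locators would be the page's unique element IDs:

```python
# A minimal sketch of the Page Object pattern. FakeDriver simulates
# just enough of a UI driver; the element IDs are hypothetical.

class FakeDriver:
    def __init__(self):
        self.fields = {}
        self.logged_in = False
    def fill(self, element_id, value):
        self.fields[element_id] = value
    def click(self, element_id):
        if element_id == "login-button":
            self.logged_in = self.fields.get("password") == "Validpwd1"

class LoginPage:
    """Encapsulates everything testable on the login page, so tests
    depend on this class rather than on the page's HTML details."""
    def __init__(self, driver):
        self.driver = driver
    def log_in_as(self, username, password):
        self.driver.fill("username", username)
        self.driver.fill("password", password)
        self.driver.click("login-button")

# The test script stays short and readable; if the page's layout
# changes, only LoginPage needs updating, not every test.
driver = FakeDriver()
LoginPage(driver).log_in_as("JanetGregory", "Validpwd1")
assert driver.logged_in
```

This is the maintenance payoff of the pattern: the knowledge of how to interact with each page lives in exactly one place.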

Jeff “Cheezy” Morgan created an open-source Page Object Ruby Gem for testing browser-based applications (see the “Tools” section of the bibliography for the link), and there are other implementations for different programming languages. It can be used in testing web applications, desktop applications, mobile applications, and even mainframes. See Chapter 20, “Agile Testing for Mobile and Embedded Systems,” for Cheezy’s story of how he automates testing mobile apps.

Cheezy explains how the Page Object pattern can be used with the three-layered approach (Morgan, 2014):

PageObject is a great pattern for building an abstraction over the system under test. You still need to provide a place where you clearly express the business rules to be verified, the workflows or paths to be taken through the application to complete the behavior, and the data needed by the application to complete the workflow. I look at these three additional parts of the test as very distinct. I use a BDD tool like Cucumber to express the business rules or behavior and other libraries to provide the navigation and tests. I find that this separation of concerns makes my test code much cleaner and easier to adapt as the application changes.

Your team should decide how you want your tests to look and then experiment with different patterns and approaches to see what works best.


Lisa’s Story

Our team’s product was a web application that originally had a thin client. The business logic was all on the server side, and we automated regression tests for it at the API level. Our UI test tool worked through the HTTP layer, so we sometimes had trouble with client-side events, but it worked well to cover the UI regression testing.

Later, our team came up with new front-end code that we knew would help reduce costly user mistakes, but our UI test tool couldn’t “see” the event. We felt it was too risky to implement it without automated regression tests, so it was time to find a new UI test driver and framework.

The Page Object pattern provided an appealing way to create maintainable UI tests. We kicked off a series of "bake-offs" to identify the best approach that used the Page Object and enabled the testers and coders to collaborate closely. We used a set-based-development-style approach. First, we agreed on how we'd like our tests to look. We considered a given_when_then style, which our product owner liked. But knowing that he preferred not to look at the detailed test cases, we decided to stick with an assertion format close to that of our existing tests.

Next, two people each tried a different set of tools for a proof of concept and shared their results with the rest of the team. It took a couple of rounds and a lot of time, but the investment paid off. The whole team chose the best tool set for our needs, and we brought in an expert coach to help us get a good start. The learning curve took a few weeks, but soon we were writing new tests quickly and enjoying a short feedback loop from maintainable tests.

Some people might think that taking time to experiment with how tests look, and the best framework to create them, is too expensive. But I can tell you from experience, it’s much more expensive over the long term if you’re trying to make do with the wrong test format and framework. Tests will fail more frequently, and every test failure will take longer to diagnose and correct. You will spend more and more time refactoring.


Lisa’s experience illustrates the value of team members with different specialties and skill sets collaborating to find the most appropriate automation solutions for their team. One of the main proponents of the Page Object on her team was the lead system administrator, who is also a programmer and a proponent of useful test automation. His story follows.

See Appendix A, “Page Objects in Practice: Examples,” for Tony’s examples of how to implement the Page Object pattern, with code snippets and technical details. See the Part VI bibliography for more resources, including one from Anand Ramdeo on handling common problems with the Page Object model (Ramdeo, 2013).

Test Maintenance

As we noted earlier, you need to apply good design practices to create automated test code whose value exceeds its cost of maintenance. If you’re new to test automation, Dale Emery’s paper on how to write maintainable automated tests is a good place to start (Emery, 2009). Remember, this is an area where both programming and testing expertise is required, so collaborate and experiment to evolve test automation that works for your team.

Many teams automate and then forget about the tests, as long as they are passing. Over time, though, you may notice that the tests take longer and longer to run. Automated tests require continual attention. You may want to watch for duplication or places for refactoring because you find a better way. You may discover gaps in your automated test coverage. By giving visibility to your automated test results, you can continually monitor and ensure that they are still doing what you expect. Today’s test frameworks, whether you are testing at the service level or through the UI, can make writing automated tests fast and easy, so be aware that that isn’t the end of the story. You have to maintain those tests over time, make sure they return accurate results, and keep their feedback loop short. There are many areas to think about beyond writing the initial test.

Data management is an important part of test maintenance and often doesn’t get the respect it deserves. There are libraries to generate randomized but realistic test data, such as Ruby Faker and jdefault (see the “Tools” section of the bibliography for links). Experiment with different options for creating and managing data for automated tests to use. Make sure you have a way to reproduce any issues uncovered by automated tests using randomly generated data.
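For example, one simple way to keep randomly generated data reproducible is to log the seed with every run. This Python sketch uses only the standard library rather than a data-generation tool, and the record fields are hypothetical:

```python
# A sketch of reproducible random test data: log the seed with each
# run, so a failure can be replayed with the identical data set.
import random

def make_test_user(rng):
    """Generates a randomized but realistic-looking user record."""
    first = rng.choice(["Janet", "Lisa", "Adam", "Gojko"])
    number = rng.randint(1000, 9999)
    return {"username": f"{first}{number}", "password": f"Pwd{number}!"}

def generate_users(count, seed=None):
    seed = seed if seed is not None else random.randrange(2**32)
    print(f"test data seed: {seed}")  # record the seed in the test log
    rng = random.Random(seed)
    return [make_test_user(rng) for _ in range(count)]

# Re-running with the logged seed reproduces the identical data set.
assert generate_users(5, seed=42) == generate_users(5, seed=42)
```

When a randomized test fails, the logged seed lets anyone regenerate exactly the data that provoked the failure.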

Another aspect of maintainability is ease of debugging test failures. It’s important to be able to drill down into test failures to quickly identify exactly what went wrong. It can be helpful for test scripts to write information to log files for more in-depth debugging when appropriate. Note that if you have kept your tests to a single purpose, you may not need as much logging information, because it is simpler to find the failure.
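As a sketch of this idea, a small assertion helper can capture diagnostic detail only when a test fails, so the log stays quiet on green runs. The helper and log file name here are hypothetical:

```python
# A sketch of logging diagnostic detail only on failure, using the
# standard library logging module. The file name is illustrative.
import logging

logging.basicConfig(filename="test_failures.log", level=logging.INFO)

def check(description, actual, expected):
    if actual != expected:
        # Capture exactly what went wrong before raising the failure.
        logging.error("%s: expected %r, got %r",
                      description, expected, actual)
        raise AssertionError(
            f"{description}: expected {expected!r}, got {actual!r}")

check("order total", actual=150, expected=150)  # passes silently
```

A failure message that names the check and shows both values shortens the drill-down from a red build to the exact line of behavior that changed.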

Summary

The same design principles, such as Don’t Repeat Yourself (DRY), that help create robust production code also apply to test code. How we develop our tests will determine how much value we get out of them.

- Automated test code is as valuable as production code. It makes sense for the team’s programmers to collaborate with testers to write that automated test code.

- In this chapter we discussed different design principles and patterns, including a three-level approach for UI automation and the Page Object pattern. Experiment with different principles and patterns to find what applies best to your tests.

- Understand how to collaborate to take advantage of tools that automate through the API/service level and beneath the UI layer.

- Data management for automated tests is critical, so spend time creating the right data strategy for your team or organization.

- Monitor the costs of maintaining automated regression tests, and take steps to keep them timely, with a positive return on investment.