Review and evaluate test execution documents such as test results, defect reporting and tracking records, test completion metrics, trouble reports, input/output specifications. (Evaluate)
BODY OF KNOWLEDGE VI.D
Both the IEEE Standard for Software and System Test Documentation (IEEE 2008a) and the ISO/IEC/IEEE Software and Systems Engineering—Software Testing—Part 3: Test Documentation standard (ISO/IEC/IEEE 2013) describe a set of basic project-level software test documents. These standards recommend the form and content for each of the individual test documents, not the required set of test documents. There are a variety of different kinds of test documents, as illustrated in Figure 23.1:
Figure 23.1 Types of testing documentation.
Test execution and results documentation is necessary because it records the testing process and helps software testers and management know what needs to be accomplished, the status of the current testing process, what has and has not been tested, and how the testing was done. This test documentation also helps maintainers and other practitioners understand the testing performed by others, and supports the repeatability of tests during debugging and regression testing. All test documentation should be an outgrowth of the normal testing process, not an end in itself. Testers should think carefully about the purpose of each test document and whether it makes the final testing outcome better; if it does not, the tester must determine whether that documentation is value-added and necessary. Experience is required to strike a proper balance between too much and too little test documentation.
Test Execution
Test execution is the process of actually executing tests and observing, analyzing, and recording the results of those tests. Test execution activities include the testers setting up the test environment, executing the test cases and procedures, observing and analyzing the software’s behavior, and recording the results.
Test Execution Documentation—Test Case
Test cases, also called test scripts, are the fundamental building blocks of testing and are the smallest test unit to be directly executed during testing. Test cases are typically used to test individual paths in a source code module, interfaces, or individual functions or sub-functions of the software. Test cases define the inputs to be applied, the expected outputs or results, and any environmental needs for executing the test, each of which is discussed below.
Each test case should include a description and may be assigned a priority. Test cases should always be traceable back to the requirements being tested. Depending on the level of testing, test cases may also trace back to design elements and/or code components. The IEEE Standard for Software and System Test Documentation (IEEE 2008a) states that each test case should have a unique identifier and include special procedural requirements and inter-case dependencies as part of its test case outline.
Test cases can be defined in a document, in a spreadsheet, in automated test code, or by using automated testing tools. The level of rigor and detail needed in a documented test case will vary based on the risk and/or integrity level of the item being tested, as well as the needs of the project and testing staff.
Figure 23.2 illustrates two examples of test case documentation: one simple example for unit test cases and one more rigorously documented example for a system test case.
The input specification part of the test case defines the requirements for the inputs into the test case. This input specification should be as generic as possible, giving the tester flexibility in choosing the actual input values for each execution of the test case.
Figure 23.2 Test case—examples.
For example, if the input is a range of values, instead of specifying a specific number such as 7 or 73, the input specification might read “input an integer from one through one hundred.” In this case, the actual number chosen as input for each execution of the test case is recorded as part of the test log information to allow reproducibility of that specific test execution (and the reproducibility of any problem or anomaly identified). In other cases, when specific values are required as inputs into the test case, those specific inputs must be specified. For example, when performing boundary testing on that same range of values, the boundary values of 0, 1, 100, and 101 must be specified in the test cases.
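To make this concrete, the following minimal sketch shows such boundary-value test cases written as automated test code. The validate_quantity function and the pytest style are illustrative assumptions, not taken from the standard.

    import pytest

    def validate_quantity(n: int) -> bool:
        # Hypothetical unit under test: accepts integers 1 through 100.
        return 1 <= n <= 100

    # The boundary values 0, 1, 100, and 101 are specified explicitly,
    # each paired with its expected output.
    @pytest.mark.parametrize("value, expected", [
        (0, False),    # just below the lower boundary
        (1, True),     # lower boundary
        (100, True),   # upper boundary
        (101, False),  # just above the upper boundary
    ])
    def test_quantity_boundaries(value, expected):
        assert validate_quantity(value) == expected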
The expected output part of the test case defines the expected results from the execution of the software with the specified inputs. This might include specific outputs (for example, reports to the printer, flashing lights, error messages, or specific numeric values). Alternatively, if inputs are selected from a range or set of possible inputs, this might include criteria for determining the expected outputs based on the selected input.
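As a sketch of such criteria-based expected outputs, the following hypothetical example selects an input at execution time and checks the output against criteria derived from that input; sort_records and the input ranges are assumptions for illustration only.

    import random

    def sort_records(values):
        # Hypothetical unit under test.
        return sorted(values)

    def test_sort_with_selected_input():
        # Generic input specification: 20 distinct integers from 0-999.
        data = random.sample(range(1000), k=20)
        print(f"actual input selected: {data}")  # recorded for the test log
        result = sort_records(data)
        # Expected-output criteria based on the selected input:
        assert result == sorted(data)       # output is in order
        assert set(result) == set(data)     # no records added or lost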
The environmental needs (setup instructions) part of the test case defines any special test bed requirements for running the test case. For example, if it is a throughput-type requirement, there may be a need to attach a simulator that emulates multiple transaction inputs. Another example might be a requirement to run the test case while the temperature is over 140 degrees Fahrenheit for reliability testing.
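Environmental needs can also be captured as executable setup and teardown in automated tests. The sketch below assumes a hypothetical TransactionSimulator test bed component and pytest fixtures; neither comes from the source.

    import pytest

    class TransactionSimulator:
        # Hypothetical test bed component that emulates multiple
        # concurrent transaction inputs for a throughput test case.
        def __init__(self, streams: int):
            self.streams = streams
        def start(self):
            pass
        def stop(self):
            pass

    @pytest.fixture
    def loaded_test_bed():
        sim = TransactionSimulator(streams=50)  # environmental need: 50 streams
        sim.start()
        yield sim
        sim.stop()  # restore the environment after the test case runs

    def test_throughput_under_load(loaded_test_bed):
        # The throughput-type test case executes against the loaded test bed.
        assert loaded_test_bed.streams == 50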
Test Execution Documentation—Test Procedure
A test procedure, also called a test script or test scenario, is a collection of test cases intended to be executed as a set, in a specified order, for a particular purpose or objective. For example, test procedures are used to evaluate a sequence or set of features by chaining together test cases to test a complete user scenario or usage thread within the software. The “pay at the pump with a valid credit card” test procedure, for example, might chain together test cases such as accepting a valid credit card, validating that card with the credit card clearinghouse, and the subsequent fueling and receipt steps.
Alternative procedures might chain many of these same test cases together with other test cases and/or substitute other test cases into the chain. For example, the “pay at the pump with a valid debit card” test procedure would add an “accept valid PIN” test case after the first test case and use a “validate a valid debit card with credit card clearinghouse” test case in place of the “validate a valid credit card with credit card clearinghouse” test case in the above procedure.
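A minimal sketch of this chaining follows, using hypothetical placeholder functions for the pay-at-the-pump test cases; the names mirror the example above but are not from the source.

    def accept_valid_card():
        pass  # first test case in both procedures

    def accept_valid_pin():
        pass

    def validate_credit_card_with_clearinghouse():
        pass

    def validate_debit_card_with_clearinghouse():
        pass

    def dispense_fuel_and_print_receipt():
        pass

    # Each procedure is an ordered chain of test cases.
    CREDIT_CARD_PROCEDURE = [
        accept_valid_card,
        validate_credit_card_with_clearinghouse,
        dispense_fuel_and_print_receipt,
    ]

    # The debit variant adds a PIN test case after the first test case
    # and substitutes the debit-card validation test case.
    DEBIT_CARD_PROCEDURE = [
        accept_valid_card,
        accept_valid_pin,
        validate_debit_card_with_clearinghouse,
        dispense_fuel_and_print_receipt,
    ]

    def run_procedure(steps):
        for step in steps:
            step()  # execute the test cases in the specified order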
The documentation of a test procedure defines the purpose of the procedure, any special requirements, including environmental needs or tester skills, and the detailed instructional steps for setting up and executing that procedure. Each test procedure should have a unique identifier so that it can be easily referenced. The IEEE Standard for Software and System Test Documentation (IEEE 2008a) says the procedural steps should include: log, set up, start, proceed, measure, shut down, restart, stop, wrap up, and contingencies.
Test procedures, like test cases, can be defined in documents, in a spreadsheet, in automated test code, or using automated testing tools. Like test cases, the level of detail needed in a documented test procedure will also vary based on the needs of the project and testing staff.
Test Execution Results Documentation—Test Log
A test log , also called a test execution log , is a chronological record that documents the execution of one or more test cases and/or test procedures in a testing session. The test log can be a very useful source of information later when the authors are having trouble reproducing a reported problem, or the tester is trying to replicate test results. For example, sometimes it is not the execution of the last test case or procedure that caused the failure, but the sequence of multiple test cases or procedures. The log can be helpful in repeating those sequences exactly. It may also be important to understand the time of day or day of the week that the tests were executed, the test environment configuration, or what other applications were running in the background at the time the failed test was executed.
The test log information can be logged manually by the testers into a document or spreadsheet, directly into the test cases/procedures, or into a testing tool. If test execution is automated, some or all of the test log information may be captured as part of the outputs recorded and reported by that automation. The test log should include generic information about each test execution session, including the identity of the tester(s), the date and time of the session, the items being tested and their versions, and the configuration of the test environment.
As the individual tests are executed during the session, specific chronological information about the execution of each test case and/or procedure is also logged, including the identifier of the test case or procedure, the time of its execution, the actual input values used, the results observed, the pass/fail outcome, and references to any problem reports that were opened.
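The following sketch shows one way such a log might be structured in code; the field names are illustrative assumptions, not a required format.

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import Optional

    @dataclass
    class TestLogEntry:
        test_id: str                # test case or procedure identifier
        executed_at: datetime       # supports reconstructing the exact sequence
        actual_inputs: dict         # specific values chosen for this run
        outcome: str                # "pass" or "fail"
        problem_report_id: Optional[str] = None  # any report opened

    @dataclass
    class TestSession:
        tester: str
        environment: str            # test environment configuration
        started_at: datetime
        entries: list = field(default_factory=list)

        def log(self, entry: TestLogEntry) -> None:
            self.entries.append(entry)  # preserves chronological order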
Test Execution Results Documentation—Problem Report
A problem report, also called an incident report, anomaly report, bug report, defect report, error report, failure report, issue, trouble report, and so on, documents a failure or other anomaly observed during the execution of the tests. Problem reports may be part of the formal configuration status accounting information if the item under test is baselined, or they may be reported informally if the item is not baselined. A problem report should include information about the identified failure or anomaly with enough detail so that the appropriate change control board can analyze its impact and make informed decisions, and/or so that the author can investigate, replicate, and resolve the problem, if necessary. A description of the information recorded in a problem report type change request is discussed in Chapter 26.
Problem report descriptions should be objective, factual, and detailed enough that the problem can be reproduced, investigated, and resolved by others.
The contents and tracking of problem reports are further discussed in Chapter 26 . While problem reports can be documented on paper forms, most organizations use some kind of automated problem reporting tool, or database, to document these reports and track them to resolution. A project’s V&V plans should specify which problem reporting and tracking mechanisms are used by that project.
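As a sketch only, a problem reporting tool might store records along these lines; the specific fields an organization requires are defined by its own process (see Chapter 26), and the names below are assumptions.

    from dataclasses import dataclass

    @dataclass
    class ProblemReport:
        report_id: str            # unique identifier for tracking
        summary: str              # objective description of the anomaly
        steps_to_reproduce: str   # enough detail to replicate the failure
        observed_result: str      # what actually happened
        expected_result: str      # what the test case said should happen
        severity: str             # input to change control board decisions
        status: str = "open"      # tracked through to resolution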
Test Execution Results Documentation—Test Results Data
Test results data provide the objective evidence of whether a specific test case or test procedure passed or failed test execution. Test results data are compared with the expected results, as defined in the test cases, and evaluated to determine the outcome of the test execution.
Test Execution Results Documentation—Test Status Report
Test status reports provide ongoing information about the status of the testing portion of a software project over time. Status reports are used to give management visibility into the testing process, including variances between planned and actual results, the status of risks (including new or changed risks), and issues that may require corrective action. Test status reports may be written or may be given verbally (for example, verbal status reports are given as part of the daily meeting of an agile team).
Test Execution Results Documentation—Test Metrics
Test management metrics, along with product and process quality metrics, are fundamental to tracking and controlling the testing activities of the project. They also provide management with the information needed to make better-informed decisions. Specific measurement reports for selected test metrics may be stand-alone reports (Chapter 19 included an example of a system test sufficiency dashboard) or may be included in test status reports or test completion reports.
Test management metrics report the test activities’ progress against the test plans. Test management metrics provide visibility into trends in current progress, indicate the accuracy of test planning estimates, flag needed control actions, can be used to forecast future progress, and provide data to indicate the completeness of the testing effort. Examples of test management metrics include the number of test cases planned versus executed, passed, and failed over time, and planned versus actual testing effort, cost, and schedule, as sketched below.
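For illustration, a simple sketch of computing progress metrics of this kind; the counts and field names are hypothetical.

    def progress_metrics(planned: int, executed: int, passed: int) -> dict:
        # Simple plan-versus-actual test management metrics.
        return {
            "percent_executed": 100.0 * executed / planned,
            "percent_passed": 100.0 * passed / executed,
            "tests_remaining": planned - executed,
        }

    # Example: 400 test cases planned, 300 executed so far, 270 passed.
    print(progress_metrics(400, 300, 270))
    # {'percent_executed': 75.0, 'percent_passed': 90.0, 'tests_remaining': 100}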
Product quality metrics collected during testing can provide insight into the quality of the products being tested and the effectiveness of the testing activities. Product quality metrics also provide data to indicate the completeness of the testing effort, and the product’s readiness to transition to the next software life cycle activity or to be released. Examples of product quality metrics include defect density, defect arrival and closure rates, the number and severity of defects remaining open, and mean time between failures.
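Defect density is one such metric; the sketch below, with illustrative numbers, computes it as defects found per thousand lines of code.

    def defect_density(defects_found: int, size_kloc: float) -> float:
        # Defects found per thousand lines of code (KLOC).
        return defects_found / size_kloc

    print(defect_density(46, 12.5))  # 3.68 defects per KLOC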
Process quality metrics can help evaluate the effectiveness and efficiency of the testing activities and identify opportunities for future process improvement. Examples of test process quality metrics include defect removal efficiency, the percentage of defects detected in each testing phase, and the effort or cost expended per defect found.
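As one example, defect removal efficiency can be computed as the fraction of all known defects that testing caught before release; the counts below are illustrative.

    def defect_removal_efficiency(found_in_test: int,
                                  found_after_release: int) -> float:
        # Fraction of total known defects caught by testing before release.
        return found_in_test / (found_in_test + found_after_release)

    print(defect_removal_efficiency(90, 10))  # 0.9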
Test Execution Results Documentation—Test Completion Report
A test completion report, also called a test report or test summary report, summarizes the results and conclusions from a major cycle of testing, or from a designated set of testing activities, after the completion of that testing. For example, there may be an integration test completion report for each subsystem, a system test completion report, an acceptance test completion report, or other test completion reports, as appropriate. At a minimum, every test plan should have an associated test report. A project’s V&V plans should specify which test reports will be written for that project.
A test report includes a summary of the testing cycle or activities and their results, including any variances between what was planned and what was actually tested. The summary also includes an evaluation, by the testers, of the quality of the work products that were tested, and of their readiness (and associated risks) to transition to the next stage of development or to be released into operations. The report also includes a detailed, comprehensive assessment of each item tested, including any limitations on the testing performed, any unresolved anomalies, and the risks those anomalies carry.
Finally, the test report includes a summary of the test activities and related information, including test measures, testing budget, effort, and staffing actuals, the number of test cases and procedures executed, and other information that would be beneficial when evaluating lessons learned and when planning and estimating future testing efforts.