Define and distinguish between functional and physical configuration audits and how they are used in relation to product specifications. (Understand)
BODY OF KNOWLEDGE VII.D
Software configuration management (SCM) audits provide independent, objective assurance that the SCM processes are being followed, that the software configuration items are being built as required, and that, at delivery, the software products are ready to be released. By using standardized checklists tailored to the specifics of each project, these SCM audits can be more effective, efficient, and consistent, and they can also aid continual process improvement.
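One lightweight way to realize a standardized-but-tailorable checklist is to keep a master list of items that carry applicability tags and filter that list for each project. The sketch below shows such a structure; the item wording echoes the FCA/PCA checklists later in this chapter, but the data structure, tag names, and function names are illustrative assumptions, not a prescribed format.

```python
# Minimal sketch of a standardized SCM audit checklist that can be tailored
# per project by filtering on applicability tags (an assumed convention).
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChecklistItem:
    question: str
    evidence_techniques: List[str]
    applies_to: List[str] = field(default_factory=lambda: ["all"])

MASTER_CHECKLIST = [
    ChecklistItem(
        question="Does the code implement all, and only, the documented requirements?",
        evidence_techniques=[
            "Evaluate requirements-to-code traceability information",
            "Sample source code modules against their allocated requirements",
        ],
    ),
    ChecklistItem(
        question="Have all third-party licensing requirements been met?",
        evidence_techniques=["Review the license inventory against shipped components"],
        applies_to=["third-party-code"],
    ),
]

def tailor(checklist: List[ChecklistItem], project_tags: List[str]) -> List[ChecklistItem]:
    """Keep items that apply to every project or match one of the project's tags."""
    return [item for item in checklist
            if "all" in item.applies_to or set(item.applies_to) & set(project_tags)]

# Example: a project with no third-party code gets the shorter, tailored list.
print(len(tailor(MASTER_CHECKLIST, ["in-house-only"])))  # -> 1
```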
Two types of SCM audits are typically performed:
- Functional configuration audit (FCA): an independent evaluation of the completed software products to determine their conformance to their requirements specification(s) in terms of completeness, performance, and functional characteristics
- Physical configuration audit (PCA): an independent evaluation of each configuration item to determine its conformance to the technical documentation that defines it
Functional and physical configuration audits differ from process audits of the SCM processes. Process audits, as discussed in Chapter 8, are ongoing, independent evaluations conducted to provide information about compliance to processes and about the effectiveness and efficiency of those processes. The primary purpose of functional and physical configuration audits is to maintain the integrity of configuration baselines. These audits verify that the baselines are complete, correct, and consistent in relation to the functional and physical specifications that were agreed to by the stakeholders. They also verify that changes approved through configuration control were correctly implemented and that no unauthorized changes have occurred.
This section of the Certified Software Quality Engineer Body of Knowledge discusses configuration audits. However, for organizations that do not require a high level of rigor or independence, configuration reviews, rather than audits, can be conducted by project personnel to accomplish the same objective of maintaining the integrity of configuration baselines. For example, Moreira (2010) says, “In general, most agile teams will find the traditional CM audit approach as too heavy, even if they find value in knowing the level of integrity of their baselines.” For agile development, the configuration review could be based on the iteration baseline and incorporated into the agile review at the end of the iteration. Combining these reviews by automating the build and configuration reporting processes could keep the process lean and remove the potential redundancy of having separate baseline and end-of-iteration reviews.
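As one concrete illustration of automating the build and configuration reporting described above, the short sketch below generates an iteration-baseline report that could be reviewed at the end-of-iteration review. It assumes the project is kept in a git repository and that each iteration is closed with a tag such as iteration-12; both the tag convention and the report layout are assumptions made for the example, not practices taken from the text.

```python
# Minimal sketch: generate an iteration-baseline report for review at the
# end-of-iteration review. Assumes the project lives in a git repository and
# that each iteration is closed with a tag (for example, "iteration-12").
import json
import subprocess
from datetime import datetime, timezone

def iteration_baseline_report(tag: str) -> dict:
    """Collect the configuration items captured by an iteration tag."""
    commit = subprocess.run(
        ["git", "rev-list", "-n", "1", tag],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    items = subprocess.run(
        ["git", "ls-tree", "-r", "--name-only", tag],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return {
        "baseline": tag,
        "commit": commit,
        "generated": datetime.now(timezone.utc).isoformat(),
        "configuration_items": items,
    }

if __name__ == "__main__":
    # Print the report so it can be attached to the end-of-iteration review record.
    print(json.dumps(iteration_baseline_report("iteration-12"), indent=2))
```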
FUNCTIONAL CONFIGURATION AUDIT (FCA)
According to the ISO/IEC/IEEE Systems and Software Engineering—Vocabulary (ISO/IEC/IEEE 2010), a functional configuration audit (FCA) is “an audit conducted to verify that:
- The development of a configuration item has been completed satisfactorily
- The item has achieved the performance and functional characteristics specified in the functional or allocated configuration identification
- The item’s operational and support documents are complete and satisfactory.”
A functional configuration audit is performed to provide an independent evaluation that the as-built, as-tested system/software and its deliverable documentation meet the specified functional, performance, quality attribute, and other requirements. An FCA is conducted at least once during the life cycle, typically just before the final ready-to-beta-test or ready-to-ship review, and provides input information into those reviews. However, FCAs can also be conducted throughout the life cycle at checkpoints to verify the proper transition of the requirements into successor work products. An FCA is essentially an independent review of the system’s/software’s verification and validation data to make certain that the deliverables meet their completion and/or quality requirements, and that they are sufficiently mature for transition into the next phase of development, or into either beta testing or operations, depending on where in the life cycle the FCA is conducted. It should be noted that it is not the role of the FCA to verify the product’s conformance to the requirements; that is the role of verification and validation. The role of the FCA is to make sure that all of the requirements have been completely implemented and tested, through the examination of development and testing records and other objective evidence.
Table 27.1 illustrates an example of an FCA checklist and proposes possible objective evidence-gathering techniques for each checklist item. While several suggested evidence-gathering techniques are listed for each checklist item, the level of rigor and scope of the audit will dictate which of these techniques (or other techniques) the auditors will actually use. The level of rigor chosen for each SCM audit should be based on a trade-off analysis of cost, schedule, product, and risk. For example, when evaluating whether the code implements all, and only, the specified requirements, a less rigorous approach would be to evaluate the traceability matrix, while a more rigorous audit might examine sampled source code modules, comparing them against their allocated requirements.
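As a sketch of the less rigorous technique mentioned above, the code below checks a requirements-to-code traceability matrix for completeness in both directions: requirements with no implementing module and modules not traced to any requirement. The matrix format and all identifiers are hypothetical.

```python
# Minimal sketch: check a requirements-to-code traceability matrix for
# completeness in both directions. The matrix format (requirement ID mapped
# to a list of source modules) and the identifiers are hypothetical.
from typing import Dict, List, Set

def check_traceability(matrix: Dict[str, List[str]],
                       requirements: Set[str],
                       modules: Set[str]) -> dict:
    """Report forward gaps (untraced requirements) and backward gaps
    (modules not traced to any requirement, i.e., possibly unauthorized code)."""
    traced_reqs = {req for req, mods in matrix.items() if mods}
    traced_mods = {mod for mods in matrix.values() for mod in mods}
    return {
        "untraced_requirements": sorted(requirements - traced_reqs),
        "untraced_modules": sorted(modules - traced_mods),
        "unknown_requirements_in_matrix": sorted(set(matrix) - requirements),
    }

# Hypothetical example data.
matrix = {"REQ-1": ["billing.py"], "REQ-2": []}
print(check_traceability(matrix,
                         requirements={"REQ-1", "REQ-2"},
                         modules={"billing.py", "debug_hook.py"}))
```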
Table 27.1 FCA checklist items and evidence-gathering techniques—example.
For each FCA checklist item, suggestions for evidence-gathering techniques are listed beneath it.
1. Do the product-level requirements implement all, and only, the documented and authorized system/software stakeholder-level requirements?
- Evaluate stakeholder-level to product-level requirements forward and backward traceability information for completeness and to verify that only authorized functionality has been implemented.
- Sample a set of source code modules and compare them against their allocated requirements.
2. Does the design implement all, and only, the documented system/software requirements?
- Evaluate requirements-to-design forward and backward traceability information for completeness and to verify that only authorized functionality has been implemented.
- Sample a set of requirements and, using the traceability information, review the associated design for implementation completeness and consistency.
- Sample a set of approved enhancement requests and review their resolution status (or, if approved for change, evaluate their associated design for implementation completeness and consistency).
3. Does the code implement all, and only, the documented system/software requirements?
- Evaluate requirements-to-source code module forward and backward traceability information for completeness and to verify that only authorized functionality has been implemented.
- Sample a set of requirements and, using the traceability information, review the associated source code modules for implementation completeness and consistency.
- Sample a set of approved enhancement requests and review their resolution status (or, if approved for change, evaluate their associated source code modules for implementation completeness and consistency).
4. Can each system/software requirement be traced forward into tests (test cases, procedures, scripts) that verify that requirement?
- Evaluate requirements-to-tests traceability information for completeness.
- Sample a set of requirements and, using the traceability information, review the associated test documentation for adequacy of verification by making certain that each requirement has the appropriate level of test coverage.
5. Is comprehensive system/software testing complete, including functional testing, interface testing, and the testing of required quality attributes (performance, usability, safety, security, and so on)?
- Review approved verification and validation reports for accuracy and completeness.
- Evaluate approved test documentation (for example, test plans, defined tests) against test results data (for example, test logs, test case status, test metrics) to confirm adequate test coverage of the requirements and system/software.
- Execute a sample set of test cases to confirm that the observed results match those recorded in the test reporting documentation, to evaluate the accuracy of the test results.
6. Are all the problems reported during testing adequately resolved (or the appropriate waivers/deviations obtained and known defects with work-arounds documented in the release notes)?
- Review a sample set of approved test problem report records for evidence of adequate resolution.
- Sample a set of test problem report records and review their resolution status (or, if approved for change, evaluate their associated source code modules for implementation completeness and consistency).
- Review regression test results data (for example, test logs, test case execution outputs/status, test metrics) to confirm adequate test coverage after defect correction.
7. Is the deliverable documentation consistent with the requirements and the as-built system/software?
- Review minutes and defect resolution information from peer reviews of deliverable documentation for evidence of consistency.
- Evaluate approved test documentation (for example, test plans, defined tests) against test results data (for example, test logs, test case execution outputs/status, test metrics) to confirm adequate test coverage of the deliverable documentation during test execution.
- Review a sample set of updates to previously delivered documents to confirm consistency with the requirements and the as-built system/software.
8. Are the findings from peer reviews incorporated into the software deliverables (system/software and/or documentation)?
- Review records from major milestone/phase gate reviews that verified the resolution of defects identified in peer reviews.
- Review a sample set of peer review records for evidence of the resolution of all identified defects.
- Review a sample set of minutes from peer reviews and evaluate the defect lists against the associated work products to confirm that the defects were adequately resolved.
9. Have approved corrective actions been implemented for all findings from in-process SCM audits?
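Checklist item 4 in Table 27.1 asks whether every requirement can be traced forward into at least one test that verifies it. The sketch below shows one way an auditor might automate that completeness check; the CSV layouts (a requirements file with an id column, and a test file with test_id and verifies columns) are illustrative assumptions about how the traceability data might be exported.

```python
# Minimal sketch for checklist item 4 of Table 27.1: confirm every requirement
# traces forward into at least one test case. Assumed CSV layouts:
# requirements.csv has an "id" column; tests.csv has "test_id" and a "verifies"
# column containing semicolon-separated requirement IDs.
import csv
from collections import defaultdict

def load_requirement_ids(path: str) -> set:
    with open(path, newline="") as f:
        return {row["id"].strip() for row in csv.DictReader(f)}

def load_test_links(path: str) -> dict:
    links = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            for req in row["verifies"].split(";"):
                if req.strip():
                    links[req.strip()].append(row["test_id"])
    return links

def untested_requirements(req_path: str, test_path: str) -> list:
    """Requirements with no test case tracing to them (forward-traceability gaps)."""
    links = load_test_links(test_path)
    return sorted(r for r in load_requirement_ids(req_path) if not links.get(r))

if __name__ == "__main__":
    print(untested_requirements("requirements.csv", "tests.csv"))
```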
PHYSICAL CONFIGURATION AUDIT (PCA)
According to the ISO/IEC/IEEE Systems and Software Engineering—Vocabulary (ISO/IEC/IEEE 2010), a physical configuration audit (PCA) is “an audit conducted to verify that each configuration item, as built, conforms to the technical documentation that defines it.” A PCA verifies that:
- All items identified as being part of the configuration are present in the product baseline
- The correct version and revision of each item is included in the product baseline
- Each item corresponds to information contained in the baseline’s configuration status report
A PCA is performed to provide an independent evaluation that the software, as implemented, has been described adequately in the documentation that will be delivered with it, and that the software and its documentation have been captured in the software configuration status accounting records and are ready for delivery. Finally, the PCA may also be used to evaluate adherence to legal obligations, including licensing, royalty, and export compliance requirements.
Like the FCA, the PCA is conducted at least once during the life cycle, typically just before the final ready-to-beta-test or ready-to-ship review, and provides input information into those reviews. However, PCAs can also be conducted throughout the life cycle at checkpoints to verify the proper transition of the requirements into successor work products. The PCA is typically held either in conjunction with the FCA or soon after the FCA (once any issues identified during the FCA are resolved). A PCA is essentially a review of the software configuration status accounting data to make certain that the software products and their deliverable documentation are appropriately baselined and properly built prior to release to beta testing or operations, depending on where in the life cycle the PCA is conducted.
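Because the PCA leans heavily on comparing build records with the configuration status accounting data, a simple automated comparison can surface discrepancies for the auditor. The sketch below assumes both records can be exported as mappings from configuration item name to version/revision string; that export format is an assumption made for the example.

```python
# Minimal sketch: compare an as-built build record against the configuration
# status accounting (CSA) baseline. Both inputs are assumed to be exported as
# mappings from configuration item name to version/revision string.
def compare_build_to_baseline(build_record: dict, csa_baseline: dict) -> dict:
    """Return discrepancies between the build record and the CSA baseline."""
    common = set(build_record) & set(csa_baseline)
    return {
        "missing_from_build": sorted(set(csa_baseline) - set(build_record)),
        "not_in_baseline": sorted(set(build_record) - set(csa_baseline)),
        "version_mismatches": sorted(
            f"{item}: built {build_record[item]}, baselined {csa_baseline[item]}"
            for item in common
            if build_record[item] != csa_baseline[item]
        ),
    }

# Hypothetical example data.
build = {"app.exe": "2.4.1", "help.pdf": "2.4.0"}
baseline = {"app.exe": "2.4.1", "help.pdf": "2.4.1", "install.cfg": "1.2"}
print(compare_build_to_baseline(build, baseline))
```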
Table 27.2 illustrates an example of a PCA checklist and proposes possible objective evidence-gathering techniques for each item.
Table 27.2 PCA checklist items and evidence-gathering techniques—example.
For each PCA checklist item, suggestions for evidence-gathering techniques are listed beneath it.
1. Has each nonconformance or noncompliance from the FCA been resolved?
- Review the findings from the FCA audit report, the associated corrective actions, and the follow-up and verification records to evaluate the adequacy of the actions taken (or verify that appropriate approved waivers/deviations exist).
2. Have all of the identified configuration items (for example, source code modules, documentation, and so on) been baselined?
3. Has the software been built from the correct configuration items and in accordance with the specification?
- Evaluate the build records against the configuration status accounting information to confirm that the correct version and revision of each configuration item was included in the build.
- Evaluate any patches/temporary fixes made to the software to confirm their completeness and correctness.
- Sample a set of design elements from the architectural design and trace them to their associated component design elements and source code modules. Compare those elements with the build records to evaluate them for completeness and consistency with the as-built software.
- Sample a set of configuration items and review them to confirm that their physical characteristics match their documented specifications.
4. Is the deliverable documentation set complete and consistent?
- Evaluate the master copy of each document against the configuration status accounting information to confirm that the correct version and revision of each of the document’s constituent configuration items (for example, chapters, sections, and figures) is included in the document.
- Sample the set of copied documents ready for shipment and review them for completeness and quality against the master copy.
- Evaluate the version description document against the build records for completeness and consistency.
- Compare the current build records to the build records from the last release to identify changed configuration items, and evaluate the list of changed items against the version description document to confirm its completeness and consistency.
5. Do the actual system deliverables conform to their specifications, and are they frozen to prevent further change? Have the deliverables been appropriately marked/labeled?
- Evaluate the items on the master media, downloadable files, or Web-accessible files against the required software deliverables (executables, help files, data, and documentation) to confirm that the correct versions and revisions were included.
- Rebuild a sample set of software deliverables from the SCM repository and evaluate them to confirm that the controlled configuration items match those built into the deliverables.
- Sample a set of copied media, downloadable files, or Web-accessible files ready for shipment/access and review them for completeness and quality against the master media.
- Sample a set of copied media, downloadable files, or Web-accessible files ready for shipment/access and review their marking/labeling against specifications.
6. Do the deliverables for shipment match the list of required deliverables?
- Evaluate the packing list against the list of documented deliverables to verify completeness.
- Sample a set of ready-to-ship packages and evaluate them against the packing list to confirm that the media (for example, CDs, disks, tapes), downloadable files or Web-accessible files, hard copy documentation, and/or other deliverables are included in each package.
7. Have all third-party licensing requirements been met?
8. Have all export compliance requirements been met?
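Checklist items 5 and 6 in Table 27.2 concern whether the staged deliverables match the packing list and the master copies. The sketch below illustrates one way to automate part of that check by comparing staged files and their checksums against a packing list; the packing-list format (file name mapped to an expected SHA-256 digest) is an illustrative assumption.

```python
# Minimal sketch for checklist items 5 and 6 of Table 27.2: verify that staged
# deliverables match the packing list and that their checksums match those
# recorded for the master copies. The packing-list format (file name mapped to
# an expected SHA-256 digest) is an assumption made for this example.
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def audit_deliverables(staging_dir: str, packing_list: dict) -> dict:
    """Report missing, unexpected, and checksum-mismatched deliverables."""
    staged = {p.name: p for p in Path(staging_dir).iterdir() if p.is_file()}
    common = set(staged) & set(packing_list)
    return {
        "missing": sorted(set(packing_list) - set(staged)),
        "unexpected": sorted(set(staged) - set(packing_list)),
        "checksum_mismatches": sorted(
            name for name in common if sha256(staged[name]) != packing_list[name]
        ),
    }

# Example call with hypothetical staging directory and packing-list digests:
# audit_deliverables("dist/", {"app.exe": "ab12...", "readme.pdf": "cd34..."})
```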