Jose Ortiz asked Massood Zewail if he would also prepare and deliver a tutorial on the processes used in software quality measurement and tracking.
MiniTutorial 3.3: Quality Measurement and Defect Tracking
Evidence of the presence, absence, or variance from the expected quality of a software assurance activity can be determined through measurement. Establishing a solid measurement strategy is one of the most important components of any quality assurance organization. As the IEEE Standard for a Software Quality Metrics Methodology [IEEE 1061] states “…defining software quality for a system is equivalent to defining a list of software quality attributes required for that system. In order to measure the software quality attributes, an appropriate set of software metrics shall be identified”. When planning for a measurement strategy, the following questions are appropriate:
■ Why do we need to measure?
■ What do we need to measure?
■ How do we conduct these measurements?
■ Who will be looking at these data?
■ What do we do with the data collected?
The answers to these questions provide guidelines for establishing the metrics that are needed, the personnel who will conduct the measurements, the infrastructure needed to collect and maintain the data, the analysis to be performed on the data, and the personnel who will have access to the data and the resulting analysis.
Of course, care must be taken to ensure a good balance between the amount of time and effort required to conduct the measurement and that needed for the corresponding analysis of the measured data. An overwhelming measurement strategy is not sustainable over the long haul and will result in an inaccurate and incomplete data set. Therefore, an ideal measurement strategy should have the following characteristics:
■ Simplicity – it is intuitive and easy to understand.
■ Validity – it measures what it claims to measure.
■ Robustness – the measurement value is insensitive to outside factors.
■ Independence – the interpretation of a measurement value does not depend on the values of other measurements.
■ Prescriptiveness – the measurement helps project management accurately identify and avoid potential problems (you know which direction is good and which is bad).
■ Analyzability – the measurement can be analyzed using statistical techniques.
Defining Metrics
As previously discussed, one of the first questions to ask in order to establish a measurement strategy is “Why do we need to measure?” Of course, the main reason is to produce a quality product. Therefore, there should be clear quality assurance goals and appropriate measurements to evaluate whether those goals have been achieved. There are several techniques for accomplishing this task. One such technique is Goal/Question/Metric (GQM) [Basili 1984], by which an organization can establish a set of quality goals that need to be achieved. The GQM technique establishes a process to monitor the achievement of those goals.
The GQM technique first requires the organization (or project team) to establish a set of quality goals for the project. It is possible that a large number of quality goals have been identified, and as a result, it may be very difficult, given the project resources and schedule, to achieve all of them. In such a case, the team should prioritize these goals and try to achieve as many of the highest priority goals as possible. Once the quality goals are identified, the team can identify questions about how these goals can be achieved and how the team will know when they have been achieved. The purpose of these questions is to identify the appropriate metrics that need to be collected to evaluate whether those goals are achieved. Figure 3.5 represents a GQM analysis.
Figure 3.5 GQM analysis.
For example, let us assume that one of our quality goals for the organization is to make sure no more than 5% of the product defects are found after the product release. In this case, we could ask the following questions:
■ During the life of the product, how many defects (before and after product release) will be found?
■ What percent of the defects are found before the product release?
■ What percent of the defects are found after the product release?
■ Which quality assurance activities are best at discovering product defects during the development?
Given the question about percent of defects found before the product release, the following measurements are relevant:
■ Number of defects found before the release.
■ Number of defects found after the release.
■ Number of new defects that are found after the defect fixes.
■ Number of defects that are found in the field after the defect fixes.
These four metrics should help answer the question about the percent of defects found before the product release. If a quality goal is not achieved, the organization needs to modify its quality improvement procedures to achieve it. This may require identifying additional questions and corresponding metrics, and integrating additional defect prevention and detection activities into development.
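As a small illustration, the following Java sketch (the defect counts are hypothetical) computes the percent of defects found after release from the first two measurements and checks it against the 5% goal:

```java
// Minimal sketch: computing the GQM metric above from hypothetical defect counts.
public class ReleaseQualityGoal {

    /** Percent of all known defects that escaped to the field (found after release). */
    static double percentFoundAfterRelease(int foundBeforeRelease, int foundAfterRelease) {
        int total = foundBeforeRelease + foundAfterRelease;
        return total == 0 ? 0.0 : 100.0 * foundAfterRelease / total;
    }

    public static void main(String[] args) {
        int beforeRelease = 188;   // hypothetical: defects found before the product release
        int afterRelease  = 7;     // hypothetical: defects found after the product release

        double escaped = percentFoundAfterRelease(beforeRelease, afterRelease);
        System.out.printf("Defects found after release: %.1f%%%n", escaped);
        System.out.println("Goal (no more than 5% after release) met? " + (escaped <= 5.0));
    }
}
```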
Quality Metrics
There are many metrics that can be used to measure quality; generally, they can be grouped into two major categories:
Quality Metrics During Development
Software metrics give the development team insight into the status of the project, the quality of the product, and the quality of the processes used, and they provide a basis for decision making. The following are advantages of these metrics:
■ Keeping schedules realistic.
■ Making project control and risk assessment more quantitative.
■ Identifying overly complex programs.
■ Identifying areas where tools are needed.
■ Providing data for root cause analysis (a method used for identifying the root causes of faults or problems).
■ Avoiding premature product release.
Some examples of these metrics include:
■ Project metrics: These metrics help the development team to monitor and control the project progress. Some examples of these metrics include:
– Schedule metrics identify any variation from the project schedule.
– Cost metrics identify variation from the expected project cost.
– Requirement change request metrics monitor the number of change requests, their frequency, and when the changes are requested. Unstable requirements are a major cause of project cost and schedule overruns.
– Effort metrics monitor the project’s actual progress versus the planned progress.
– Repair/rework metrics provide information about the quality of the product while under development, and the potential for project schedule and cost overruns.
■ Product metrics: These metrics give the development team insight into the quality of the product under development.
– Requirements metrics provide information about the quality of requirements, such as their ambiguity, incompleteness, and complexity. A lower quality requirement points to potential problems throughout the development life cycle.
– Design metrics provide information about the quality of the design, such as its cyclomatic complexity, cohesion, coupling, function points, depth of inheritance, and stability. The more complex the product design, the higher the cost of maintaining the product.
– Implementation metrics provide information about the quality of the code, such as defect density (e.g., defects/KLOC), size of source code (in LOC), size of binary code, code path lengths, and percent of source code compliance with coding standards.
– Test metrics provide information about product units and the overall product, such as the number of test cases per software unit, the degree to which test cases cover the requirements, and the number of defects discovered through testing.
■ Process metrics: These metrics provide insight into the effectiveness of the processes used to develop a product.
– Efficiency metrics provide insight into the quality of the process implemented by the development team. For example, the defect removal efficiency provides insight into the quality of the defect removal process: how many defects were removed before the product release versus how many were found by the end user.
– Cycle time metrics provide insight into the agility of the development team, measuring the time from when an idea is formed to when it is in place. For example, the cycle time for code development measures the time from when a design is committed to until it is implemented.
– Turnaround time metrics measure the time from when a defect is identified to when that defect is fixed (a small calculation over a hypothetical defect log is sketched after this list).
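As a small illustration of the turnaround time metric, the following Java sketch (the defect-log entries and timestamps are hypothetical) computes the average time from defect identification to defect fix:

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.util.Arrays;
import java.util.List;

// Sketch of the turnaround time metric: time from defect identification to defect fix,
// averaged over a (hypothetical) defect log.
public class TurnaroundTime {

    static class DefectRecord {
        final LocalDateTime identified;
        final LocalDateTime fixed;
        DefectRecord(LocalDateTime identified, LocalDateTime fixed) {
            this.identified = identified;
            this.fixed = fixed;
        }
    }

    static double averageTurnaroundHours(List<DefectRecord> log) {
        return log.stream()
                  .mapToLong(d -> Duration.between(d.identified, d.fixed).toHours())
                  .average()
                  .orElse(0.0);
    }

    public static void main(String[] args) {
        List<DefectRecord> log = Arrays.asList(
            new DefectRecord(LocalDateTime.of(2023, 3, 1, 9, 0), LocalDateTime.of(2023, 3, 2, 17, 0)),
            new DefectRecord(LocalDateTime.of(2023, 3, 5, 10, 0), LocalDateTime.of(2023, 3, 5, 16, 0)));
        System.out.printf("Average turnaround: %.1f hours%n", averageTurnaroundHours(log));
    }
}
```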
Quality Metrics Post Release
These metrics provide insight into the quality of the product from the customer point of view. Some examples of these metrics include:
■ Customer satisfaction
– Number of system enhancement requests
– Number of maintenance requests
– Number of product recalls
■ Responsiveness to user requests
■ Reliability
■ Cost of defects to customer
■ Ease of use (complexity of the product in operation)
Exercise 3.4: DH Quality Measurement Strategy
After Massood’s discussion of quality measurement, Disha Chandra decided the team needed to create a quality measurement strategy.
The exercise, described at the end of this chapter, deals with the DH Team’s discussion.
Exercise 3.5: DH Goal/Question/Metric
Disha asked the team to use the GQM technique to evaluate the effectiveness of the coding standard being considered for the DH project: Java Code Conventions [Sun 1997].
The exercise, described at the end of this chapter, deals with the DH Team’s use of the GQM technique.
After Massood’s tutorial on SQA, Disha Chandra asked Michel Jackson to prepare and deliver additional material on common quality assurance activities.
MiniTutorial 3.4: Quality Assurance Activities
Validation and Verification (V&V)
Validation and Verification are two common terminologies that are associated with quality assurance:
Verification – The process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase [IEEE 1012]. In other words, it answers the question of whether the product is being built the right way.
Validation – The process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements [IEEE 1012]. In other words, it answers the question of whether the right product is being built.
Verification typically involves evaluation of the documents, plans, requirements and design specifications, code units, and test cases. These are sometimes referred to as static evaluation, since no code is executed. On the other hand, validation is the evaluation of the product itself. This is referred to as dynamic evaluation (testing) of the product.
Validation typically follows the verification process. The inputs to verification are static products, and its output is the same products with, hopefully, much higher quality, whereas the input to the validation process is a set of test cases that evaluate the product, and its output is, hopefully, a defect-free product.
There are quality assurance techniques that are associated with V&V, some of these techniques are covered in the following sections.
Independent Validation and Verification (IV&V)
V&V can be conducted either by the development team or by an independent group, which may or may not work in the same organization as the development team but is not part of the development team. There are advantages and disadvantages associated with having the quality reviews conducted by an independent group. Table 3.2 lists some of the advantages and disadvantages of IV&V.
Table 3.2 IV&V Advantages and Disadvantages
IV&V Advantages:
■ New set of eyes
■ Not invested in the development
■ Not reporting to the same manager
■ Independent appraisal (potentially higher confidence)
IV&V Disadvantages:
Quality Reviews
By performing quality reviews throughout the development life cycle, we are able to identify and remove defects as early as possible. As illustrated in Figure 3.2, the earlier a defect is detected, the cheaper it is to fix. Table 3.3 shows the approximate cost/effort associated with the removal of a requirement defect in different phases of the development life cycle.
Table 3.3 Cost of Removing a Requirement Defect

Phase: Requirements
Activities/Resources: Interaction with stakeholders; requirement documentation update
Cost: 100–200

Phase: Design
Activities/Resources: Interaction with stakeholders; requirement documentation update; redesign; design document update
Cost: 300–1,000

Phase: Construction and Testing
Activities/Resources: Interaction with stakeholders; requirement documentation update; redesign; design document update; coding; testing
Cost: 1,000–10,000

Phase: Operation
Activities/Resources: Interaction with stakeholders; requirement documentation update; redesign; design document update; coding; testing; repackaging and deployment
Cost: 1,000–10,000, plus the cost of recall, redistribution, unsatisfied customers, and legal action (lawyers, litigation, fines, etc.)
Note: Table 3.3 does not consider the activities associated with defect detection and their corresponding costs.
There are a number of different quality review techniques; three such techniques are the walk-through, the technical review, and the inspection [IEEE 1028]. These quality reviews are conducted on different software artifacts, such as requirements specifications, design descriptions, source code, or test plans.
Walk-through
A walk-through is the least formal quality review. During a walk-through, the software artifact owner will lead members of the walk-through team through a review of the artifact. The participants in the walk-through have the opportunity to ask questions and/or make comments about potential problems in the artifact, or potential violation of standards.
There are typically two major objectives for conducting a walk-through:
■ To collect input from the walk-through participants about the quality of the product, and whether the product under development is meeting the identified standards.
■ To give the participants an opportunity to learn about the product under review; the walk-through is used as a vehicle for knowledge transfer.
The participants in a walk-through are:
■ Walk-through Leader – conducts the walk-through, handles the administrative tasks pertaining to the walk-through, and ensures that the walk-through is conducted in an orderly manner.
■ Author of the Product – presents the software artifact in the walk-through.
■ Recorder – records all decisions and identified actions arising during the walk-through meeting.
■ Team Member – prepares for and actively participates in the walk-through, and identifies and describes anomalies in the software product.
Technical Review
A technical review is more formal than a typical walk-through. The purpose of a technical review is to evaluate a software artifact, using a team of qualified personnel, to determine its suitability for its intended use and to identify discrepancies from specifications and standards. It provides management with evidence to confirm the technical status of the project. A technical review may also include recommendations and an examination of various alternatives, which may require more than one meeting. In a technical review, almost all the participants are technically competent in the same area as the software artifact under review. For example, if the artifact under review is a software design specification, then almost all participants have a technical background in software design.
The following is a list of potential participants in a technical review meeting:
■ Decision Maker – the person who requested the technical review and is the final authority on whether the meeting has achieved its stated objective.
■ Review Leader – performs administrative tasks pertaining to the review, ensures that the review is conducted in an orderly manner, and ensures that the review meets its objectives.
■ Recorder – documents anomalies, action items, decisions, and recommendations made by the review team.
■ Technical Reviewers – actively participate in the review and evaluation of the software artifact.
■ Management Representatives – participate in the technical review to identify issues that require management resolution (e.g., additional resources needed due to a suggested alternative solution).
■ Customer or User Representative – may also participate in the meeting, either for better understanding of the solution approach and/or clarification of its need.
Inspection
The purpose of an inspection is to detect and identify software product defects. An inspection is a systematic peer examination that does one or more of the following [IEEE 1028]:
■ Verifies that the software product satisfies its specifications.
■ Verifies that the software product exhibits specified quality attributes.
■ Verifies that the software product conforms to applicable regulations, standards, guidelines, plans, specifications, and procedures.
■ Collects software engineering data (for example, anomaly and effort data) – data that may be used to improve the inspection (e.g., improved checklists).
■ Requests or grants waivers for violations of standards where the adjudication of the type and extent of violations is assigned to the inspection jurisdiction.
■ Uses the data as input to project management decisions as appropriate (e.g., to make trade-offs between additional inspections versus additional testing).
Inspection is the most formal quality review technique. The main difference between an inspection and a technical review is that in a technical review, the participants may suggest alternative solutions, whereas an inspection focuses on detecting and identifying defects. A technical review meeting can be long and laborious, as participants might have lengthy technical discussions about the best solution to a specific problem. Inspection meetings are orderly and systematized. This discussion of inspection is based on [IEEE 1028] and the work of Michael Fagan and his “Fagan Inspection” [Fagan 1986].
The following is a list of participants in an inspection meeting:
■ Moderator (Inspection Leader) – responsible for planning and organizing the inspection, ensuring the inspection is conducted in an orderly manner and meets its objectives, and ensuring that the inspection data is collected.
■ Author – the owner of the document being inspected; contributes to the inspection based on special understanding of the software product, and performs any rework required to make the software product meet its inspection exit criteria.
■ Reader – leads the inspection team through the software product in a comprehensive and logical fashion, interpreting sections of the work and highlighting important aspects. This role may be carried out by the author.
■ Recorder – documents defects, action items, decisions, waivers, and recommendations made by the inspection team, and records inspection data required for process analysis.
■ Inspectors – identify and describe defects in the software product. Only those viewpoints pertinent to the inspection of the product should be presented. In an inspection, everyone except the author also serves as an inspector.
The inspection process is a formal process with a number of distinct activities. Figure 3.6 displays the Fagan Inspection process.
Figure 3.6 Fagan inspection process.
The work product can be any software artifact (e.g., requirements specification, design description, test plan, source code, etc.). Once the work product is ready for review, then the inspection leader is informed.
In preparation, the moderator identifies the appropriate participants in the inspection process; coordinates with the participants to set up the meeting; collects the work products and all appropriate supporting documents (i.e., references, checklists, inspection logs, etc.); and distributes them to the inspection participants.
The overview phase is an optional activity. In cases where the inspectors are not familiar with the work product, the author will provide an overview of the product to make sure everyone has a good understanding of what is being inspected.
During the individual inspection phase, each inspector inspects the work product (or the portion assigned). The inspectors record the defects discovered and the time spent inspecting in their individual inspection logs, which are turned in to the moderator prior to the inspection meeting.
The formal meeting is the meeting in which the inspection participants go over the product and identify product defects. Prior to the meeting, the moderator reviews the individual inspection logs to ensure all inspectors have completed their inspection tasks properly. As the meeting proceeds, the reader goes over the document one item at a time, and the inspectors identify the defects that they have discovered. The recorder records all the defects in the inspection meeting log, and at the end of the meeting, the meeting log is turned in to the moderator.
The moderator gives the author a list of all the defects identified during the inspection process. The author will attempt to fix all the defects during the rework phase and return the modified work product to the moderator. In some cases, the author may think that an identified issue is not a defect. In this case, the author will inform the moderator.
Depending on the quality of the product, the number of defects identified, or other factors, the moderator or some other stakeholder may decide to call for follow-up activities. A follow-up activity may be another inspection of the work product, or additional rework by the author.
In some cases, there may be a final phase in the inspection process, called inspection analysis or, sometimes, the causal analysis phase. In this phase, the author, selected members of the inspection team, and other stakeholders meet to analyze the types of defects that were discovered during the inspection, look for the root causes of the defects being introduced into the work product, and determine what can be done to prevent this from occurring again.
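The following Java sketch illustrates one possible form for the inspection data mentioned above; the field names and defect types are illustrative only and are not prescribed by [IEEE 1028]. It records defect entries and tallies them by type, the kind of summary that could feed the causal analysis phase:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative inspection-log entry and a simple tally by defect type
// that could support the causal analysis (inspection analysis) phase.
public class InspectionLog {

    static class DefectEntry {
        final String location;    // e.g., section or line of the work product
        final String type;        // e.g., "omission", "ambiguity", "logic"
        final String description;
        DefectEntry(String location, String type, String description) {
            this.location = location;
            this.type = type;
            this.description = description;
        }
    }

    static Map<String, Long> countByType(List<DefectEntry> log) {
        Map<String, Long> counts = new LinkedHashMap<>();
        for (DefectEntry e : log) {
            counts.merge(e.type, 1L, Long::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<DefectEntry> log = Arrays.asList(
            new DefectEntry("Req 3.2", "ambiguity", "'fast response' is not quantified"),
            new DefectEntry("Req 4.1", "omission", "no behavior defined for sensor failure"),
            new DefectEntry("Req 4.3", "ambiguity", "unit of temperature not stated"));
        System.out.println("Defects by type: " + countByType(log));
    }
}
```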
Exercise 3.6: Requirements Inspection
After the discussion of Quality Reviews, Disha Chandra asked the team to practice a requirements inspection on the DH High-Level Requirements Definition (HLRD).
The exercise, described at the end of this chapter, deals with the DH Team’s inspection.
Exercise 3.7: Code Review
After the discussion of Quality Reviews, Disha Chandra asked the team to practice a code review on pseudocode for a program unit.
The exercise, described at the end of this chapter, deals with the DH Team’s code review.
After Michel completed his tutorial on Quality Assurance Activities, Disha Chandra asked Michel “Why didn’t you include any material on testing?” Michel said he thought since testing was such a big topic, it would be best to cover it in a separate tutorial. Georgia Magee reminded Disha that she had worked for the last three years as a test engineer at HomeOwner and said she would be glad to cover software testing for the team. Disha said “fine – I never turn down a volunteer”.
MiniTutorial 3.5: Software Testing
The SWEBOK [Bourque 2014] states “Software testing consists of the dynamic verification that a program provides expected behaviors on a finite set of test cases, suitably selected from the usually infinite execution domain”. Software testing is a critical element of SQA. It represents the ultimate review of the requirements specification, the design, and the code. It is the most widely used method to ensure software quality. Many organizations spend 50% or more of development time on testing.
Key points about software testing:
■ Testing is concerned with establishing the presence of program defects, while debugging is concerned with finding where defects occur (in code, design, or requirements) and removing them (fault identification and removal). Even if the best review methods are used (throughout the entire development of software), testing is necessary.
■ Testing is the one step in software engineering process that could be viewed as destructive rather than constructive: “A successful test is one that breaks the software” [McConnell 2004].
■ A successful test is one that uncovers a previously undiscovered defect.
■ Testing cannot show the absence of defects; it can only show that software defects are present.
■ For most software, exhaustive testing (test all possible cases) is not possible.
■ Software tests typically use a Test Harness: a system of test drivers, stubs, and other tools to support test execution.
– A Test Driver is a utility program that applies test cases to a component being tested.
– A Test Stub is a temporary, minimal implementation of a component to increase controllability and observability in testing. When testing a unit that references another unit, the referenced unit must either be complete (and tested), or stubs must be created that can be used when executing a test case that references the other unit.
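As an illustration of a test driver, the following Java sketch applies a small table of test cases to a component under test; the component (a temperature conversion routine) and its expected values are invented for this example:

```java
// Hypothetical test driver: applies a table of test cases to a component under test
// and reports pass/fail for each case.
public class TemperatureConverterDriver {

    // Component under test (invented for this example).
    static double celsiusToFahrenheit(double celsius) {
        return celsius * 9.0 / 5.0 + 32.0;
    }

    public static void main(String[] args) {
        double[][] testCases = {            // {input, expected output}
            {0.0, 32.0},
            {100.0, 212.0},
            {-40.0, -40.0}
        };
        for (double[] testCase : testCases) {
            double actual = celsiusToFahrenheit(testCase[0]);
            boolean pass = Math.abs(actual - testCase[1]) < 1e-9;
            System.out.printf("input=%.1f expected=%.1f actual=%.1f %s%n",
                              testCase[0], testCase[1], actual, pass ? "PASS" : "FAIL");
        }
    }
}
```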
Testing Types
Figure 3.7 shows the various types of tests associated with the software development life cycle [Pfleeger 2009].
Figure 3.7 Software development tests.
Unit Testing
■ Unit Testing – checks that an individual program unit (function/procedure/method, object class, package, module) behaves correctly. There are two broad categories of unit testing:
– Static Testing – testing a unit without executing the unit code using a formal review (e.g., performing a symbolic execution of the unit).
– Dynamic Testing – testing a unit by executing a program unit using test data.
There is some debate about what constitutes a “unit”. Here are some common definitions of a unit [McConnell 2004]:
■ the smallest chunk that can be compiled by itself
■ a stand-alone procedure or function
■ something so small that it would be developed by a single person
Figure 3.8 illustrates the various techniques used for unit testing [Pressman 2015]. Here is a little more detail about some of these techniques:
Figure 3.8 Unit testing techniques.
■ Static Techniques
– Symbolic Execution – works well for small programs.
– Program Proving – difficult and costly to apply, and may not guarantee correctness.
– Anomaly Analysis – examines the code for anomalies such as uninitialized or unused variables and unreachable code.
■ Black Box Techniques
– Design of software tests relies on the module purpose/description to devise test data.
– Uses inputs, functionality, and outputs in test design.
– Treats a module like a “black box” (given specific input, what would be the expected output?).
■ White Box Techniques
– Design of software tests relies on the internal structure of the module (statements, branches, paths) to devise test data.
– Uses coverage of the code structure in test design (a small black-box/white-box example follows this list).
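To make the distinction concrete, the following Java sketch shows a small (invented) unit together with test cases chosen two ways: black box cases derived only from the stated specification, and white box cases chosen to exercise each branch of the code:

```java
// Unit under test (invented for this example): returns a shipping charge based on order total.
// Specification: orders of $100.00 or more ship free; otherwise the charge is $7.50.
public class ShippingCharge {

    static double charge(double orderTotal) {
        if (orderTotal >= 100.0) {
            return 0.0;
        }
        return 7.50;
    }

    public static void main(String[] args) {
        // Black box cases: chosen from the specification only (equivalence classes and boundaries).
        System.out.println(charge(50.0));    // typical order below the threshold -> 7.5
        System.out.println(charge(99.99));   // boundary just below the threshold -> 7.5
        System.out.println(charge(100.0));   // boundary at the threshold         -> 0.0
        System.out.println(charge(250.0));   // typical order above the threshold -> 0.0

        // White box cases: chosen from the code structure (one case per branch of the if).
        System.out.println(charge(100.0));   // exercises the true branch
        System.out.println(charge(10.0));    // exercises the false branch
    }
}
```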
The following issues should be considered when preparing a unit test plan:
■ Unit test planning is typically performed by the unit developer.
■ Planning of a unit test should take place as soon as possible.
■ Black box test planning can take place as soon as the functional requirements for a unit are specified.
■ White box test planning can take place as soon as the detailed design for a unit is complete.
■ If one is testing a single, simple unit that does not interact with other units, then one can write a program that runs the test cases in the test plan. However, if the unit must interact with other units, then it can be difficult to test it in isolation.
– For example, a unit that loads a system initialization file may require numerous interactions with other units.
– Such unit testing may require a test harness using drivers and stubs.
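The following Java sketch illustrates the use of a stub: the unit under test depends on a settings provider (which, in a real system, might read an initialization file), and a hand-written stub stands in for that collaborator. All names here are invented for illustration:

```java
// Testing a unit in isolation with a stub: the unit under test depends on a
// SettingsProvider, which in the real system might read an initialization file.
// All names here are invented for illustration.
public class ThermostatUnitTest {

    interface SettingsProvider {
        double defaultTargetTemperature();
    }

    // Unit under test: clamps the requested temperature to an allowed range.
    static double targetTemperature(Double requested, SettingsProvider settings) {
        double value = (requested != null) ? requested : settings.defaultTargetTemperature();
        return Math.max(60.0, Math.min(80.0, value));
    }

    public static void main(String[] args) {
        // Stub: a temporary, minimal implementation standing in for the real provider.
        SettingsProvider stub = () -> 72.0;

        System.out.println(targetTemperature(null, stub));   // expected 72.0 (stubbed default)
        System.out.println(targetTemperature(95.0, stub));   // expected 80.0 (clamped)
        System.out.println(targetTemperature(65.0, stub));   // expected 65.0
    }
}
```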
Integration Testing
The DigitalHome development process, defined in Table 3.1 , prescribes an incremental construction procedure. As increments/units are developed, they are integrated with the other units previously tested and integrated. As each increment is developed, integration tests are conducted to uncover interfacing and interaction errors between the units.
There are various integration testing strategies:
■ Bottom-Up – test drivers assist in first testing lower level components.
■ Top-Down – component stubs assist in first testing higher level components.
■ Sandwich – a mixture of top-down and bottom-up approach.
■ Big-Bang – integrate all components together at the same time and test them.
Figure 3.9 depicts a set of units developed incrementally, with a test harness of a driver and some stubs. For example, if a top-down approach were used, unit A would be developed first, followed by integration and testing of units B, C, and D, and then by integration and testing of unit E. Such testing can result in significant improvements in finding and correcting interface defects; however, there are added overhead costs.
Figure 3.9 Integration testing units.
Typically, a “test scaffold” is built as part of test planning to support such hierarchical testing. Each time a new module is added (or changed), the previous tests of units must be executed again to check for problems not discovered earlier. This is called regression testing.
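The following Java sketch illustrates the idea of regression testing during incremental integration: earlier test cases stay in the suite and are rerun each time a new unit is integrated. The units and test cases are placeholders invented for this example:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BooleanSupplier;

// Sketch of regression testing during incremental integration: previously added
// test cases are kept in the suite and rerun each time a new unit is integrated.
public class RegressionSuite {

    private final List<BooleanSupplier> tests = new ArrayList<>();

    void add(BooleanSupplier test) {
        tests.add(test);
    }

    void runAll(String afterIntegrating) {
        int failures = 0;
        for (BooleanSupplier test : tests) {
            if (!test.getAsBoolean()) {
                failures++;
            }
        }
        System.out.println("After integrating " + afterIntegrating + ": "
                + tests.size() + " tests run, " + failures + " failed");
    }

    public static void main(String[] args) {
        RegressionSuite suite = new RegressionSuite();

        suite.add(() -> 2 + 2 == 4);            // placeholder tests for unit A
        suite.runAll("unit A");

        suite.add(() -> "home".length() == 4);  // tests for unit B are added, and
        suite.runAll("unit B");                 // the earlier unit A tests run again
    }
}
```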
System Testing
System testing is testing conducted on a complete integrated system to evaluate the system’s compliance with its specified requirements.
There are various categories of system testing. Here are some of the categories:
■ Volume Testing – testing whether the system can cope with large data volumes.
■ Stress Testing – testing beyond the limits of normal operation (e.g., availability with high loads of use).
■ Endurance Testing – testing whether the system can operate for long periods.
■ Usability Testing – testing ease of use.
■ Security Testing – testing whether the system can withstand security attacks.
■ Performance Testing – testing the system response time.
■ Reliability Testing – testing whether the system operates effectively over time.
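As a small illustration of performance testing, the following Java sketch times a (simulated) operation and compares the observed response time against a hypothetical 200-millisecond requirement:

```java
// Minimal performance-test sketch: measure the response time of an operation and
// compare it against a (hypothetical) requirement of 200 milliseconds.
public class ResponseTimeTest {

    // Stand-in for the operation whose response time is being measured.
    static void handleRequest() throws InterruptedException {
        Thread.sleep(50);   // simulated work
    }

    public static void main(String[] args) throws InterruptedException {
        final long requirementMillis = 200;   // hypothetical performance requirement

        long start = System.nanoTime();
        handleRequest();
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

        System.out.println("Response time: " + elapsedMillis + " ms"
                + (elapsedMillis <= requirementMillis ? " (within requirement)" : " (requirement violated)"));
    }
}
```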
If use cases (use cases are discussed in Chapter 5) are used to specify the functional requirements for a system, the use cases can be transformed into operational test scenarios (OTS). Each OTS represents the execution of the program for some part of the requirements. Figure 3.10 shows a use case that represents part of the requirements for a system used to manage an airport. The Main Scenario states the requirements associated with the use case.
Figure 3.10 Closed airport use case.
Figure 3.11 shows a test scenario derived from the Close Airport use case. The test scenario describes how the use case test should be carried out, and lists the order of interaction between a user and the system, with the inputs from the user and how the system should react.
Figure 3.11 Closed airport test scenario.
For most systems, a special type of system test, an Acceptance Test, is conducted to see if the system satisfies the customer’s requirements. There are two types of acceptance testing: alpha testing, in which end users exercise the system at the developer’s site, and beta testing, in which end users exercise the system at their own sites without the developer present.
Finally, a few key points about software testing:
■ Test Plans should be completed as soon as possible:
– System test planning can start during the requirements phase.
– Integration test planning can start during the high-level design phase.
– Unit test planning can start during the detailed design phase.
Exercise 3.8: Unit Test Planning
After the discussion of Unit Testing, Georgia Magee asked the team to study a search algorithm and determine a set of test cases for the unit.
The exercise, described at the end of this chapter, deals with the DH Team’s unit test design.
Exercise 3.9: Integration Test Planning
After the discussion of Integration Testing, Georgia Magee asked the team to study a design diagram and determine a strategy for incremental development and integration testing.
The exercise, described at the end of this chapter, deals with the DH Team’s integration test planning.
Exercise 3.10: Use Case Test Planning
After the discussion of System Testing, Georgia Magee asked the team to study a use case and then develop a test scenario for the use case.
The exercise, described at the end of this chapter, deals with the DH Team’s use case test planning.
Case Study Exercises
Exercise 3.1: Security Attacks
After the discussion of "Security Engineering", Massood led the team in a discussion about potential attacks on the security of the DigitalHome system.
Learning Objectives
Upon completion of this exercise, students will have increased ability to:
Describe potential attacks on the security of a software system.
Understand the importance of building secure software systems.
Classify the different types of security problems.
Assess the costs associated with building a secure system.
Exercise Description
As preparation for the exercise, read Chapter 1 and the Case Study Artifact listed above.
You will be assigned to a small development team (like the DH Team).
Each team meets, organizes, and carries out the following activities:
Each member of team discusses the following items:
Security problems they have experienced as a developer or user, or know of in existing systems.
Potential security problems they see with the DigitalHome system.
Whether they have the appropriate level of background and experience to address the security problems.
The team identifies the most significant security issues from its discussion and summarizes them in a report.
Choose one member of the team to present the team’s report.
Exercise 3.2: Cost of Quality Experiences
After the discussion of the "Cost of Quality", Massood led the team in a discussion of the types of quality costs they had experienced in their previous software development work.
Learning Objectives
Upon completion of this exercise, students will have increased ability to:
Describe the quality of their software development activities.
Reflect on the defect life cycle in software they have developed.
Identify the cost of quality in software they have developed.
Classify the different types of quality costs.
Assess whether they have used the appropriate level of resources in their software development activities to achieve the desired level of quality.
Exercise Description
As preparation for the exercise, read Chapter 1 and the Case Study Artifact listed above.
You will be assigned to a small development team (like the DH Team).
Each team meets, organizes, and carries out the following activities:
Choose one member of the team to present the team’s quality report.
Exercise 3.3: Software Quality Assurance Planning
After Massood’s discussion of SQA planning, Disha Chandra led the team in a discussion of SQA planning.
Learning Objectives
Upon completion of this exercise, students will have increased ability to:
Describe the elements of SQA planning.
Determine the scope and extent of SQA planning needed for a moderate sized project.
Determine which elements of an SQAP, specified in [IEEE 730], are appropriate for the DigitalHome project.
Exercise Description
As preparation for the exercise, read Chapter 1 and the Case Study Artifacts listed above.
You will be assigned to a small development team (like the DH Team).
Each team meets, organizes, and carries out the following activities:
Determine which elements of an SQAP, specified in [IEEE 730], are appropriate for the DigitalHome project.
Determine the purpose and scope of the SQA activities.
Describe the DH project’s SQA organization structure. What are its roles and responsibilities?
Decide how problem reports are collected, who will see those data, and what procedures should be followed for implementing the appropriate corrective actions.
Choose one member of the team to report to the class on the team’s quality planning activity.
Exercise 3.4: DH Quality Measurement Strategy
After Massood’s discussion of quality measurement, Disha Chandra decided the team needed to create a quality measurement strategy.
Learning Objectives
Upon completion of this exercise, students will have increased ability to:
Understand the value of software measurement.
Measure software and software development data.
Analyze and use measured data.
Exercise Description
As preparation for the exercise, read Chapter 1 and the Case Study Artifacts listed above. You will be assigned to a small development team (like the DH Team). Each team meets, organizes, and carries out the below described activity.
The team discusses how to develop a quality measurement strategy and what elements it should contain. As part of this discussion, the team answers the below questions:
Each team answers the questions in writing and chooses one member of their team to report to the class on the group’s conclusions.
Exercise 3.5: DH Goal/Question/Metric
Disha asked the team to use the GQM technique to evaluate the effectiveness of the coding standard being considered for the DH project: Java Code Conventions [Sun 1997].
Learning Objectives
Upon completion of this exercise, students will have increased ability to:
Describe the GQM technique.
Use the GQM technique to determine the goals, questions, and metrics needed to solve a problem.
Explain the value of software metrics and how they may be used.
Exercise Description
As preparation for the exercise, read Chapter 1 and the Case Study Artifacts listed above. You will be assigned to a small development team (like the DH Team). Each team meets, organizes, and carries out the below described activity.
Acting as the DH Team, the team uses the Goal/Question/Metric (GQM) method to select metrics that could be used to assess the appropriateness of using Java Code Conventions [Sun 1997] as the coding standard for DH code. Determine the following:
A statement of a quality goal (or goals).
Questions about the goal.
Metrics appropriate for answering the questions.
Issues, problems, or costs in collecting the metrics.
Each team describes, in writing, the results of its GQM exercise and reports the results to the class.
Exercise 3.6: Requirements Inspection
After the discussion of Quality Reviews, Disha Chandra asked the team to practice a requirements inspection on the DH HLRD.
Learning Objectives
Upon completion of this exercise, students will have increased ability to:
Organize and perform a requirements specification inspection.
Appreciate the value of an inspection.
Develop correct requirements statements.
Exercise Description
As preparation for the exercise, read Chapter 1 and the Case Study Artifacts listed above. You will be assigned to a small development team (like the DH Team). Each team meets, organizes, and carries out the below described activity.
Acting as the DH Team, the team organizes as an inspection team: roles are assigned (e.g., moderator, recorder, etc.); material is prepared for recording defect and inspection time. Then, the team carries out the following activities:
Inspectors inspect the HLRD in Chapter 1 and record the defects found and the time spent.
The inspection team meets and discusses the defects found.
The team agrees on defects in the product and discusses needed product rework.
The team prepares a report on the inspection.
One of the team members reports to the class on the team’s inspection.
Exercise 3.7: Code Review
After the discussion of Quality Reviews, Disha Chandra asked the team to practice a code review on pseudocode for a program unit.
Learning Objectives
Upon completion of this exercise, students will have increased ability to:
Perform a code review of a software code unit.
Understand the value of a careful code review.
Use code review data to improve reviewing effectiveness.
DH Case Study Artifacts
Exercise Description
As preparation for the exercise, read Chapter 1 and the Case Study Artifacts listed above. You will be assigned to a small development team (like the DH Team). Each team meets, organizes, and carries out the below described activity.
Acting as the DH Team, each member of the team acts as code reviewer. Then, each reviewer carries out the following activities:
Reviews the below gcd (greatest common divisor) procedure, and records defects found and time spent.
The team meets and compares what reviewers found.
The team agrees on defects in the procedure and discusses needed product rework.
The team prepares a report on the code review.
One of the team members reports to the class on the team’s code review.
Exercise 3.8: Unit Test Planning
After the discussion of Unit Testing, Georgia Magee asked the team to study a search algorithm and determine a set of test cases for the unit.
Learning Objectives
Upon completion of this exercise, students will have increased ability to:
Determine a set of test cases for a software unit.
Plan for effective testing of a software unit.
Understand the differences between white-box testing and black-box testing.
Exercise Description
As preparation for the exercise, read Chapter 1 and the Case Study Artifacts listed above. You will be assigned to a small development team (like the DH Team). Each team meets, organizes, and carries out the below described activity.
Acting as the DH Team, the team carries out the following activities:
Each member reviews the below binarySearch procedure and develops a set of test cases for the procedure.
The team meets and compares what the team members developed. There is a discussion of which test cases are related to black-box testing and which to white-box testing.
The team agrees on a set of test cases to be used for testing the unit.
The team prepares a report on unit test planning.
One of the team members reports to the class on the team’s unit test planning.
Exercise 3.9: Integration Test Planning
After the discussion of Integration Testing, Georgia Magee asked the team to study a design diagram and determine a strategy for incremental development and integration testing.
Learning Objectives
Upon completion of this exercise, students will have increased ability to:
Understand incremental development.
Determine a strategy for integration testing.
Understand the differences between top-down and bottom-up integration testing.
Exercise Description
As preparation for the exercise, read Chapter 1 and the Case Study Artifacts listed above. You will be assigned to a small development team (like the DH Team). Each team meets, organizes, and carries out the below described activity.
Acting as the DH Team, the team carries out the following activities:
Each member reviews the below DigitalHome Conceptual Design Diagram (Figure 2.11 in Chapter 2 ).
The team meets and discusses in what order the modules/units will be developed and integrated. The team will decide on whether to implement a top-down or a bottom-up integration testing strategy.
The team agrees on its strategy.
The team prepares a report on integration testing.
One of the team members reports to the class on the team’s discussion and decisions.
Exercise 3.10: Use Case Test Planning
After the discussion of System Testing, Georgia Magee asked the team to study a use case and then develop a test scenario for the use case.
Learning Objectives
Upon completion of this exercise, students will have increased ability to:
Understand system test planning.
Understand the structure of a use case.
Develop a test scenario from a use case.
DH Case Study Artifacts
Exercise Description
As preparation for the exercise, read Chapter 1 and the Case Study Artifacts listed above. You will be assigned to a small development team (like the DH Team). Each team meets, organizes, and carries out the below described activity.
Acting as the DH Team, the team carries out the following activities:
Each member reviews the below use case “Set Traffic Signal Properties”.
Each member develops a test scenario (like the one in Figure 3.11 ) for the Set Traffic Signal Properties use case.
The team meets and agrees on a team test scenario for the Set Traffic Signal Properties use case.
The team prepares a report on the scenario.
One of the team members reports to the class on the team’s discussion and decisions.
Use Case ID: UC3
Use Case Name: Set Traffic Signal Properties
Goal: Set/Change traffic signal properties
Primary Actors: Traffic Manager
Secondary Actors: TMS Database
Preconditions:
Traffic Manager is logged into his/her account.
TMS Traffic Signal Configuration Page is displayed on User Display Device.
Postconditions:
All newly entered traffic signal properties have been stored in the TMS Database.
TMS Traffic Signal Configuration Page is displayed on User Display Device.
Main Success Scenario
1. Actor: Select “Set Traffic Signal Properties”.
2. System: Display “Select the traffic signal(s) for which you wish to set properties”.
3. Actor: Select one or more traffic signals.
4. System: Display “Enter Green Light duration (in seconds)”.
5. Actor: Enter seconds of Green Light duration.
6. System: Store Green Light duration for selected traffic signals in the TMS database.
7. System: Display “Enter Red Light duration (in seconds)”.
8. Actor: Enter seconds of Red Light duration.
9. System: Store Red Light duration for selected traffic signals in the TMS database.
10. System: Display “Enter Amber Light duration (in seconds)”.
11. Actor: Enter seconds of Amber Light duration.
12. System: Store Amber Light duration for selected traffic signals in the TMS database.
13. System: Display “Do you wish to set the properties for other traffic signals? YES/NO?”
14A. Actor: Enter “YES”.
15A. System: Go to Step 1.
14B. Actor: Enter “NO”.
15B. System: Display TMS Traffic Signal Configuration Page.
UC GUIs: DH Configuration Page, DH Default Parameter Configuration Page
Exceptions (Alternative Scenarios of Failure):
■ At any time, the User fails to make an entry: time out after five minutes and log the user out of the system.
■ The System detects a failure (loss of Internet connection, power loss, hardware failure, etc.): display an error message on the User Display Device, restore system data from the TMS Database, and log off the user.
Use Cases Utilized : None
Notes and Issues: The duration of individual traffic signal states must be in the following ranges: Red [30–120 seconds], Green [30–120 seconds], and Amber [5–10 seconds]