Appendix C: Tool and Equipment Validation Program

Introduction

Digital forensic tools and equipment work differently, and may behave differently, when used on different evidence sources. Before using any tool or equipment to gather or process evidence, investigators must become familiar with how it operates by practicing on a variety of evidence sources.
This testing must demonstrate that these tools and equipment follow the proven principles, methodologies, and techniques used throughout digital forensic science. The testing process introduces a level of assurance that the tools and equipment being used by investigators are forensically sound and will not result in the evidence being ruled inadmissible or discredited.

Standards and Baselines

For data to be admissible as evidence in legal proceedings, testing and experimentation must be completed that generates repeatable1 and reproducible2 results; that is, the same procedure must consistently yield the same outcome.
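As an illustration, repeatability of a hashing procedure can be demonstrated by running it twice against the same source and comparing the outputs. This is a minimal sketch; the file name, sample data, and helper function are hypothetical stand-ins for an actual acquired evidence image.

```python
import hashlib
import tempfile

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Stand-in for an acquired evidence image (illustrative sample data).
with tempfile.NamedTemporaryFile(delete=False, suffix=".img") as f:
    f.write(b"\x00" * 4096)
    image_path = f.name

# Repeatability: the same procedure applied to the same source must
# produce an identical result on every run.
run_1 = sha256_of(image_path)
run_2 = sha256_of(image_path)
assert run_1 == run_2, "procedure is not repeatable"
```

Reproducibility extends the same idea across different operators, equipment, or laboratories: an independent run of the documented procedure must arrive at the same digest.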
In 1993, the US Supreme Court decided in Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 that Rule 702 of the Federal Rules of Evidence (1975) did not incorporate a “general acceptance” test as the basis for assessing whether scientific expert testimony is based on reasoning or methodology that is scientifically valid and can properly be applied to facts.
The Court stated that evidence based on innovative or unusual scientific knowledge may be admitted only after it has been established that the evidence is reliable and scientifically valid. Under this ruling, the Daubert Standard was established with the following criteria applied for determining the reliability of scientific techniques:
1. Has the theory or technique in question undergone empirical testing?
2. Has the theory or technique been subjected to peer review and publication?
3. Does the theory or technique have any known or potential error rate?
4. Do standards exist, and are they maintained, for the control of the theory or technique’s operation?
5. Has the theory or technique received general acceptance in the relevant scientific community?
These criteria require that scientific theory or techniques must be subjected to hypotheses and experimentation—based on gathering, observing, and demonstrating repeatable and reproducible results—to prove or falsify the theory or techniques.
As a result of the Daubert Standard, all digital forensic tools and equipment must be validated and verified to meet specific evidentiary and scientific criteria in order for evidence to be admissible in legal proceedings. In the context of applying the Daubert Standard to software testing, there is a clear distinction between the activities and steps performed as part of both validation3 and verification.4

Building a Program

The ability to design, implement, and maintain a defensible validation and verification program is an essential characteristic of a digital forensic professional. With this type of program in place, the digital forensic team can provide assurance of the capabilities of their tools and equipment and identify what limitations, if any, exist so that compensating actions can be applied, such as acquiring other tools and equipment or creating additional procedures.
The methodology for performing tools and equipment testing consists of several distinct activities and steps that must be completed in a linear workflow. To formalize this workflow, a process model must be implemented to govern the completion of each activity and step in the sequence they must be executed. Illustrated in Figure C.1, the phases proposed in chapter “Investigative Process Models” for both the high-level digital forensic process model and digital forensic readiness process model are consistently applied to the activities and steps performed in tool and equipment testing. Consisting of four phases, the digital forensic tool testing process model focuses on the basic categories of tools and equipment testing.

Preparation

Gathering

This phase of the testing program is either the longest and most time-consuming or the easiest and fastest. Which it is depends on how well the plan’s objectives, scope, and schedule were documented during the preceding preparation phase. In this phase, the tactical approaches outlined in the plan’s strategy are completed to acquire the tool or equipment that will be subject to the testing. Prior to making any purchases, it is essential that both parties enter into contractual agreements with each other, such as a nondisclosure agreement (NDA) and a statement of work (SOW).
• An NDA is a formal document that creates a mutual relationship between parties to protect nonpublic, confidential, or proprietary information and specifies the materials, knowledge, or information that will be shared but must remain restricted from disclosure to other entities.
• An SOW is a formal document, often viewed as legally equivalent to a contract, that captures and addresses the details of the testing to be performed.
Both documents contain terms and conditions that are considered legally binding between the parties involved. These documents must be reviewed and approved by appropriate legal representatives before each party commits to them by signing. Without prescribing exact wording for how the content within these documents should be structured, at a minimum the following sections should be included:
Introduction: A statement referring to the NDA as the governing agreement for terms and provisions incorporated in the SOW

Processing

Software testing is one of many activities used as part of the systems development life cycle (SDLC) to determine if the output results meet the input requirements. This phase is where the documented test cases are executed and success criteria are measured to verify and validate the functionality and capabilities of the tool or equipment. Before starting the activities and steps involved in executing test cases, it is important to understand the differences between verification and validation.

Verification

Dynamic analysis involves performing a series of test cases using the tool or equipment following manual or automated techniques to assess the functional components. This category applies a series of programmatic testing methodologies to support:
• functional assessments of documented features to identify and determine actual capabilities
• structural review of individual components to further assess specific functionalities
• random evaluation to detect faults or unexpected output from documented features
Static analysis involves performing a series of test cases using the tool or equipment following manual or automated techniques to assess the nonfunctional components. This category applies a series of programmatic testing methodologies to support:
• consistency of internal coding properties such as syntax, typing, parameter matching between procedures, and translation of specifications
• measurement of internal coding properties such as structure, logic, and probability for error
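Dynamic verification of a documented feature can be driven by a table of controlled inputs paired with expected outputs that were established independently of the tool. The sketch below assumes a hypothetical hashing feature under test; the expected values are published SHA-256 test vectors, and any mismatch is recorded as a failed test case.

```python
import hashlib

def tool_under_test(data: bytes) -> str:
    """Stand-in for the documented hashing feature being verified."""
    return hashlib.sha256(data).hexdigest()

# Each test case pairs a controlled input with an expected result
# established independently of the tool (published SHA-256 vectors).
test_cases = [
    (b"", "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"),
    (b"abc", "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"),
]

results = []
for data, expected in test_cases:
    actual = tool_under_test(data)
    results.append({"input": data, "expected": expected,
                    "actual": actual, "passed": actual == expected})

# Any failure here documents a limitation (or a potential error rate)
# that must be recorded in the test case report.
failures = [r for r in results if not r["passed"]]
```

Recording every pass and failure, rather than stopping at the first mismatch, supports the Daubert criterion of documenting a known or potential error rate.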

Validation

In general terms, a validation process confirms through objective examination and provision of evidence whether “you built the right product,” proving that the tool or equipment fulfills its intended use and user requirements. Validation activities rely on the application of an all-inclusive testing methodology that happens both during and after the SDLC. Techniques used during the validation of tools or equipment can be performed by:
• intentionally injecting faults into different components (eg, hardware, software, memory) to observe the response
• determining what the probability of reoccurrence is for a weakness identified in different components (eg, hardware, software, memory), and subsequently selecting countermeasures as a means of reducing or mitigating exposures
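The fault-injection technique above can be sketched as follows. Both the acquisition routine and the failing evidence source are hypothetical stand-ins; the point is to observe whether the tool surfaces an injected fault rather than silently producing a partial result.

```python
import hashlib
import io

class TruncatedSource(io.BytesIO):
    """Evidence source that fails partway through a read, simulating
    an intentionally injected hardware-level fault."""
    def __init__(self, data: bytes, fail_after: int):
        super().__init__(data)
        self.fail_after = fail_after

    def read(self, size=-1):
        remaining = self.fail_after - self.tell()
        if remaining <= 0:
            raise IOError("simulated media read error")
        if size < 0 or size > remaining:
            size = remaining
        return super().read(size)

def acquire(source, chunk_size=1024):
    """Hypothetical acquisition routine under validation: it should
    fail loudly rather than emit an incomplete image."""
    digest = hashlib.sha256()
    while True:
        chunk = source.read(chunk_size)
        if not chunk:
            break
        digest.update(chunk)
    return digest.hexdigest()

# Inject the fault and observe the response: a forensically sound
# tool surfaces the error instead of returning a partial result.
faulty = TruncatedSource(b"\xff" * 8192, fail_after=4096)
try:
    acquire(faulty)
    outcome = "fault missed"
except IOError:
    outcome = "fault detected"
```

Repeating such injections across components and recording how often the weakness recurs feeds directly into the countermeasure selection described above.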
Completing test cases can be a lengthy and time-consuming process, but it must be thorough: it is fundamental to proving that the tool or equipment maintains and protects the admissibility and credibility of digital evidence, ultimately protecting the credibility of forensic professionals. While indirect factors such as caseload or other work responsibilities affect the amount of time spent on testing, the following direct influences cannot be overlooked and must be maintained during testing.
• Regulating testing processes within secure and controlled lab environments

Presentation

Once testing has concluded, a summary of all activities, test results, conclusions, etc. must be prepared using a formalized test case report, as seen in the template provided in the Templates section of this book. While the initial draft of the final report might be written by a single person, it should be reviewed for accuracy and authenticity by peers and management before being finalized. This review process ensures that, as illustrated in the test case report template, the scope and results of the testing meet the specific business objectives so that when it comes time to obtain approvals to finalize, the testing process will not be challenged.
Having obtained final authorizations and approvals on the test case report, it can now be published and distributed to stakeholders affected by the testing outcomes. Using the testing results, these stakeholders can develop standard operating procedures (SOPs) for using the tool or equipment to gather and process digital evidence.

Summary

Maintaining the integrity of digital evidence throughout its lifetime is an essential requirement of every digital forensics investigation. Organizations must consistently demonstrate their due diligence by providing a level of assurance that the principles, methodologies, and techniques used during a digital forensic investigation are forensically sound.