Cover
Title Page
Copyright
Introduction
Chapter 1: Formal Techniques for Verification and Validation
1.1. Introduction
1.2. Realization of a software application
1.3. Characteristics of a software application
1.4. Realization cycle
1.4.1. The V-model and other realization cycles
1.4.2. Quality control (the impact of ISO standard 9001)
1.4.3. Verification and validation
1.4.3.1. Verification
1.4.3.2. Validation
1.5. Techniques, methods and practices
1.5.1. Static verification
1.5.1.1. Manual static analysis
1.5.1.1.1. Software inspection: principles
1.5.1.1.2. Documentary inspection
1.5.1.1.3. Code inspection
1.5.1.1.4. Software error effects analysis
1.5.1.2. Tool-driven static analysis
1.5.1.2.1. Compilation
1.5.1.2.2. Identification of differences
1.5.1.2.3. Verification of coding rules
1.5.1.2.4. Measure of complexity
1.5.1.2.5. Structural analysis
1.5.1.2.6. Symbolic execution
1.5.1.2.7. Abstract program interpretation
1.5.1.2.8. Modeling
1.5.1.2.9. Simulation
1.5.1.2.10. Model for verification
1.5.1.2.11. Model for proof
1.5.1.2.12. Formal methods
1.5.2. Dynamic verification
1.5.2.1. Analysis of execution
1.5.2.2. Test
1.5.2.2.1. Selection of test cases
1.5.2.2.2. Execution of test cases
1.5.2.2.3. Test scoring
1.5.2.2.4. Measure of coverage
1.5.2.2.5. Test and verification
1.5.3. Validation
1.6. New issues with verification and validation
1.7. Conclusion
1.8. Bibliography
Chapter 2: Airbus: Formal Verification in Avionics
2.1. Industrial context
2.1.1. Avionic systems
2.1.2. A few examples
2.1.3. Regulatory framework
2.1.4. Avionic functions
2.1.5. Development of avionics levels
2.1.5.1. Regulatory and development frame
2.1.5.2. Development of avionic software
2.2. Two methods for formal verification
2.2.1. General principle of program proof
2.2.2. Static analysis by abstract interpretation
2.2.2.1. Calculation of the abstract invariant by a static analyzer
2.2.2.1.1. Commentary regarding Table 2.2
2.2.2.1.2. Abstract domains
2.2.2.1.3. Non-relational domains
2.2.2.1.4. Relational domains
2.2.2.2. Safety (soundness) of the abstract invariant
2.2.2.2.1. Commentary regarding Table 2.3
2.2.2.2.2. Commentary on Table 2.4
2.2.2.2.3. Safety of F#
2.2.2.2.4. Verification
2.2.2.2.5. F# is a result of the composition of several abstractions
2.2.3. Program proof by calculation of the weakest precondition
2.3. Four formal verification tools
2.3.1. Caveat
2.3.1.1. Simplifier
2.3.1.2. Demonstrator
2.3.1.3. Proof process
2.3.1.4. Interactive transformer of predicates
2.3.2. Proof of the absence of run-time errors: Astrée
2.3.2.1. Implicit specification
2.3.2.2. Characteristics
2.3.2.3. Abstract domains
2.3.2.3.1. Intervals domain
2.3.2.3.2. Octagon domains
2.3.2.3.3. Domain of the decision trees
2.3.2.3.4. Domain of the arithmetic-geometric deviations and clocks
2.3.2.3.5. Linearization domain for floating-point expressions
2.3.2.3.6. Filter domains
2.3.2.3.7. High-pass filters
2.3.2.3.8. Second-order filters
2.3.2.3.9. Partitioning domain
2.3.3. Stability and numerical precision: Fluctuat
2.3.3.1. The IEEE-754 standard
2.3.3.2. The Fluctuat static analyzer
2.3.3.2.1. General presentation
2.3.3.2.2. Abstract domains based on affine arithmetic
2.3.4. Calculation of the worst-case execution time: aiT (AbsInt GmbH)
2.4. Examples of industrial use
2.4.1. Unitary proof (verification of low-level requirements)
2.4.1.1. Introduction
2.4.1.2. Methodology of unitary proof
2.4.1.2.1. Reminder of the methodology of unitary tests
2.4.1.2.2. Integration of unitary proof in the development cycle
2.4.1.2.3. Specification phase
2.4.1.2.4. Example
2.4.1.2.5. The proof phase
2.4.1.2.6. Example
2.4.1.3. Report
2.4.2. The calculation of worst-case execution time
2.4.2.1. Presentation of the software analyzed
2.4.2.1.1. From SCADE® to C
2.4.2.1.2. Dynamic architecture
2.4.2.2. Calculation of the WCET by static analysis
2.4.2.2.1. Why static analysis?
2.4.2.2.2. Calculation of the WCET of a Ti
2.4.2.3. Report
2.4.3. Proof of the absence of run-time errors
2.4.3.1. Preparation of the program
2.4.3.2. Configuration of the analysis
2.4.3.3. Execution of the analysis
2.4.3.4. Study of alarms
2.4.3.4.1. Why zero alarm?
2.4.3.4.2. Investigating an alarm
2.4.3.5. Parameter tuning
2.4.3.6. Conclusion
2.6. Bibliography
Chapter 3: Polyspace
3.1. Overview
3.2. Introduction to software quality and verification procedures
3.3. Static analysis
3.4. Dynamic tests
3.5. Abstract interpretation
3.6. Code verification
3.7. Robustness verification or contextual verification
3.7.1. Robustness verifications
3.7.2. Contextual verification
3.8. Examples of Polyspace® results
3.8.1. Example of safe code
3.8.2. Example: dereferencing of a pointer outside its bounds
3.8.3. Example: inter-procedural calls
3.9. Carrying out a code verification with Polyspace
3.10. Use of Polyspace® can improve the quality of embedded software
3.10.1. Begin by establishing models and objectives for software quality
3.10.2. Example of a software quality model with objectives
3.10.3. Use of a subset of languages to satisfy coding rules
3.10.4. Use of Polyspace® to reach software quality objectives
3.11. Carrying out certification with Polyspace®
3.12. The creation of critical onboard software
3.13. Concrete uses of Polyspace®
3.13.1. Automobile: Cummins improves the reliability of its engine controllers
3.13.2. Aerospace: EADS guarantees the reliability of satellite launches
3.13.3. Medical devices: a code analysis leads to a recall of the device
3.13.4. Other examples of the use of Polyspace®
3.13.4.1. Tackling sudden acceleration problems
3.13.4.2. Use of Polyspace® on Ada code
3.13.4.3. Ensuring the reliability of diesel engines
3.13.4.4. Verification of software in airbag opening systems
3.13.4.5. Validation of control-command software in nuclear power stations
3.13.4.6. Verification of Nissan motor control software
3.14. Conclusion
3.15. Bibliography
Chapter 4: Software Robustness with Regard to Dysfunctional Values from Static Analysis
4.1. Introduction
4.2. Normative context
4.3. Elaboration of the proof of the robustness method
4.4. General description of the method
4.4.1. Required or effective value control
4.4.2. Computation of the required control
4.4.2.1. Identification of software and function inputs
4.4.2.2. Location of production and consumption
4.4.2.3. Computation of functional value domains
4.4.3. Verification of effective control
4.5. Computation of the control required
4.5.1. Identification of production/consumption of inputs
4.5.2. Computation of value domains
4.6. Verification of the effective control of an industrial application
4.6.1. Target software
4.6.2. Implementation
4.6.2.1. Preliminary analysis of the application
4.6.2.2. Instrumentation and analysis of the instrumented code
4.6.2.3. Source code audit
4.6.3. Results
4.6.3.1. Direct results
4.6.3.2. Indirect results
4.7. Discussion and viewpoints
4.8. Conclusion
4.9. Bibliography
Chapter 5: CodePeer — Beyond Bug-finding with Static Analysis
5.1. Positioning of CodePeer
5.1.1. Mixing static checking and code understanding
5.1.2. Generating contracts by abstract interpretation
5.2. A tour of CodePeer capabilities
5.2.1. Find defects in code
5.2.1.1. Run-time errors
5.2.1.2. User checks
5.2.1.4. Logic errors/warnings
5.2.1.5. Race conditions
5.2.2. Using annotations for code reviews
5.2.3. Categorization of messages
5.2.4. Help writing run-time tests
5.2.5. Different kinds of output
5.3. CodePeer’s inner working
5.3.1. Overview
5.3.2. From Ada to SCIL
5.3.3. Object identification
5.3.4. Static single assignment and global value numbering
5.3.5. Possible value propagation
5.4. Conclusions
5.5. Bibliography
Chapter 6: Formal Methods and Compliance to the DO-178C/ED-12C Standard in Aeronautics
6.1. Introduction
6.2. Principles of the DO-178/ED-12 standard
6.2.1. Inputs of the software development process
6.2.2. Prescription of objectives
6.3. Verification process
6.4. The formal methods technical supplement
6.4.1. Classes of formal methods
6.4.2. Benefits of formal methods to meet DO-178C/ED-12C objectives
6.4.3. Verification of the executable code at the source level
6.4.4. Revision of the role of structural coverage
6.4.5. Verification of the completeness of requirements and detection of unintended functions
6.5. LLR verification by model-checking
6.6. Contribution to the verification of robustness properties with Frama-C
6.6.1. Introduction to Frama-C
6.6.1.1. ACSL: an annotation language which is common to all plug-ins
6.6.1.2. Abstract interpretation with the Value Analysis plug-in
6.6.1.3. Deductive verification: Jessie and WP plug-ins
6.6.1.3.1. Notations and principles
6.6.1.3.2. Implementations
6.6.1.4. Reducing code by slicing
6.6.1.5. Other plug-ins distributed with the Frama-C platform
6.6.1.6. Plug-in development
6.6.2. Presentation of the case study
6.6.3. Analysis process of the case study
6.6.3.1. Appropriation of the code
6.6.3.2. Chronology of analyses
6.6.4. Conclusion on the case study
6.7. Static analysis and preservation of properties
6.8. Conclusion and perspectives
6.9. Appendices
6.9.1. Automatically annotating a source code
6.9.2. Automatically subdividing input intervals
6.9.3. Introducing cut strategies for deductive verification
6.9.4. Combining abstract interpretation, deductive verification and functions which can be evaluated in assertions
6.9.5. Validating ACSL lemmas by formal calculus
6.9.6. Combining static and dynamic analysis
6.9.7. Finalizing
6.10. Acknowledgements
6.11. Bibliography
Chapter 7: Efficient Method Developed by Thales for Safety Evaluation of Real-to-Integer Discretization and Overflows in SIL4 Software
7.1. Introduction
7.2. Discretization errors in the embedded code production chain
7.2.1. Presentation of the issue
7.2.2. Objective of the analysis of the real-to-integer discretization
7.3. Modeling of the creation and propagation of uncertainties
7.3.1. Creation of uncertainties
7.3.1.1. Basic hypotheses of the analysis
7.3.1.2. Uncertainties computing: definitions and formalism
7.3.1.3. Arithmetic of uncertainty intervals
7.3.1.3.1. Calculation rules
7.3.1.3.2. Properties of operations on intervals
7.3.2. Propagation of uncertainties
7.3.2.1. Addition
7.3.2.2. Subtraction
7.3.2.3. Multiplication
7.3.2.4. Division
7.3.2.5. Didactic example of propagated uncertainties
7.4. Good practice of an analysis of real-to-integer discretization
7.4.1. Code extraction
7.4.2. Functional code reorganization
7.4.3. Algorithmic breakdown in basic arithmetic relations
7.4.4. Computation of uncertainties
7.5. Arithmetic overflow and division by zero
7.5.1. Analysis of arithmetic overflow risk
7.5.2. Analysis of the risk of division by zero
7.6. Application to a rail signalling example
7.6.1. General presentation of the communication-based train controller system
7.6.2. Example of analysis of the behavior of speed control
7.6.2.1. Simplified speed control algorithm
7.6.2.2. Static analysis of code
7.6.2.3. Numerical application
7.6.3. Industrial scale view: a few numbers
7.7. Conclusion
7.8. Annex: proof supplements
7.8.1. Proof 1: existence and uniqueness of integer division
7.8.2. Proof 2: framing the error of integer division
7.8.3. Proof 3: rules of the arithmetic of uncertainty intervals
7.8.4. Proof 4: framing of uncertainties from a product
7.9. Bibliography
Conclusion and viewpoints
Glossary
List of Authors
Index