© Springer Nature Switzerland AG 2019
Reinhard Haberfellner, Olivier de Weck, Ernst Fricke and Siegfried Vössner: Systems Engineering. https://doi.org/10.1007/978-3-030-13431-0_16

16. Encyclopedia/Glossary

Reinhard Haberfellner1 , Olivier de Weck2, Ernst Fricke3 and Siegfried Vössner4
(1)
Institute of General Management and Organization, Graz University of Technology, Graz, Austria
(2)
Engineering Systems Division, MIT, Cambridge, MA, USA
(3)
BMW AG, Munich, Germany
(4)
Engineering and Business Informatics, Graz University of Technology, Graz, Austria
 

An → arrow in the text refers to a corresponding key word in the encyclopedia.

ABC Analysis (Synonyms: Pareto Analysis, 80–20 Rule)

An important categorization technique in information processing that makes it possible to identify groups of items that contribute in certain intensities to a chosen output measure. For example: 80% of the sales are made with 20% of the articles (or clients), or 10% of the population has control over 90% of the assets.
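The categorization is easy to compute. Below is a minimal sketch in Python; the article codes, sales figures, and the 80%/95% class cut-offs are hypothetical (the cut-offs vary in practice):

```python
# A minimal sketch of an ABC analysis with hypothetical sales data per article.
sales = {"A17": 52000, "B03": 19000, "C44": 8000, "D21": 4500,
         "E09": 2500, "F12": 1500, "G30": 900, "H77": 600}

total = sum(sales.values())
cumulative = 0.0
for article, amount in sorted(sales.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += amount
    share = cumulative / total
    # Common (arbitrary) cut-offs: class A up to 80%, B up to 95%, C the rest.
    category = "A" if share <= 0.80 else "B" if share <= 0.95 else "C"
    print(f"{article}: {amount:>6}  cumulative {share:5.1%} -> class {category}")
```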
  • Reference:

  • Koch, R. (2004): Living the 80/20 Way.

Activity Sampling (Synonyms: Work Sampling, Multi-Moment Observations)

Activity sampling is a statistical method for determining the proportion of time spent by workers in various defined categories of activity (e.g., setting up a machine, assembling parts, waiting, etc.). Its great advantage over other statistical techniques is the efficiency with which it measures and analyzes the nature and performance figures of complex processes and interactions.

In an activity sampling study, a sufficiently large and representative number of random observations is made during a specified amount of time. The nature and frequency of observed activities are recorded and later analyzed.

Activity sampling is frequently used when calculating standard times for manual manufacturing tasks or for analyzing extremely complex process interactions in socio-technical systems.
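Because each random observation is a Bernoulli trial, the observed proportion can be reported with a confidence interval. A minimal sketch, assuming hypothetical tallies and the usual normal approximation:

```python
import math

# Hypothetical tallies from a work-sampling study: 400 random observations,
# 96 of which found the worker waiting (idle).
n, hits = 400, 96
p_hat = hits / n                         # estimated share of time spent waiting
se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of the proportion
ci = 1.96 * se                           # half-width of a 95% confidence interval
print(f"waiting share ≈ {p_hat:.1%} ± {ci:.1%} (95% CI)")
```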
  • Reference:

  • Groover, M. P.: Work Systems and the Methods, Measurement, and Management of Work.

Agile Systems Engineering

Refers either to an agile systems engineering approach or to the engineering of agile systems. See Sect. 1.3 on the “Agility of Systems.”

Analogy Method

→ Creativity technique for finding solutions. The main idea is to look for possible solutions to a problem outside the problem scope by searching for analogies, which can be similarities of form, characteristics, or function. → Bionics, for example, is such a method: it systematically screens nature for principles that are transferable to mechanical design improvements or that can be used to develop products with completely new characteristics.
  • References:

  • Gerardin, L. (1968): Bionics.

  • Rossmann, T., et al. (2007): Bionics – Natural Technologies and Biomimetics.

Analysis Techniques

Techniques that are used for a systematic investigation of all aspects/components/elements of an object (or subject) based on defined criteria. These are then sorted, structured, and evaluated – very often also with respect to their interaction. Analysis techniques are very important to systems engineering and are used to investigate the past, present, and, when designing, the future (requirements) of systems.

Mathematical methods, simulation runs, plausibility tests, and destruction analyses support the various techniques, which are often named after their purpose (→ reliability analysis, → security analysis, disaster analysis, compatibility analysis, consequence analysis, → risk analysis, cause-effect analyses, → cost-effectiveness analyses, etc.).

In designing systems, analysis techniques can either be used to predict a model’s behavior ahead of time (ex ante) or to investigate the behavior of the realized model (object) in the real environment. In spite of all the progress in mathematical/scientific modeling and computer performance, and despite sophisticated simulations of system behaviors, often only the construction and analysis of a real, functioning system can clarify whether principles and solution ideas work. When designing chemical engineering processes, for example, miniature plants or pilot plants are often constructed first for testing. When designing machines, we often see functional models, pilot samples, or prototypes. Finally, pilot runs are used to test production processes and the means of serial and mass production.

With workflow planning in particular, test runs of various sizes right up to parallel operation of the old and new systems are carried out.

Analytic Hierarchy Process, AHP

A method for supporting decision-making processes, similar to the → value-benefit analysis (VBA). The essential difference from the VBA is the method of weighting the criteria, which are determined by means of paired comparisons. The choice of method is largely a matter of taste; besides, there is no such thing as a truly objective method for the evaluation of variants.
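As an illustration, a minimal sketch in Python of the paired-comparison weighting; the 3 × 3 comparison matrix on Saaty’s 1–9 scale is hypothetical, and the weights are read from the principal eigenvector, as in Saaty’s method:

```python
import numpy as np

# Hypothetical pairwise comparisons of three criteria on Saaty's 1-9 scale:
# A[i, j] says how much more important criterion i is than criterion j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)           # index of the principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # normalized criteria weights
print("weights:", np.round(w, 3))

# Consistency ratio CR = ((lambda_max - n) / (n - 1)) / RI, with RI = 0.58 for n = 3.
n = A.shape[0]
cr = ((eigvals.real[k] - n) / (n - 1)) / 0.58
print(f"consistency ratio ≈ {cr:.3f} (values below 0.1 are usually acceptable)")
```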
  • Reference:

  • Saaty, Th. L. (2001): Decision Making for Leaders

Assignment or Allocation Problems

Special cases of → operations research, for which special solution methods were developed:
  • Transportation problems can generally be described as follows: at certain starting points physically separated from one another (points of departure), specific resources (e.g., vehicles, goods for shipment) are available in certain quantities. At certain endpoints (recipients) there is a specific need for the same resources. The connection paths including transport times and costs of the transfer from each starting point to each endpoint are given. The available resources are to be sent from the starting points to the endpoints so that the shipping expense (costs, times, vehicle use) is minimal.

  • An allocation problem can also be seen as a special instance of a transport problem. At each starting point, there is just one unit of the required resource available, and at each endpoint just one unit is requested. (Example: the best possible allocation of n persons to n workplaces or the transfer of n vehicles from n starting points to n endpoints so that the overall mileage is minimized).

    References: See → linear programming, → operations research.
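As an illustration of the allocation problem above, a minimal sketch using SciPy’s Hungarian-method solver; the mileage matrix is hypothetical:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical mileage of transferring vehicle i (row) to endpoint j (column).
cost = np.array([[ 90, 76, 75],
                 [ 35, 85, 55],
                 [125, 95, 90]])

rows, cols = linear_sum_assignment(cost)   # optimal one-to-one assignment
for i, j in zip(rows, cols):
    print(f"vehicle {i} -> endpoint {j} ({cost[i, j]} km)")
print("total mileage:", cost[rows, cols].sum())
```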

Bar Chart (Synonym: Gantt Chart)

An aid for the graphic representation of the duration of activities (by the length of the bar) and their temporal arrangement, e.g., in a project (along the time axis). In their original form, bar charts contain no logical dependencies among the represented activities; however, these can be added. IT-supported project management systems permit the automatic generation of bar charts from a → network plan.

Benchmarking

Benchmarking is a continual process of comparing entities, such as one’s own products, performance, practices, and processes, with those of others in order to set goals to perform as well as or better than the best, and thereby achieve a competitive advantage.

The origin of the term (according to Wikipedia): a cabinet-maker’s workbench with a mark (bench mark) used, for example, to make all the legs of a table the same length.
  • References:

  • Boxwell, R.G. (1994): Benchmarking for Competitive Advantage.

  • Walleck, A.S., et al. (1991): Benchmarking World-class Performance.

Bionics

A special → creativity technique belonging to the group of → analogy methods. The essential idea: looking for analogies in biology and using them for the solution of technical problems. Familiar examples are material surfaces that exhibit the lotus effect, that is, minimal wettability and a high degree of self-cleaning. Also: wing shapes for airplanes (so-called winglet shape), which are based on bird wings (eagle, buzzard, condor). Also: Velcro tape based on the adhesive properties of burdock.
  • References:

  • Rossmann T.; Tropea C.; Vincent, J. (2007): Bionics.

  • Gerardin, L. (1968): Bionics.

  • Marteka, V. (1965): Bionics.

Brainstorming

A → creativity technique that harnesses a group’s problem-solving skills: the flow of ideas is encouraged by a set of game-like rules.

Brainstorming rules: (1) clear questioning; (2) public recording of the remarks made (e.g., on a flipchart) to show that there is no censorship of ideas; (3) temporal separation between idea gathering (unordered, uncritical – no judgments) and the subsequent evaluation (critical assessment of applicability only at this point).
  • Reference:

  • Osborn, A. F. (1957): Applied Imagination.

Business Process Model and Notation, BPMN

The Business Process Model and Notation (BPMN) is a standard for business process modeling that provides a graphical notation for specifying business processes in a Business Process Diagram (BPD), based on a flowcharting technique very similar to activity diagrams in the → Unified Modeling Language (UML), which is likewise developed and standardized by the Object Management Group (OMG). The objective of BPMN is to support business process management for both technical and business users by providing a notation that is intuitive to business users yet able to represent complex process semantics. The specification also provides a mapping between the graphics of the notation and the underlying constructs of execution languages, particularly the Business Process Execution Language (BPEL).
  • References:

  • Silver, Bruce (2011): BPMN Method and Style, with BPMN Implementer’s Guide.

  • White, Stephen A.; Bock, Conrad (2011). BPMN 2.0 Handbook: Methods, Concepts, Case Studies and Standards in Business Process Management Notation.

  • Grosskopf, A.; Decker, G.; Weske, M. (2009): The Process: Business Process Modeling Using BPMN.

Business Re-engineering, See → Re-engineering

Capability Maturity Model, CMM; Capability Maturity Model Integration, CMMI

The CMM in its original form was a model for evaluating the quality (maturity) of software processes in organizations (software development, maintenance, configuration, etc.), plus determining measures for their improvement. The CMM was replaced by the more sophisticated CMMI in 2003.
  • References:

  • Gallagher, B.P.; Phillips, M.; Richter, K.J.; Shrum, S. (2009): CMMI-ACQ.

  • Chrissis, M.B.; Konrad, M.; Shrum, S. (2011): CMMI.

Card Technique (Synonym: Metaplan Method)

→ Creativity technique in which the ideas and statements are not expressed orally, as in → brainstorming, but rather are written on small cards by every participant and posted on a bulletin board. Advantage: easier evaluation, e.g., in terms of clustering similar ideas. Disadvantage: depends on the availability of aids such as a bulletin board.

Checklists

Lists of activities that are necessary for the completion of tasks. There is a meaningful distinction between (1) checklists of a compulsory nature: every activity must be carried out (for example: airplane takeoff) and (2) checklists of a discretionary nature as an aid in searching for ideas – or as a stimulus for one’s own thinking (remember the important things). Checklists are usually strongly task-/context-oriented and therefore not universally applicable.

Configuration Management, CM

The impetus for configuration management (CM) was the continually increasing complexity of products, caused by the variety of possible combinations of modules and the continual change in product configurations. The first solution approaches to CM were developed in the aircraft and aerospace industries. Similar complexity problems were also evident in other sectors in which – for both suppliers and clients – it always had to be clear which parts or modules constituted the purchased or delivered product. Methods and instruments of CM were refined and specialized for various application fields. Today, CM techniques are part of many disciplines, such as product data management (PDM), software configuration management, etc.

The American National Standards Institute (ANSI), in cooperation with the Electronic Industries Alliance (EIA) has defined CM as follows: “Configuration management is a management process for establishing and maintaining consistency of a product’s performance, its functional and physical attributes, with its requirements, design, and operational information, throughout its life.” (Wikipedia)

  • References:

  • ISO 10007:2003: Quality management systems - Guidelines for configuration management.

  • Lyon, D.D. (2000): Practical Configuration Management.

Continuous Improvement Process

See → Kaizen

Correlation Analysis

Statistical procedure in which the magnitude of the mutual dependency of various variables in a sample (for example, people’s weights and body heights) is modeled, often linearly, as in the Pearson product–moment correlation. Correlation is expressed by a so-called correlation coefficient r, which can take on values between −1 and +1:
  • A positive (+) correlation means that variables are coupled and changing their magnitude in the same direction (such as horsepower and acceleration of a car).

  • A negative (−) correlation means that variables are coupled and changing their magnitude in opposite directions (such as the dependency of fuel consumption and weight or engine power of a car).

  • r = 0 means that the variables are not linearly coupled (uncorrelated) – though not necessarily independent – such as the body height and political beliefs of a person.

However: even a correlation coefficient r near 1 does not necessarily mean a true dependency between two variables, for an outside, third variable can influence both of them, or there can be some other, unknown relationship. Therefore, it is important not to confuse correlation with causation!

A classic example of such a spurious correlation is the statistically significant correlation of the number of stork nests and the number of births in a given year in Copenhagen, where one should not draw the conclusion that storks deliver babies.
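A minimal sketch of computing Pearson’s r for the weight/height example, with hypothetical sample values:

```python
import numpy as np

# Hypothetical sample: weights (kg) and body heights (cm) of eight people.
weight = np.array([62, 71, 80, 55, 90, 68, 75, 83])
height = np.array([165, 174, 182, 160, 188, 172, 178, 185])

r = np.corrcoef(weight, height)[0, 1]   # Pearson product-moment correlation
print(f"r = {r:.3f}")  # close to +1: strong positive linear coupling
# Remember: even r near 1 does not establish causation.
```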
  • References:

  • Walpole, R.E. (2007, 9th ed.): Probability & Statistics for Engineers & Scientists.

  • Silver, N. (2013): The Signal and the Noise.

  • Mann, P.S. (2012): Introductory Statistics

Cost-Benefit Analysis, CBA

Cost-benefit analysis (CBA) is particularly used in public services to support decision-making, whereby it aids in evaluating macroeconomic proposals (measures by public authorities) or the not-for-profit effects of microeconomic proposals. A characteristic of CBA is the expression of both costs and benefits in monetary units. For more information, see Sect. 6.4.2 (evaluation methods); an example can be found in the case study in Chap. 9.
  • References:

  • Sassone P.G.; Schaffer, W. A. (1978): Cost-benefit Analysis - A Handbook.

  • Nas, T.F. (1996): Cost-benefit analysis: Theory and application.

Cost-Effectiveness Analysis

→ Valuation technique for the comparative assessment of product variants, investment decisions, etc. Cost criteria are expressed in monetary terms; the benefit, or effectiveness, is expressed in a key figure determined the same way as in the value-benefit analysis (determination of the criteria, weighting, assignment of scores, multiplication by weighting points, and summation over all effectiveness criteria). In a final step, the sum of the costs incurred in a single period is divided by the cumulative effectiveness key figure. The result is an abstract key figure that indicates what one effectiveness point costs for each variant. The natural decision rule is to choose the variant for which a point costs the least. The results of an assessment can also be represented graphically by placing the cumulative costs on one axis and the cumulative effectiveness key figure on the other. See Part III, Sect. 6.4.2.4.
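A minimal worked sketch of the final step, with hypothetical costs and effectiveness scores:

```python
# Hypothetical variants: yearly costs in EUR and the cumulative effectiveness
# score from a value-benefit-style assessment.
variants = {"Variant A": (120_000, 480),
            "Variant B": ( 95_000, 350),
            "Variant C": (140_000, 610)}

for name, (cost, effectiveness) in variants.items():
    print(f"{name}: {cost / effectiveness:7.2f} EUR per effectiveness point")
# Decision rule from the text: choose the variant with the cheapest point.
```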
  • Reference:

  • Levin, H.M. (1983): Cost-Effectiveness: A Primer.

Creativity Techniques, CTs

Creativity techniques (CTs) are intended to overcome passive waiting for ideas and to increase the probability of finding good solutions actively and quickly. Depending on their purpose, they can be divided into those that:
  • Are ascribed to purely intuitive processes

  • Promote analogous and/or contrasting linking of ideas (restructuring of the solution field)

  • Are intended to lead to a variety of solutions by means of a combinational process

Thus, individual techniques can be assigned to several categories. Creativity techniques include in particular → brainstorming, → card technique, and → method 635. The characteristic of these techniques is that criticism and discussion of the reasonableness and practicability of an idea are not allowed at first. On the other hand, it is allowable, and even desirable, to seize on an expressed idea and modify it or twist its meaning around.

This glossary also includes → analogy methods, → synectics, → bionics, and → morphological analysis. Attribute listing seeks further manifestations of the characteristics, functions, and effects (traits) of an existing or newly found solution, starting from each trait; its emphasis is on improvement. Beyond these methods, there are comprehensive systems for finding solutions, such as G. Nadler’s ideals concept (see Part I, Sect. 2.1.4.2), systematic heuristics, and G. Altshuller’s theory of inventive problem-solving (→ TRIZ).

In most cases, the above-mentioned techniques are applicable to both individual and group work. Requirements for good, efficient progress are experienced moderators and some training of the group members. With many techniques, recording the results of group work requires special care and tact (e.g., when preference is given to one formulation from among several statements of comparable content). While dealing with a problem, phases of individual and group work may alternate as various techniques are applied. To achieve, for example, the broad solution field sought by morphological analysis, it is useful to draw up the list of parameters using intuitive techniques (e.g., brainstorming), along with a broad array of solutions against which the parameters are tested for their usefulness in differentiating among solutions.
  • References:

  • De Bono, E. (1970): The Use of Lateral Thinking.

  • Osborn, A.F. (1957): Applied Imagination.

  • Csikszentmihalyi, M. (2013): Creativity: Flow and the Psychology of Discovery and Invention.

  • Paulus, P.B.; Nijstad, B.A. (Eds.) (2003): Group Creativity: Innovation Through Collaboration.

Criteria Plan

List of variables and their measures used for a comparative evaluation of solutions (see Part III, Sect. 6.4.3.2).

Critical Path Method, CPM

The critical path method (CPM) is a widely used algorithm for planning and scheduling project activities; see → network planning techniques.

Decision Theory

A branch of applied probability theory for analyzing the consequences of decisions and, if possible, finding optimal ones. Decision theory uses both analytical and qualitative methods and can handle decision situations under both certainty and uncertainty. Some widely used methods are the → decision tree method, the → value-benefit analysis, and the → analytic hierarchy process, in which criteria and alternatives are represented and evaluated comparatively to find the optimal solution. These methods presuppose certainty, i.e., that the situation in question can be described precisely (deterministically), or that such a simplification is permissible. For decisions under uncertainty, the uncertainty has to be modeled using statistical methods. An advanced technique in that area is real options theory (see Sect. 2.2.5), which makes it possible to include uncertainty and risk in the planning and evaluation process. In simple cases, → sensitivity analyses can be conducted to get a feeling for the range of possible decision consequences.
  • References:

  • Saaty, Th. L. (2001): Decision Making for Leaders.

  • Schuyler, J. R. (2001): Risk and Decision Analysis in Projects.

  • Beer, S. (1966): Decision and Control.

  • Keeney, R.L.; Raiffa, H. (1976): Decisions with Multiple Objectives.

  • Peterson, M. (2009): An Introduction to Decision Theory.

  • Goodwin, P.; Wright, G. (2004): Decision Analysis for Management Judgment

Decision Tree Method

A tree-like graphic representation of mostly multiple, consecutive decisions, where the leaves of the tree (terminal nodes) are evaluated and weighted with the probability of their occurrence and/or the risk incurred. The path through the tree with the highest value at the terminal node corresponds to the optimal sequence of decisions.
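A minimal sketch of the rollback calculation by expected value, with a hypothetical two-option decision (probabilities and payoffs invented):

```python
# Rolling back a small decision tree by expected monetary value.
# Hypothetical choice: launch a product now vs. run a market test first.
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs at a chance node."""
    return sum(p * payoff for p, payoff in outcomes)

launch_now = expected_value([(0.6, 500_000), (0.4, -200_000)])            # chance node
after_test = expected_value([(0.8, 450_000), (0.2, -100_000)]) - 50_000   # minus test cost
best = max([("launch now", launch_now), ("test first", after_test)],
           key=lambda kv: kv[1])                                          # decision node
print(f"launch now: {launch_now:,.0f}  test first: {after_test:,.0f}  -> {best[0]}")
```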
  • References:

  • Magee, J.F. (1964): Decision Trees for Decision Making

  • Foster, Provost; Fawcett, Tom (2013): Data Science for Business. What You Need to Know about Data Mining and Data-Analytic Thinking. O’Reilly Media.

  • de Ville, Barry; Neville, Padraic (2013): Decision Trees for Analytics Using SAS® Enterprise Miner™. SAS Institute

Delphi Method

A systematic, multi-stage survey technique that is helpful in assessing future events or developments from an expert point of view. Various experts are questioned individually on a particular subject, e.g.: “How long do you think it will take for renewable energy to meet 50% of the overall energy need?” The results are evaluated and shared with all respondents. In the next round of questioning, the experts can adapt or change their opinions. After two or three rounds, the Delphi method yields a good, harmonized view of a subject with fewer extreme or contradictory opinions.
  • Reference:

  • Hsu, Chia-Chien and Sandford, Brian A. (2007). The Delphi Technique

Design Structure Matrix, DSM

(also referred to as: dependency structure matrix, problem-solving matrix, design precedence matrix)

The design structure matrix (DSM) provides a simple way of both analyzing and managing complex systems by modeling the system structure or processes, and is therefore an important tool in systems engineering. The dependencies of all constituent modules, assemblies, subsystems, or activities – their interactions, information exchange, and dependencies – are modeled as a matrix. This can involve, for example, a product architecture or a design and development process. The DSM is also increasingly being used in project management, where it facilitates the representation of processes whose activities exhibit feedback and cyclical dependencies.
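A minimal sketch, assuming a small hypothetical activity DSM and the convention that a row marks the tasks from which that activity needs information (DSM conventions vary):

```python
import numpy as np

# Hypothetical DSM for four activities: dsm[i, j] = 1 means activity i
# needs information from activity j.
tasks = ["concept", "design", "test", "documentation"]
dsm = np.array([[0, 0, 1, 0],    # concept needs feedback from test
                [1, 0, 0, 0],    # design builds on concept
                [0, 1, 0, 0],    # test follows design
                [0, 1, 1, 0]])   # documentation uses design and test

# With tasks run in list order, entries above the diagonal mark feedback
# (iteration) loops that a project manager would want to surface.
feedback = [(tasks[i], tasks[j]) for i in range(len(tasks))
            for j in range(len(tasks)) if dsm[i, j] and j > i]
print("feedback couplings:", feedback)
```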
  • References:

  • Eppinger, Steven D.; Browning, Tyson R. (2012): Design Structure Matrix Methods and Applications.

  • Lindemann, U., et al. (2009): Structural Complexity Management

Digital Factory

A concept that virtualizes all processes involved in an industrial factory as a computer model/simulation.

It can be used to design, optimize, or analyze individual production or factory processes and the resources associated with its products (e.g., automobiles, airplanes, process plants) without the need for a physical test or mock-up. A digital factory can be divided into four levels – database/data core, integration platform, tools, and organization/design workflow – and in practice consists of many digital submodules, methods, and tools, including simulation and 3D visualization; it is used for holistic design, implementation, management, and ongoing improvement.
  • References:

  • Canetta, L.; Redaelli, Cl.; Flores, M. (Eds.) 2011: Digital Factory for Human-oriented Production Systems

Economic Feasibility Calculation

A method of evaluating the profitability of an existing or a planned system, or for a profitability comparison of several variants. Important elements of an economic feasibility calculation are the intended useful life, the interest rate or the discount rate, and the performance expressed in monetary units, which are compared with the use of resources (costs). The profitability principle demands either minimizing costs with given performance or maximizing performance with given costs.

With respect to the degree of detail at which these elements are considered, we distinguish among static, dynamic, investment-chain, relative-profitability (e.g., the MAPI method), and simultaneous models. Newer models include, for example, → real options.
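A minimal sketch of a dynamic model, the net present value (NPV) of a hypothetical investment; useful life, discount rate, and cash flows are invented:

```python
# NPV of a hypothetical investment over a 5-year useful life at 6% discount.
initial_cost = 100_000.0
yearly_net_benefit = 28_000.0     # performance minus operating costs, in EUR
rate, years = 0.06, 5

npv = -initial_cost + sum(yearly_net_benefit / (1 + rate) ** t
                          for t in range(1, years + 1))
print(f"NPV = {npv:,.0f} EUR")    # positive NPV indicates a profitable variant
```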
  • References:

  • Farris, P.W.; Bendle, N.T.; Pfeifer, Ph.E.; Reibstein, D.J. (2010): Marketing Metrics: The Definitive Guide to Measuring Marketing Performance.

  • Feibel B J. (2003): Investment Performance Measurement.

  • Brealey, R.A.; Myers, S.C.; Allen, F. (2013): Principles of Corporate Finance.

Effect Networks/Influence Matrices

Important components of a method for representing and analyzing complex effect relationships (see → network thinking) in order to draw conclusions from them.

Recommended procedure and suggestions:
  1. Possible use of an effect network diagram in which the relations among the individual elements (variables) are illustrated (Fig. 16.1).

  2. Use of a matrix whose columns and rows represent the network (Fig. 16.2).

  3. Assessment of the strengths of the influences and their entry in the matrix. The meaning of the numbers is as follows: 0 = no influence; 3 = strong influence, etc. (the scale can be chosen arbitrarily).

  4. Calculation of the row sums (active effect: “element influences”). This sum indicates the level of influence exerted by the element in that row. Calculation of the column sums (passive effect: “element is influenced”) shows how strongly the element in that column is influenced by the others as a whole.

  5. Interpreting the results:
     (a) If the sum of active effects is high, changes in this element can have large effects on the system. As long as the element is not determined from the outside, but rather can be changed through action, this indicates a possibility for intervention. If an element has both high active and high passive effects (the product of the total active effects and the total passive effects is large), changes also produce major backlashes or retroactive effects.
     (b) If an element exhibits both low total active effects and low total passive effects (the product is small), the element can be considered relatively neutral in comparison with other elements and has a “buffering” character.
     (c) If the total active effects are high and the total passive effects are low (the quotient of total active over total passive effects is large), the element is an effective lever: intervening here influences the system strongly while provoking little feedback. Conversely, if the total passive effects are high and the total active effects are low (the quotient is small), the element exerts very little influence and is itself greatly influenced by other factors.

  6. Application: when considering which measures to use in changing a complex, causally networked system, these considerations help in thinking through the effectiveness of measures and the desirability of the effects.
Fig. 16.1 Bookstore – effect network. (Inspired by Gomez and Probst)

Fig. 16.2 Bookstore – influence matrix. (Inspired by Gomez and Probst)

The methodology shown here constitutes an aid to quantitatively supported systems analysis in terms of systems thinking. It is particularly appropriate for situation analysis and concept analysis; a minimal computational sketch of steps 3–5 follows below.
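The sketch announced above, in Python; the element names and influence scores are hypothetical, not those of the bookstore example in the figures:

```python
import numpy as np

# Hypothetical influence matrix: M[i, j] scores how strongly element i
# influences element j (0 = no influence ... 3 = strong influence).
elements = ["price", "demand", "revenue", "advertising"]
M = np.array([[0, 3, 2, 0],
              [0, 0, 3, 0],
              [0, 0, 0, 2],
              [1, 2, 0, 0]])

active = M.sum(axis=1)    # row sums: how strongly an element influences others
passive = M.sum(axis=0)   # column sums: how strongly it is influenced
for name, a, p in zip(elements, active, passive):
    # max(p, 1) avoids division by zero for elements nothing influences
    print(f"{name:12} active={a} passive={p} "
          f"product={a * p} quotient={a / max(p, 1):.2f}")
```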
  • References:

  • Probst Gilbert J. B. and Gomez Peter (1992): Thinking in Networks to Avoid Pitfalls of Managerial Thinking

  • Vester, F. (2007): The Art of Interconnected Thinking.

  • Colgan St. (2009): Joined-Up Thinking.

EFQM Model

A → quality management system in the context of → total quality management. It was developed in 1988 by the European Foundation for Quality Management (EFQM) and, according to current estimates, is used by over 10,000 companies.

The EFQM model enables a holistic view of quality and consists of three pillars: human resources (management and workers) who work in processes and achieve results, which in turn benefit people (customers, society).
  • References:

  • Gryna, F M. (2001): Quality Planning and Analysis

  • Deming, W.E. (1997): Out of the Crisis.

Eisenhower Method

A matrix-based method, attributed to the former general and US president Dwight D. Eisenhower, for dividing decision-making situations according to the criteria important/unimportant and urgent/not urgent: only important decisions (which also require more thorough, methodical preparation) should be handled at the higher levels of the hierarchy. Urgency indicates the priority with which decisions are to be treated. When urgent decisions also involve important issues (high overall importance and risk), time pressure should be eliminated and, if possible, the decision broken into partial decisions: what needs to be done right away to take as much time pressure off the situation and gain as much time as possible? What is important to accomplish during the time gained? Positions and bodies high on the hierarchical ladder should not allow themselves to become cluttered with unimportant tasks that are not urgent.
  • Reference:

  • Covey Stephen R. (2004): The 7 Habits of Highly Effective People

Failure Cause Analysis

A conceptual approach aimed at preventing a premature focus on measures before failures and their causes have been dealt with. The basic idea is shown in Fig. 6.6 in Sect. 6.1.3.2.

Fault Tree Analysis

A procedure that is used to understand how systems can fail and determine the associated probabilities. A fault tree analysis, which can be used for all kinds of systems, analyzes an undesired state of a system using Boolean logic to combine a series of lower-level events and is a major tool for reliability engineering and system analysis. It is described in DIN 25424-1.
  • References:

  • Roberts, N.H.; Vesely, W.E. (1987): Fault Tree Handbook.

  • Ericson, Clifton A.: (2011): Fault Tree Analysis Primer.

  • DIN standard 25424-1

Flow Charts

Show logical and/or quantitative connections between the elements of a system and/or a process. The logical relations are usually information relations, or they express logical dependencies (“is a requirement for…”). Quantitative connections, e.g., amounts of energy, materials of all kinds, concentrations, etc., can be represented in the form of energy flow (Sankey) diagrams, in which the amount of flow is represented by the width of the connecting lines.
  • Reference:

  • Bohl, Marilyn and Rynn, Maria (2007): Tools for Structured and Object-Oriented Design.

FMEA (= Failure Mode and Effects Analysis)

Synonym: FMECA (Failure Mode, Effects, and Criticality Analysis).

Failure mode and effects analysis (FMEA) follows the basic idea of precautionary error prevention rather than subsequent error handling (detection and correction). This should be done as early as the design phase by identifying potential error causes, thereby avoiding the costs of inspection and error correction in the manufacturing phase or in the field (with the client). The goal is to lower overall costs. Such a systematic approach, building on the knowledge and experience gained, can prevent the repetition of design flaws in new products and processes.

Failure mode and effects analysis is used in the development of products and processes; in many sectors, e.g., the automobile industry, it is generally required of suppliers.
  • Reference:

  • SAE (2009): Potential Failure Mode and Effects Analysis in Design (Design FMEA) and Potential Failure Mode and Effects Analysis in Manufacturing and Assembly Processes (Process FMEA).

Forecasting Techniques

Methods and techniques with which future developments, results, or conditions can be predicted. The time frame for forecasting can be short, medium, or long term. With respect to technique, there is a distinction between intuitive and analytical methods. Intuitive methods draw on subjective opinions and assessments (which may also be influenced by facts): → interview, surveys, → scenario writing, → Delphi method.

Among the analytical methods, there are endogenous and exogenous models. Endogenous models (explaining the future from the intrinsic development of the past) are extrapolations of time series: linear, progressive, or degressive trend extrapolations, saturation curves, etc. Exogenous models (consideration of relevant external factors) include, for example, the consideration of multiple linear developments or of changes in the influence factors relevant to the issue being predicted. Here, there is a fluid transition to → simulation methods, → system dynamics, etc.
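A minimal sketch of an endogenous model, a linear trend extrapolation fitted by least squares; the time series is hypothetical:

```python
import numpy as np

# Hypothetical yearly sales time series (e.g., in 1000 units).
years = np.array([2015, 2016, 2017, 2018, 2019, 2020])
sales = np.array([102, 110, 115, 123, 131, 138])

slope, intercept = np.polyfit(years, sales, deg=1)    # least-squares trend line
for future in (2021, 2022):
    print(future, round(slope * future + intercept, 1))  # extrapolated forecast
```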
  • References:

  • Armstrong, J. S. (ed.) (2001). Principles of forecasting: a handbook for researchers and practitioners.

  • Rescher, N. (1998): Predicting the future: An introduction to the theory of forecasting.

Game Theory

Subarea of → operations research. A game in terms of game theory involves a situation in which several players mutually influence the results of their decisions; this is described in a mathematical model. Game theory in particular attempts to characterize and predict rational behavior in games. Examples of applications include concepts for auctioning radio and mobile communications licenses and the coordination of radio frequencies for disorganized rescue efforts.
  • References:

  • Brandenburger, A M.; Nalebuff B. J. (1996): Co-Opetition. Currency Doubleday

  • Nash, John (1950) “Equilibrium points in n-person games” Proceedings of the National Academy of Sciences 36(1):48–49

  • Harsanyi, J C. and Selten, R A. (1988): General Theory of Equilibrium Selection in Games

  • McCain, Roger A. (2014): Game Theory: A Nontechnical Introduction to the Analysis of Strategy

Heuristics, Heuristic Methods

A term for solution-seeking methods that are practical (performant, simple, etc.) and promising, but not guaranteed to find an optimal solution. Heuristics can also be described as a way of coming up with (sufficiently) good solutions under time and resource constraints.

Example: putting together an efficient production schedule.
  • References:

  • Michalewicz, Z.; Fogel, D.B. (2004): How To Solve It: Modern Heuristics.

  • Gigerenzer, G., Todd P. M. (2000): Simple Heuristics That Make Us Smart.

  • Dörner, D. (1980): Heuristics and Cognition in Complex Systems.

Histogram

A graphic representation of the frequency distribution of measured or counted values. It starts with the data arranged by size and divides the entire range of the sample into classes (bins) (Fig. 16.3).
Fig. 16.3 Histogram

The indicators of a histogram can be used for the evaluation: the general curve shape, the spread, and the centering (symmetrical or skewed distribution, etc.). In creating a histogram, it makes sense to follow practical guidelines concerning the smallest sample size, the number of classes for a given number of measured values, the desired confidence interval, and the procedure for selecting measured values; refer to the relevant literature.
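A minimal sketch that bins a sample and renders the bars as text; the data are synthetic stand-ins for real measurements:

```python
import numpy as np

# 200 synthetic measurement values, summarized in 8 bins.
rng = np.random.default_rng(42)
values = rng.normal(loc=50.0, scale=5.0, size=200)   # stand-in for real data

counts, edges = np.histogram(values, bins=8)
for c, lo, hi in zip(counts, edges[:-1], edges[1:]):
    print(f"{lo:5.1f} - {hi:5.1f} | {'#' * c}")       # text rendering of the bars
```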
  • Also see References under → statistics.

Information Acquisition Plan

Before conducting a fairly large survey, an information acquisition plan should be prepared based on the following questions: what information is needed? Why? What information is essential? Why? What conclusions should be possible or supported? What degree of precision or detail is necessary? What timeframe will the survey cover? Subsequently, one must consider the quickest and easiest way of gathering this information.

Information Acquisition Techniques

There is a distinction between information oriented toward the past, the present, and the future. Additional characteristics deal with the difference between primary and secondary information: primary information is obtained right at the appropriate source. Secondary information is taken from documents already on hand and is acquired by analyzing them. Specific information acquisition techniques are → interviews, → questionnaires, → observation, → Delphi method, → scenario techniques, → forecasting techniques, etc.

Information Preparation/Processing Methods

All methods that serve the preparation/compression of information or the recognition of inherent laws, dependencies, etc. This includes methods of → mathematical statistics, → correlation analysis, → regression analysis, all types of → key indicator systems, etc. The results are illustrated using → visualization techniques.

Interview

Information gathering through oral questioning, which can be divided into procedures with various characteristics: conversation, conversation with a list of questions, interview with open answers, interview with predetermined answers (multiple choice).

Interviewing Techniques

An umbrella term for various possibilities for oral questioning (→ interview) and/or written polling (→ survey). A (regular) recurring inquiry among a steady circle of people is referred to as a → panel survey or poll. See also → Delphi method.
  • References:

  • Innes, J. (2009): The Interview Book.

  • Gordon N J., Fleisher, W L. (2011): Effective Interviewing and Interrogation Techniques.

Ishikawa Diagram

(Synonyms: cause-effect diagram, fishbone diagram)

Diagram for the systematic representation and analysis of possible causes of problems. The cause-effect diagram was developed by the Japanese quality pioneer Kaoru Ishikawa and was later named after him. Originally used in quality management for analyzing quality problems and their causes, today it is also used in → security analyses, risk analyses, etc. Its fishbone-like structure inspired the synonym “fishbone diagram.”
  • References:

  • Ishikawa, K. (1990): Introduction to Quality Control.

  • Tague N R. (2005): The Quality Toolbox.

ISO 9001

This popular standard establishes the requirements for a quality management (QM) system. Such a system is useful if an organization needs to prove its capability of producing products that meet the demands of customers and/or authorities, or simply seeks to increase customer satisfaction. The ISO standard describes the entire QM system in model form.

The eight basic principles of quality management are: (1) client orientation; (2) responsibility of management; (3) involvement of the relevant people; (4) process-oriented approach; (5) system-oriented management approach; (6) continuous improvement; (7) fact-based decision-making approach; (8) supplier relationships for mutual benefit.
  • References:

  • Cochran, Craig (2015): ISO 9001 in Plain English.

  • ISO 9001 - What does it mean in the supply chain? Available from: http://www.iso.org/iso/pub100304.pdf

Just in Sequence, JIS

See → just in time

Just in Time, JIT

Just in time (JIT) involves the delivery of materials (raw materials, parts, assembly groups, or products) at precisely the right time, with the necessary quality and in the desired quantity (including packing) to the agreed-upon location. Storage costs are largely eliminated, and the usual administrative expenses are also significantly reduced.

An extension of JIT is what is known as just in sequence (JIS). Building on the JIT principle, with JIS, the products are also delivered to the client in the right sequence. JIT and JIS are widely applied standards in the automobile industry today.
  • References:

  • Hirano, Hiroyuki and Makota, Furuya (2006): JIT Is Flow

  • Womack, James P. and Jones, Daniel T. (2003): Lean Thinking.

  • Takeda, Hitoshi (2006): The Synchronized Production System: Going Beyond Just-in-time Through Kaizen

Kaizen

(Japanese for “change for the better”) is a philosophy of life and work whose main idea is the striving for continual improvement. In Japanese industry, this concept has been further developed into a management system. Another term used for it is continuous improvement process (CIP). The → just in time concept is a reflection of the Kaizen philosophy.
  • Reference:

  • Masaaki Imai (2012): Gemba Kaizen: A Commonsense Approach to a Continuous Improvement Strategy.

(Key) Indicator System

Indicators are easily remembered, meaningful representations of numerical facts or conditions. A distinction can be made among various types of indicators. Structural indicators: ratios of sizes or characteristics, such as the share of plastic materials in an automobile (as a percentage of weight). Measurement indicators: characteristics of the same kind at various points in time (e.g., sales for 2015, 2016, etc.). Index indicators: normalization of a characteristic to 100 and conversion of other values relative to it (cost of living in Munich, 100; Berlin, 92; Paris, 105; etc.). Relationship indicators: ratios of different measures to one another (e.g., parts produced per worker and day), etc. Important indicators that are useful for controlling and steering are called key indicators.
  • References:

  • Reichert, F.; Kunz, A.; Moryson, R. (2008): MAE-P3 – A System to Gain Transparency of Production Structure.

  • Austin, Robert D. (1996): Measuring and Managing Performance in Organizations.

Lessons Learned

Analysis and recording of positive and/or negative experiences in projects in order to learn from them. Lessons learned, which may be part of the final project documentation, are fairly structured compilations of information whose value extends beyond the completed project: carefully archived in accessible form, they can be used to prepare similar projects and to improve project management.

Linear Programming (Synonym: Linear Optimization)

A widely used and very powerful method in → operations research for finding optimal solutions by adjusting or combining factors. Examples: optimal production programs; waste minimization for parts cut from sheet metal, wood, etc.; optimal routing in transportation and communication networks; mixing problems, such as steel production (in what proportions must ingredients such as different ores be mixed so that the alloy contains the whole required combination of elements at the lowest cost?) or the refining of fuels; see also → game theory and nonlinear and (mixed-)integer optimization.
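A minimal sketch of a mixing problem solved with SciPy’s linprog; the ore contents, requirements, and prices are hypothetical:

```python
from scipy.optimize import linprog

# Mix two hypothetical ores so the batch contains at least 6 t iron and
# 4 t copper at minimum cost.
# Ore 1: 0.4 t iron, 0.1 t copper per t, 80 EUR/t.
# Ore 2: 0.2 t iron, 0.3 t copper per t, 60 EUR/t.
c = [80, 60]                 # cost coefficients to minimize
A_ub = [[-0.4, -0.2],        # linprog uses <= constraints, so negate the >= rows
        [-0.1, -0.3]]
b_ub = [-6, -4]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, f"minimum cost: {res.fun:.0f} EUR")   # optimum: 10 t of each ore
```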
  • References:

  • Dantzig, G. B. (1963): Linear Programming and Extensions

  • Schrijver, A. (1998): Theory of Linear and Integer Programming.

  • Also see → operations research.

Mathematical Statistics

Methods for analyzing mass phenomena in terms of recognizing patterns and internal laws. A distinction is made between descriptive statistics (determining the characteristics of a distribution of numerical values: mean, variance, standard deviation, etc.) and inferential statistics (which builds on descriptive statistics and → probability calculation).

Inferential statistics comprises, on the one hand, estimation procedures (inferring the parameters of the underlying population from a sample) and, on the other, testing procedures (checking whether observed deviations between, for example, empirical and theoretical values are random or significant).
  • References:

  • Pitman, Jim (1999): Probability.

  • Walpole, R.E. (2007, 9th ed.): Probability & Statistics for Engineers & Scientists.

  • Wheelan, Ch. (2013): Naked Statistics: Stripping the Dread from the Data.

  • Field, A. (2013): Discovering Statistics using IBM SPSS Statistics.

Mean Time Between Failure, MTBF

A measure of the reliability of units (assemblies, devices, or installations) that are repaired. The MTBF is the inverse of the failure rate.
  • Reference:

  • Jones, James V. (2006): Integrated Logistics Support Handbook

Mean Time to Failure, MTTF

Mean operating time to failure of units that cannot or should not be repaired.
  • Reference: also see → mean time between failure

Mean Time to Repair, MTTR

Average time between the failure of a unit and the completion of its repair.
  • Reference: also see → mean time between failure.

Method 6-3-5

→ Creativity technique in which six people are confronted with a problem described as precisely as possible, and each writes down three ideas on a sheet of paper within 5 min. The papers are passed to the next person in the circle every 5 min and the process is repeated. In 30 min, this theoretically produces 6 × 3 × 6 = 108 solution ideas. In practice, one should expect fewer, as there are often duplicate entries, or papers are passed on without new ideas being added.
  • References:

  • Schroer, B.; Kain A. and Lindemann U. (2010): Supporting Creativity In Conceptual Design: Method 635-Extended

  • Linsey J S. and Becker B. (2011): Effectiveness of Brainwriting Techniques

  • See also → creativity techniques.

Milestone Monitoring (Progress/Slip Charts)

A project management tool for tracking changes to milestones (review/report points) during the course of a project. Project managers and stakeholders (e.g., the steering committee) can be informed graphically about impacts on the final deadline of a project and, if there is a deviation from the original plan, about corrective measures required or already implemented (Fig. 16.4).
Fig. 16.4 Milestone monitoring diagram

Mind Mapping

The creation of graphic memory maps, thought maps, or mind maps, a technique coined and first published by Tony Buzan. In the center of the mind map is the topic (or problem) to be dealt with. Extending from it are main and secondary branches that structure and subdivide the topic; each branch carries just one term. Mind maps are well suited for developing a thought or topic, and for documenting and sorting the results of creative sessions such as brainstorming.
  • Reference:

  • Buzan, T. and B. (2010): The Mind Map Book

Modeling and Representation Techniques

Modeling and representing the results of the synthesis, i.e., of solutions, theoretically involves the techniques that have already been described in connection with systems thinking and situation analysis. These include in particular system representations of all types such as bubble charts, system hierarchical representation, effect networks, special views of systems, black box representations, and many more.

See also → effect networks/influence matrices, correlation matrices, → process charts, → flow charts, all kinds of plans, drawings, sketches, tables, and diagrams, as well as physical modeling (reduced-scale physical models) and abstract, mathematical, usually IT-supported modeling. Numerical methods, through dynamic simulation of procedures and processes (e.g., with changing parameters), make it possible to quickly evaluate and visualize changes in system performance and behavior.

Monte Carlo Method

A powerful → simulation technique, originally developed in the 1930s and 1940s, that is useful for stochastic processes depending on random variables that often follow unknown probability distributions. In multiple runs, the Monte Carlo method assigns values to these variables by random sampling from historical data (in the case of known or assumed probability distributions, the values are generated so that they follow the respective distribution). With these values, the process is evaluated (e.g., customer waiting times in a queue) and the result is recorded for each run. Because they are based on the stochastic variation of the (input) variables, these results themselves follow a distribution function (with an expected value and a variance). The number of runs (iterations) depends on the desired quality/reliability of the result (confidence interval).

The term Monte Carlo method was chosen as a project name in reference to the Casino in Monaco.
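A minimal sketch of the queue example mentioned above; the uniform arrival and service distributions are hypothetical stand-ins for distributions sampled from historical data:

```python
import random

# Estimate the mean customer waiting time in a single queue by stochastic
# sampling of arrival gaps and service times.
def simulate_day(n_customers=100):
    clock = server_free = total_wait = 0.0
    for _ in range(n_customers):
        clock += random.uniform(1.0, 5.0)        # next arrival after 1-5 min
        start = max(clock, server_free)          # wait if the server is busy
        total_wait += start - clock
        server_free = start + random.uniform(2.0, 4.0)  # service takes 2-4 min
    return total_wait / n_customers

runs = [simulate_day() for _ in range(1_000)]    # many independent runs
mean = sum(runs) / len(runs)
var = sum((r - mean) ** 2 for r in runs) / (len(runs) - 1)
print(f"mean waiting time ≈ {mean:.2f} min, run-to-run std ≈ {var ** 0.5:.2f} min")
```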
  • References:

  • Rubinstein, R. Y.; Kroese, D. P. (2007). Simulation and the Monte Carlo Method

  • Asmussen, S. and Glynn, Peter W.(2007): Stochastic Simulation: Algorithms and Analysis

  • Lemieux, Ch. (2009): Monte Carlo and Quasi-Monte Carlo Sampling.

  • See also → operations research

Morphological Analysis

Analytical → creativity technique used to search systematically for the greatest possible number of solutions. The concept was developed by F. Zwicky and consists of the following steps (Fig. 16.5):
  1. Defining the problem: what do we want to find? In the example: the greatest possible variety of (or all conceivable) types of vehicle.

  2. Setting the parameters that can be used to describe the numerous variants. In the example: space layout – the arrangement of drive unit and cargo space (SL); type of steering (ST); and drive type (DT).

  3. Generating possible parameter values: in the example, SL with SL1 to SL3, ST with ST1 and ST2, and DT with DT1 to DT4.

  4. Generating the different configurations: all possible combinations of parameter values are created, at first without assessment or critical evaluation (see the sketch after this list). In the example, there is a possible total of 3 × 2 × 4 = 24 different combinations (configurations).

  5. Creating a shortlist of technically feasible configurations – without judgment and prejudice. In our example, the obvious (and known) configurations are:
     (a) SL1 + ST1 + DT2 = automobile or bus
     (b) SL2 + ST2 + DT3 = streetcar (tram)
     (c) SL3 + ST2 + DT3 = train (locomotive + rail wagons)
     (d) SL1 + ST1 + DT3 = trolley-bus

  6. Evaluation and final decision on the best configuration (Fig. 16.5).

Fig. 16.5 Morphological box
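The sketch referred to in step 4, enumerating all combinations with Python’s itertools (parameter value names as in the example):

```python
from itertools import product

# Step 4: enumerate all 3 x 2 x 4 = 24 configurations of the vehicle example.
space_layout = ["SL1", "SL2", "SL3"]
steering = ["ST1", "ST2"]
drive_type = ["DT1", "DT2", "DT3", "DT4"]

configurations = list(product(space_layout, steering, drive_type))
print(len(configurations), "configurations, e.g.:", configurations[:3])
```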

  • Reference:

  • Zwicky, F. (1969): Discovery, Invention, Research – Through the Morphological Approach.

Multi-Moment Observations, See → Activity Sampling


Network Planning Techniques

A set of methods for planning logical and temporal sequence of projects using a mathematical graph representation. Network planning techniques can be used in many ways:
  1. Structure analysis: in a first step, the activities to be carried out are determined (questioning experts, brainstorming, etc.) and put into a logical sequence according to established rules (which activities must be completed so that a certain activity can begin, which activities can run entirely or partially in parallel, which activity starts subsequently, etc.). The result is represented graphically to create what is known as a network plan.

  2. Time analysis: if the temporal duration of each individual activity is provided – or can be estimated – it is possible, based on the dependencies established in the structure, to calculate the completion times of the network plan. The earliest possible and latest allowable times for results (states) and activities are established. The most important result is the critical path: the path that is crucial to the final deadline. Each delay along this path leads to a delay in the overall project completion (some methods also allow for dealing with uncertain activity times, such as → PERT). Activities on the other paths may have time buffers (buffer times) within which they can be postponed or brought forward without any effect on the final deadline.

  3. Cost analysis: when costs can be assigned to the individual activities (personnel, materials, machine hours, and investment costs), it is possible to calculate and predict the project costs at each project stage. It is also possible to calculate the optimal allocation of investments, or to speed up the project by optimally investing in additional resources (crash costs) using → linear optimization.

  4. Capacity analysis: by assigning resources such as machines, equipment, work groups, personnel, etc., to the individual activities, useful load overviews can be created. Their analysis allows load peaks to be smoothed out by postponing certain activities or bringing them forward within their buffer times.

There are two ways of representing the network of activities as a mathematical graph. One is the event-node network, where activities are represented as arcs (edges) and their start and end events as nodes (CPM, PERT). The other is the activity-node network, where the assignment is the other way around: the activities are the nodes and their connections are drawn as arcs (edges). The Metra Potential Method (MPM) and many others use this notation.

With respect to the time estimate for the duration of the activities, there are also various possibilities. With deterministic time estimates, a single, constant value for the duration of a process is assumed; with stochastic activity times, a distribution function is given – or, in the simplified case of → PERT, the general distribution is approximated by a skewed beta distribution (often called the PERT distribution) using a three-value estimate of the duration of each activity: an optimistic, a pessimistic, and a most likely one. This allows probabilities to be calculated for individual and overall project completion times. A minimal sketch of a deterministic time analysis follows below.
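The sketch announced above: a forward pass over a hypothetical activity-node network computing earliest finish times and the project duration (activity names and durations are invented):

```python
# Deterministic time analysis on a hypothetical activity-node network:
# durations in days, and each activity's predecessors.
duration = {"A": 3, "B": 5, "C": 2, "D": 4}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

earliest_finish = {}
def ef(task):
    """Forward pass: earliest finish = duration + latest predecessor finish."""
    if task not in earliest_finish:
        earliest_finish[task] = duration[task] + max(
            (ef(p) for p in predecessors[task]), default=0)
    return earliest_finish[task]

project_end = max(ef(t) for t in duration)   # here 12 days, via A -> B -> D
print("earliest finishes:", earliest_finish, "project duration:", project_end)
```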

Most IT-supported → project management software systems support network planning techniques, especially for time, cost, and capacity analysis. Graphic charts and tables visualizing the project’s network plans, capacities, and load overviews, in addition to task lists, are standard features of these systems.
  • References:

  • Milosevic, D. Z. (2003): Project Management ToolBox.

  • O’Brien, J.J.; Plotnick, F.L. (2010): CPM in Construction Management

  • Lewis, James P. (2004): Project Planning, Scheduling & Control.

Network Thinking

See → effect networks/influence matrices.

Numerical/Computer Simulation

Numerical system models that are evaluated by computers (or powerful computer clusters) to simulate the system’s behavior. Well-known examples are computational fluid dynamics, finite element method (FEM) strength calculations, weather and climate forecasts, and thermal, energetic, or functional building simulations, to name but a few.

In virtual product development, numerical simulation is used to visualize and test features or characteristics of products before a physical prototype is built. This approach can significantly shorten development and time-to-market times. Often, developers instead use the time gained to evaluate more solution variants and thus improve the product.
  • References:

  • Angermann, L. ed. (2011): Numerical Simulations - Applications, Examples and Theory

  • Awrejcewicz, J. ed. (2011): Numerical Simulations of Physical and Engineering Processes.

Observation Techniques

A method of information gathering that is suitable for observable phenomena (states, conditions, procedures, processes, behavior, etc.). Here, the information does not have to be gathered by questioning experts or the people involved, but can be collected by observation, e.g., by a complete survey (video surveillance) or by observation at random, discrete, selected times, extrapolating the results with statistical methods (→ multi-moment observations; → activity sampling).

Operationalization

Operationalization defines the measurement of phenomena that are not directly measurable or traceable by supplementing fuzzy, abstract terms with indicators or proxies that are easy to capture and/or measure. For example, the term work atmosphere can be operationalized by indicators such as: personnel fluctuation, rate of sick leaves, etc. Corporate success could be measured with indicators such as: profit, cash flow, growth, market share, average product age, sales volume with new (e.g.: <5 years old) products, employer attractiveness indicator, etc.

Operations Research, OR

The application of mathematical methods to systems to give them an optimal shape (structure) or an optimal functional performance. The term optimal is always understood with respect to a target (objective) function and given constraints that must be defined. OR in systems engineering is used primarily in the solution search (synthesis/analysis).

Well-known OR application areas are blending problems, inventory optimization, → assignment or allocation problems, → queuing theory, → sequencing problems, job shop scheduling, maintenance planning, or transportation. Important OR methods are → linear programming, (mixed) integer programming, nonlinear programming, dynamic programming, simulation, etc.

See also: → game theory, → simulation technique, → Monte Carlo method, → heuristic methods, etc.
  • References:

  • Hillier, F.S.; Lieberman, G.J. (2010): Introduction to Operations Research.

  • Winston, W.L. (2008): Operations Research: Applications and Algorithms.

  • Baldick R. (2006): Applied Optimization. Formulation and Algorithms for Engineering Systems.

  • Taha, H.A. (1992): Operations research: An introduction.

  • Eiselt, H.A. (2012): Operations Research. A Model Based Approach.

Panel Polling (Polls)

→ Information acquisition technique with which certain matters such as assessment of the situation, opinions, desires, estimates of future developments, etc., are reviewed and collected, repeatedly, often at regular intervals, from a particular group of people (clients, suppliers, residents, etc.). This facilitates a dynamic observation of changes and trends. Example: → Delphi method.

Polarity Profile (Spider Web Diagram)

→ Visualization technique that can be used to illustrate characteristics of object systems, people, etc., which can be put on a scale. These diagrams visualize a wide variety of object properties in one view. This greatly simplifies comparisons.

In a situation analysis, this representation is helpful in characterizing various phenomena or systems. Different solution variants and the degree to which the goal is met can be presented for evaluation (see Part III, Sect. 6.​4, Evaluation and Decision). It is also possible to represent areas that are suited to using methods and resources (see Part II, Chap. 4, Project Management, Figure 4.​9). For operationalizing abstract or fuzzy terms see → operationalization. The example below should illustrate the idea and application (Fig. 16.6).
Fig. 16.6 Polarity profile of two competitors

The coordinate axes represent the degree to which the goal is met. Possible constraints can also be plotted in the same way. The scales should be arranged so that the best performance of each criterion points in the same direction – good values either all inward or all outward. As there is often no meaningful order or correlation between the different axes or scaling factors, the area spanned by the polygons does not simply correlate with the overall quality of an object and therefore cannot be used for comparison.
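A minimal sketch for drawing such a profile (the criteria and the scores of the two competitors are purely illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

# Minimal polarity profile (spider/radar chart) sketch; criteria and
# scores for the two competitors are purely illustrative.
criteria = ["Price", "Quality", "Service", "Delivery", "Innovation"]
profiles = {"Competitor A": [4, 3, 5, 2, 4], "Competitor B": [3, 5, 2, 4, 3]}

angles = np.linspace(0, 2 * np.pi, len(criteria), endpoint=False).tolist()
angles += angles[:1]  # close the polygon

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for name, scores in profiles.items():
    values = scores + scores[:1]
    ax.plot(angles, values, label=name)
    ax.fill(angles, values, alpha=0.15)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(criteria)
ax.legend(loc="lower right")
plt.show()
```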

Process Analysis

Systematic investigation of business processes by decomposition into partial steps/sub-processes and precise analysis of the process logic and contents. The goal is often to identify opportunities for improvement, such as simplification, acceleration, or cost reduction. Various → visualization techniques can be helpful. Methods such as → benchmarking can also help to raise awareness of problems and opportunities, facilitate a better understanding of the process itself, and/or identify weaknesses.

Process Modelling Diagrams

Techniques belonging to the set of → Information Preparation/Processing Methods. They provide graphic representations of processes or procedures as a basis for understanding them, critical analysis, improvement, programming, automation, etc. Examples are data flow charts, → business process model and notation, program flow charts, workflow process diagrams, technical procedures, project procedures, etc.
  • References:

  • Hommes, B.-J.; van Reijswoud, V. (2000): The Evaluation of Business Process Modeling Techniques.

  • Mendling, J.; Reijers, H. A.; van der Aalst, W. M. P. (2010). Seven process modeling guidelines (7PMG).

Program Evaluation and Review Technique, PERT

A stochastic → network planning technique in which activity durations are assumed to follow a probability distribution (the PERT distribution) determined by an optimistic, a most likely, and a pessimistic duration estimate. The expected value of the total project duration is calculated, similar to the → critical path method, as the sum of the expected durations of the activities on the critical path (= the sequence of activities that determines the project duration – each delay of one of these activities delays the whole project); its variance is the sum of the activities' variances. The big advantage over CPM is that this technique yields a stochastic forecast of project durations and confidence intervals for achieving them, which is essential in → risk management and far superior to the “single-point estimates” of CPM.
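A minimal sketch of the calculation (the three-point estimates for the critical-path activities are assumed; the normal approximation for the sum follows from the central limit theorem):

```python
import math

# Minimal PERT sketch: three-point estimates (optimistic a, most likely m,
# pessimistic b) for the activities on the critical path; values are assumed.
activities = [(2, 4, 8), (5, 6, 9), (3, 5, 11)]

E = sum((a + 4 * m + b) / 6 for a, m, b in activities)       # expected duration
var = sum(((b - a) / 6) ** 2 for a, m, b in activities)      # variance
sigma = math.sqrt(var)

deadline = 18.0
z = (deadline - E) / sigma
p = 0.5 * (1 + math.erf(z / math.sqrt(2)))  # normal CDF (central limit approx.)
print(f"E = {E:.1f} d, sigma = {sigma:.2f} d, "
      f"P(project <= {deadline:.0f} d) = {p:.0%}")
```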

Project Audit

Tool for assessing a project’s progress. The objective is to determine to what extent the project is progressing according to plan in terms of time, cost, quality, etc., or whether measures need to be taken to make sure the project will be successful.

A project audit consists of the following steps:
  1. Establishing a baseline based on the planned deadlines, costs, and accomplishments (target or plan)
  2. Assessing the current status with respect to completion/progress, costs, and deadlines (actual situation)
  3. Determining the resources still needed to complete the project (e.g., cost to complete, CTC)
  4. Conducting a deviation analysis: why and to what degree is there a deviation from the plan up to this point (deadlines, costs, completion)?
  5. Conducting a critical analysis of the project premises: evaluating the assumptions with respect to plausibility, probability of occurrence, and prospects of success
  6. Evaluating opportunities and risks
  7. Adopting a catalog of measures for the remainder of the project

The audit can be initiated by different management levels, depending on the degree of the suspected/feared deviation and the risk associated with it (project administration, the company's business management, executive board). The auditors are selected in accordance with the initiating management level. For reasons of objectivity, the auditors should not be part of the project team or of the particular environment in which the project is situated. Entirely external auditors are often a good choice.

Project Management Body of Knowledge Guide, PMBoK Guide

A widespread project management standard published and supported by the US Project Management Institute (PMI). The methods described in the PMBoK Guide are applicable to projects in various fields: construction, software development, mechanical engineering, the automotive industry, etc.

The PMBoK Guide is process-oriented; in other words, it uses a model by which the work is accomplished through processes. A project is carried out using the interaction of many processes. The PMBoK structures all methodological skills along processes. Inputs (documents, plans, designs, etc.), tools and techniques (mechanisms applied to inputs), and outputs (documents, plans, designs, etc.) are described for every process.

The PMBoK Guide is the basis for the certification examination for project management professionals (PMPs), and others.
  • Reference:

  • Snijders, P.; Wuttke, Th.; Zandhuis, A. (2009): PMBOK Guide.

Project Management Software, PMS

Information technology tools for supporting project management, mostly on the basis of → network planning techniques.

With the help of PMS, the structure of projects can be designed and time calculations, including determining the critical path, can be made. PMS also offers many opportunities for automatic dispositions or evaluations. Depending on the scope of their function and their purpose, various categories of PMS can be distinguished (according to Wikipedia):
  • Single project management systems (usually design software for designing and tracking a single project).

  • Multi-project management systems (planning software for administering and managing multiple projects).

  • Enterprise project management systems (for integration into company-wide planning, e.g., enterprise resource planning software).

  • Project collaboration platforms with communication solutions (e.g., groupware solutions, portal software), but also many other programs that really are not project-management-specific, but which may, for example, have interfaces with project management software, such as office solutions, creativity tools (e.g., mind mapping).

  • Some software solutions are sector-specific.

Quality Control, QC

Quality control (QC) describes a process by which the quality of all factors involved in production is reviewed. The ISO 9000 standard defines quality control as “a part of quality management focused on fulfilling quality requirements”.

Since the 1930s, different QC paradigms, building on various concepts, have been applied. Among them are: statistical quality control (SQC; 1930s), using statistical methods and sampling; → total quality control (TQC; 1950s), favoring a holistic approach that involves all of the company's quality stakeholders; statistical process control (SPC; 1960s), building on process control systems; company-wide quality control (CWQC; 1960s), which is similar to TQC; total quality management (TQM; 1980s), using methods of statistical quality control for organizational improvement; and (lean) six sigma (6σ; 1980s), which applies statistical quality control methods to business strategy.

See → total quality management (TQM), → total quality control (TQC), and → six sigma.
  • Reference:

  • Juran, Joseph M. (1995): A History of Managing for Quality: The Evolution, Trends, and Future Directions of Managing for Quality.

Quality Management, QM

Quality management (QM) ensures that an organization, product, or service consistently follows defined standards or specified properties. It consists of four major domains: quality planning, quality assurance, → quality control, and quality improvement. QM also includes the means and tools to achieve and sustain quality levels by carrying out quality assurance and by controlling processes and product properties.
  • Reference:

  • Rose, Kenneth H. (July 2005). Project Quality Management: Why, What and How.

Questionnaires

A set of questions that can be answered orally (→ interview), in writing, or electronically. The key success factor for a questionnaire is the careful design of its questions, which focus on a governing question. Before formulation of the questions it is necessary to decide on the evaluation method to be used. It is also possible to distinguish among various types of questions, e.g.:
  • Dichotomous questions: questions that have two possible responses such as yes/no.

  • Questions based on level of measurement: various possibilities must be evaluated according to scales or grades, etc.

  • Open-ended questions: any kind of answer can be accepted.

  • Closed questions: the possible answers are divided into groups and must be sorted into the defined value ranges.

  • Filter or contingency questions: questions used to determine whether respondents are qualified or experienced enough to answer a subsequent one.

Generally, it is good practice to pre-test questionnaires on a small, representative group of people. The results, feedback, and experience gained in this pre-test can be used to improve the questionnaire, but must not be used in the scientific evaluation/survey itself.
  • Reference:

  • Kreuter, F.; Presser, St. and Tourangeau, R. (2008): Social Desirability Bias in CATI, IVR, and Web Surveys

Queuing Models

Queuing models and the corresponding theory belong to → operations research. They model situations in which persons or goods arrive at service stations. Both arrival times and service times are stochastic. This can lead to the formation of waiting lines (queues) and, in extreme cases, to blocking or denial of service – or, at the other extreme, to idling servers.

Examples of areas where queuing models are very useful include: the analysis of computers, telecommunication networks, traffic systems, logistics, and manufacturing systems. Depending on the field of application, the term service station may have different meanings.

The analytical results from queuing theory provide the basis for managing arrivals or for sizing the service stations so as to minimize the overall costs caused by waiting and service.

The most important cost-influencing factors are:
  • The number of arrivals at a particular arrival rate (= average number of arrivals per time unit) and/or their stochastic distribution

  • The number of service stations, with a particular service rate (= average number of service operations per time unit) and/or their stochastic distribution

  • Line discipline (rules of behavior of the incoming elements)

  • Service strategy (organization and sequence of processing such as: first in first out, etc.)

In classical queuing theory, both inter-arrival times and service times are often assumed to follow an exponential distribution. This leads to a stochastic process that is independent of its history (Markov property: M) and allows for analytical solutions. Queues are characterized by the distribution of arrival times, the distribution of service times, and the number of servers: M/M/1 denotes such a queue with one server.

In many practical cases, such as complex networks, empirical probability distributions or complex service rules for arrivals or service times (e.g., branching waiting lines), it is not possible to develop analytical solutions. Instead → numerical/computer simulation is used applying the → Monte Carlo method.

In systems engineering, queuing theory is applied in the step involving the synthesis/analysis of solutions.
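As a minimal sketch, the analytic steady-state measures of an M/M/1 queue can be computed directly; the arrival and service rates below are assumed:

```python
# Minimal M/M/1 sketch with assumed rates: analytic steady-state measures
# for a single server with exponential inter-arrival and service times.
lam = 8.0   # arrival rate [customers/hour] (assumed)
mu = 10.0   # service rate [customers/hour] (assumed)

rho = lam / mu            # server utilization; the queue is stable only if rho < 1
L = rho / (1 - rho)       # mean number of customers in the system
W = L / lam               # mean time in the system (Little's law: L = lam * W)
Wq = W - 1 / mu           # mean waiting time in the queue
Lq = lam * Wq             # mean queue length

print(f"utilization {rho:.0%}: L={L:.1f}, W={W*60:.0f} min, "
      f"Lq={Lq:.1f}, Wq={Wq*60:.0f} min")
```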
  • References:

  • Asmussen, S. and Glynn, Peter W.(2007): Stochastic Simulation: Algorithms and Analysis

  • Giambene, Giovanni (2014): Queuing Theory and Telecommunications: Networks and Applications.

  • See also → operations research.

Real Options

See Sect. 2.2.5

Reengineering, Business Process Reengineering, BPR

Promoted by Michael Hammer and James Champy in the mid-1990s as a concept for drastic changes in production and business processes. The result should be decisive and measurable improvements regarding costs, quality, service, and time. “Quantum leaps, not small steps to improvement.” Reengineering is thus contrary to such concepts as Kaizen and CIP, which support a policy of small, continuous steps.
  • References:

  • Hammer, M.; Champy, J. (2003): Reengineering the Corporation.

  • Schantin, D. (2004): Makromodellierung von Geschäftsprozessen

Regression Analysis

A method of → mathematical statistics: regression analysis is an analysis technique that attempts to model the relationship between one dependent and one or more independent variables. Relatedly, a → correlation analysis calculates how well a given (often linear) model can represent an assumed relationship between a set of variables.
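A minimal sketch of a simple linear regression on toy data, together with the corresponding correlation coefficient:

```python
import numpy as np

# Minimal sketch: ordinary least-squares fit y = b0 + b1*x on toy data.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.3, 6.2, 8.0, 9.9])

b1, b0 = np.polyfit(x, y, 1)      # slope and intercept of the regression line
r = np.corrcoef(x, y)[0, 1]       # correlation coefficient (→ correlation analysis)
print(f"y = {b0:.2f} + {b1:.2f} * x,  r^2 = {r**2:.4f}")
```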

Reliability Analyses

Used to model/describe the reliability of objects and to calculate their probability of failure. These objects can be as small as electronic components or as large and complex as power plants. Reliability, as a part of the quality of objects, covers their behavior during or after specified time spans under given operating conditions (see DIN 55350-11) and depends on a number of variables. These result from the type of application, usage, or operation – i.e., on the one hand, external factors such as the specified user demands and effects from the environment, and, on the other hand, internal factors such as the quality resulting from the development and manufacturing of the object.

Reliability analyses are carried out to specify reliability requirements of objects to be designed, and to investigate the reliability of existing or already developed objects. A simplified deterministic perspective of reliability leads to experience-based measures that are intended to prevent or counter failure or breakdown (e.g., n-fold safety coefficients of a component).

A wide range of material characteristics or manufacturing conditions and a great variability in operating conditions, however, require a probabilistic perspective. This also applies to systems consisting of many elements that must perform a great number of functions (e.g., instrumentation and control equipment), and to systems whose malfunction could pose a danger to equipment, humans, or the environment (e.g., signaling equipment in railroad operations).

There are many mathematical methods for calculating reliability. Failure probability calculations often assume an exponentially distributed time to failure, i.e., a failure probability that grows over time at a constant failure rate; electronic components, in contrast, are often assumed to follow a so-called “bathtub curve,” with high failure rates early and late in the product's life. The most commonly used measures are the mean time between failures (MTBF) and the mean time to failure (MTTF).

In systems engineering, reliability analyses are often used during solution search (synthesis/analysis), and if necessary in the situation analysis (improvement of an unsatisfactory solution).
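A minimal sketch, assuming an exponentially distributed time to failure with an illustrative, constant failure rate:

```python
import math

# Minimal sketch, assuming an exponentially distributed time to failure
# (constant failure rate); the rate is an assumed illustrative value.
failure_rate = 1e-4            # failures per hour (assumed)
mttf = 1.0 / failure_rate      # mean time to failure: 10,000 h

def reliability(t_hours: float) -> float:
    """Survival probability R(t) = exp(-lambda * t)."""
    return math.exp(-failure_rate * t_hours)

print(f"MTTF = {mttf:,.0f} h")
print(f"R(1000 h) = {reliability(1000):.3f}")   # approximately 0.905
# For two such components in series (both must work): R_series = R(t)**2
print(f"Series of two components, R(1000 h) = {reliability(1000)**2:.3f}")
```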
  • References:

  • DIN 55350-11:2008-05, English version: Concepts for quality management - Part 11: Supplement to DIN EN ISO 9000:2005

  • Finkelstein, M. (2008): Failure Rate Modelling for Reliability and Risk.

  • Nakagawa, Toshio (2005): Maintenance Theory of Reliability.

  • Levitin, Gregory, G. (2005): The Universal Generating Function in Reliability Analysis and Optimization.

Reverse Engineering

Reverse engineering (also called back engineering) describes a process of extracting all knowledge and design information from an existing system (e.g., an industrially manufactured product) by investigating only its structures, states, properties, and behavior.

The design is thus reconstructed by reversing the engineering process, moving backward from the finished item. To verify the design and other gained insights, a 1:1 copy of the object can be manufactured and compared with the original. On the basis of the copy and its construction plan, it is then possible to move forward with further development. This approach is used as an analysis method in a variety of fields: in the natural sciences (genetics, biology, and physics) or as what is known as retrosynthesis in chemistry.

In situations in which the design to be investigated is protected by intellectual property or trade secret laws, reverse engineering is often forbidden. Legal regulations vary from country to country. Sometimes, there are exceptions for ensuring “interoperability” with other systems.

In areas where it is legal, reverse engineering can be used in many different ways:
  • The design of electronic components, such as microprocessors, can be reconstructed by mechanically removing the silicon die layer by layer. In software, it is often possible to recover the source code from an executable program (using a disassembler and considerable analysis effort), or to decode the communication protocols of network components by analyzing their actual communication.

  • In mechanical engineering, real objects/machine parts can easily be digitized using a laser scanner and are then ready to be used in a digital engineering process (digital mockups, computational flow simulation, FEM analysis, etc.) or in a design process for further development. Digitizing real components is also useful for target-actual comparisons in quality management – e.g., for comparing, in injection molding, the shape of the mold's computer-aided design (CAD) model with the shape of the resulting finished part, digitized using 3D scanners.

  • References:

  • Raja, V.; Fernandes, K. J. (2008): Reverse Engineering

  • Eilam, Eldad (2005). Reversing: secrets of reverse engineering.

Risk Analysis

Risk analyses are used to identify, as early as possible, the dangers or risks – particularly to humans – that result from a product (→ risk management). Another reason for a thorough risk analysis is the intention to minimize product liability risks with respect to clients.

They are also part of the documentation for certifications such as the European Union compliance statement, which is mandatory for the CE test mark, or the Federal Communications Commission Declaration of Conformity, which is used on certain electronic devices sold in the USA.

Risk Management

Assessed potential damages or the consequences of a malfunction due to undesirable occurrences are called risk. Risk management is the systematic identification and evaluation of risks and the management of measures to counteract identified risks. It is a systematic process that is applicable in many areas, for example, with business risks, credit risks, financial risks, environmental risks, insurance risks, technical risks, project risks, and many more.

Project risks are of particular interest here: risk management in projects deals with all measures that appear necessary for preventing or dealing with unplanned results that may endanger the progress of the project. As these measures often require financial or time resources, it is always necessary to weigh the associated efforts against the potential damages (weighted by their likelihood) should the risks occur.

Risk management also covers so-called issues management – the handling of risks that have materialized without having been identified beforehand. Risk analyses in systems engineering are very useful in the concept analysis step of the problem-solving cycle.

The → PMBoK Guide envisions six steps for risk management:
  1. Risk management planning: determine the processes with which the following risk processes work, including identification methods, documentation strategies, evaluation strategies, and responsibilities.
  2. Risk identification: risks in terms of potential hindrances are identified and documented using various methods.
  3. Qualitative risk analysis: the identified risks are assessed qualitatively and assigned priorities based on the probability that they will occur, on the one hand, and their effect on the success of the project, on the other.
  4. Quantitative risk analysis: the risks, countermeasures, and/or necessary allowances are evaluated quantitatively (in monetary terms) – see the sketch after this list.
  5. Risk response planning: countermeasures are identified to minimize the probability that risks occur or to reduce their effects.
  6. Risk monitoring and control: the status of the risks (usually documented in a risk list) and the status of the countermeasures are continually monitored.
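A minimal sketch of the quantitative step (4), using a hypothetical risk list and ranking the risks by their expected monetary value (EMV = probability × impact):

```python
# Minimal sketch of a quantitative risk evaluation: a hypothetical risk list
# with probabilities and monetary impacts, ranked by expected monetary value.
risks = [
    ("Key supplier delivers late",              0.30, 120_000),
    ("Prototype fails load test",               0.10, 400_000),
    ("Requirements change after design freeze", 0.25,  80_000),
]

for name, p, impact in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{name}: EMV = {p * impact:>9,.0f} EUR")

allowance = sum(p * impact for _, p, impact in risks)
print(f"Suggested contingency allowance (total EMV): {allowance:,.0f} EUR")
```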
  • References:

  • Snijders, Paul; Wuttke, Thomas; Zandhuis Anton (2009): PMBOK Guide.

  • Hester, R E. and Harrison, R M., eds. (1998): Risk Assessment and Risk Management.

  • Hopkin, P. (2012): Fundamentals of Risk Management: Understanding, Evaluating and Implementing.

Safety Analyses (Synonym: Security Analyses)

Safety analyses are supposed to reduce the risk of a hazard in businesses and/or projects. The objective is to recognize threats and assess their likelihood of occurrence and damage potential and the attendant risk. Important traditional methods include → fault tree analysis and → FMEA. An interesting new method is → Systems Theoretic Process Analysis (STPA), which is also designed to prevent component interaction accidents.
  • Reference:

  • Leveson Nancy (2012): Engineering a Safer World. Systems Thinking Applied to Safety

Safety Management (Synonym: Security Management)

Safety management tasks in business or in projects can be categorized into strategic and operative tasks. Strategic tasks include strategic analyses (threat analyses, vulnerability analyses), the establishment of safety goals, strategies, and safety measures, the creation of a safety concept, the assignment of safety responsibilities and the necessary competencies to the responsible management positions and employees, strategic controls, etc. Operative tasks include operative analyses (→ risk analyses) and the operative design of measures for implementing the safety concept (e.g., organizing training and drills in which safety-related information is provided and the proper behavior of employees regarding dangers is explained and practiced – this also significantly improves the acceptance of safety measures). Operative controls – such as checking the implementation of the safety concept, the implementation of the planned measures, adherence to the safety guidelines, and the effectiveness of the implemented safety measures – ensure the sustainability of this concept.
  • References:

  • McKinnon, Ronald C. (2012): Safety Management: Near Miss Identification, Recognition, and Investigation.

  • Ortmeier, P. J. (2001): Security Management: An Introduction.

Sampling

Sampling takes a representative portion of information, a material, or a product for testing. Sampling is used frequently in statistics (e.g., in market research, technology, production, quality control, and natural science, social science, medical, and psychological research), because it is often not possible to investigate all elements, such as an entire population or all manufactured examples of a product. When analyzing unknown systems, samples are often used to construct a hypothesis for the whole system following the induction principle, by which inference is made from the particular to the general.
  • Reference: see the reference for → mathematical statistics.

Scenario Analysis (Scenario Planning)

Scenario analysis (sometimes also called the scenario technique) is a strategic planning method that has its roots in the military but has since also been applied to economic and social questions. It is used for analyzing such things as extreme scenarios (best-case/worst-case scenarios) and particularly relevant or typical scenarios (trend scenarios). The scenario technique is also used in psychology and psychotherapy (psychodrama, sociodrama), where it involves both future and past scenarios. The biggest drawback of scenario analysis is that it is difficult to construct realistic scenarios: a realistic worst-case scenario is not one in which everything that can go wrong does go wrong. Instead of going through the effort and risk of constructing realistic scenarios, stochastic methods are therefore often applied, e.g., by assigning probabilities to input and system parameters and evaluating the system behavior with a → Monte Carlo simulation.
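A minimal sketch of this stochastic alternative, assuming triangular three-point distributions for three cost items and evaluating them with the → Monte Carlo method:

```python
import random

# Minimal Monte Carlo sketch: instead of hand-built scenarios, assumed
# triangular distributions (optimistic, most likely, pessimistic) for three
# cost items are sampled many times.
cost_items = [
    (80, 100, 150),   # development [kEUR]: (optimistic, most likely, pessimistic)
    (40,  60, 110),   # tooling [kEUR]
    (20,  25,  45),   # launch [kEUR]
]

N = 100_000
totals = sorted(
    sum(random.triangular(low, high, mode) for low, mode, high in cost_items)
    for _ in range(N)
)
print(f"mean total cost: {sum(totals) / N:.0f} kEUR")
print(f"90th percentile: {totals[int(0.9 * N)]:.0f} kEUR")
```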

However, owing to its simplicity compared with statistical methods, scenario analysis is still preferred in preparing strategic decisions, e.g., with reference to technological developments, business models, market and sector developments, and orientation toward future developments, as well as for strategy development and verification and for the early recognition of possible changes through sensitization to the future. Further application fields are crisis management, project management, → risk management, and many others.

Example: Dennis Meadows' study for the Club of Rome on national economic scenarios, “The Limits to Growth.” For strategic business planning, scenario analysis was used successfully by Shell in the 1970s to overcome the crisis in petroleum prices. The Intergovernmental Panel on Climate Change (IPCC) has developed scenarios of what the world may look like in the future and of the effects that climate change can produce.
  • References:

  • Kahn, H. (1967): The Year 2000

  • Wright, G.; Cairns, G. (2011): Scenario thinking: practical approaches to the future

  • Cornelius, Peter; Van de Putte, Alexander and Romani, Mattia (2005): Three Decades of Scenario Planning in Shell. California Management Review, Nov. 2005

Scoring Method

Synonym for → value-benefit analysis.

Sensitivity Analyses

The purpose of a sensitivity analysis is to check the stability of an achieved result. This is done, for example, by varying the values of the parameters that influence the result within a range considered plausible and examining the new results thus created. Example: ranking a series of variants by means of a → value-benefit analysis. It is now possible to change the weights of individual criteria that are considered crucial, or to vary the assignment of scores within a reasonable framework. If nothing changes – in other words, if the winner is still out front – we may have a higher degree of certainty that the result is correct. However, if the result changes, we have to question the robustness of the chosen set of parameters.

In systems engineering, sensitivity analysis is mainly used during solution search (after an optimization process) or after an evaluation of solutions.
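A minimal sketch of such a weight variation for a → value-benefit analysis (variants, scores, and weights are assumed); note that the winner flips for one of the perturbations, signaling an unstable result:

```python
from itertools import product

# Minimal sketch: checking the stability of a value-benefit analysis result
# by perturbing the criterion weights; variants, scores, and weights are assumed.
scores = {"Variant A": [8, 6, 5], "Variant B": [6, 7, 8], "Variant C": [7, 6, 6]}
base_weights = [0.5, 0.3, 0.2]

def winner(weights):
    totals = {v: sum(w * s for w, s in zip(weights, sc)) for v, sc in scores.items()}
    return max(totals, key=totals.get)

print("base case winner:", winner(base_weights))
for i, delta in product(range(len(base_weights)), (-0.2, +0.2)):
    w = list(base_weights)
    w[i] *= 1 + delta                      # vary one weight by +/-20%
    w = [x / sum(w) for x in w]            # renormalize so the weights sum to 1
    print(f"weight {i} {delta:+.0%}: winner = {winner(w)}")
```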

Sequencing Problems

A category of → operations research problems that involves putting activities into an optimal and feasible sequence. The objective can be, for example, to minimize the order throughput time in manufacturing. A well-known example is the traveling salesperson (salesman) problem, in which the shortest route is sought for a traveling salesperson visiting each of n specified cities exactly once.
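For very small instances, the traveling salesperson problem can be solved by exhaustive search, as in the following sketch with an assumed distance matrix (realistic instances require → operations research methods such as branch-and-bound or heuristics):

```python
from itertools import permutations

# Minimal sketch: brute-force traveling salesperson on an assumed, tiny
# symmetric distance matrix (exhaustive search is feasible only for small n).
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]

length, tour = min(
    (sum(dist[a][b] for a, b in zip((0,) + p, p + (0,))), p)
    for p in permutations(range(1, len(dist)))
)
print(f"shortest tour: 0 -> {' -> '.join(map(str, tour))} -> 0, length {length}")
```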
  • References: see → operations research

Simplex Algorithm

The most important algorithm in practice for solving → linear programming problems. Developed by George Dantzig in 1947 and subsequently improved and extended.
  • References: see → operations research

Simulation Techniques

The term simulation can be used in many ways. Here, we mainly examine the aspects that are significant in connection with development processes.

On the one hand, the term simulation technique is used in connection with tasks that are not (or not yet) solvable analytically (using an optimization algorithm); in this case, the → Monte Carlo method is a good choice. On the other hand, modeling and simulation techniques are used when the system to be developed is not yet real and exists only as a model. Experiments are conducted on the simulated model to learn about the real system. A simulation model thus represents an abstraction of the system to be engineered (structure, function, behavior) – usually in electronic form.

There can be many reasons for using simulations:
  (a) Improving or speeding up the development process:
    • Virtual product development: design and construction, calculation, and testing are primarily digital, i.e., on the computer. That way, time and money are saved in constructing prototypes, the “time to market” is reduced, and/or the product quality is improved through (multiple) reworkings of a design.
    • Virtual testing: e.g., simulation of product tests (e.g., crash tests in the automobile industry) in a computer model, perhaps modifying the design in several steps. Subsequently, a physical prototype is constructed and tested in a testing station. That way, the conformity of the computer model and the testing station results can be compared (the testing station trial is itself also a type of reality simulation).
    • Simulation of production facilities, because repeatedly remodeling the physical facility would be too complex and expensive. Key word: → digital factory.
    • City planning, traffic planning (public and company-internal).
    • Simulation of logistics systems (warehousing, storage, retrieval, supply chain, etc.).
  (b) Risk-free and economical training in a training simulator:
    • Pilot training in flight simulators (practicing critical situations such as engine failure, emergency landing, etc.).
    • Similarly: driving simulators.
    • Training of doctors and surgeons in operating techniques.
    • Experimental business games (training for analysis and decision-making skills).
  (c) The real system cannot be observed directly:
    • System-related: simulation of a single molecule in a fluid, astrophysical processes.
    • The real system works too fast: simulation of circuitry.
    • The real system works too slowly: simulation of geological processes.
Of course, there are also limits to using simulation models:
  • Limited resources (time, money). Therefore, a model must be as simple as possible.

  • Each simulation model simplifies reality. In particular, models of complex situations or systems are often grossly simplified. This naturally reduces the precision, and sometimes the usefulness, of simulation results transferred to reality. With wrongly chosen parameters, the results can simply be wrong. This is why simulation models must be carefully tested and validated.

  • Imprecision of the input data (measuring errors, randomness, etc.).

  • References:

  • Banks, J., Ed. (1998): Handbook of Simulation: Principles, Methodology, Advances, Applications, and Practice.

  • Banks, J., J.S. Carson, B.L. Nelson, and D.M. Nicol (2005), Discrete-Event System Simulation.

  • Cellier, F.E. and Kofman, E. (2006): Continuous System Simulation,

  • Sterman, J.D. (2006), Business Dynamics: Systems Thinking and Modeling for a Complex World.

  • Terano, T., H. Kita, T. Kaneda, K. Arai, and H. Deguchi, Eds. (2005), Agent-Based Simulation: From Modeling Methodologies to Real-World Applications.

  • Robinson, St. (2004): Simulation: The Practice of Model Development and Use.

Six Sigma, 6σ

Six sigma is a collection of methods and tools for process improvement and belongs to → quality control. It was introduced at Motorola in 1986 and later adopted by many companies in different sectors – such as General Electric, where it became a central element of the enterprise’s business strategy.

The underlying assumption is that it is possible to improve the quality of the process output by systematically identifying and removing the causes of defects and by minimizing the (unintended) variability in manufacturing and business processes. Six sigma uses a large variety of (data-driven, statistical) → quality management methods.

The term “six sigma” originates from the goal of keeping the process output within ±6σ of the target – which, allowing for the conventional 1.5σ long-term shift, corresponds to 99.99966% defect-free outcomes (3.4 defects per million opportunities).
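The arithmetic behind these figures can be sketched as follows (standard normal distribution; the 1.5σ shift is the conventional six sigma assumption):

```python
from math import erf, sqrt

# Minimal sketch behind the six sigma figures. phi is the standard normal CDF.
def phi(z: float) -> float:
    return 0.5 * (1 + erf(z / sqrt(2)))

# Centered process, spec limits at +/-6 sigma:
print(f"centered:        {(phi(6) - phi(-6)):.9f} inside spec")

# With the conventional 1.5 sigma long-term mean shift, the dominant tail
# lies only 4.5 sigma away, which yields the famous 3.4 DPMO:
dpmo = (1 - phi(6 - 1.5)) * 1_000_000
print(f"1.5 sigma shift: {dpmo:.1f} defects per million opportunities")
```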
  • Reference:

  • Tennant, Geoff (2001). SIX SIGMA: SPC and TQM in Manufacturing and Services.

Statistics

See → mathematical statistics.

Systems Theoretic Process Analysis, STPA

A new hazard analysis technique with the same goals as any other hazard analysis technique: to identify scenarios leading to identified hazards – and thus to losses – so that they can be eliminated or controlled. STPA, however, has a different theoretical basis or accident causality model: it is based on systems theory, whereas traditional hazard analysis techniques have reliability theory at their foundation – even though many accident causes do not involve failures or unreliability.

Although traditional techniques were designed to prevent component failure accidents (accidents caused by one or more components failing), STPA was also designed to address the increasingly common component interaction accidents, which can result from design flaws or unsafe interactions among non-failing (operational) components.
  • References:

  • Leveson, Nancy (2012): Engineering a Safer World. Systems Thinking Applied to Safety. MIT Press

  • Leveson, Nancy (2013): An STPA Primer http://psas.scripts.mit.edu/home/wp-content/uploads/2015/06/STPA-Primer-v1.pdf

Survey

See → interview.

Synectics

A → creativity technique developed by W.J. Gordon to activate the solution search: a combination of systematic and intuitive elements based on brainstorming, embedded in a process of stepwise analogy creation (→ analogy method). The process consists of four phases:
  1. Preparation phase: after intervening with spontaneous solution ideas, disassociation is used to encourage the introduction of structures extraneous to the problem and the unaccustomed combination of elements.
  2. Incubation phase: in addition to direct analogies from technology or nature (→ bionics), personal analogies are formulated (How would I feel as…?), along with symbolic, contrary, and fantastic analogies (What would a fairy do?).
  3. Illumination phase: the analogies are checked for their suitability for transfer to the problem; also called force-fit.
  4. Verification phase: working up solution concepts.
Synectics places high demands on the participants, especially on the moderator. In addition to overcoming the restraints of personal analogies, the transfer of unfamiliar structures and the unaccustomed combination of elements require some practice. This, plus the greater demands on the team and on the preparation of the moderator, may be the reason why synectics is not used very frequently in practice.
  • Reference:

  • Gordon, William J.J. (1961): Synectics: The Development of Creative Capacity. New York. Harper & Row Publ.

System Dynamics, SD

Methodology developed by Jay W. Forrester at MIT for the holistic analysis and simulation of complex, dynamic systems. It is especially applied in socio-economic fields. The effects of management decisions on structure and system behavior (e.g., business success) can thus be simulated and recommendations for action can be derived (also see the brief presentation of system dynamics in Sect. 1.​4).

The qualitative method mainly involves the identification and investigation of self-contained feedback loops. There is a distinction between loops with positive, strengthening effects (reinforcing loops) and negative, stabilizing ones (balancing loops). Causal loop diagrams are often used here as a graphical representation technique.

With quantitative models, the representation in stock and flow diagrams and their simulation facilitate a deeper understanding of the system. Stocks, flows, and auxiliary quantities are useful in describing the system's relationships, and they show how the feedback loops lead to system behavior that is often nonlinear and counterintuitive. This is the main advantage of this method.

Special software packages such as CONSIDEO, iThink/STELLA, DYNAMO, Vensim, Powersim or AnyLogic allow a numerical simulation of the system dynamics models. The simulation runs of various scenarios foster understanding of the system behavior over time.
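The flavor of such a stock-and-flow simulation can be conveyed by a minimal sketch: a Bass-type diffusion model with one reinforcing and one balancing loop, integrated with a simple Euler scheme; all parameters are assumed:

```python
# Minimal stock-and-flow sketch (Euler integration) of a Bass-type diffusion
# model: one balancing and one reinforcing loop. All parameters are assumed.
dt, t_end = 0.25, 20.0
potential, adopters = 10_000.0, 10.0   # stocks (assumed initial values)
p, q = 0.03, 0.4                       # innovation / imitation coefficients

t = 0.0
while t < t_end:
    total = potential + adopters
    adoption_rate = (p + q * adopters / total) * potential   # flow [units/period]
    potential -= adoption_rate * dt                          # drain one stock
    adopters += adoption_rate * dt                           # fill the other
    t += dt

print(f"adopters after {t_end:.0f} periods: {adopters:,.0f} (S-shaped growth)")
```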

Peter Senge has identified, investigated, and classified typical structures in feedback system types, which he has called archetypes. Knowledge of these basic structures allows a better understanding and a good forecast of the behavior of various social systems and/or management situations and thus provides a basis for more effective interventions.

Application areas: system dynamics was the underlying methodology for simulating the World3 world model, which was created under the direction of Dennis L. Meadows under contract from the Club of Rome for studies on “Limits to Growth” (1972, 2004). It is also useful for simulating and explaining the complex behavior of humans in social systems. Typical examples are the investigation of the phenomenon of over-fishing and the occurrence of disasters such as the nuclear accident in Chernobyl.

Building on the methods of system dynamics, information dynamics investigates the information processing of systems and determines that information is the essential determinant for the behavior and the efficiency of systems.
  • References:

  • Forrester, J.W. (1977): Industrial Dynamics.

  • Meadows, D.; Meadows, D.; Randers, J. (1972): The Limits to Growth.

  • Meadows, D.; Meadows D. L.; Randers J. (2004): Limits to Growth: The 30-Year Update.

  • Senge, P.M. (1990): The Fifth Discipline: The Art & Practice of The Learning Organization.

  • Senge, P.M. (1994): The Fifth Discipline Fieldbook: Strategies and Tools for Building a Learning Organization.

  • Sterman, J.D. (2006): Business Dynamics: Systems Thinking and Modeling for a Complex World.

Systems Modeling Language, SysML

SysML is a standardized language based on → UML for modeling complex systems. Originally designed mainly for software development, its field of application now extends noticeably to overall product development (hardware and software). SysML supports the analysis, design, and testing of complex systems, including the following steps:
  • Modeling system requirements and making them available

  • Analyzing and evaluating systems to solve requirement and design issues and testing alternatives

  • Communicating system information unambiguously among various stakeholders

  • References:

  • Dori, Dov (2016): Model-Based Systems Engineering with OPM and SysML.

  • Weilkiens, T. (2008): Systems Engineering with SysML/UML: Modeling, Analysis, Design.

Target Costing

Approach to guiding the product development process by previously set cost targets. The process starts with market-driven costing where a realistic sales price (including a desired profit margin) is estimated. This “allowable cost” is the basis for the next step, the product level costing, where the internally achievable cost structure for producing a competitive product is evaluated. Here, the → value analysis technique can be of great help. If necessary, functional changes in the product specifications are made to achieve the cost target. In the third process step of target costing, this internal step is repeated at the component level.

During the development process, it is important to track these costs as well.
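A minimal sketch of the first, market-driven step (all figures are illustrative):

```python
# Minimal market-driven target costing sketch; all figures are illustrative.
target_price = 249.0        # competitive sales price estimated from the market
profit_margin = 0.20        # desired margin as a fraction of the price

allowable_cost = target_price * (1 - profit_margin)   # the "allowable cost"
current_estimate = 215.0    # internally estimated cost with today's design

gap = current_estimate - allowable_cost
print(f"allowable cost: {allowable_cost:.2f} EUR")
print(f"cost gap to close by redesign/value analysis: {gap:.2f} EUR")
```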
  • References:

  • Clifton, B.C.; Bird, H.M.B.; Albano, R.E.; Townsend, W.P. (2004) Target Costing: Market-Driven Product Design.

  • IMA: Implementing Target Costing (1994)

Theory of Inventive Problem Solving, TIPS

English acronym for → TRIZ (Teoriya Resheniya Izobretatelskikh Zadatch) .

To-Do List

Also known as a task list or a list of open points (LOP list). A simple aid in project meetings for assigning the planned tasks (= WHAT) to individuals – sometimes also to groups or organizational units (= WHO) – and tying these tasks or work packages to a committed deadline (= WHEN). It is important to make sure that tasks are not delegated to a group or organization, but rather have exactly one person responsible.

The to-do list often takes the form of a three-column list that is communicated to the persons and groups involved immediately after the project meeting. This creates the basis for task supervision (task tracking): the next meeting starts with a report on the pending and completed (or uncompleted) tasks. Project team members whose tasks are lagging behind schedule have to explain and justify the situation and its reasons to the team – which, as an intended side effect, also creates (peer) pressure to improve work discipline.

Total Quality Control, TQC

Ongoing holistic → quality control (or assurance) of all products and processes involving, in addition to classical production-oriented company functions, other departments such as design and purchasing.
  • Reference:

  • Feigenbaum, Armand V. (1991). Total Quality Control.

Total Quality Management, TQM

Total quality management (TQM) is a traditional → quality control method that demands companywide efforts to establish an environment where a company’s core business processes are continuously improving, which leads to high-quality services or products for customers. In different industries and regions there are variants of TQM. In recent years, some of the TQM ideas have been adopted by modern and more popular concepts such as the ISO 9000 standard, lean manufacturing, or → six sigma.

Quality management of such a kind is nowadays mandatory in sectors such as the airline and aerospace industries, medical technology, health care, pharmaceuticals, and food manufacturing.

Total quality management originated in the USA (W.E. Deming). It became very popular in Japan after WW2 and experienced a renaissance in the USA (Baldrige Award). In Europe, TQM is known as the → EFQM Model for Business Excellence.

To give an example of some basic TQM core beliefs: not only people, but also poorly planned and poorly managed processes cause errors. It is senseless and futile to search for a single culprit. The goal has to be zero errors. Closer, more reliable relationships with fewer suppliers are important. As the entire focus is on the customers, processes must be directed primarily toward them and their needs.
  • References:

  • Juran, Joseph M. (1995): A History of Managing for Quality: The Evolution, Trends, and Future Directions of Managing for Quality.

  • Evans, J.R.; Lindsay, W.M., 1995. The management and control of quality.

  • Deming, E. (1997): Out of the Crisis.

  • Omachonu, Vincent K.; Ross, Joel E. (2004): Principles of Total Quality,

  • Tague, Nancy R. (2005): The Quality Toolbox.

  • Gitlow, Howard Seth; Levine, David M. (2005): Six Sigma for Green Belts and Champions

TRIZ/TIPS

The TRIZ method is one of the → creativity techniques. The original term TRIZ is a Russian acronym for the Theory of Inventive Problem Solving (TIPS), the term used in the English-speaking world. The methodology was developed by a group around G. Altshuller in Russia. It emerged from the examination of a large number of patent specifications, from which those that appeared to describe technical breakthroughs were selected. These were subsequently analyzed in greater detail, and three essential, common phenomena were identified:
  1. Many inventions are based on a comparatively small number of general solution principles.
  2. Only the overcoming of contradictions makes innovative developments possible.
  3. The evolution of technical systems follows certain patterns and rules.

The TRIZ method contains a series of methodical tools that facilitate more effective analysis of a technical problem and make it possible to find creative solutions.

Using this method, inventors attempt to systematize their activities to get to new problem solutions more quickly and more efficiently. The TRIZ method has become widespread.

The classical TRIZ methods are:
  1. Principles of innovation and contradiction matrices
  2. Separation principles for solving physical contradictions
  3. Algorithms or at least stepwise procedures for solving invention problems
  4. System of 76 standard solutions and substance-field analysis
  5. S-curves and laws of systems development (evolutionary laws of technical development, laws of technical evolution)
  6. Principle (law) of ideality
  7. Modeling technical systems using “little men” (dwarf model)
The following methods were introduced by Altshuller’s followers and now also belong to TRIZ:
  1. Innovation checklist (innovation situation questionnaire)
  2. Function structure according to TRIZ (a type of cause-and-effect diagram, different from Ishikawa's version)
  3. Subject-action-object function model (an expanded functional model based on the work of L. Miles on → value analysis)
  4. Process analysis
  5. Materials-cost-time operator
  6. Anticipatory error detection
  7. Feature transfer (part of “alternative system design”)
  8. Resources
  • References:

  • Altshuller, G. S. (1984): Creativity as an Exact Science – The Theory of the Solution of Inventive Problems. New York: Gorden and Breach

  • Altshuller, G. (1999): The Innovation Algorithm: TRIZ, systematic innovation, and technical creativity.

  • Fey, V.; Rivin, E. (2005): Innovation on Demand: New Product Development Using TRIZ.

  • Rantanen, K.; Domb, E. (2007): Simplified TRIZ: New Problem Solving Applications for Engineers and Manufacturing Professionals.

Unified Modeling Language, UML

A language for modeling software and other systems, developed and standardized by the OMG. UML is also standardized as ISO/IEC 19501 and is today one of the dominant languages for modeling software systems. Owing to its software focus and – despite its name – the lack of a unifying concept, UML is not very useful for business process modeling, where → business process model and notation (BPMN) is used instead.

Selected UML components are used in software project management:
  • Project commissioning parties and business professionals test and confirm, for example, the requirements that the business analysts have determined using BPMN and create the so-called UML use case diagrams.

  • Software developers implement the program logic that the business analysts, in collaboration with professionals, have described in activity diagrams.

  • Systems engineers install and operate the software system based on an installation plan that exists as a deployment diagram.

Besides its usefulness as a graphic notation, UML primarily specifies the data objects and program entities along with their attributes and relationships. Modern UML-based software development tools can automatically generate the corresponding source code.
  • References:

  • Weilkiens, T. (2008): Systems Engineering with SysML/UML: Modeling, Analysis, Design.

  • Coad, Peter; Lefebvre, Eric; De Luca, Jeff (1999): Java Modeling in Color with UML: Enterprise Components and Process.

Use Case

Use cases are used in both systems and software engineering to describe all the steps of a task in which, for example, a person (“actor”) interacts with a system. Such a use case can be seen as a scenario. The basic idea is to extract requirements by analyzing a “usage” or interaction process, which is often more efficient than a classical specification list.

The result of a use case can be success or failure/termination. Use cases are traditionally named for the goals from the actors' perspective: enrolling a member, withdrawing money, returning a car.

The granularity of use cases can vary widely: at a very high level, a use case describes only very roughly and abstractly what happens. However, the technique of writing a use case can be refined, down to the level of IT processes, so that the behavior of an application is described in detail. This contradicts the original intention of use cases, but it is often very useful.

Use cases and business processes show a different view of the system modeled and have to be clearly distinguished from one another. A use case describes what the actor/environment expects from the system. Business processes, on the other hand, model how the system operates internally to meet the requirements of the environment.

Valuation Techniques

Formalized procedures for evaluating and comparing, for example, solution variants with respect to meeting their objectives (see Part III, Sect. 6.​4). Examples: → value-benefit analysis, → cost-effectiveness analysis, → analytic hierarchy process (AHP).

Value Analysis (Synonyms: Value Management, Value Engineering)

Value analysis was originally an aid in product design. It primarily serves value improvement and cost reduction by simplifying products or parts and reducing their costs. An extension of the method applied this basic idea to the value-oriented creation of new products. In a further step, the methodology of value analysis was also applied to researching and designing intangible services (e.g., organizational processes).

The core of value analysis is thinking in functions, at first at the level of the entire product and use by clients. This means “letting go” of existing or obvious possible solution patterns and opening up to innovative solutions and considering their value to a potential client. There is thus a distinction between main and secondary functions, and there may also be undesirable functions that the client does not want and for which he or she will not pay. In addition, there are utilitarian functions (utilities) and prestige utility (e.g., esthetics, image, prestige, without any useful advantage). A client’s appreciation of a product and the desired and valued functional features should also determine the price that a client is prepared to pay in a competitive situation. Often, this is specified in the form of a target cost as a component of the value analysis goal (with functional, quality, performance, and deadline goals, etc.) for the design process. See → target costing.

In contrast to → value-benefit analysis, which is an evaluation process, value analysis is a special approach to the solution search that also evaluates individual solution elements with a view to optimizing them.

Over the course of time (and in its promoters’ self-perception), value analysis has grown from an approach to improving the product or the product design process into a comprehensive problem-solving methodology (VDI Guideline 2800 or formerly DIN 69910 standard) – see, for example, the value analysis work plan illustrated in Part I, Sect. 2.​2.​1.​4.
  • References:

  • Miles, Lawrence D. (1972): Techniques of Value Analysis and Engineering.

  • Sato, Yoshihiko; Kaufman, J. Jerry (2005): Value Analysis Tear-Down: A New Process for Product Development and Innovation.

Value-Benefit Analysis, VBA (Synonym: Scoring Method)

Evaluation method used to determine the preferability of variants.

See Part III, Sect. 6.4.2.3 or the airport case study in Chap. 9.

Virtual Product Development

Development of a product within the virtual world of computers. With the help of CAD and PDM systems, products are designed graphically and assigned the relevant technical characteristics (material characteristics: thickness, surface finish, tensile and yield strength; mechanical and kinematic relationships; production information; etc.). Afterward, further processing, calculations, and simulations of all types can be carried out, such as strength calculations, finite element calculations, installation tests (with the help of digital mockups), crash simulations, etc. The results can have repercussions on the dimensioning or shaping of the construction, i.e., they may lead to iterations improving the quality of the design.

New products can be tested virtually in this way before they are available in physical form. Virtual product development is one of the most important trends in construction and development activities.
  • References:

  • Weisberg, D.: The Engineering Design Revolution, E-Book, www.cadhistory.net, May 2010

  • Nambisan, Satish ed. (2010): Information Technology and Product Development.

  • Kenneth B. Kahn, ed. (2013): The PDMA Handbook of New Product Development.

  • Bordegoni, M.; Rizzi, C., eds. (2011): Innovation in Product Design: From CAD to Virtual Prototyping.

  • Virtual Vehicle Research Center: http://​vif.​tugraz.​at

Visualization Techniques (Information Graphics)

Collective term for techniques for representing and visualizing all types of information: layout plans show the size of rooms and their arrangement, etc. Organizational charts show the organizational structure, the manner of structuring, hierarchical relationships, job holders, etc. Flow charts show the sequence of mutual dependencies, the logical order of activities. Communication diagrams show between which positions, how often, what type of communication takes place, etc. Bubble charts are used for draft visualizations of a problem or for visualizing system relationships. Bubbles = elements, and arrows indicate relationships among them. Questions: which system aspect is of interest to us? What perspective are we using? Which elements of significance, which relationships, how to demarcate them, what is not significant? Also see Part I, Sect. 1.​2.​5. Tabular representations provide an overview, structure, order. → Mind maps likewise order and stimulate the continuation of trains of thought.
  • References:

  • Tufte, Edward R. (2001): The Visual Display of Quantitative Information

  • Zelazny, Gene (2001): Say It With Charts: The Executive’s Guide to Visual Communication

  • Rendgen, Sandra; et al. (2012): Information Graphics

Work Breakdown Structure, WBS

A graphic overview that represents objects of a system or tasks (usually as a tree) that are connected with respect to their development and implementation. The structuring can be carried out in various ways.
  • Object-oriented WBSs (object structure designs) provide an overview of the system components to be designed, which can be further subdivided arbitrarily. Example: a car divided into chassis, body, motor, interior, etc. (see the sketch after this list).

  • Task-oriented WBSs describe the tasks to be completed, such as development, construction, manufacturing, assembly, marketing, distribution, etc.

  • Phase-oriented WBSs describe the phases, which are divided according to temporal and decision-oriented aspects, e.g., preliminary study (feasibility study), main study (master plan), detailed study, etc.
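A minimal sketch of an object-oriented WBS, represented as a nested structure and printed as a tree (the decomposition is illustrative):

```python
# Minimal sketch of an object-oriented WBS as a nested dictionary,
# printed as a tree; the decomposition of the car is illustrative.
wbs = {
    "Car": {
        "Chassis": {},
        "Body": {},
        "Motor": {"Engine block": {}, "Cooling": {}},
        "Interior": {},
    }
}

def print_tree(node: dict, indent: int = 0) -> None:
    for name, children in node.items():
        print("  " * indent + "- " + name)
        print_tree(children, indent + 1)

print_tree(wbs)
```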

  • References:

  • Project Management Institute (2006): Practice Standard for Work Breakdown Structures.

  • Haugan, Gregory T. (2003): The Work Breakdown Structure in Government Contracting

16.1 Self-Check of Knowledge and Understanding: Encyclopedia/Glossary

  1. Do you think this glossary is useful even nowadays, in the age of Wikipedia, Google, etc.? Why or why not?
  2. What percentage of the described methods and tools were familiar to you?