An → arrow in the text refers to a corresponding key word in the encyclopedia.
ABC Analysis (Synonyms: Pareto Analysis, 80–20 Rule)
Reference:
Koch, R. (2004): Living the 80/20 Way.
Activity Sampling (Synonyms: Work Sampling, Multi-Moment Observations)
Activity sampling is a statistical method for determining the proportion of time spent by workers in various defined categories of activity (e.g., setting up a machine, assembling parts, waiting, etc.). Its great advantage over other statistical techniques is the efficiency with which it measures and analyzes the nature and performance figures of complex processes and interactions.
In an activity sampling study, a sufficiently large and representative number of random observations is made during a specified amount of time. The nature and frequency of observed activities are recorded and later analyzed.
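The number of observations required for a given precision follows from standard sampling statistics. A minimal sketch in Python, assuming the usual normal approximation of the proportion estimate (the function name and the pilot values are illustrative):

```python
import math

def required_observations(p_est: float, abs_error: float, z: float = 1.96) -> int:
    """Observations needed so that the estimated activity proportion
    p_est is accurate to +/- abs_error (z = 1.96 for ~95% confidence)."""
    return math.ceil(z**2 * p_est * (1.0 - p_est) / abs_error**2)

# Example: a pilot study suggests a machine is idle ~20% of the time;
# estimating this within +/-3 percentage points at 95% confidence requires:
print(required_observations(0.20, 0.03))  # -> 683 random observations
```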
Reference:
Groover, M. P. (2007): Work Systems and the Methods, Measurement, and Management of Work.
Agile Systems Engineering
Refers either to an agile systems engineering approach or to the engineering of agile systems. See Sect. 1.3 on the “Agility of Systems.”
Analogy Method
References:
Gerardin, L. (1968): Bionics.
Rossmann, T. et al. (2007): Bionics - Natural Technologies and Biomimetics.
Analysis Techniques
Techniques that are used for a systematic investigation of all aspects/components/elements of an object (or subject) based on defined criteria. These are then sorted, structured, and evaluated – very often also with respect to their interaction. Analysis techniques are very important to systems engineering and are used to investigate the past, present, and, when designing, the future (requirements) of systems.
Mathematical methods, simulation runs, plausibility tests, and destruction analyses support the various techniques, which are often named after their purpose (→ reliability analysis, → security analysis, disaster analysis, compatibility analysis, consequence analysis, → risk analysis, cause-effect analyses, → cost-effectiveness analyses, etc.).
When designing systems, analysis techniques can either be used to predict a model’s behavior ahead of time (ex ante) or to investigate the behavior of the realized model (object) in the real environment. In spite of all the progress in mathematical/scientific modeling and computer performance, and despite sophisticated simulations of system behaviors, often only the construction and analysis of a real, functioning system can clarify the functionality of principles and solution ideas. When designing chemical engineering processes, for example, miniature plants, test plants, or pilot plants are often constructed first. When designing machines, we often see functional models, pilot samples, or prototypes. Finally, pilot runs are used to test production processes and the means of serial and mass production.
With workflow planning in particular, test runs of various sizes right up to parallel operation of the old and new systems are carried out.
Analytic Hierarchy Process, AHP
Reference:
Saaty, Th. L. (2001): Decision Making for Leaders
Assignment or Allocation Problems
Transportation problems can generally be described as follows: at certain starting points physically separated from one another (points of departure), specific resources (e.g., vehicles, goods for shipment) are available in certain quantities. At certain endpoints (recipients) there is a specific need for the same resources. The connection paths including transport times and costs of the transfer from each starting point to each endpoint are given. The available resources are to be sent from the starting points to the endpoints so that the shipping expense (costs, times, vehicle use) is minimal.
An allocation problem can also be seen as a special instance of a transport problem. At each starting point, there is just one unit of the required resource available, and at each endpoint just one unit is requested. (Example: the best possible allocation of n persons to n workplaces or the transfer of n vehicles from n starting points to n endpoints so that the overall mileage is minimized).
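Such an assignment problem can be solved exactly with the Hungarian method. A minimal sketch, assuming SciPy is available; the cost matrix is hypothetical:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cost matrix: mileage for transferring vehicle i to endpoint j.
cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])

rows, cols = linear_sum_assignment(cost)  # Hungarian-method solver
print(list(zip(rows, cols)))              # optimal vehicle-endpoint pairing
print(cost[rows, cols].sum())             # minimal overall mileage: 5
```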
References: See → linear programming, → operations research.
Bar Chart (Synonym: Gantt Chart)
An aid for the graphic representation of the duration (by the length of the bar) and the temporal arrangement of activities, e.g., in a project (by their position along the time axis). In their original form, bar charts contain no logical dependencies among the represented activities; however, these can be added. IT-supported project management systems permit the automatic generation of bar charts from a → network plan.
Benchmarking
Benchmarking is a continual process of comparing entities such as one’s own products, performance figures, practices, and processes with those of others, in order to set goals to perform as well as or better than the best and thereby achieve a competitive advantage.
References:
Boxwell, R.G. (1994): Benchmarking for Competitive Advantage.
Walleck, A.S. et al. (1991): Benchmarking world-class performance.
Bionics
References:
Rossmann T.; Tropea C.; Vincent, J. (2007): Bionics.
Gerardin, L. (1968): Bionics.
Marteka, V. (1965): Bionics.
Brainstorming
→ A creativity technique that harnesses a group’s problem-solving skills by encouraging the free flow of ideas through a set of game-like rules.
Reference:
Osborn, A. F. (1957): Applied Imagination.
Business Process Model and Notation, BPMN
References:
Silver Bruce (2011): BPMN Method and Style with BPMN Implementer’s Guide
White, Stephen A.; Bock, Conrad (2011). BPMN 2.0 Handbook: Methods, Concepts, Case Studies and Standards in Business Process Management Notation.
Grosskopf, A.; Decker, G.; Weske, M. (2009): The Process: Business Process Modeling Using BPMN.
Business Re-engineering, See → Re-engineering
Capability Maturity Model, CMM; Capability Maturity Model Integration, CMMI
References:
Gallagher B. P.; Phillips, M.; Richter, K.J; Shrum. S. (2009): CMMI-ACQ.
Chrissis, M.B.; Konrad, M.; Shrum, S. (2011): CMMI.
Card Technique (Synonym: Metaplan Method)
→ Creativity technique in which the ideas and statements are not expressed orally, as in → brainstorming, but rather are written on small cards by every participant and posted on a bulletin board. Advantage: easier evaluation, e.g., in terms of clustering similar ideas. Disadvantage: depends on the availability of aids such as a bulletin board.
Checklists
Lists of activities that are necessary for the completion of tasks. There is a meaningful distinction between (1) checklists of a compulsory nature: every activity must be carried out (for example: airplane takeoff) and (2) checklists of a discretionary nature as an aid in searching for ideas – or as a stimulus for one’s own thinking (remember the important things). Checklists are usually strongly task-/context-oriented and therefore not universally applicable.
Configuration Management, CM
The impetus for configuration management (CM) was the continually increasing complexity of products caused by a variety of possible combinations involving various modules and the continual change in product configurations. The first solution approaches to CM were developed in the aircraft and aerospace industries. Similar complexity problems were also evident in other sectors in which – for both suppliers and clients – it always had to be clear what parts or modules constituted the purchased or delivered product. Methods and instruments of CM were refined and specialized for various application fields. Today, variants of CM are part of many disciplines, such as product data management (PDM), software configuration management, etc.
The American National Standards Institute (ANSI), in cooperation with the Electronic Industries Alliance (EIA) has defined CM as follows: “Configuration management is a management process for establishing and maintaining consistency of a product’s performance, its functional and physical attributes, with its requirements, design, and operational information, throughout its life.” (Wikipedia)
References:
ISO 10007:2003: Quality management systems - Guidelines for configuration management.
Lyon, D.D. (2000): Practical Configuration Management.
Continuous Improvement Process
See → Kaizen
Correlation Analysis
Correlation analysis measures the degree to which two variables are coupled, expressed by the correlation coefficient r, which ranges from −1 to +1.
A positive (+) correlation means that the variables are coupled and change their magnitude in the same direction (such as the horsepower and acceleration of a car).
A negative (−) correlation means that the variables are coupled and change their magnitude in opposite directions (such as the fuel economy and the weight or engine power of a car).
r = 0 means that the variables are not coupled, i.e., totally independent of each other, such as the body height and political beliefs of a person.
However: even a correlation coefficient r near 1 does not prove a true dependency between two variables, for an outside, third variable can influence both of them, or there can be some other, unknown relationship. Therefore, it is important not to confuse correlation with causation!
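As a minimal illustration, the coefficient r can be computed directly; the measurement data below are hypothetical:

```python
import numpy as np

# Hypothetical measurements: engine power (hp) and top speed (km/h).
power = np.array([75, 110, 150, 200, 280])
top_speed = np.array([160, 185, 210, 240, 270])

r = np.corrcoef(power, top_speed)[0, 1]  # Pearson correlation coefficient
print(round(r, 3))  # close to +1: the variables move in the same direction
```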
References:
Walpole, R.E. (2007, 9th ed.): Probability & Statistics for Engineers & Scientists.
Silver, N. (2013): The Signal and the Noise.
Mann, P.S. (2012): Introductory Statistics
Cost-Benefit Analysis, CBA
References:
Sassone P.G.; Schaffer, W. A. (1978): Cost-benefit Analysis - A Handbook.
Nas, T.F. (1996): Cost-benefit analysis: Theory and application.
Cost-Effectiveness Analysis
Reference:
Levin, H.M. (1983): Cost-Effectiveness: A Primer.
Creativity Techniques, CTs
Techniques for systematically stimulating the generation of ideas. They can be classified according to whether they:
Are ascribed to purely intuitive processes
Promote analogous and/or contrasting linking of ideas (restructuring of the solution field)
Are intended to lead to a variety of solutions by means of a combinational process
Thus, individual techniques can be assigned to several categories. Creativity techniques include in particular → brainstorming, → card technique, and → method 635. The characteristic of these techniques is that criticism and discussion of the reasonableness and practicability of an idea are not allowed at first. On the other hand, it is allowable, and even desirable, to seize on an expressed idea and modify it or twist its meaning around.
This glossary also includes → analogy methods, → synectics, → bionics, and → morphological analysis. Attribute listing seeks further manifestations of the characteristics, functions, and effects (traits) of an existing or discovered solution, working from each trait in turn; its emphasis is on improvement. Beyond these methods, there are comprehensive systems for finding solutions, such as G. Nadler’s ideals concept (see Part I, Sect. 2.1.4.2), systematic heuristics, and G. Altshuller’s theory of inventive problem solving (→ TRIZ).
References:
De Bono, E. (1970): The Use of Lateral Thinking.
Osborn, A.F. (1957): Applied Imagination.
Csikszentmihalyi, M. (2013): Creativity: Flow and the Psychology of Discovery and Invention.
Paulus, P.B.; Nijstad, B.A. (Eds.) (2003): Group Creativity: Innovation Through Collaboration.
Criteria Plan
List of variables and their measures used for a comparative evaluation of solutions (see Part III, Sect. 6.4.3.2).
Critical Path Method, CPM
The critical path method (CPM) is a widely used algorithm for planning and scheduling project activities (see → network planning techniques).
Decision Theory
References:
Saaty, Th. L. (2001): Decision Making for Leaders.
Schuyler, J. R. (2001): Risk and Decision Analysis in Projects.
Beer, S. (1966): Decision and Control.
Keeney, R.L.; Raiffa, H. (1976): Decisions with Multiple Objectives.
Peterson, M. (2009): An Introduction to Decision Theory.
Goodwin, P.; Wright, G. (2004): Decision Analysis for Management Judgment
Decision Tree Method
References:
Magee, J.F. (1964): Decision Trees for Decision Making
Provost, Foster; Fawcett, Tom (2013): Data Science for Business: What You Need to Know about Data Mining and Data-Analytic Thinking. O’Reilly Media.
de Ville, Barry; Neville, Padraic (2013): Decision Trees for Analytics Using SAS® Enterprise Miner™. SAS Institute
Delphi Method
Reference:
Hsu, Chia-Chien and Sandford, Brian A. (2007). The Delphi Technique
Design Structure Matrix, DSM
(also referred to as: dependency structure matrix, problem-solving matrix, design precedence matrix)
References:
Eppinger, Steven D.; Browning, Tyson R. (2012): Design Structure Matrix Methods and Applications.
Lindemann, U., et al. (2009): Structural Complexity Management
Digital Factory
A concept that virtualizes all processes involved in an industrial factory as a computer model/simulation.
References:
Canetta, L.; Redaelli, C.; Flores, M. (Eds.) (2011): Digital Factory for Human-oriented Production Systems.
Economic Feasibility Calculation
A method of evaluating the profitability of an existing or a planned system, or for a profitability comparison of several variants. Important elements of an economic feasibility calculation are the intended useful life, the interest rate or the discount rate, and the performance expressed in monetary units, which are compared with the use of resources (costs). The profitability principle demands either minimizing costs with given performance or maximizing performance with given costs.
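One common form of such a calculation is the net present value (NPV), which discounts the yearly performance minus costs at the chosen discount rate. A minimal sketch; the cash flows, useful life, and rate are hypothetical:

```python
def npv(cash_flows, discount_rate):
    """Net present value of yearly net cash flows (performance in monetary
    units minus costs); cash_flows[0] is the investment at t=0 (negative)."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical comparison of two variants over a 4-year useful life at 8%:
variant_a = [-10_000, 3_500, 3_500, 3_500, 3_500]
variant_b = [-14_000, 4_800, 4_800, 4_800, 4_800]
print(round(npv(variant_a, 0.08)))  # ~1592
print(round(npv(variant_b, 0.08)))  # ~1898 -> higher NPV, more profitable
```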
References:
Farris, P.W.; Bendle, N.T.; Pfeifer, P.E.; Reibstein, D.J. (2010): Marketing Metrics: The Definitive Guide to Measuring Marketing Performance.
Feibel, B.J. (2003): Investment Performance Measurement.
Brealey, R.A.; Myers, S.C.; Allen, F. (2013): Principles of Corporate Finance.
Effect Networks/Influence Matrices
Important components of a method for representing and analyzing complex effect relationships (see → Network Thinking) to draw conclusions from them.
- 1.
Possible use of an effect network diagram in which the relations among the individual elements (variables) are illustrated (Fig. 16.1).
- 2.
Use of a matrix in which the columns and rows represent the network (Fig. 16.2)
- 3.
Assessment of the strengths of the influences and entries in the matrix. The meaning of the numbers is as follows: 0 = no influence; 3 = strong influence, etc. (the scale can be chosen arbitrarily).
- 4.
Calculation of the row sums (active sum: “element influences others”). This sum indicates the level of influence exerted by the particular element in the row. Calculation of the column sums (passive sum: “element is influenced”) to show how strongly the particular element in the column is influenced by the others (see the sketch after this list).
- 5. Interpreting the results:
- (a)
If the sum of active effects is high, changes in this element can have large effects on the system. As long as the element is not determined from the outside, but rather can be changed through action, this indicates a possibility for intervention. If an element has both high active effects and passive effects (the product of multiplying the total active effects by the total passive effects is large), this means that changes also produce major backlashes or retroactive effects.
- (b)
If an element exhibits both low total active effects and low total passive effects (total active effects times total passive effects = small), then the element should be considered relatively neutral in comparison with other elements and has a “buffering” character.
- (c)
If the total active effects are high and the total passive effects are low (total active effects divided by total passive effects = large), then the element is an effective lever: intervention here is relatively effective.
- (d)
If the total passive effects are high and the total active effects are low (total active effects divided by total passive effects = small), then this element exerts very little influence and is itself greatly influenced by other factors; intervention here is relatively ineffective.
- 6.
Application: when considering which measures to use in changing a complex causal-networked system, these considerations help in thinking through the effectiveness of measures and the desirability of the effects.
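The matrix arithmetic of steps 4 and 5 is easy to reproduce. A minimal sketch; the influence matrix and its four elements are hypothetical:

```python
import numpy as np

# Hypothetical influence matrix: entry [i, j] is the strength with which
# element i influences element j (0 = no influence, 3 = strong influence).
M = np.array([[0, 3, 2, 1],
              [1, 0, 0, 2],
              [0, 1, 0, 0],
              [2, 3, 1, 0]])

active = M.sum(axis=1)   # row sums: how strongly each element influences others
passive = M.sum(axis=0)  # column sums: how strongly each element is influenced
print("active  :", active)
print("passive :", passive)
print("product :", active * passive)   # large -> critical element, strong feedback
print("quotient:", active / passive)   # large -> effective point of intervention
```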
References:
Probst, G.J.B.; Gomez, P. (1992): Thinking in Networks to Avoid Pitfalls of Managerial Thinking.
Vester, F. (2007): The Art of Interconnected Thinking.
Colgan St. (2009): Joined-Up Thinking.
EFQM Model
A → quality management system in the context of → total quality management. It was developed in 1988 by the European Foundation for Quality Management (EFQM) and is used, according to current estimates, by over 10,000 companies.
References:
Gryna, F M. (2001): Quality Planning and Analysis
Deming, W.E. (1997): Out of the Crisis.
Eisenhower Method
Reference:
Covey Stephen R. (2004): The 7 Habits of Highly Effective People
Failure Cause Analysis
A conceptual approach aimed at preventing a premature focus on countermeasures before the failures and their causes have been properly analyzed. The basic idea is shown in Fig. 6.6 in Sect. 6.1.3.2.
Fault Tree Analysis
References:
Roberts, N.H.; Vesely, W.E. (1987): Fault Tree Handbook.
Ericson, Clifton A.: (2011): Fault Tree Analysis Primer.
DIN standard 25424-1
Flow Charts
Reference:
Bohl, Marilyn and Rynn, Maria (2007): Tools for Structured and Object-Oriented Design.
FMEA (= Failure Mode and Effects Analysis)
Synonyms: FMECA (Failure Mode and Effects and Criticality Analysis).
Failure mode and effects analysis (FMEA) follows the basic idea of precautionary error prevention rather than subsequent error handling (error detection and correction). This should be done as early as the design phase by identifying potential error causes, thus avoiding the costs of inspection and error correction in the manufacturing phase or in the field (with the client). The goal is to lower overall costs. Such a systematic approach, building on the knowledge and experience gained, can prevent the repetition of design flaws in new products and processes.
Reference:
SAE (2009): Potential Failure Mode and Effects Analysis in Design (Design FMEA) and Potential Failure Mode and Effects Analysis in Manufacturing and Assembly Processes (Process FMEA).
Forecasting Techniques
Methods and techniques with which future developments, results, or conditions can be predicted. The time frames for forecasting can be short, medium, or long term. With respect to technique, there is a distinction between intuitive and analytical methods. Intuitive methods incorporate more subjective opinions and assessments (which may also be influenced by facts): → interview, surveys, → scenario writing, → Delphi method.
References:
Armstrong, J. S. (ed.) (2001). Principles of forecasting: a handbook for researchers and practitioners.
Rescher, N. (1998): Predicting the future: An introduction to the theory of forecasting.
Game Theory
References:
Brandenburger, A.M.; Nalebuff, B.J. (1996): Co-Opetition. Currency Doubleday.
Nash, John (1950): Equilibrium Points in n-Person Games. Proceedings of the National Academy of Sciences 36(1): 48–49.
Harsanyi, J.C.; Selten, R.A. (1988): A General Theory of Equilibrium Selection in Games.
McCain, Roger A. (2014): Game Theory: A Nontechnical Introduction to the Analysis of Strategy.
Heuristics, Heuristic Methods
A term for solution-seeking methods that are practical (performant, simple, etc.) and promising, but not guaranteed to find an optimal solution. Heuristics can also be described as ways of arriving at (sufficiently) good solutions under time and resource constraints.
References:
Michalewicz, Z.; Fogel, D.B. (2004): How To Solve It: Modern Heuristics.
Gigerenzer, G., Todd P. M. (2000): Simple Heuristics That Make Us Smart.
Dörner, D. (1980): Heuristics and Cognition in Complex Systems.
Histogram
Also see References under → statistics.
Information Acquisition Plan
Before conducting a fairly large survey, an information acquisition plan should be prepared based on the following questions: what information is needed? Why? What information is essential? Why? What conclusions should be possible or supported? What degree of precision or detail is necessary? What timeframe will the survey cover? Subsequently, one must consider the quickest and easiest way of gathering this information.
Information Acquisition Techniques
There is a distinction between information oriented toward the past, the present, and the future. Additional characteristics deal with the difference between primary and secondary information: primary information is obtained right at the appropriate source. Secondary information is taken from documents already on hand and is acquired by analyzing them. Specific information acquisition techniques are → interviews, → questionnaires, → observation, → Delphi method, → scenario techniques, → forecasting techniques, etc.
Information Preparation/Processing Methods
All methods that serve the preparation/compression of information or the recognition of inherent laws, dependencies, etc. This includes methods of → mathematical statistics, → correlation analysis, → regression analysis, all types of → key indicator systems, etc. The results are illustrated using → visualization techniques.
Interview
Information gathering through oral questioning, which can be divided into procedures with various characteristics: conversation, conversation with a list of questions, interview with open answers, interview with predetermined answers (multiple choice).
Interviewing Techniques
References:
Innes, J. (2009): The Interview Book.
Gordon N J., Fleisher, W L. (2011): Effective Interviewing and Interrogation Techniques.
Ishikawa Diagram
(Synonyms: cause-effect diagram, fishbone diagram)
References:
Ishikawa, K. (1990): Introduction to Quality Control.
Tague N R. (2005): The Quality Toolbox.
ISO 9001
This popular standard establishes the requirements for a quality management (QM) system. Such a system is useful if an organization needs to prove its capability to produce products that meet the demands of customers and/or authorities, or simply seeks to increase customer satisfaction. This ISO standard describes the entire QM system in model form.
References:
Cochran, Craig (2015): ISO 9001 in Plain English.
ISO 9001 - What does it mean in the supply chain? Available from: http://www.iso.org/iso/pub100304.pdf
Just in Sequence, JIS
See → just in time
Just in Time, JIT
Just in time (JIT) involves the delivery of materials (raw materials, parts, assembly groups, or products) at precisely the right time, with the necessary quality and in the desired quantity (including packing) to the agreed-upon location. Storage costs are largely eliminated, and the usual administrative expenses are also significantly reduced.
References:
Hirano, Hiroyuki; Furuya, Makoto (2006): JIT Is Flow.
Womack, James P. and Jones, Daniel T. (2003): Lean Thinking.
Takeda, Hitoshi (2006): The Synchronized Production System: Going Beyond Just-in-time Through Kaizen
Kaizen
Reference:
Masaaki Imai (2012): Gemba Kaizen: A Commonsense Approach to a Continuous Improvement Strategy.
(Key) Indicator System
References:
Reichert, F.; Kunz, A.; Moryson, R. (2008): MAE-P3 - A System to Gain Transparency of Production Structure.
Austin, Robert D. (1996): Measuring and Managing Performance in Organizations.
Lessons Learned
Analysis and documentation of positive and/or negative experiences in projects in order to learn from them. Lessons learned, which may be part of the final project documentation, are structured compilations of information whose value extends beyond the completed project: carefully archived in accessible form, they can be used to prepare similar projects and to improve project management.
Linear Programming (Synonym: Linear Optimization)
References:
Dantzig, G. B. (1963): Linear Programming and Extensions
Schrijver, A. (1998): Theory of Linear and Integer Programming.
Also see → operations research.
Mathematical Statistics
Methods for analyzing mass phenomena in terms of recognizing patterns and internal laws. There is a distinction between descriptive statistics (determining the characteristics of a distribution of numerical values: mean, variance, standard deviation, etc.) and evaluative or inferential statistics (based on descriptive statistics and → probability calculation).
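A minimal sketch of the descriptive measures named above, using only Python's standard library (the data are hypothetical):

```python
import statistics

data = [9.8, 10.1, 10.0, 9.7, 10.4, 10.2, 9.9]  # hypothetical measurements

print(statistics.mean(data))      # mean
print(statistics.variance(data))  # sample variance
print(statistics.stdev(data))     # sample standard deviation
```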
References:
Pitman, Jim (1999): Probability.
Walpole, R.E. (2007, 9th ed.): Probability & Statistics for Engineers & Scientists.
Wheelan, Ch. (2013): Naked Statistics: Stripping the Dread from the Data.
Field, A. (2013): Discovering Statistics using IBM SPSS Statistics.
Mean Time Between Failure, MTBF
Reference:
Jones, James V. (2006): Integrated Logistics Support Handbook
Mean Time to Failure, MTTF
Reference: also see → mean time between failure
Mean Time to Repair, MTTR
Reference: also see → mean time between failure.
Method 6-3-5
References:
Schroer, B.; Kain A. and Lindemann U. (2010): Supporting Creativity In Conceptual Design: Method 635-Extended
Linsey J S. and Becker B. (2011): Effectiveness of Brainwriting Techniques
See also → creativity techniques.
Milestone Monitoring (Progress/Slip Charts)
Mind Mapping
Reference:
Buzan, T. and B. (2010): The Mind Map Book
Modeling and Representation Techniques
Modeling and representing the results of the synthesis, i.e., of solutions, in principle involves the techniques that have already been described in connection with systems thinking and situation analysis. These include in particular system representations of all types, such as bubble charts, system hierarchical representations, effect networks, special views of systems, black box representations, and many more.
See also → effect networks/influence matrices, correlation matrices, → process charts, → flow charts, all kinds of plans, drawings, sketches, tables, diagrams, and physical modeling (reduced-scale physical models) or abstract, mathematical, and usually IT-supported modeling. Through dynamic simulation of procedures and processes (e.g., with changing parameters), numerical methods make it possible to quickly evaluate and visualize changes in system performance and behavior.
Monte Carlo Method
A powerful → simulation technique, originally developed in the 1930s and 1940s, that is useful for stochastic processes depending on random variables that follow often unknown probability distributions. In multiple runs, the Monte Carlo method assigns values to these variables, created by random sampling from historic data (in the case of known/assumed probability distributions, the values are generated in such a way that they follow their respective distribution). With these values the process is evaluated (e.g., customer waiting times in a queue) and the result recorded for each run. Because they are based on the stochastic variation of the (input) variables, these results themselves follow a distribution function (with expected value and variance). The number of runs (iterations) depends on the desired quality/reliability of the result (confidence interval).
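A minimal sketch of the method for the waiting-time example mentioned above, assuming a single server and sampling from hypothetical historic data (the recurrence used is the standard Lindley equation for single-server waiting times):

```python
import random

# Historic observations (hypothetical data): times in minutes.
interarrival_hist = [2.0, 3.5, 1.0, 4.0, 2.5, 3.0]
service_hist = [2.2, 1.8, 3.0, 2.5, 2.0]

def waiting_time_run(n_customers: int) -> float:
    """One Monte Carlo run: average waiting time of n customers at a single
    server, sampling times from historic data (Lindley recurrence)."""
    wait, total = 0.0, 0.0
    for _ in range(n_customers):
        total += wait
        wait = max(0.0, wait + random.choice(service_hist)
                          - random.choice(interarrival_hist))
    return total / n_customers

random.seed(42)
results = [waiting_time_run(1000) for _ in range(500)]  # 500 iterations
mean = sum(results) / len(results)
var = sum((x - mean) ** 2 for x in results) / (len(results) - 1)
print(mean, var)  # the run results themselves follow a distribution, as noted above
```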
References:
Rubinstein, R. Y.; Kroese, D. P. (2007). Simulation and the Monte Carlo Method
Asmussen, S.; Glynn, P.W. (2007): Stochastic Simulation: Algorithms and Analysis.
Lemieux, Ch. (2009): Monte Carlo and Quasi-Monte Carlo Sampling.
See also → operations research
Morphological Analysis
- 1.
Defining the problem: what do we want to find? In the example: the greatest possible variety of (or all conceivable) vehicle types.
- 2.
Setting the parameters that can be used to describe the numerous variants. In the example: space layout – arrangement of drive unit and cargo space (SL); type of steering (ST); and drive type (DT).
- 3.
Generating possible parameter values: in the example, SL with SL1 to SL3, ST with ST1 and ST2, and DT with DT1 to DT4.
- 4.
Coming up with the different configurations: all possible combinations of parameter values are created (at first without assessment or critical evaluation). In the example, there is a possible total of 3 × 2 × 4 = 24 different combinations (configurations); see the sketch following this list.
- 5. Creating a shortlist of technically feasible configurations – without judgment and prejudice. In our example the obvious (and known) configurations are:
- (a)
SL1 + ST1 + DT2 = automobile or bus
- (b)
SL2 + ST2 + DT3 = streetcar (tram)
- (c)
SL3 + ST2 + DT3 = train (locomotive + rail wagon)
- (d)
SL1 + ST1 + DT3 = trolley-bus
- 6.
Evaluation and final decision about best configuration (Fig. 16.5).
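A minimal sketch of step 4, generating the full morphological box as the Cartesian product of the parameter values (labels as in the example above):

```python
from itertools import product

# Parameters and their values from step 3 of the example.
space_layout = ["SL1", "SL2", "SL3"]
steering = ["ST1", "ST2"]
drive = ["DT1", "DT2", "DT3", "DT4"]

# Step 4: all 3 x 2 x 4 = 24 combinations, generated without prior judgment.
configurations = list(product(space_layout, steering, drive))
print(len(configurations))  # 24
for config in configurations[:3]:
    print("+".join(config))
```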
Reference:
Zwicky, F. (1969): Discovery, Invention, Research – Through the Morphological Approach.
Multi-Moment Observations, See → Activity Sampling
Network Planning Techniques
- 1.
Structure analysis: in a first step, the activities to be carried out are determined (questioning experts, brainstorming, etc.) and put into a logical sequence according to established rules (what activities must be completed so that a certain activity can begin, which activities can run entirely or partially in parallel, which activity starts subsequently, etc.). The result is represented graphically to create what is known as a network plan.
- 2.
Time analysis: if the respective durations of the individual activities are provided – or can be estimated – it is possible, based on the dependencies established in the structure, to calculate the completion times of the network plan. The earliest possible and latest allowable times for results (conditions) and activities are established. The most important result is the critical path: the path that is crucial to the final deadline. Each delay along this path leads to a delay in the overall project completion (some methods also allow for dealing with uncertain activity times, such as → PERT). Activities on the other paths may have time buffers (buffer times) within which they can be postponed or brought forward without any effect on the final deadline.
- 3.
Cost analysis: when costs can be assigned to the individual activities (personnel, materials, machine hours, and investment costs), it is possible to calculate and predict the project costs at each project stage. It is also possible to calculate the optimal allocation of investments or to speed up the project by optimally investing in additional resources (crash costs) using → linear optimization.
- 4.
Capacity analysis: with the assignment of resources such as machines, equipment, work groups, personnel, etc., to the individual activities, useful load overviews can be created. Their analysis allows load peaks to be smoothed out by postponing or bringing forward certain activities using their buffer times.
There are two ways of representing the network of activities as a mathematical graph. One way is called event node network, where activities are represented as arcs (edges) and their start and end events as nodes (CPM, PERT). The other way is called activity node network, where the assignment is just the other way around: the activities are on the nodes and their connections drawn as arcs (edges). Metra-Potential Method (MPM) and many others use this notation.
With respect to the time estimates for the durations of the activities, there are also various possibilities. With deterministic time estimates, a single, constant value for the duration of an activity is assumed; with stochastic activity times, a probability distribution function is given. In the simplified case of → PERT, the general distribution is approximated by the skewed beta distribution (often called the PERT distribution) using a three-value estimate of the duration of each activity: an optimistic one, a pessimistic one, and a most likely one. This allows probabilities to be calculated for individual and overall project completion times.
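A minimal sketch of the PERT three-value estimate described above (the activity values are hypothetical, and the critical path is assumed to be already known):

```python
def pert_estimate(optimistic: float, most_likely: float, pessimistic: float):
    """Expected duration and variance of an activity under the PERT
    (beta-distribution) approximation from a three-value estimate."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    variance = ((pessimistic - optimistic) / 6) ** 2
    return expected, variance

# Hypothetical critical-path activities as (optimistic, most likely, pessimistic):
activities = [(2, 4, 8), (5, 6, 9), (1, 2, 3)]
estimates = [pert_estimate(*a) for a in activities]
project_expected = sum(e for e, _ in estimates)  # sum over the critical path
project_variance = sum(v for _, v in estimates)
print(project_expected, project_variance)
```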
References:
Milosevic, D. Z. (2003): Project Management ToolBox.
O’Brien, J.J.; Plotnick, F.L. (2010): CPM in Construction Management
Lewis, James P. (2004): Project Planning, Scheduling & Control.
Network Thinking
See → effect network thinking.
Numerical/Computer Simulation
Numerical system models that are evaluated by computers (or powerful computer clusters) to simulate the system’s behavior. Well-known examples are computational fluid dynamics, strength calculations with the finite element method (FEM), weather and climate forecasts, and thermal, energetic, or functional building simulations, to name but a few.
References:
Angermann, L. ed. (2011): Numerical Simulations - Applications, Examples and Theory
Awrejcewicz, J. ed. (2011): Numerical Simulations of Physical and Engineering Processes.
Observation Techniques
A method of information gathering that is suitable for observable phenomena (states, conditions, procedures, processes, behavior, etc.). Here, the information does not have to be gathered by questioning experts or the people involved, but can be collected by observation, e.g., by a complete survey (video surveillance) or by observation at random, discrete, selected times, extrapolating the results with statistical methods (→ multi-moment observations; → activity sampling).
Operationalization
Operationalization defines the measurement of phenomena that are not directly measurable or observable by supplementing fuzzy, abstract terms with indicators or proxies that are easy to capture and/or measure. For example, the term work atmosphere can be operationalized by indicators such as personnel fluctuation, sick leave rate, etc. Corporate success could be measured with indicators such as profit, cash flow, growth, market share, average product age, sales volume with new (e.g., <5 years old) products, employer attractiveness, etc.
Operations Research, OR
The application of mathematical methods to systems to give them an optimal shape (structure) or an optimal functional performance. The term optimal is always understood with respect to a target (objective) function and given constraints that must be defined. OR in systems engineering is used primarily in the solution search (synthesis/analysis).
Well-known OR application areas are blending problems, inventory optimization, → assignment or allocation problems, → queuing theory, → sequencing problems, job shop scheduling, maintenance planning, or transportation. Important OR methods are → linear programming, (mixed) integer programming, nonlinear programming, dynamic programming, simulation, etc.
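A minimal sketch of a linear program solved with SciPy; the objective and constraints describe a hypothetical blending problem (linprog expects "<=" rows, so ">=" constraints are negated):

```python
from scipy.optimize import linprog

# Hypothetical blending problem: minimize cost 2*x1 + 3*x2
# subject to x1 + 2*x2 >= 8 and 3*x1 + x2 >= 9, with x1, x2 >= 0.
res = linprog(c=[2, 3],
              A_ub=[[-1, -2], [-3, -1]],  # ">=" rows multiplied by -1
              b_ub=[-8, -9],
              bounds=[(0, None), (0, None)])
print(res.x, res.fun)  # optimal quantities (2, 3) and minimal cost 13
```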
References:
Hillier, F.S.; Lieberman, G.J. (2010): Introduction to Operations Research.
Winston, W.L. (2008): Operations Research: Applications and Algorithms.
Baldick R. (2006): Applied Optimization. Formulation and Algorithms for Engineering Systems.
Taha, H.A. (1992): Operations research: An introduction.
Eiselt, H.A. (2012): Operations Research. A Model Based Approach.
Panel Polling (Polls)
→ Information acquisition technique with which certain matters such as assessment of the situation, opinions, desires, estimates of future developments, etc., are reviewed and collected, repeatedly, often at regular intervals, from a particular group of people (clients, suppliers, residents, etc.). This facilitates a dynamic observation of changes and trends. Example: → Delphi method.
Polarity Profile (Spider Web Diagram)
→ Visualization technique that can be used to illustrate characteristics of object systems, people, etc., which can be put on a scale. These diagrams visualize a wide variety of object properties in one view. This greatly simplifies comparisons.
The coordinate axes represent the degree to which a goal is met. Possible constraints can also be plotted that way. The scales should be arranged in such a way that the best performance moves in the same direction for every criterion, i.e., “good” points either inward or outward for all criteria. As there is often no meaningful order of, or correlation between, the different axes or scaling factors, the area spanned by the polar diagram does not simply correlate with the overall quality of an object and therefore cannot be used for comparison.
Process Analysis
Systematic investigation of business processes by decomposition into partial steps/sub-processes and precise analysis of the process logic and contents. The goal is often to identify opportunities for improvement, such as simplification, acceleration, or cost reduction. Various → visualization techniques can be helpful. Methods such as → benchmarking can also help to raise awareness of problems and opportunities, facilitate a better understanding of the process itself, and/or identify weaknesses.
Process Modelling Diagrams
References:
Hommes, Bart-Jan; van Reijswoud, Victor (2000): The Evaluation of Business Process Modeling Techniques.
Mendling, J.; Reijers, H. A.; van der Aalst, W. M. P. (2010). Seven process modeling guidelines (7PMG).
Program Evaluation and Review Technique, PERT
A stochastic → network planning technique in which activity durations are assumed to follow a probability distribution (the PERT distribution) determined by an optimistic, a most likely, and a pessimistic duration estimate. The expected value of the total project duration is calculated, similarly to the → critical path method, as the sum of the expected durations of the activities on the critical path (= the sequence of activities that determines the project duration – each delay of one of these activities delays the project); its variance is the sum of those activities’ variances. The big advantage over CPM is that this technique yields a stochastic forecast of project durations and confidence intervals for achieving them, which is essential in → risk management and far superior to the “single point estimates” of CPM.
Project Audit
Tool for assessing a project’s progress. The objective is to determine to what extent the project is progressing according to plan in terms of time, cost, quality, etc., or whether measures need to be taken to make sure the project will be successful.
- 1.
Establishing a baseline based on the planned deadlines, costs, and accomplishments (target or plan)
- 2.
Assessing the current status with respect to completion/progress, costs, and deadlines (actual situation)
- 3.
Determining the resources still needed to complete the project (e.g., cost to complete, CTC)
- 4.
Conducting a deviation analysis: why and to what degree is there a deviation from the planning up to this point (deadlines, costs, completion)?
- 5.
Conducting a critical analysis of the project premises: evaluation of the assumptions with respect to plausibility, probability of occurrence, and prospects of success
- 6.
Evaluation of opportunities and risks
- 7.
Adoption of a catalog of measures for the remainder of the project
The audit can be initiated by different management levels depending on the degree of the suspected/feared deviation and the risk associated therewith (project administration, the company’s business management, executive board). The selection of the auditors takes place in accordance with the initiating management level. For reasons of objectivity, the auditors should not be part of the project team or the particular environment in which the project is situated. Entirely external auditors are often a good choice.
Project Management Body of Knowledge Guide, PMBoK Guide
A widespread project management standard published and maintained by the US Project Management Institute (PMI). The methods described in the PMBoK Guide are applicable to projects in various fields: construction, software development, mechanical engineering, the automotive industry, etc.
The PMBoK Guide is process-oriented; in other words, it uses a model by which the work is accomplished through processes. A project is carried out using the interaction of many processes. The PMBoK structures all methodological skills along processes. Inputs (documents, plans, designs, etc.), tools and techniques (mechanisms applied to inputs), and outputs (documents, plans, designs, etc.) are described for every process.
Reference:
Snijders, P.; Wuttke, Th.; Zandhuis, A. (2009): PMBOK Guide.
Project Management Software, PMS
Information technology tools for supporting project management, mostly on the basis of → network planning techniques.
Single project management systems (usually design software for designing and tracking a single project).
Multi-project management systems (planning software for administering and managing multiple projects).
Enterprise project management systems (for integration into company-wide planning, e.g., enterprise resource planning software).
Project collaboration platforms with communication solutions (e.g., groupware solutions, portal software), but also many other programs that really are not project-management-specific, but which may, for example, have interfaces with project management software, such as office solutions, creativity tools (e.g., mind mapping).
Some software solutions are sector-specific.
Quality Control, QC
Quality control (QC) describes a process by which the quality of all factors involved in production is reviewed. The ISO 9000 standard defines quality control as “A part of quality management focused on fulfilling quality requirements”.
Since the 1930s, different QC paradigms, building on various concepts, have been applied. Among these are: statistical quality control (SQC; 1930s) using statistical methods and sampling, → total quality control (TQC; 1950s) favoring a holistic approach that involves all of the company’s quality stakeholders, statistical process control (SPC; 1960s) building on process control systems, company-wide quality control (CWQC; 1960s), which is similar to TQC, total quality management (TQM; 1980s) using methods of statistical quality control for organizational improvement, and (lean) six sigma (6σ; 1980s), which applies statistical quality control methods to business strategy.
Reference:
Juran, Joseph M. (1995): A History of Managing for Quality: The Evolution, Trends, and Future Directions of Managing for Quality.
Quality Management, QM
Reference:
Rose, Kenneth H. (July 2005). Project Quality Management: Why, What and How.
Questionnaires
Written instruments for structured information gathering. Common question types:
Dichotomous questions: questions that have two possible responses, such as yes/no.
Questions based on level of measurement: various possibilities must be evaluated according to scales or grades, etc.
Open-ended questions: any kind of answer can be accepted
Closed questions: the possible answers are divided into groups and must be sorted into the defined value ranges
Filter or contingency questions: questions that are used to determine if respondents are qualified or experienced enough to answer a subsequent one
Reference:
Kreuter, F.; Presser, St. and Tourangeau, R. (2008): Social Desirability Bias in CATI, IVR, and Web Surveys
Queuing Models
Queuing models and the corresponding theory belong to → operations research. They model situations in which persons or goods arrive at service stations. Both arrival times and service times are stochastic. This can lead to the formation of waiting lines (queues) and, in extreme cases, to blocking or denial of service – or, at the other extreme, to idling servers.
Examples of areas where queuing models are very useful include: the analysis of computers, telecommunication networks, traffic systems, logistics, and manufacturing systems. Depending on the field of application, the term service station may have different meanings.
The analytical results from queuing theory provide the basis for managing arrivals or for sizing the service stations to minimize the overall costs caused by waiting and service. A queuing model is characterized by:
The number of arrivals at a particular arrival rate (= average number of arrivals per time unit) and/or their stochastic distribution
The number of service stations, with a particular service rate (= average number of service operations per time unit) and/or their stochastic distribution
Line discipline (rules of behavior of the incoming elements)
Service strategy (organization and sequence of processing such as: first in first out, etc.)
In classical queuing theory, both inter-arrival times and service times are often assumed to follow exponential distributions. This leads to a stochastic process that is independent of its history (Markov property: M) and allows for analytical solutions. Queues are characterized by the distribution of arrival times, the distribution of service times, and the number of servers: M/M/1 denotes such a queue with a single server.
In many practical cases, such as complex networks, empirical probability distributions or complex service rules for arrivals or service times (e.g., branching waiting lines), it is not possible to develop analytical solutions. Instead → numerical/computer simulation is used applying the → Monte Carlo method.
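For the simplest case, the analytical results are closed formulas. A minimal sketch for an M/M/1 queue, using the standard textbook results (the arrival and service rates are hypothetical):

```python
def mm1_metrics(arrival_rate: float, service_rate: float) -> dict:
    """Standard analytical results for an M/M/1 queue (single server,
    exponential inter-arrival and service times); requires utilization < 1."""
    rho = arrival_rate / service_rate  # server utilization
    return {
        "utilization": rho,
        "mean_number_in_system": rho / (1 - rho),
        "mean_wait_in_queue": rho / (service_rate - arrival_rate),
        "mean_time_in_system": 1 / (service_rate - arrival_rate),
    }

# Example: 8 arrivals/hour, 10 services/hour on average.
print(mm1_metrics(8, 10))  # utilization 0.8, 4 in system, 0.4 h queue wait
```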
References:
Asmussen, S.; Glynn, P.W. (2007): Stochastic Simulation: Algorithms and Analysis.
Giambene, Giovanni (2014): Queuing Theory and Telecommunications Networks and Applications.
See also → operations research.
Real Options
See Sect. 2.2.5
Reengineering, Business Process Reengineering, BPR
References:
Hammer, M.; Champy, J. (2003): Reengineering the Corporation.
Schantin, D. (2004): Makromodellierung von Geschäftsprozessen
Regression Analysis
As a method of → mathematical statistics, regression analysis attempts to model the relationship between one dependent and one or more independent variables. Related to this, a → correlation analysis calculates how well a given (often linear) model can represent an assumed relationship between a set of variables.
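A minimal sketch of a simple linear regression (one independent variable) by least squares; the data are hypothetical:

```python
import numpy as np

# Hypothetical data: one independent variable x, one dependent variable y.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.3, 5.9, 8.2, 9.8])

slope, intercept = np.polyfit(x, y, deg=1)  # least-squares straight line
print(slope, intercept)                     # fitted model: y ~ slope*x + intercept

r = np.corrcoef(x, y)[0, 1]
print(r**2)  # coefficient of determination: fit quality (see correlation analysis)
```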
Reliability Analyses
Used to model/describe the reliability of objects in order to calculate their probability of failure. These objects can be as small as electronic components or as large and complex as power plants. Reliability, as a part of the quality of objects, covers their behavior during or after specified time spans under given operating conditions (see DIN 55350-11) and depends on a number of variables. These result from the type of application, usage, or operation, etc., and can be divided into external factors, e.g., the specified user demands and effects from the environment, and internal factors, such as the quality resulting from the development and manufacturing of the object.
Reliability analyses are carried out to specify reliability requirements of objects to be designed, and to investigate the reliability of existing or already developed objects. A simplified deterministic perspective of reliability leads to experience-based measures that are intended to prevent or counter failure or breakdown (e.g., n-fold safety coefficients of a component).
The wide range of material characteristics and manufacturing conditions and the great variability in operating conditions, however, require a probabilistic perspective. This also applies to systems consisting of many elements that must perform a great number of functions (e.g., instrumentation and control equipment), and to systems whose malfunction could pose danger to equipment, humans, or the environment (e.g., signaling equipment in railroad operations).
There are many mathematical methods for calculating reliability. Failure probability calculations often assume an exponential distribution of failures over time (i.e., a constant failure rate); electronic components, in contrast, are often assumed to follow the so-called “bathtub curve,” with high failure rates early and late in the product’s life. The most commonly used measures are the mean time between failures (MTBF) and the mean time to failure (MTTF).
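A minimal sketch under the constant-failure-rate assumption, where the survival probability is R(t) = exp(−t/MTBF); the MTBF value is hypothetical:

```python
import math

def reliability(t_hours: float, mtbf_hours: float) -> float:
    """Survival probability R(t) = exp(-t/MTBF) under the common
    constant-failure-rate (exponential) assumption."""
    failure_rate = 1.0 / mtbf_hours          # lambda
    return math.exp(-failure_rate * t_hours)

# Example: a component with an MTBF of 50,000 h operated for 10,000 h.
print(round(reliability(10_000, 50_000), 3))  # ~0.819
```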
References:
DIN 55350-11:2008-05, English version: Concepts for quality management - Part 11: Supplement to DIN EN ISO 9000:2005
Finkelstein, M. (2008): Failure Rate Modelling for Reliability and Risk.
Nakagawa, Toshio (2005): Maintenance Theory of Reliability.
Levitin, G. (2005): The Universal Generating Function in Reliability Analysis and Optimization.
Reverse Engineering
Reverse engineering (also called back engineering) describes a process of extracting all knowledge and design information from an existing system (e.g., an industrially manufactured product) by only investigating its structures, states, properties, and behavior.
A design is reconstructed that way by trying to reverse the engineering process and moving backward, starting at the finished item. To verify the design and other gained insights, a 1:1 copy of the object can be manufactured and compared with the original. On the basis of the copy and its construction plan it is now possible to move forward with further development. This approach is used as an analysis method in a variety of fields: in natural sciences (genetics, biology, and physics) or as what is known as a retrosynthesis in chemistry.
In situations in which the design to be investigated is protected by intellectual property laws or trade secret laws, it is often forbidden to use reverse engineering. Legal regulations vary from country to country. Sometimes, there are exceptions for ensuring “interoperability” with other systems.
The design of electronic components, such as microprocessors, can be reconstructed by mechanically removing the silicon die layer by layer. In software, it is often possible, for example, to recover the source code from an executable program (using a disassembler and considerable analysis effort), or to decode the communication protocols of network components by analyzing their actual communication.
In mechanical engineering, real objects/machine parts can easily be digitized using a laser scanner and are then ready to be used in a digital engineering process (digital mock-ups, computational flow simulation, FEM analysis, etc.) or in a design process for further development. Digitizing real components is also useful for target-actual comparisons in quality management, e.g., comparing in injection molding the shape of the mold’s computer-aided design (CAD) model with the shape of the resulting finished part, digitized using 3D scanners.
References:
Raja, V.; Fernandes, K. J. (2008): Reverse Engineering
Eilam, Eldad (2005). Reversing: secrets of reverse engineering.
Risk Analysis
Risk analyses are used to identify, as early as possible, the dangers or risks – particularly to humans – that result from a product (→ risk management). Another reason for thorough risk analysis is the intention to minimize product liability risk with respect to clients.
They are also part of the documentation for certifications such as the European Union compliance statement, which is mandatory for the CE test mark, or the Federal Communications Commission Declaration of Conformity, which is used on certain electronic devices sold in the USA.
Risk Management
Assessed potential damages or the consequences of a malfunction due to undesirable occurrences are called risk. Risk management is the systematic identification and evaluation of risks and the management of measures to counteract identified risks. It is a systematic process that is applicable in many areas, for example, with business risks, credit risks, financial risks, environmental risks, insurance risks, technical risks, project risks, and many more.
Project risks are of particular interest here: risk management in projects deals with all measures that appear necessary for preventing or dealing with unplanned results that may endanger the progress of the project. As these measures often require financial or time resources, it is always necessary to weigh the associated effort against the potential damages should the risks occur (weighted by their likelihood).
Risk management also deals with topics of so-called issues management – the handling of risks that have arisen without previously being identified. Risk analyses in systems engineering are very useful in the concept analysis step of the problem-solving cycle. A typical risk management process comprises the following steps:
- 1.
Risk management planning: determining the processes with which the subsequent risk management steps will be carried out. These include identification methods, documentation strategies, evaluation strategies, and responsibilities.
- 2.
Risk identification: risks in terms of potential hindrances are identified and documented using various methods.
- 3.
Qualitative risk analysis: the risks identified are assessed qualitatively and assigned priorities based on the probability that the risks will occur, on the one hand, and on the other hand, their effect on the success of the project.
- 4.
Quantitative risk analysis: the identified risks, countermeasures, and/or necessary allowances are then evaluated quantitatively (in monetary terms); see the sketch after this list.
- 5.
Risk response planning: countermeasures are identified to minimize the occurrence of risk or to reduce the effects of risk.
- 6.
Risk monitoring and control: the status of the risks (usually documented in a risk list) and the status of the countermeasures is continually monitored.
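A minimal sketch of step 4, ranking risks by their expected monetary value (probability times damage); the risk list is hypothetical:

```python
# Hypothetical project risk list: (name, probability of occurrence, damage in $).
risks = [
    ("supplier delay", 0.30, 50_000),
    ("key developer leaves", 0.10, 120_000),
    ("requirement change", 0.50, 20_000),
]

# Expected monetary value (EMV) = probability * damage, highest first.
for name, p, damage in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{name}: EMV = {p * damage:,.0f}")
# The EMV is the yardstick against which countermeasure costs are weighed (step 5).
```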
References:
Snijders, Paul; Wuttke, Thomas; Zandhuis Anton (2009): PMBOK Guide.
Hester, R E. and Harrison, R M., eds. (1998): Risk Assessment and Risk Management.
Hopkin, P. (2012): Fundamentals of Risk Management: Understanding, Evaluating and Implementing.
Safety Analyses (Synonym: Security Analyses)
Reference:
Leveson, Nancy (2012): Engineering a Safer World: Systems Thinking Applied to Safety.
Safety Management (Synonym: Security Management)
References:
McKinnon, Ronald C. (2012): Safety Management: Near Miss Identification, Recognition, and Investigation.
Ortmeier, P. J. (2001): Security Management: An Introduction.
Sampling
Reference: see the reference for → mathematical statistics.
Scenario Analysis (Scenario Planning)
Scenario analysis (sometimes also called the scenario technique) is a strategic planning method that has its roots in the military but has since also been applied to economic and social questions. It is used for analyzing extreme scenarios (best-case/worst-case scenarios) as well as particularly relevant or typical scenarios (trend scenarios). The scenario technique is also used in psychology and psychotherapy (psychodrama, sociodrama); there, it involves both future and past scenarios. The biggest drawback of scenario analysis is that it is difficult to put realistic scenarios together: a realistic worst-case scenario is not one in which everything that can go wrong does go wrong. Instead of going through the effort and risk of constructing realistic scenarios, stochastic methods are often applied, e.g., by assigning probabilities to input and system parameters and evaluating system behavior using → Monte Carlo simulation.
However, scenario analysis is still preferred in preparing strategic decisions, because of its simplicity compared with statistical methods, e.g., with reference to technological developments, business models, with market and sector developments, with orientation toward future developments, strategic development, and verification, for early recognition of possible changes through sensitization for the future. Further application fields are crisis management, project management, → risk management, and many others.
References:
Kahn, H. (1967): The Year 2000
Wright, G.; Cairns, G. (2011): Scenario thinking: practical approaches to the future
Cornelius, Peter; Van de Putte, Alexander and Romani, Mattia (2005): Three Decades of Scenario Planning in Shell. California Management Review, Nov. 2005
Scoring Method
Synonym for → value-benefit analysis.
Sensitivity Analyses
The purpose of a sensitivity analysis is to check the stability of an achieved result. This is done, for example, by varying the values of the parameters that influence the result within a range considered plausible and looking at the new results thus created. Example: ranking a series of variants by means of a → value-benefit analysis. Now it is possible to change the weights of individual criteria that are considered crucial, or to vary the assignment of scores within a reasonable framework. If nothing changes, in other words, if the winner is still out front, we may have a higher degree of certainty that the result is correct. However, if the results change, we have to question whether the chosen set of parameter values is justified.
In systems engineering, sensitivity analysis is mainly used during solution search (after an optimization process) or after an evaluation of solutions.
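A minimal sketch of such a check on a → value-benefit analysis: each weight is perturbed and the resulting ranking is recomputed (scores, weights, and perturbation range are hypothetical):

```python
import numpy as np

# Hypothetical value-benefit analysis: scores of 3 variants on 3 criteria.
scores = np.array([[8, 6, 7],    # variant A
                   [6, 9, 5],    # variant B
                   [7, 5, 9]])   # variant C
base_weights = np.array([0.5, 0.3, 0.2])

def ranking(weights):
    totals = scores @ (weights / weights.sum())  # weighted overall values
    return np.argsort(-totals)                   # variant indices, best first

print(ranking(base_weights))
# Sensitivity check: perturb each weight by +/-20% and see if the winner changes.
for i in range(3):
    for factor in (0.8, 1.2):
        w = base_weights.copy()
        w[i] *= factor
        print(i, factor, ranking(w))
```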
Sequencing Problems
References: see → operations research
Simplex Algorithm
References: see → operations research
Simulation Techniques
The term simulation can be used in many ways. Here, we mainly examine the aspects that are significant in connection with development processes.
On the one hand, the term simulation technique is used in connection with tasks that are not (or not yet) solvable analytically (using an optimization algorithm). In this case, the → Monte Carlo method is a good choice. On the other hand, modeling and, subsequently, simulation techniques are used when the system to be developed is not yet real and exists only as a model. Experiments are conducted on a simulated model to learn about the real system. A simulation model thus presents an abstraction of the system to be engineered (structure, function, behavior) – usually in electronic form.
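At the core of many such models is a discrete-event engine. A minimal sketch of the event-list mechanism, reduced for brevity to a trivial infinite-server system with hypothetical rates:

```python
import heapq, random

random.seed(1)
events = []                              # the future event list (time, kind)
heapq.heappush(events, (0.0, "arrival"))

clock, served = 0.0, 0
while events and clock < 100.0:
    clock, kind = heapq.heappop(events)  # advance the clock to the next event
    if kind == "arrival":
        # schedule the next arrival and this customer's service completion
        heapq.heappush(events, (clock + random.expovariate(1 / 3.0), "arrival"))
        heapq.heappush(events, (clock + random.expovariate(1 / 2.0), "done"))
    else:
        served += 1
print(served)  # customers served in roughly 100 time units
```

Typical purposes and application areas include: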
- (a) Improving or speeding up the development process
Virtual product development: design and construction, calculation, and testing are primarily digital, i.e., on the computer. That way, time and money are saved in constructing prototypes, and the “time to market” is reduced and/or the product quality is improved through (multiple) reworkings of a design.
Virtual testing: simulation of product tests (e.g., crash tests in the automobile industry) in a computer model, perhaps modifying the design in several steps. Subsequently, a physical prototype is constructed and tested in a testing station. In this way, the conformity of the computer model with the testing station results can be checked (and the testing station trial is itself a type of reality simulation).
Simulation of production facilities, because repeatedly remodeling the physical facility would be too complex and expensive; key word → digital factory.
City planning, traffic planning (public and company internal).
Simulation of logistics systems (warehousing, storage, retrieval, supply chain, etc.).
- (b) Risk-free and economical training in a training simulator:
Pilot training in flight simulators (practice with critical situations such as engine failure, emergency landing, etc.).
Similar: driving simulator.
Training of doctors and surgeons in operating techniques.
Experimental business games (training for analysis and decision-making skills).
- (c) The real system cannot be observed directly:
System-related: simulation of a single molecule in a fluid, or of astrophysical processes.
The real system works too fast: simulation of circuitry.
The real system works too slowly: simulation of geological processes.
Apart from these benefits, simulation has inherent limitations:
Limited resources (time, money): a model must therefore be kept as simple as possible.
Every simulation model simplifies reality. In particular, models of complex situations or systems are often grossly simplified. This naturally reduces the precision, and sometimes the usefulness, of simulation results transferred to reality; with poorly chosen parameters, the results can simply be wrong. This is why simulation models must be carefully tested and validated.
Imprecision of the input data (measuring errors, randomness, etc.).
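To make this concrete, here is the minimal sketch announced above: a hypothetical single-server warehouse station (one storage/retrieval machine) with random job arrivals and service times; all figures are illustrative assumptions:

```python
import random

# Minimal discrete-event sketch of a single-server station, e.g., one
# storage/retrieval machine in a warehouse (illustrative figures).
random.seed(7)
MEAN_ARRIVAL, MEAN_SERVICE = 5.0, 4.0   # minutes

arrival = 0.0
server_free_at = 0.0
waits = []
for _ in range(10_000):
    arrival += random.expovariate(1.0 / MEAN_ARRIVAL)   # next job arrives
    start = max(arrival, server_free_at)                # wait if server busy
    waits.append(start - arrival)
    server_free_at = start + random.expovariate(1.0 / MEAN_SERVICE)

print(f"mean waiting time: {sum(waits) / len(waits):.1f} min")
print(f"jobs that had to wait: {sum(w > 0 for w in waits) / len(waits):.0%}")
```

Even a model this small can answer sizing questions (e.g., whether one machine suffices) that would be expensive to test on the physical facility.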
References:
Banks, J., ed. (1998): Handbook of Simulation: Principles, Methodology, Advances, Applications, and Practice.
Banks, J.; Carson, J.S.; Nelson, B.L.; Nicol, D.M. (2005): Discrete-Event System Simulation.
Cellier, F.E.; Kofman, E. (2006): Continuous System Simulation.
Sterman, J.D. (2006): Business Dynamics: Systems Thinking and Modeling for a Complex World.
Terano, T.; Kita, H.; Kaneda, T.; Arai, K.; Deguchi, H., eds. (2005): Agent-Based Simulation: From Modeling Methodologies to Real-World Applications.
Robinson, S. (2004): Simulation: The Practice of Model Development and Use.
Six Sigma, 6σ
Six sigma is a collection of methods and tools for process improvement and belongs to → quality control. It was introduced at Motorola in 1986 and later adopted by many companies in different sectors – such as General Electric, where it became a central element of the enterprise’s business strategy.
The underlying assumption is that the quality of the process output can be improved by systematically identifying and removing the causes of defects and by minimizing the (unintended) variability in manufacturing and business processes. Six sigma uses a large variety of data-driven, statistical → quality management methods.
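As an illustration of the data-driven side, the following minimal sketch converts hypothetical inspection figures into a process sigma level, assuming the 1.5σ long-term shift that is conventional in six sigma practice (the figures are illustrative, not from any real process):

```python
from statistics import NormalDist

# Sketch: converting observed defect data into a process sigma level.
# DPMO = defects per million opportunities.
units, opportunities_per_unit, defects = 50_000, 4, 870

dpmo = defects / (units * opportunities_per_unit) * 1_000_000
yield_rate = 1 - dpmo / 1_000_000
# Conventional six sigma metric: short-term sigma = z(yield) + 1.5 shift.
sigma_level = NormalDist().inv_cdf(yield_rate) + 1.5

print(f"DPMO: {dpmo:,.0f}  yield: {yield_rate:.4%}  sigma: {sigma_level:.2f}")
```

A "six sigma" process corresponds to 3.4 DPMO under this convention; the sketch shows how far a given defect rate is from that target.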
Reference:
Tennant, Geoff (2001). SIX SIGMA: SPC and TQM in Manufacturing and Services.
Statistics
See → mathematical statistics
Systems Theoretic Process Analysis, STPA
A new hazard analysis technique with the same goals as any other: to identify scenarios leading to identified hazards, and thus to losses, so that they can be eliminated or controlled. STPA, however, has a different theoretical basis (accident causality model): it is based on systems theory, whereas traditional hazard analysis techniques have reliability theory at their foundation. This matters because many accident causes do not involve failures or unreliability at all.
References:
Leveson, Nancy (2012): Engineering a Safer World. Systems Thinking Applied to Safety. MIT Press
Leveson, Nancy (2013): An STPA Primer http://psas.scripts.mit.edu/home/wp-content/uploads/2015/06/STPA-Primer-v1.pdf
Survey
See → interview
Synectics
A creativity technique developed by W.J.J. Gordon that seeks new solution ideas through the systematic use of analogies. It proceeds in four phases:
- 1.
Preparation phase: after airing spontaneous solution ideas, disassociation is used to encourage the introduction of structures extraneous to the problem and unaccustomed combinations of elements.
- 2.
Incubation phase: in addition to direct analogies from technology or nature (→ bionics), personal analogies are formulated (How do I feel as…?), along with symbolic, contrary, and fantastic analogies (What would a fairy do?).
- 3.
Illumination phase: the analogies are checked for their appropriateness for transfer to the problem; also called force-fit.
- 4.
Verification phase: working up solution concepts.
Reference:
Gordon, William J.J. (1961): Synectics: The Development of Creative Capacity. New York. Harper & Row Publ.
System Dynamics, SD
Methodology developed by Jay W. Forrester at MIT for the holistic analysis and simulation of complex, dynamic systems. It is applied especially in socio-economic fields. The effects of management decisions on structure and system behavior (e.g., business success) can thus be simulated, and recommendations for action can be derived (also see the brief presentation of system dynamics in Sect. 1.4).
The qualitative method mainly involves the identification and investigation of self-contained feedback loops. There is a distinction between loops with positive, strengthening effects (reinforcing loops) and negative, stabilizing ones (balancing loops). Causal loop diagrams are often used here as a graphical representation technique.
With quantitative models, the representation in stock and flow diagrams and their simulation facilitate a deeper understanding of the system. Stocks, flows, and auxiliary quantities are useful for describing the system relationships, and they show how the feedback loops lead to system behavior that is often nonlinear and counterintuitive. This is the main advantage of the method.
Special software packages such as CONSIDEO, iThink/STELLA, DYNAMO, Vensim, Powersim, or AnyLogic allow a numerical simulation of system dynamics models. Simulation runs of various scenarios foster an understanding of the system behavior over time.
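The following minimal sketch shows the principle of such a numerical simulation without any special software, assuming a hypothetical product-adoption model in which a reinforcing word-of-mouth loop and a balancing market-saturation loop together produce the typical S-curve (all coefficients are illustrative):

```python
# Stocks and flows of a hypothetical product-adoption model,
# integrated with a simple Euler scheme.
MARKET = 10_000.0     # total potential adopters (model boundary)
CONTACT = 0.00005     # adoptions per adopter-prospect pair and month
STEPS_PER_MONTH = 4   # Euler time step: 0.25 months

adopters = 100.0      # the stock; prospects = MARKET - adopters
for step in range(24 * STEPS_PER_MONTH + 1):
    if step % (4 * STEPS_PER_MONTH) == 0:
        print(f"month {step // STEPS_PER_MONTH:2d}: {adopters:7.0f} adopters")
    # Reinforcing loop: more adopters -> more word-of-mouth contacts.
    # Balancing loop: fewer remaining prospects -> adoption slows down.
    flow = CONTACT * adopters * (MARKET - adopters)
    adopters += flow / STEPS_PER_MONTH   # the stock integrates the flow
```

Early on, the reinforcing loop dominates (near-exponential growth); later the balancing loop takes over, which is exactly the kind of behavior that is hard to anticipate from the diagram alone.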
Peter Senge has identified, investigated, and classified typical structures in feedback system types, which he has called archetypes. Knowledge of these basic structures allows a better understanding and a good forecast of the behavior of various social systems and/or management situations and thus provides a basis for more effective interventions.
Application areas: system dynamics was the underlying methodology for simulating the World3 world model, which was created under the direction of Dennis L. Meadows on behalf of the Club of Rome for the studies on "Limits to Growth" (1972, 2004). It is also useful for simulating and explaining the complex behavior of humans in social systems. Typical examples are the investigation of the phenomenon of overfishing and the occurrence of disasters such as the nuclear accident at Chernobyl.
References:
Forrester, J.W. (1977): Industrial Dynamics.
Meadows, D.; Meadows, D.; Randers, J. (1972): The Limits to Growth.
Meadows, D.; Meadows D. L.; Randers J. (2004): Limits to Growth: The 30-Year Update.
Senge, P.M. (1990): The Fifth Discipline: The Art & Practice of The Learning Organization.
Senge, P.M. (1994): The Fifth Discipline Fieldbook: Strategies and Tools for Building a Learning Organization.
Sterman, J.D. (2006): Business Dynamics: Systems Thinking and Modeling for a Complex World.
Systems Modeling Language, SysML
A graphical modeling language for systems engineering, standardized by the OMG and based on UML. Its typical purposes are:
- Modeling system requirements and making them available
- Analyzing and evaluating systems to solve requirement and design issues and to test alternatives
- Communicating system information unambiguously among various stakeholders
References:
Dori, Dov (2016): Model-Based Systems Engineering with OPM and SysML.
Weilkiens, T. (2008): Systems Engineering with SysML/UML: Modeling, Analysis, Design.
Target Costing
An approach that guides the product development process by means of cost targets set in advance. The process starts with market-driven costing, where a realistic sales price (including a desired profit margin) is estimated. The resulting "allowable cost" is the basis for the next step, product-level costing, in which the internally achievable cost structure for producing a competitive product is evaluated; here, the → value analysis technique can be of great help. If necessary, functional changes in the product specifications are made to achieve the cost target. In the third step of target costing, this procedure is repeated at the component level (see the sketch below).
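A minimal sketch of this cost cascade; the price, margin, drifting cost, and component shares are purely hypothetical figures for illustration:

```python
# Market-driven costing: the market price minus the target margin
# yields the allowable cost at the product level.
target_price, target_margin = 200.0, 0.15
allowable_cost = target_price * (1 - target_margin)

# Product-level costing: compare with the currently achievable
# ("drifting") cost to see the gap that must be closed.
drifting_cost = 190.0
print(f"allowable cost: {allowable_cost:.2f}")
print(f"gap to close  : {drifting_cost - allowable_cost:.2f}")

# Component level: split the allowable cost in proportion to the
# customer-perceived value of the functions each component carries.
function_share = {"drive": 0.40, "housing": 0.25, "controls": 0.35}
for component, share in function_share.items():
    print(f"  target cost {component}: {allowable_cost * share:.2f}")
```

Splitting by customer-valued function share (rather than by current cost) is what couples target costing to the functional thinking of → value analysis.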
References:
Clifton, B.C.; Bird, H.M.B.; Albano, R.E.; Townsend, W.P. (2004): Target Costing: Market-Driven Product Design.
IMA (1994): Implementing Target Costing.
Theory of Inventive Problem Solving, TIPS
English acronym for → TRIZ (Teoriya Resheniya Izobretatelskikh Zadatch).
To-Do List
Also known as a task list or list of open points (LOP). A simple aid used in project meetings for assigning planned tasks (= WHAT) to individuals, sometimes also to groups or organizational units (= WHO), and tying these tasks or work packages to a committed deadline (= WHEN). It is important to ensure that tasks are not delegated to a group or organization as a whole, but rather have exactly one person responsible.
The to-do list often has the form of a three-column list that is communicated to the corresponding persons and involved groups immediately after the project meeting. This creates the basis for task supervision (task tracking): the next meeting starts with a report on the pending and completed (or uncompleted) tasks. Project team members whose tasks lag behind schedule have to explain and justify the delay to the team, which also creates (peer) pressure to improve work discipline as an intended side effect.
Total Quality Control, TQC
Reference:
Feigenbaum, Armand V. (1991). Total Quality Control.
Total Quality Management, TQM
Total quality management (TQM) is a traditional → quality control method that demands companywide efforts to establish an environment where a company’s core business processes are continuously improving, which leads to high-quality services or products for customers. In different industries and regions there are variants of TQM. In recent years, some of the TQM ideas have been adopted by modern and more popular concepts such as the ISO 9000 standard, lean manufacturing, or → six sigma.
Quality management of such a kind is nowadays mandatory in sectors such as the airline and aerospace industries, medical technology, health care, pharmaceuticals, and food manufacturing.
Total quality management originated in the USA (W.E. Deming). It became very popular in Japan after World War II and experienced a renaissance in the USA (Baldrige Award). In Europe, TQM is known as the → EFQM Model for Business Excellence.
References:
Juran, J.M. (1995): A History of Managing for Quality: The Evolution, Trends, and Future Directions of Managing for Quality.
Evans, J.R.; Lindsay, W.M. (1995): The Management and Control of Quality.
Deming, W.E. (1997): Out of the Crisis.
Omachonu, V.K.; Ross, J.E. (2004): Principles of Total Quality.
Tague, N.R. (2005): The Quality Toolbox.
Gitlow, H.S.; Levine, D.M. (2005): Six Sigma for Green Belts and Champions.
TRIZ/TIPS
TRIZ (the Russian acronym; in English → theory of inventive problem solving, TIPS) was developed by G.S. Altshuller on the basis of extensive patent analyses. It rests on three basic findings:
- 1.
Many inventions are based on a comparatively small number of general solution principles.
- 2.
Only the overcoming of inconsistencies makes innovative developments possible.
- 3.
The evolution of technical systems follows certain patterns and rules.
The TRIZ method contains a series of methodical tools that facilitate more effective analysis of a technical problem and make it possible to find creative solutions.
Using this method, inventors attempt to systematize their activities in order to arrive at new problem solutions more quickly and efficiently. The TRIZ method has become widespread. Its most important classical tools include:
- 1.
Principles of innovation and contradiction matrices
- 2.
Separation principles for solving physical contradictions
- 3.
Algorithms or at least stepwise procedures for solving invention problems
- 4.
System of 76 standard solutions and substance field analysis
- 5.
S-curves and laws for systems development (evolutionary laws of technical development, laws of technical evolution)
- 6.
Principle (law) of ideality
- 7.
Modeling technical systems using “little men” (dwarf model)
In addition, a number of supplementary analysis tools are used:
- 1.
Innovation checklist (innovation situation questionnaire)
- 2.
Function structure according to TRIZ (a type of cause and effect diagram, which is different than Ishikawa’s version)
- 3.
Subject action object function model (an expanded functional model based on work by L. Miles on → value analysis)
- 4.
Process analysis
- 5.
Materials cost time operator
- 6.
Anticipatory error detection
- 7.
Feature transfer (part of “alternative system design”)
- 8.
Resources
References:
Altshuller, G. S. (1984): Creativity as an Exact Science – The Theory of the Solution of Inventive Problems. New York: Gorden and Breach
Altshuller, G. (1999): The Innovation Algorithm: TRIZ, systematic innovation, and technical creativity.
Fey, V.; Rivin, E. (2005): Innovation on Demand: New Product Development Using TRIZ.
Rantanen, K.; Domb, E. (2007): Simplified TRIZ: New Problem Solving Applications for Engineers and Manufacturing Professionals.
Unified Modeling Language, UML
A language for modeling software and other systems, developed and standardized by the OMG and also published as an ISO standard (ISO/IEC 19501); today it is one of the dominant languages for modeling software systems. Owing to its software focus and, despite its name, the lack of a unifying concept, UML is not very useful for business process modeling, where → business process model and notation (BPMN) is used instead.
Typical examples of its use:
- Project commissioning parties and business professionals test and confirm, for example, requirements that the business analysts have determined (e.g., using BPMN) and captured in so-called UML use case diagrams.
- Software developers implement the program logic that business analysts, in collaboration with domain professionals, have described in activity diagrams.
- Systems engineers install and operate the software system based on an installation plan that exists as a deployment diagram.
References:
Weilkiens, T. (2008): Systems Engineering with SysML/UML: Modeling, Analysis, Design.
Coad, Peter; Lefebvre, Eric; De Luca, Jeff (1999): Java Modeling in Color with UML: Enterprise Components and Process.
Use Case
Use cases are used in both systems and software engineering to describe all the steps of a task in which, for example, a person ("actor") interacts with a system. Such a use case can be seen as a scenario. The basic idea is to extract requirements by analyzing a "usage" or interaction process, which is often more efficient than a classical specification list.
The result of a use case can be success or failure/termination. Use cases are traditionally named for their goals from the actors' perspective: enrolling a member, withdrawing money, returning a car.
The granularity of use cases can be vastly different: at a very high level a use case describes what happens only very roughly and in an abstract manner. However, the technique of writing a use case can be refined, even at the level of IT processes, so that the behavior of an application is described in detail. This contradicts the original intention of use cases, but it is often very useful.
Use cases and business processes show a different view of the system modeled and have to be clearly distinguished from one another. A use case describes what the actor/environment expects from the system. Business processes, on the other hand, model how the system operates internally to meet the requirements of the environment.
Valuation Techniques
Formalized procedures for evaluating and comparing, for example, solution variants with respect to meeting their objectives (see Part III, Sect. 6.4). Examples: → value-benefit analysis, → cost-effectiveness analysis, → analytic hierarchy process (AHP).
Value Analysis (Synonyms: Value Management, Value Engineering)
Value analysis was originally an aid in product design. With respect to simplifying products or parts and reducing their costs, it primarily serves value improvement and cost reduction. An extension of this method added the ability to expand this basic idea to the value creation of new products. In a further step, the methodology of value analysis was also applied to researching and designing intangible services (e.g., organizational processes).
The core of value analysis is thinking in functions, at first at the level of the entire product and use by clients. This means “letting go” of existing or obvious possible solution patterns and opening up to innovative solutions and considering their value to a potential client. There is thus a distinction between main and secondary functions, and there may also be undesirable functions that the client does not want and for which he or she will not pay. In addition, there are utilitarian functions (utilities) and prestige utility (e.g., esthetics, image, prestige, without any useful advantage). A client’s appreciation of a product and the desired and valued functional features should also determine the price that a client is prepared to pay in a competitive situation. Often, this is specified in the form of a target cost as a component of the value analysis goal (with functional, quality, performance, and deadline goals, etc.) for the design process. See → target costing.
In contrast to → value-benefit analysis, which is an evaluation process, value analysis is a special approach to the solution search, one that also includes an evaluation of individual solution elements with respect to their optimization.
References:
Miles, Lawrence D. (1972): Techniques of Value Analysis and Engineering.
Sato, Yoshihiko; Kaufman, J. Jerry (2005): Value Analysis Tear-Down: A New Process for Product Development and Innovation.
Value-Benefit Analysis, VBA (Synonym: Scoring Method)
Evaluation method used to determine the preferability of variants.
See Part III, Sect. 6.4.2.3 or the airport Case Study in Chap. 9.
Virtual Product Development
Development of a product within the virtual world of computers. With the help of CAD and PDM systems, products are designed graphically and assigned relevant technical characteristics (material characteristics such as thickness, surface finish, tensile and yield strength; mechanical and kinematic relationships; production information, etc.). Afterward, further processing, calculations, and simulations of all types can be carried out, such as strength calculations, finite element calculations, installation tests (with the help of digital mock-ups), crash simulations, etc. The results can have repercussions on the dimensioning or shaping of the design, i.e., they may lead to iterations that improve its quality (a minimal sketch of such an iteration follows).
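As a minimal sketch of such a dimensioning iteration, the following hypothetical "virtual test" of a cantilever bracket increases the wall thickness until a computed deflection limit is met; in practice, loops of this kind run inside CAD/FEM tools, and all figures here are illustrative assumptions:

```python
# Hypothetical cantilever bracket under a tip load: iterate the design
# (wall thickness) until the calculated deflection meets the limit.
F = 500.0        # load in N
L = 0.30         # cantilever length in m
E = 210e9        # Young's modulus of steel in Pa
WIDTH = 0.02     # cross-section width in m
LIMIT = 0.001    # allowed tip deflection: 1 mm

thickness = 0.020
while True:
    I = WIDTH * thickness**3 / 12            # second moment of area
    deflection = F * L**3 / (3 * E * I)      # cantilever tip deflection
    print(f"t = {thickness * 1000:.1f} mm -> {deflection * 1000:.2f} mm")
    if deflection <= LIMIT:
        break
    thickness += 0.001                        # rework the design, re-test
```

Each pass corresponds to one "virtual test and rework" cycle; no physical prototype is built until the computed design meets the requirement.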
References:
Weisberg, D. (2010): The Engineering Design Revolution. E-book, www.cadhistory.net.
Nambisan, S., ed. (2010): Information Technology and Product Development.
Kahn, K.B., ed. (2013): The PDMA Handbook of New Product Development.
Bordegoni, M.; Rizzi, C., eds. (2011): Innovation in Product Design: From CAD to Virtual Prototyping.
Virtual Vehicle Research Center: http://vif.tugraz.at
Visualization Techniques (Information Graphics)
References:
Tufte, Edward R. (2001): The Visual Display of Quantitative Information
Zelazny, Gene (2001): Say It With Charts: The Executive’s Guide to Visual Communication
Rendgen, Sandra; et al. (2012): Information Graphics
Work Breakdown Structure, WBS
A hierarchical decomposition of a project or system into successively smaller elements. Three orientations are common (a minimal sketch of an object-oriented WBS follows below):
Object-oriented WBSs (object structure designs) provide an overview of the system components to be designed, which can be further subdivided arbitrarily. Example: a car divided into chassis, body, motor, interior, etc.
Task-oriented WBSs describe the tasks to be completed, such as development, construction, manufacturing, assembly, marketing, distribution, etc.
Phase-oriented WBSs describe the phases, which are divided according to temporal and decision-oriented aspects, e.g., preliminary study (feasibility study), main study (master plan), detailed study, etc.
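A minimal sketch of an object-oriented WBS as a data structure, using the car example above; the component names and depth are illustrative:

```python
# Object-oriented WBS as a nested structure; further levels can be
# added at will by extending the inner dictionaries.
wbs = {
    "car": {
        "chassis": {"frame": {}, "suspension": {}},
        "body": {},
        "motor": {"engine block": {}, "cooling": {}},
        "interior": {},
    }
}

def print_wbs(node, code=""):
    """Print every element with a hierarchical numbering code."""
    for i, (name, children) in enumerate(node.items(), start=1):
        item = f"{code}.{i}" if code else f"{i}"
        print(f"{item:<8} {name}")
        print_wbs(children, item)

print_wbs(wbs)
```

The hierarchical numbering (1, 1.1, 1.1.1, ...) is exactly the coding scheme commonly used to reference WBS elements in plans and cost reports.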
References:
Project Management Institute (2006): Practice Standard for Work Breakdown Structures.
Haugan, G.T. (2003): The Work Breakdown Structure in Government Contracting.
16.1 Self-Check of Knowledge and Understanding: Encyclopedia/Glossary
- 1.
Do you think this glossary is useful even nowadays, in the age of Wikipedia, Google, etc.? Why or why not?
- 2.
What percentage of the described methods and tools were familiar to you?