Chapter 2: Overview of Statistical Quality Control Topics and JMP
This chapter discusses the key concepts and JMP platforms associated with statistical quality control (SQC) topics. More specifically, we review statistical process control (SPC) for continuous and attribute data, measurement system analysis (MSA) for gauge R&R studies, and how to carry out a process health assessment (PHA). We also provide practical advice for how to navigate common challenges when using these techniques, based on our many years of experience. We cover most of the JMP platforms in the Analyze ► Quality and Process menu, along with tips for structuring the JMP tables for these analyses.
For most people, Shewhart’s XBar & Range control chart is the first point of entry into the world of SPC. This foundational chart has taught us several important lessons about using SPC effectively over the years. The first one has to do with the key concept of rational subgroups, also introduced by Dr. Shewhart, which are formed to minimize the variation within subgroups and maximize the variation between subgroups. The Range chart, shown in Figure 2.1, is used to determine if the variation within a subgroup is consistent from subgroup to subgroup, while the XBar chart is used to determine if, given the within-subgroup variation, the subgroup means are consistent across subgroups. In other words, the bottom chart is a measure of within-subgroup variation and the top chart is a measure of between-subgroup variation. Prof. Montgomery discusses rational subgroups in Section 5.3.4 of ISQC Chapter 5. An excellent illustration of the impact of improperly specified rational subgroups is the analysis of the Injection Molded Sockets data in Wheeler and Chambers (1992) (see also Ramírez, J. and Ramírez, B. (2009) and Ramírez and Zangi (2014)).
Figure 2.1 XBar and Range Chart for Flow Width from Chapter 3
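For readers who like to see the arithmetic, the following sketch (not JMP output; hypothetical data and the standard Shewhart constants for subgroups of size 5) computes the XBar and Range chart limits and makes the within/between roles of the two charts explicit:

```python
import numpy as np

# Hypothetical data: 20 rational subgroups of size 5
rng = np.random.default_rng(1)
subgroups = rng.normal(loc=1.5, scale=0.1, size=(20, 5))

A2, D3, D4 = 0.577, 0.0, 2.114        # Shewhart constants for subgroups of size 5

xbars = subgroups.mean(axis=1)                            # subgroup means (between-subgroup view)
ranges = subgroups.max(axis=1) - subgroups.min(axis=1)    # subgroup ranges (within-subgroup view)
xbarbar, rbar = xbars.mean(), ranges.mean()

# XBar chart limits are built from the average range, i.e., the within-subgroup variation
xbar_lcl, xbar_ucl = xbarbar - A2 * rbar, xbarbar + A2 * rbar
# Range chart limits check whether within-subgroup variation is consistent across subgroups
r_lcl, r_ucl = D3 * rbar, D4 * rbar

print(f"XBar chart:  LCL={xbar_lcl:.3f}  CL={xbarbar:.3f}  UCL={xbar_ucl:.3f}")
print(f"Range chart: LCL={r_lcl:.3f}  CL={rbar:.3f}  UCL={r_ucl:.3f}")
```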
The effective use of the XBar & Range control chart is also related to the magnitude of the two sources of variation, between and within a subgroup. In batch processes, for example, rational subgroups are often formed from one production batch, on which multiple measurements are taken. Since the between-batch variation can be much larger than the within-batch variation, the control limits on the XBar chart can be unusually tight, resulting in the majority of the points falling outside of the limits. It is an image that is difficult to forget. But the three-way (or 3-way) control chart saves the day by adjusting the limits for the XBar chart to account for the batch-to-batch variation, using a moving range control chart for the batch means, and preserving the Range control limits in the usual way. ISQC Chapter 6 and Chapter 3 of this book provide additional discussion of this topic.
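The adjustment made by the three-way chart can be illustrated with a short sketch (hypothetical batch data; A2 and d2 are the standard Shewhart constants): limits for the batch means based only on within-batch ranges are far too tight, while limits based on the moving range of the batch means reflect the batch-to-batch variation.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical batch data: 25 batches, 4 measurements per batch,
# with batch-to-batch variation much larger than within-batch variation
batch_effect = rng.normal(0.0, 0.5, size=25)
data = 10 + batch_effect[:, None] + rng.normal(0.0, 0.05, size=(25, 4))

batch_means = data.mean(axis=1)

# Classic XBar limits: based only on within-batch ranges, so they come out too tight
ranges = data.max(axis=1) - data.min(axis=1)
A2 = 0.729                                 # constant for subgroups of size 4
xbar_limits = batch_means.mean() + np.array([-1, 1]) * A2 * ranges.mean()

# Three-way chart: limits for the batch means come from the moving range of the
# batch means, which captures batch-to-batch variation
mr = np.abs(np.diff(batch_means))
d2 = 1.128                                 # constant for a moving range of span 2
three_way_limits = batch_means.mean() + np.array([-1, 1]) * 3 * mr.mean() / d2

print("XBar limits (within-batch only):   ", np.round(xbar_limits, 3))
print("Three-way limits (batch-to-batch): ", np.round(three_way_limits, 3))
```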
In some industries, such as semiconductor manufacturing, it is sometimes necessary to seek out advanced approaches to extract more value from a monitoring program. One example is the implementation of SPC on the equipment that maintains the optimal environment in an ISO Class 1 cleanroom, such as a make-up air handler, which uses Proportional-Integral-Derivative (PID) controllers with data available at very short time intervals. An Individual Measurement and Moving Range control chart applied to this output would have about 75% of the points involved in runs test violations. This is a direct result of using an inappropriate technique to monitor nonstationary, autocorrelated output. The use of time series control charting techniques to deal with autocorrelation is discussed in Chapter 8 of this book and Chapter 10 of ISQC.
The use of P/NP and C/U charts may also be challenging in practice. For example, when working with wafer yield, which is calculated as the ratio of the number of good die (chips) to the total number of die on a wafer, the binomial distribution is used to derive the control limits for a P chart. However, the observed variation is usually larger than expected under the binomial model, which results in control limits that are too tight. This is known as overdispersion, and the P chart has to be adjusted accordingly. The beta-binomial distribution can be used to model overdispersion and chart wafer yield (Cantell et al., 1998). Particle counts can also exhibit the same phenomenon, and the C chart can be modified to account for overdispersion by using a gamma-Poisson distribution. For more details, see Chapter 4 of this book and Chapter 7 of ISQC.
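A rough numerical illustration of overdispersion (made-up wafer data; a full beta-binomial fit, as in Cantell et al. (1998), is more rigorous) compares the standard deviation assumed by the binomial P chart with the wafer-to-wafer standard deviation actually observed:

```python
import numpy as np

rng = np.random.default_rng(3)
n_die = 400                                # die per wafer (assumed constant)
# Hypothetical wafer yields with extra (overdispersed) wafer-to-wafer variation
p_wafer = rng.beta(a=45, b=5, size=50)     # true yield varies from wafer to wafer
good = rng.binomial(n_die, p_wafer)
phat = good / n_die

pbar = phat.mean()
sigma_binom = np.sqrt(pbar * (1 - pbar) / n_die)   # what the P chart assumes
sigma_obs = phat.std(ddof=1)                       # what the data actually show

print(f"binomial sigma = {sigma_binom:.4f}, observed sigma = {sigma_obs:.4f}")
print("P-chart limits (too tight):     ", np.round(pbar + np.array([-3, 3]) * sigma_binom, 3))
print("Overdispersion-adjusted limits: ", np.round(pbar + np.array([-3, 3]) * sigma_obs, 3))
```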
Reducing special cause variation by identifying and removing root causes for signals is by far one of the most difficult activities associated with SPC. This is especially true for complex batch-based processes, where a smoking gun is difficult to locate and the trail is often cold before we start looking. However, the chances of finding and responding to special cause variation improve by leaps and bounds when there is “real-time” data and “real-time” monitoring. Another way to increase our ability to achieve stable processes is to combine SPC with the other SQC tools like measurement system analysis (MSA) and process health assessments (PHA).
The SPC-related JMP platforms discussed in this book are shown in Figure 2.2, which are found under the Analyze ► Quality and Process menu. Below is a high-level description and roadmap of these SPC platforms. The first thing to note is that the Control Chart Builder and the Control Chart platforms both produce variables (XBar, IR) and attributes (P, NP, C, U) control charts. While Chapters 3 and 4 outline the capabilities of both platforms and provide several examples of each one, emphasis is placed on the Control Chart Builder since it is a newer platform and may eventually replace the legacy Control Chart platform. Also, the Distribution platform, not shown here, is used to establish probability based control limits with an appropriate, non-normal distribution.
The small shift detection options shown in Figure 2.2 include the UWMA, EWMA, and CUSUM charts, found in the Control Chart platform. The CUSUM Control Chart platform is a newer platform that also produces CUSUM control charts and may eventually replace the legacy CUSUM option found in the Control Chart platform. Finally, the Pareto Plot and Diagram tools (see Figure 2.2), found in the Quality and Process menu, are used to create a Pareto plot and a fishbone diagram, respectively, and are presented in Chapter 4 of this book.
Figure 2.2 JMP Platforms for Control Charts
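As a small, hedged illustration of small-shift detection (simulated data; the weighting constant λ = 0.2 and width L = 3 are common choices, not prescriptions), an EWMA statistic and its time-varying limits can be computed as follows:

```python
import numpy as np

def ewma_chart(x, target, sigma, lam=0.2, L=3.0):
    """EWMA statistic and time-varying control limits for detecting small mean shifts."""
    z = np.empty(len(x))
    z_prev = target
    for i, xi in enumerate(x):
        z_prev = lam * xi + (1 - lam) * z_prev
        z[i] = z_prev
    t = np.arange(1, len(x) + 1)
    half_width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
    return z, target - half_width, target + half_width

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(10, 1, 30), rng.normal(10.75, 1, 20)])  # small 0.75-sigma shift
z, lcl, ucl = ewma_chart(x, target=10, sigma=1)
signals = np.where((z > ucl) | (z < lcl))[0]
print("points flagged by the EWMA chart:", signals)
```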
The Time Series platform is shown in Figure 2.3. Currently, there is not one JMP tool that will create time series based control charts, and it takes several steps to generate them. First, an appropriate time series model must be identified and fit using the Time Series platform, found in the Specialized Modeling menu. The residuals are then saved to a JMP table and the chart is made using the Control Chart Builder. Alternatively, the forecasts and prediction limits can be saved to a JMP table and the chart is made using the Graph Builder. While a Multivariate Control Chart is created using the Control Chart platform (Figure 2.2), two platforms under the Multivariate Methods menu, Multivariate and Principal Components (Figure 2.4), are also illustrated in Chapter 9 of this book to gain additional insight.
Figure 2.3 Time Series Platform
Figure 2.4 Additional Multivariate Platforms
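A minimal sketch of the two-step workflow just described, carried out outside of JMP (simulated AR(1) data, the statsmodels ARIMA class, and Individuals-style limits on the residuals; the model order is an assumption made for illustration):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical autocorrelated controller output: AR(1) around a level of 50
rng = np.random.default_rng(5)
n, phi = 200, 0.8
y = np.empty(n)
y[0] = 50 + rng.normal()
for t in range(1, n):
    y[t] = 50 + phi * (y[t - 1] - 50) + rng.normal()

# Step 1: identify and fit a time series model (the Time Series platform step in JMP)
fit = ARIMA(y, order=(1, 0, 0)).fit()

# Step 2: chart the residuals with Individuals-style limits (the Control Chart Builder step)
resid = fit.resid
mr_bar = np.abs(np.diff(resid)).mean()
sigma_hat = mr_bar / 1.128                   # d2 for a moving range of span 2
cl = resid.mean()
print(f"residual chart: LCL={cl - 3 * sigma_hat:.2f}  CL={cl:.2f}  UCL={cl + 3 * sigma_hat:.2f}")
```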
In order to create a control chart, one piece of information is required: the values for the variable being charted. Another piece of information that is desirable, but not required if the sample size is constant, is the subgroup variable. In addition, there are two basic JMP table layouts, horizontal and vertical, that can be used. In a vertical layout, Figure 2.5a, the values for each individual variable are included in one column, Data, and another column is added to identify the Variable Name for every value. In this layout, one more column can be added to identify the subgroups. In a horizontal layout, Figure 2.5b, each individual variable is placed in its own column, A through J in this example. If the variables have different rational subgroups, then additional columns are required for each unique subgrouping scheme.
Figure 2.5a Vertical SPC JMP Table Layout
Figure 2.5b Horizontal SPC JMP Table Layout
In this section, we discuss the most efficient ways to create univariate variables and attributes control charts using JMP. We want to distinguish between creating univariate control charts for one variable at a time versus creating univariate control charts for multiple variables at one time. For the former, the horizontal JMP table layout in Figure 2.5b is best, since there is no straightforward way to isolate the results for one variable in a vertical JMP table layout.
It is often desirable to create control charts for multiple variables at one time, using one SPC tool and having the charts contained within the same output window. This is a matter of efficiency, especially when there are many variables to monitor. For this scenario, the following distinctions are made in subsequent discussions:
● JMP table layout (vertical or horizontal): As was described above, in a vertical layout all data is in one column and there are additional columns to identify the variable name and subgroup number. In the horizontal layout, each variable has a unique column.
● Control chart types (same or mixed): This reflects the control chart types that will be used for the variables of interest. Same implies that the same control chart type will be used for all variables, for example, an XmR chart. Mixed means that several control chart types will be used for the variables of interest, for example, XmR and XBar & R control charts.
● Monitoring Phase (I or II): Phase I monitoring implies that control limits will be estimated from the baseline data and applied to the same data, while Phase II monitoring means that new data will be added and monitored using saved control limits.
The ability to generate multiple univariate control charts for variable data at one time, using the Control Chart Builder and Control Chart platforms, is summarized in Table 2.1. The information is presented by control chart type (same or mixed) and JMP table layout (vertical or horizontal). There are more options available for generating the same type of control chart for multiple variables. For example, with a vertical JMP table layout, the Control Chart Builder can generate the same control chart type for multiple variables in one pass, either by using the Process Name variable in the Phase zone or with the By button. With a horizontal JMP table layout, charts can be generated one at a time by selecting a column and clicking the New Y Chart button. The legacy Control Chart platform with a vertical JMP table layout is simple: we just use the By option and enter the data variable and subgroup variable in the appropriate fields. With a horizontal JMP table layout, multiple variables are included in the Process field.
It is not easy to generate multiple control charts of mixed types using JMP. One way to accomplish this is to use the legacy Control Chart platform with a vertical JMP table layout. One caveat is that for mixed chart types involving XmR and XBar & R or XBar & S charts, all the XBar charts must have a range (R) chart or a standard deviation (S) chart, but not a mix. In our experience, mixed charts are a fairly common scenario and the tip in Table 2.1 is quite useful. With the Control Chart platform, XBar must be selected, and the Subgroup field and By field must be appropriately populated. Before the output is displayed, the warning message shown in Figure 2.6 appears.
Figure 2.6 Control Chart Platform Warning Message for XBar for Mixed Chart Types
However, just click Continue and the mixed type of control charts will be produced.
Table 2.1 JMP Table Layouts for Phase I Monitoring of More Than 1 Variable
| Control Chart Type | Control Chart Builder, Vertical Layout | Control Chart Builder, Horizontal Layout | Control Chart, Vertical Layout | Control Chart, Horizontal Layout |
| --- | --- | --- | --- | --- |
| Same, e.g., XmR | Yes. Use the By button, or the Phase zone. | Yes. Drag the subgroup variable to the Subgroup zone, then select all variables and click the New Y Chart button. | Yes. Use the By button. Note, for XBar charts with non-constant subgroup sizes, the subgroup variable must be used for Sample Label. | Yes. Select all variables that need to be charted and click Process. Use the same subgroup variable in the Sample Label. Different subgroup sizes can be handled by adding missing values to a given variable's values. |
| Mixed, e.g., XmR and XBar & R | No. | No. | Yes. Select XBar. Note the subgroup variable must be used for Sample Label and the variable name must be used in the By field. | No. |
Phase II monitoring requires hard-coded limits and the ability to add new data to the JMP data table. Once again, for a one-variable-at-a-time approach, the horizontal layout of the data is the easiest way to carry out Phase II monitoring. Control chart types and limits should be saved to the Column Properties for each variable; when new data are added, these limits will be used when the appropriate control chart tool is launched.
As is shown in Table 2.2, and as with Phase I, there are more options for producing multiple control charts using saved limits and new data when all the control charts are of the same type. When working with a vertical JMP table layout in the Control Chart Builder platform, the Get Limits option is needed, and the user will be prompted for the JMP table containing the variable name, chart type, and control limits. This limits table can be created using the Phase Chart option and selecting Save Limits ► In New Table. For mixed chart types, it is not possible to use a saved limits table.
Table 2.2 JMP Table Layouts for Phase II Monitoring of More Than 1 Variable
| Control Chart Type | Control Chart Builder, Vertical Layout | Control Chart Builder, Horizontal Layout | Control Chart, Vertical Layout | Control Chart, Horizontal Layout |
| --- | --- | --- | --- | --- |
| Same, e.g., XmR | Yes. Use the By button and Get Limits, or the Phase zone and Get Limits. The limits table is also in a vertical layout with a column identifying the chart variables. | Yes. Use Get Limits. The limits table is also in a horizontal layout. Drag the subgroup variable to the Subgroup zone, then select all variables that have limits saved as a Control Limits column property and click the New Y Chart button. | Yes. Use the By button and Get Limits. The limits table is also in a vertical layout with a column identifying the chart variables. | Yes. Use Get Limits. The limits table is also in a horizontal layout. Select all the variables that have limits saved as a Control Limits column property and click Process. Select the subgroup variable and click Sample Label. |
| Mixed, e.g., XmR and XBar & R | No. | No. | No. | No. |
For multivariate control charts for a single process with multiple measurements per subgroup, a horizontal JMP table layout works the best. In this case, each column represents one of the multiple measurements per subgroup. For several variables where the same multiple measurements are taken, a vertical JMP table layout, with a variable denoting the process where the measurements come from, can be used. The process variable can be placed in the By or Group fields of the dialog window.
Successful measurement system studies are predicated on the desire to quantify, understand, and reduce the contribution of measurement system variation, and in so doing, reduce the overall observed variation. This idea is captured in the following equation:
$$\sigma^2_{\text{Observed}} = \sigma^2_{\text{Process}} + \sigma^2_{\text{Measurement System}} \qquad (1)$$
In this equation, the observed, or total, variation is the variation that is present in the measurements that we take on our products or processes. This total variation consists of the true part or process variation plus the variation due to the measurement system. It is important to understand the impact of the transmission of measurement system variation to the observed variation, to get the most out of our statistical quality control efforts.
For many years, the AIAG Gauge R&R approach (AIAG (2010)) was the gold standard for how to analyze data from a Gauge R&R study. Following a step-by-step form, hand calculations were performed, using a myriad of equations, so one could arrive at a conclusion about the quality of the gauge. In Step 13 (see Wheeler (2009)), the %GRR was calculated using the combined gauge Repeatability and Reproducibility, which was then multiplied by 6, divided by (USL – LSL), and multiplied by 100. This quantity is referred to as the Precision to Tolerance, or “P/T”, ratio and was used to classify the quality of the gauge. Granted, this was years ago (the first AIAG edition was in 1990), but with this approach, it was easy to get lost in the calculations and potentially miss out on some important findings about the measurement system.
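For concreteness, here is the P/T calculation with made-up repeatability, reproducibility, and specification values; only the formula itself, six times the combined gauge R&R standard deviation divided by the tolerance, comes from the text above.

```python
import math

# Illustrative P/T (precision-to-tolerance) calculation; all numbers are assumed
sigma_repeatability = 0.035     # gauge repeatability (std dev), assumed
sigma_reproducibility = 0.020   # gauge reproducibility (std dev), assumed
usl, lsl = 10.5, 9.5            # specification limits, assumed

grr = math.sqrt(sigma_repeatability**2 + sigma_reproducibility**2)   # combined R&R
pt_ratio = 100 * 6 * grr / (usl - lsl)                               # %P/T
print(f"GRR = {grr:.4f}, P/T = {pt_ratio:.1f}%")
```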
Measurement System Analysis (MSA) and Evaluating the Measurement Process (EMP) offer significant enhancements over the more traditional measurement studies. In the EMP approach, the focus is on the measurement system, which is thought of as another manufacturing process step, and not just as a measurement device or gauge used by operators. Figure 2.7 shows an XBar & Range chart applied to the results from a measurement study. Did you know that a Shewhart control chart can also be used to evaluate measurement performance? The Range chart shows the test-retest error by part and operator, indicating that George might have higher and inconsistent test-retest error. In the XBar chart, the control limits are determined using the within-subgroup variation, or measurement error, and for a measurement system with good discrimination, we expect most of the points to be out of control. This is the only situation we know of where we want the chart to be completely out of control! We can also use the XBar chart to see if the operators have a significant offset, or bias, and if they have the same part-to-part patterns (George is off). MSA studies are included in Chapter 5 of this book and in ISQC Chapter 8.
Figure 2.7 XBar & Range Chart for EMP Study – Sample JMP Data ‘2 Factors Crossed’
Another important difference in the EMP approach is how the “Operator” effect is treated. In a traditional Gauge R&R analysis, Operator is a random component, because, in theory, there is an infinite number of potential operators that could be included in the study. In contrast, in an EMP study, Operator is considered a fixed term, because we do not have an infinite pool of operators; we have a small, finite number of operators, like Joe, Sue, and Mary, who use a particular measurement device. In an EMP study, we focus on the impact of Operator on the average performance and the measurement error, to see if we can improve the measurement system by standardizing on the Operator with the best technique. Therefore, an EMP study should be run to inform the development and improvement of the measurement system, and to qualify or validate it.
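To make the contrast concrete, the sketch below shows the traditional ANOVA (method-of-moments) variance component estimates for a balanced crossed study with operators treated as random. The data and design sizes are made up, and this is not the EMP calculation itself.

```python
import numpy as np

rng = np.random.default_rng(7)
o, p, r = 3, 10, 2                          # operators, parts, repeats (assumed design)
op_eff = rng.normal(0, 0.05, size=(o, 1, 1))
part_eff = rng.normal(0, 0.30, size=(1, p, 1))
y = 5 + op_eff + part_eff + rng.normal(0, 0.08, size=(o, p, r))   # hypothetical measurements

grand = y.mean()
ybar_o = y.mean(axis=(1, 2))                # operator means
ybar_p = y.mean(axis=(0, 2))                # part means
ybar_op = y.mean(axis=2)                    # operator-by-part cell means

# Mean squares for the balanced crossed design
ms_o = r * p * ((ybar_o - grand) ** 2).sum() / (o - 1)
ms_p = r * o * ((ybar_p - grand) ** 2).sum() / (p - 1)
ms_op = r * ((ybar_op - ybar_o[:, None] - ybar_p[None, :] + grand) ** 2).sum() / ((o - 1) * (p - 1))
ms_e = ((y - ybar_op[:, :, None]) ** 2).sum() / (o * p * (r - 1))

# Method-of-moments variance components (traditional Gauge R&R, operators random)
var_repeat = ms_e
var_op = max((ms_op - ms_e) / r, 0)
var_oper = max((ms_o - ms_op) / (p * r), 0)
var_part = max((ms_p - ms_op) / (o * r), 0)
print("repeatability:", round(var_repeat, 4))
print("reproducibility (operator + operator*part):", round(var_oper + var_op, 4))
print("part-to-part:", round(var_part, 4))
```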
Destructive samples make it challenging to conduct a measurement system study. One way around this dilemma is to see if a “gold standard” is available, through NIST or the vendor. Another is the use of ‘sibling samples’. For example, in measuring the break force of a laminated rolled good, samples used for the repeatability and reproducibility components can strategically be taken in close proximity to each other on the laminate and treated as if they were the “same” part. When neither of these options is available, the “reproducibility” variance component can be estimated by randomly assigning parts to operators, days, and so on.
There is a perception that in order to get a good estimate of test-retest error, each operator should take as many repeated measurements of the same part, under the same conditions, as possible. With a carefully designed MSA this is not needed. For example, if 3 operators take 4 measurements on each of 5 parts, then the partitioning of degrees of freedom is shown in Table 2.3. In this case, the error, or repeatability, component is estimated with an excessive number of degrees of freedom, 3×5×(4−1) = 45, while Operator and Part only have 2 and 4 degrees of freedom, respectively.
Table 2.3 Degrees of Freedom for MSA Study: 3 Operators, 5 Parts, and 4 Repeats
| Source | Degrees of Freedom |
| --- | --- |
| Operator | 2 |
| Part | 4 |
| Operator*Part | 8 |
| Error | 45 |
| Corrected Total | 59 |
By reducing the number of repeats and increasing the number of parts, there are more degrees of freedom for the part-to-part variation. For the same total number of measurements, n = 60, a study with 3 operators that take 2 measurements on each of 10 parts results in the partitioning shown in Table 2.4. The repeatability is still estimated well, with 3×10×(2−1) = 30 degrees of freedom, and the part variability is better estimated with 9 degrees of freedom rather than 4.
Table 2.4 Degrees of Freedom for MSA Study: 3 Operators, 10 Parts, and 2 Repeats
| Source | Degrees of Freedom |
| --- | --- |
| Operator | 2 |
| Part | 9 |
| Operator*Part | 18 |
| Error | 30 |
| Corrected Total | 59 |
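The bookkeeping behind Tables 2.3 and 2.4 is simple enough to script; the helper function below is only illustrative.

```python
# Degrees-of-freedom bookkeeping for a balanced crossed MSA design
def msa_df(operators, parts, repeats):
    return {
        "Operator": operators - 1,
        "Part": parts - 1,
        "Operator*Part": (operators - 1) * (parts - 1),
        "Error (repeatability)": operators * parts * (repeats - 1),
        "Corrected Total": operators * parts * repeats - 1,
    }

print(msa_df(3, 5, 4))    # Table 2.3: error df = 45, part df = 4
print(msa_df(3, 10, 2))   # Table 2.4: error df = 30, part df = 9
```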
Determining the overall quality of the measurement system is the most challenging part of an MSA analysis. The comparison of the measurement system error to the tolerance, or specification width, using the P/T ratio is well known. It is a useful, but restrictive, comparison because it only looks at the ability to correctly disposition good product as acceptable and bad product as unacceptable. Yes, this is important because it deals with customer requirements and business needs. However, we also need to be able to detect signals in the presence of noise. For example, we want to be able to identify important factors in a statistically designed experiment, and detect process changes on a statistical process control chart. Both of these activities can be hindered if the measurement system error is too large. In the EMP approach, the overall classification of a measurement system is determined according to its ability to detect process shifts on a control chart. For all approaches mentioned, make sure the typical sources of variation were included in the selection of the parts. If they were underrepresented, then the outcome might appear bleaker than it actually is.
JMP has two platforms that perform MSA types of analyses, using industry-standard nomenclature and calculations. These are highlighted in Figure 2.8 and are accessed from the Analyze ► Quality and Process menu. The Measurement System Analysis platform is used for Gauge R&R and EMP studies with continuous variables. The user must identify the MSA Method (EMP or Gauge R&R) in the launch window before any output is produced. The MSA Method determines the default JMP output and the available options within the output window. For the Gauge R&R method, the AIAG analysis approach is used, and the default output and options follow the same format as those produced by the Variability / Attribute Gauge Chart platform. In addition, if a standard is available, it can be included in the analysis. The EMP method uses the approach outlined by Wheeler (2006). A control chart like the one shown in Figure 2.7 is the default output, and eight additional options are available to produce other EMP elements, such as the EMP Results, the Bias Comparison chart, and the Test-Retest Error Comparison chart.
The Variability / Attribute Gauge Chart platform carries out a Gauge R&R for continuous or attribute variables. The user must identify the Chart Type (Variability or Attribute) in the launch window before any output is produced. The Chart Type determines the default JMP output and the available options within the output window. For the Variability selection, the AIAG analysis approach is used, and the default output and options follow the same format as those produced by the Measurement System Analysis platform. Once again, if a standard is available, it can be included in the analysis. In an Attribute Gauge R&R, comparisons between appraisers are made, and the agreement and effectiveness are evaluated. Plots and output are produced following this approach, many of which use some type of agreement statistic, such as Kappa. If a standard is available, it can also be included in this type of MSA.
Figure 2.8 JMP Platforms for Measurement System Analysis
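As an illustration of the kind of agreement statistic used in attribute studies, here is a minimal two-rater Cohen's kappa on made-up pass/fail calls; JMP's attribute gauge output includes additional agreement and effectiveness measures beyond this.

```python
import numpy as np

def cohens_kappa(ratings_a, ratings_b):
    """Simple two-rater Cohen's kappa for binary (0/1) classifications."""
    a, b = np.asarray(ratings_a), np.asarray(ratings_b)
    po = np.mean(a == b)                                              # observed agreement
    pe = np.mean(a) * np.mean(b) + np.mean(1 - a) * np.mean(1 - b)    # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical pass/fail calls by two appraisers on the same 10 parts
appraiser_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
appraiser_b = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0]
print(f"kappa = {cohens_kappa(appraiser_a, appraiser_b):.2f}")
```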
An MSA study is a designed experiment with factors and levels for each factor. Therefore, to assess the impact of each factor on the parameter of interest, the factors and their levels need to be included in the JMP table. For example, if more than one operator was included in the study, there needs to be a column in the JMP table for Operator, and all rows should be filled in with the operator’s name associated with each measurement value. There should also be a column for the Part identifier and, if this was a non-destructive test method, it is very important that each part has the same identifier for each operator. If a standard was used, then there should be a column in the table that contains the true value of the standard. Multiple MSA studies can be combined in the same JMP table by using the vertical table layout and including a column for the parameter name, as discussed earlier for the JMP SPC table formats.
An example of a typical JMP table for an MSA for a variable gauge is shown in Figure 2.9. We recommend using obvious naming conventions for the MSA factors, such as Operator, Part, Lab Location, and so on, and for the standard, since this will facilitate populating the launch windows and aid in interpreting the output. Make sure that the modeling type for the MSA factors is set to nominal, because these factors will be used to estimate variance components or to make comparisons between the levels. The JMP table in Figure 2.9 has a standard, which provides the true value for each measurement taken. Finally, every measurement in every row must have complete information, as is shown in Figure 2.9. We are sure that all these tips are obvious but thought it might be helpful to reinforce them.
Figure 2.9 Sample JMP Table Format for a Variable MSA
The JMP table format is a bit different for an attribute gauge R&R. These JMP tables require a horizontal layout, where the part classification for each operator must be included in its own column. An example is shown in Figure 2.10. The data in this table is from a study where three operators, labeled anonymously as A, B and C, inspected the same 50 parts and classified them as 1 = Pass or 0 = Fail. Note that each part was inspected three times by each operator. Once again, obvious nomenclature should be used for the item inspected, and any standard and operator names, if appropriate. In Figure 2.10, Part and Standard have the obvious meanings. The data in the first row signifies that for the first part, which has a true value of ‘1’, all three operators classified it as a ‘1’. They successfully did this two more times, as is shown in rows 2 and 3.
Figure 2.10 Sample JMP Table Format for an Attribute MSA
Similar to a JMP table for a variable gauge R&R, the factors are also set to a nominal modeling type. In Figure 2.10, Standard, A, B, and C have all been set to a nominal modeling type and have the red histogram symbol next to their names in the Columns area. For binary classifications of 0 or 1, such as the ones shown in Figure 2.10, a continuous modeling type will produce the same output. However, for binary responses, we think it is best to stick with the nominal modeling type for attribute studies.
Wheeler and Chambers (1992) introduced the Four Process States (Ideal, Threshold, Brink of Chaos, and State of Chaos) as the four possibilities for any process. In this classification, the state of a process is identified using two dimensions of process performance: the process capability, or the ability to meet acceptance limits, and the process stability, or the ability to maintain a state of statistical control. The capability of a process is often determined using Ppk, while the stability of a process is determined using Shewhart charts and runs tests. However, Shewhart charts do not lend themselves to quantifying process stability, especially if runs tests are applied. For example, can a stable process have any runs violations? Is having one violation worse than having multiple violations? Are runs above the centerline worse than having 2 of 3 points fall outside of 2σ? What about the possibility of false alarms?
B. Ramírez and G. Runger (2006) developed the Stability Ratio (SR) test as a way to quantify and test for process stability. It is used to classify a parameter as stable, exhibiting common cause variation, or as unstable, and symptomatic of special cause variation. The Stability Ratio (SR) is the ratio of long-term variation to short-term variation,
$$SR = \frac{\hat{\sigma}^2_{\text{long-term}}}{\hat{\sigma}^2_{\text{short-term}}} \qquad (2)$$
The long-term variation is estimated using the sample variance of all the data, and the short-term variation is estimated from the within-subgroup variation in a control chart using, for example, $\bar{R}/d_2$ or $\bar{S}/c_4$. This is analogous to Sir Ronald Fisher’s fundamental Analysis of Variance (ANOVA) principle of comparing the between variation to the within variation. Like ANOVA, a significance test can be performed, assuming that the SR approximately follows an F-distribution.
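A minimal sketch of the Stability Ratio calculation and its approximate F test follows (simulated subgroup data; the degrees of freedom shown are a simple approximation and may differ from those used in the formal test).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
subgroups = rng.normal(100, 2, size=(30, 5))      # hypothetical in-control data

# Short-term (within-subgroup) sigma from the average range, Rbar / d2
d2 = 2.326                                        # d2 for subgroups of size 5
sigma_short = (subgroups.max(axis=1) - subgroups.min(axis=1)).mean() / d2

# Long-term sigma from the overall sample standard deviation of all the data
sigma_long = subgroups.std(ddof=1)

SR = sigma_long**2 / sigma_short**2
# Approximate F test: is the long-term variance larger than the short-term variance?
df_long = subgroups.size - 1                              # simple approximation
df_short = subgroups.shape[0] * (subgroups.shape[1] - 1)  # simple approximation
p_value = stats.f.sf(SR, df_long, df_short)
print(f"SR = {SR:.2f}, p-value = {p_value:.3f}")
```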
The introduction of the Stability Ratio test is a breakthrough for conducting what we call a Process Health Assessment or PHA. Not only does it provide a consistent way to classify a parameter as stable or not, but it also has a quantitative measure that could be placed on a stability continuum and used to prioritize improvement efforts. In a process health assessment, each process parameter is classified as
1. Capable or incapable
Based on comparing the confidence interval for Ppk to a given value like 1 or 1.33
2. Stable or Unstable
Based on the p-value of the F approximation to the Stability Ratio (SR) test
A process performance dashboard (see J. Ramírez 2018) is a visualization of the four process states: Ideal State (stable and capable), Yield Issue (stable and incapable), Process Issue (unstable and capable), and Double Trouble (unstable and incapable). The process performance dashboard inspired the Process Performance Graph that was introduced in JMP version 13, in the Process Screening platform. It combines the Stability Ratio with Ppk to determine the overall health of a parameter. An example from Chapter 6 of this book is shown in Figure 2.11 (see also J. Ramírez (2016)).
Figure 2.11 Process Performance Dashboard Example from Chapter 6
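A hedged sketch of the four-state logic follows; the Ppk target of 1.33 and the 0.05 significance level are assumptions made for illustration, not fixed rules from the text.

```python
# Classify a parameter into one of the four process health states using the
# lower confidence bound of Ppk and the p-value of the Stability Ratio test.
def process_state(ppk_lower_ci, sr_p_value, ppk_target=1.33, alpha=0.05):
    capable = ppk_lower_ci >= ppk_target
    stable = sr_p_value >= alpha            # fail to reject "long-term = short-term variation"
    if stable and capable:
        return "Ideal State (stable and capable)"
    if stable and not capable:
        return "Yield Issue (stable and incapable)"
    if not stable and capable:
        return "Process Issue (unstable and capable)"
    return "Double Trouble (unstable and incapable)"

print(process_state(ppk_lower_ci=1.45, sr_p_value=0.40))   # Ideal State
print(process_state(ppk_lower_ci=0.90, sr_p_value=0.01))   # Double Trouble
```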
The Process Performance Graph can be used to monitor the process health of multiple processes for a given product and manufacturing site, for multiple products at a manufacturing site, or for multiple products and multiple manufacturing sites. The frequency for updating the process health metrics can be determined by the manufacturing volumes at the various sites. While small movements of the points are to be expected with each revision of the Process Performance Graph, it is very revealing when parameters move quadrants and assume a new process health state. The best movement is when parameters congregate in the stable and capable (Ideal) zone.
Quality improvements are more dramatic when there is a comprehensive approach that includes targeted efforts leveraging all the SQC tools discussed in this chapter. For example, when using SPC, a ‘one and done’ mentality will not serve you well. After we troubleshoot a signal, we might be tempted to address it, document it, and move on to the next one. However, sustained variability reduction usually happens through the identification of assignable causes for systematic variation, which can take some time to discover. Make sure you are conducting a periodic review of all the signals and looking for systematic issues. Remember, the best chance of having stable and capable processes is to design them that way right from the start, and to do the same for your measurement systems.
We define a PHA platform as one that includes both process capability indices and the process stability metric, the Stability Ratio, for single parameter assessments, with the addition of the Process Performance Graph for multiple parameter assessments. The Control Chart Builder is used for individual parameter assessments of continuous variables, as is shown in Figure 2.12. This platform is accessed from the Analyze ► Quality and Process menu. For individual parameter assessments, the variables charts in the Control Chart Builder will produce estimates of short- and long-term variation, the Stability Ratio, process capability indices based on short-term (Cp, Cpk, and so on) and long-term (Pp, Ppk, and so on) estimates of variability, their confidence intervals, and nonconformance estimates. In order to generate this output, the Spec Limits Column Properties must be populated. The Distribution and legacy Control Chart platforms are not included here, since they do not include the process stability metric. However, the Distribution platform is highlighted in Chapter 5 of this book.
Figure 2.12 JMP Platform for Process Health Assessment for Individual Parameters
Two platforms are available for multiple parameter assessments of continuous variables, as is shown in Figure 2.13. It is a good idea to populate the Spec Limits Column Properties in the JMP table prior to launching them. The Process Screening platform, which was introduced in JMP version 13, is accessed from the Analyze ► Screening menu. For multiple parameter assessments, the output includes a table with summaries in three main areas: Variability (Stability Ratio and long- and short-term estimates of variation), Control Chart Alarms (control chart alarm rates), and Capability (out-of-specification rates, Cpk, and Ppk). The Process Performance Graph can be launched from the default output window. There are many more useful options in the Process Screening platform, which are discussed in detail in Chapter 6 of this book.
Figure 2.13 Main JMP Platform for Process Health Assessment for Multiple Parameters
The Process Capability platform for multiple parameter assessments is accessed from Analyze ► Quality and Process, Figure 2.14. Alternatively, it can be launched from within the Process Screening platform as an option from the default output window. While the default output is slightly different depending on where it is launched from, both have access to the same information. There are several plots summarizing the combined process capability assessments, for example, the Goal Plot and Capability Box Plots, as well as summary tables containing specification limits, the Stability Ratio, Ppk, and the actual and estimated % out of spec. The Process Performance Graph can also be produced from the Process Capability platform. The options in the Process Capability platform are described in Chapters 5 and 6 of this book.
Figure 2.14 JMP Process Capability Platform for Process Health Assessment for Multiple Parameters
For each parameter assessed, specification limits are required to calculate process capability indices, and a control chart type must be identified to calculate the Stability Ratio. The horizontal layout (see Figure 2.5b) lends itself more readily to this type of analysis. The specification limits for each parameter should be entered in the Spec Limits field in the Column Properties for that parameter. If an XBar control chart is used, then a subgroup variable is also needed in the JMP table for parameters with unequal subgroup sizes. Alternatively, if the subgroup sizes are equal, then the subgroup size may be entered in the dialog window. For multiple parameter assessments, the same control chart type must be entered, and for XBar charts the same subgroup size is also required; however, missing values can be used to accommodate different subgroup sizes. We are unaware of a way to specify different control chart types, such as XmR and XBar & Range, for the parameters in the Process Screening platform. Similar considerations apply to multiple parameter assessments using the Process Capability platform.
Since specification limits are required to conduct a capability analysis, the vertical JMP table layout does not lend itself to a process health assessment in JMP, because there is no way to incorporate the specification limits for each parameter in a vertical format. Also, if one uses the By button with either the Process Screening or Process Capability platform with a vertical layout, there is no prompt to enter the limits, parameter by parameter, inside of these platforms.