Six Sigma is a set of practices originally developed by Motorola to systematically improve processes by eliminating defects.[1] A defect is defined as nonconformity of a product or service to its specifications.
While the particulars of the methodology were originally formulated by Bill Smith at Motorola in 1986, Six Sigma was heavily inspired by the six preceding decades of quality improvement methodologies such as quality control, TQM, and Zero Defects. Like its predecessors, Six Sigma asserts the following:
- Continuous efforts to reduce variation in process outputs are key to business success
- Manufacturing and business processes can be measured, analyzed, improved, and controlled
- Achieving sustained quality improvement requires commitment from the entire organization, particularly from top-level management
The term "Six Sigma" refers to the ability of highly capable processes to produce output within specification. In particular, processes that operate with six sigma quality produce fewer than 3.4 defects per million opportunities (DPMO).[3] Six Sigma's implicit goal is to improve all processes to that level of quality or better.
Six Sigma is a registered service mark and trademark of Motorola, Inc.[4] Motorola has reported over US$17 billion in savings[5] from Six Sigma as of 2006.
In addition to Motorola, companies that adopted Six Sigma methodologies early on and continue to practice them today include Bank of America, Caterpillar, Honeywell International (previously known as AlliedSignal), Raytheon, Merrill Lynch, and General Electric (where it was introduced by Jack Welch).
The term Six Sigma
Sigma (the lower-case Greek letter σ) is used to represent the standard deviation (a measure of variation) of a population; the lower-case Latin letter s denotes an estimate of it based on a sample. The term "six sigma process" comes from the notion that if one has six standard deviations between the mean of a process and the nearest specification limit, practically no items will fail to meet the specifications. This is the basis of the Process Capability Study, often used by quality professionals. The term "Six Sigma" has its roots in this tool, rather than in simple process standard deviation, which is also measured in sigmas. Criticism of the tool itself, and of the way the term was derived from it, often sparks criticism of Six Sigma.
The widely accepted definition of a six sigma process is one that produces 3.4 defective parts per million opportunities (DPMO).[11] A normally distributed process will have 3.4 parts per million beyond a point that is 4.5 standard deviations above or below the mean (a one-sided Capability Study). This implies that 3.4 DPMO actually corresponds to 4.5 sigmas, not the six the process name would imply. This can be confirmed by running a Capability Study in QuikSigma or Minitab on data with a mean of 0, a standard deviation of 1, and an upper specification limit of 4.5. The 1.5 sigmas added to the name Six Sigma are arbitrary; they are known as the "1.5 sigma shift" (SBTI Black Belt material, ca. 1998). Dr. Donald Wheeler dismisses the 1.5 sigma shift as "goofy".[12]
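The 4.5-sigma figure can also be checked directly from the normal distribution, without specialist software. A minimal sketch (the function name is ours) that converts a one-sided normal tail probability into DPMO:

```python
import math

def dpmo_beyond(z: float) -> float:
    """Defects per million opportunities beyond z standard deviations,
    i.e. the one-sided tail of the standard normal distribution, times 1e6."""
    tail = 0.5 * math.erfc(z / math.sqrt(2))
    return tail * 1_000_000

# One-sided tail at 4.5 sigma: approximately 3.4 DPMO, as the definition states.
print(dpmo_beyond(4.5))
# One-sided tail at a true 6 sigma: roughly a thousand times smaller.
print(dpmo_beyond(6.0))
```

Running this shows that 3.4 DPMO sits at 4.5 sigma, while a genuine six-sigma tail is on the order of 0.001 DPMO.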
In a Capability Study, sigma refers to the number of standard deviations between the process mean and the nearest specification limit, rather than to the standard deviation of the process, which is also measured in "sigmas". As the process standard deviation goes up, or as the mean of the process moves away from the center of the tolerance, fewer standard deviations fit between the mean and the nearest specification limit, and the Process Capability sigma number goes down (see Cpk Index). The notion that, in the long term, processes usually do not perform as well as they do in the short term is correct, but it requires that the Process Capability sigma based on long-term data be less than or equal to an estimate based on short-term sigma. The original use of the 1.5 sigma shift, however, is as shown above, and it implicitly assumes the opposite.
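The relationship between the mean, the spread, and the capability sigma number can be made concrete with a small sketch (function names and the example numbers are ours, not from any particular tool):

```python
def capability_sigma(mean: float, sigma: float, lsl: float, usl: float) -> float:
    """Process Capability 'sigma level': the number of process standard
    deviations between the process mean and the nearest specification limit."""
    return min(usl - mean, mean - lsl) / sigma

def cpk(mean: float, sigma: float, lsl: float, usl: float) -> float:
    """Cpk index: the capability sigma level divided by 3."""
    return capability_sigma(mean, sigma, lsl, usl) / 3

# A centered process with specification limits 6 sigma away from the mean:
print(capability_sigma(mean=0.0, sigma=1.0, lsl=-6.0, usl=6.0))  # 6.0
# The same process after its mean moves off-center by +1.5 sigma:
print(capability_sigma(mean=1.5, sigma=1.0, lsl=-6.0, usl=6.0))  # 4.5
```

Note how moving the mean off-center drops the capability number from 6 to 4.5, exactly the discrepancy discussed above.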
As sample size increases, the error in the estimate of standard deviation converges much more slowly than the error in the estimate of the mean (see confidence interval). Even with a few dozen samples, the estimate of standard deviation often drags an alarming amount of uncertainty into the Capability Study calculations. It follows that estimates of defect rates can be greatly influenced by uncertainty in the estimate of standard deviation, and that the defective-parts-per-million estimates produced by Capability Studies often ought not to be taken too literally.
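A quick simulation illustrates how noisy a standard deviation estimate is at modest sample sizes. The sketch below (our own, with an arbitrary seed) repeatedly draws 30 values from a standard normal population, whose true sigma is exactly 1.0, and records the sample standard deviation each time:

```python
import random
import statistics

random.seed(1)

def sigma_estimates(n: int, reps: int = 2000) -> list:
    """Repeatedly draw n values from a standard normal population
    (true sigma = 1.0) and record the sample standard deviation."""
    return [statistics.stdev([random.gauss(0.0, 1.0) for _ in range(n)])
            for _ in range(reps)]

estimates = sigma_estimates(n=30)
# The estimates scatter widely around the true value of 1.0:
print(min(estimates), max(estimates))
```

Even at n = 30, individual estimates routinely fall 10 to 20 percent away from the true sigma, and since tail probabilities depend on sigma exponentially, the resulting DPMO figures swing by orders of magnitude.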
Estimates of the number of defective parts per million produced also depend on knowing something about the shape of the distribution from which the samples are drawn. Unfortunately, there is no means of proving that data belong to any particular distribution. One can only assume normality, based on finding no evidence to the contrary. Estimating defective parts per million down into the hundreds or tens of units on the basis of such an assumption is wishful thinking, since actual defects are often deviations from normality, which have been assumed not to exist.
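To see how much the distributional assumption matters, one can compare the far tail of a normal distribution with that of a logistic distribution having the same mean and variance; the logistic is a stand-in we chose for illustration because its heavier tails have a closed form. A sketch (function names are ours):

```python
import math

def normal_dpmo(z: float) -> float:
    """One-sided DPMO for a standard normal tail beyond z."""
    return 0.5 * math.erfc(z / math.sqrt(2)) * 1_000_000

def logistic_dpmo(z: float) -> float:
    """One-sided DPMO for a logistic distribution with the same mean (0)
    and variance (1): scale s = sqrt(3)/pi makes the variance equal 1."""
    s = math.sqrt(3) / math.pi
    return 1.0 / (1.0 + math.exp(z / s)) * 1_000_000

print(normal_dpmo(4.5))    # about 3.4 DPMO
print(logistic_dpmo(4.5))  # hundreds of DPMO -- same mean and variance
```

Two distributions that are nearly indistinguishable from a few dozen samples disagree about the tail by roughly a factor of 100, which is why single-digit DPMO predictions rest so heavily on the unprovable normality assumption.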
The ±1.5 Sigma Drift
The ±1.5σ drift is the drift of a process mean, which is assumed to occur in all processes.[13] If a product is manufactured to a target of 100 mm using a process capable of delivering σ = 1 mm performance, over time a ±1.5σ drift may cause the long term process mean to range from 98.5 to 101.5 mm. This could be of significance to customers.
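The effect of such a drift on defect rates can be worked through for the hypothetical 100 mm part above, assuming (as in the later discussion) specification limits placed 6σ from the target. A sketch with our own function name:

```python
import math

def two_sided_dpmo(mean: float, sigma: float, lsl: float, usl: float) -> float:
    """DPMO for a normally distributed process against two specification limits."""
    below = 0.5 * math.erfc((mean - lsl) / (sigma * math.sqrt(2)))
    above = 0.5 * math.erfc((usl - mean) / (sigma * math.sqrt(2)))
    return (below + above) * 1_000_000

# Target 100 mm, sigma = 1 mm, specification limits at 94 and 106 mm (+/- 6 sigma).
print(two_sided_dpmo(100.0, 1.0, 94.0, 106.0))  # centered: essentially zero
print(two_sided_dpmo(101.5, 1.0, 94.0, 106.0))  # after a +1.5 sigma drift
```

With the mean drifted to 101.5 mm, the nearest limit is only 4.5σ away, and the defect rate climbs from roughly 0.002 DPMO to the familiar 3.4 DPMO: the shifted process, not the centered one, is what the "six sigma" defect figure describes.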
The ±1.5σ shift was introduced by Mikel Harry. Harry referred to a 1975 paper by Evans, "Statistical Tolerancing: The State of the Art. Part 3. Shifts and Drifts", about tolerancing and how the overall error in an assembly is affected by the errors in its components. Evans in turn refers to a 1962 paper by Bender, "Benderizing Tolerances – A Simple Practical Probability Method for Handling Tolerances for Limit Stack Ups". Bender looked at the classical situation of a stack of disks and how the overall error in the size of the stack relates to the errors in the individual disks. Based on "probability, approximations and experience", Bender suggests:
[Figure: A run chart depicting a +1.5σ drift in a 6σ process. USL and LSL are the upper and lower specification limits; UNL and LNL are the upper and lower natural tolerance limits.]
Harry then took this a step further. Supposing that there is a process in which 5 samples are taken every half hour and plotted on a control chart, Harry considered the "instantaneous" initial 5 samples as "short term" (Harry's n = 5) and the samples throughout the day as "long term" (Harry's g = 50 points). Because of the random variation in the first 5 points, the mean of the initial sample differs from the overall mean. Harry derived a relationship between short-term and long-term capability, using the equation above, to produce a capability shift or "Z shift" of 1.5. Over time, the original meanings of "short term" and "long term" have changed, so that "long term" now refers to drifting means.
Harry has clung tenaciously to the "1.5", but over the years its derivation has been modified. In a recent note, Harry wrote, "We employed the value of 1.5 since no other empirical information was available at the time of reporting." In other words, 1.5 has now become an empirical rather than a theoretical value. Harry further softened this by stating "... the 1.5 constant would not be needed as an approximation". Interestingly, 1.5σ is exactly one half of the commonly accepted natural tolerance limit of 3σ.
Despite this, industry is resigned to the belief that it is impossible to keep processes on target and that process means will inevitably drift by ±1.5σ. In other words, if a process has a target value of 0.0, specification limits at 6σ, and natural tolerance limits of ±3σ, over the long term the mean may drift to +1.5 (or -1.5).
In truth, any process where the mean changes by 1.5σ, or any other statistically significant amount, is not in statistical control. Such a change can often be detected by a trend on a control chart. A process that is not in control is not predictable. It may begin to produce defects, no matter where specification limits have been set.
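One classic way such a shift is detected on a control chart is a run rule: a long run of consecutive subgroup means on the same side of the center line signals that the process mean has moved. A sketch (the function, run length of 8, and data are ours, chosen to illustrate a +1.5σ drift):

```python
def shifted_run(means, center, run_length=8):
    """Return the index at which `run_length` consecutive subgroup means
    fall on the same side of the center line (a control-chart run rule
    suggesting the process mean has shifted), or None if no such run occurs."""
    run, side = 0, 0
    for i, m in enumerate(means):
        s = 1 if m > center else (-1 if m < center else 0)
        if s != 0 and s == side:
            run += 1
        else:
            side = s
            run = 1 if s != 0 else 0
        if run >= run_length:
            return i
    return None

# Subgroup means hovering around 0, then drifting upward by about 1.5 sigma:
data = [0.1, -0.2, 0.05, -0.1, 1.4, 1.6, 1.3, 1.7, 1.5, 1.45, 1.6, 1.55]
print(shifted_run(data, center=0.0))  # index where the run rule fires
```

A process that trips such a rule is out of statistical control, regardless of whether the drifted output still happens to fall inside the specification limits.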
Monday, October 8, 2007