Six Sigma For Dummies

In general, when planning for Six Sigma, variation is undesirable because it creates uncertainty in your ability to produce a desired outcome. Professional results, in anything, demand consistency. In the world of business and organizational life, the goal is to produce a work product or deliver a service in a predictable manner. That means getting the variation in your inputs — the Xs — under control.

Some variation — within limits — may be okay. A little too much variation here or there, and you may have some repairs or rework on your hands. Too much variation altogether, and you’re either out of a job or out of business.

Variation from a Six Sigma perspective

Very simply, variation is deviation from expectation. If you toss a coin, what’s the chance of it landing on heads? Fifty percent. Therefore, if you toss a coin ten times, your expectation is to get five heads and five tails. Take out a coin and toss it ten times. What happened? Did you meet your expectation?

Try it again. What happened the second time? Try performing successive sets of ten coin tosses. Every time you repeat your ten coin tosses, the output — the number of heads and tails — varies. The extent to which your experience deviates from expectation is the extent to which variation has occurred.
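If you'd rather not dig a coin out of your pocket, a short simulation shows the same effect. This is an illustrative sketch, not an example from the book; the number of sets and the random seed are arbitrary choices.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

# Simulate several sets of ten coin tosses and count heads in each set.
# The expectation is 5 heads per set, but the actual count varies.
for s in range(5):
    heads = sum(random.choice(["H", "T"]) == "H" for _ in range(10))
    print(f"Set {s + 1}: {heads} heads, {10 - heads} tails")
```

Run it a few times with different seeds and you'll rarely see five sets of exactly 5 and 5 — that set-to-set scatter is the variation being described.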

When you closely measure any output Y, you find that it varies — always. This point is important to understand: Every output varies. Each time you park your car, it doesn’t fit exactly in the same place between the parking lines. Every single product a company makes varies, however minutely, from every other single instance of the same product on every dimension, such as weight, size, durability, and so on.

The variation of actual occurrence versus the mean is a comparison you make frequently in Six Sigma. If you measure the occurrence of something many times, it’s going to vary around some average — or mean — value. The mean is the central tendency of your process. Flip that coin enough times, and you see that the mean tends toward 50 percent heads and 50 percent tails.
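You can watch the mean emerge the same way. The sketch below (again illustrative, with an arbitrary seed) shows that small samples of tosses scatter widely, while the proportion of heads over many tosses settles near the mean of 0.5:

```python
import random

random.seed(7)  # repeatable run

# As the sample grows, the proportion of heads tends toward the
# central tendency of the process: 0.5.
for n in (10, 100, 10_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>6} tosses: {heads / n:.3f} heads")
```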

Anytime you measure the value of a given occurrence or event, it’s going to vary from the mean. A player’s batting average may be .302 for the season, but Friday night he went 2-for-5 and batted .400, nearly 100 points above his average. And then Saturday he went 0-for-4. That’s all thanks to variation.

Common cause versus special cause variation

Variation comes from two sources: common causes and special causes. Some variation is just natural; you can’t eliminate it. That’s common cause variation. The natural forces of nature work to mix things up. It’s simply part of the normal course of events. Consider the earlier coin-toss example; the variation in the number of heads from set to set is perfectly normal.

Now consider a few examples in human systems. Think about the time each day when the mailman comes or how long it takes to process a credit card application. They all vary, and the variation is a natural part of their systems.

Special cause variation is completely different — it’s directly caused by something special. If the mailman usually comes at about 11:30 each day but gets a flat tire on Tuesday and doesn’t come until noon, that’s a special cause of variation. If it normally takes 15 minutes to process a credit card application, but the network connection goes down and prolongs the procedure, that’s a special cause.

These special causes are specific things you can identify and do something about. Special cause variation is captured in the X input factors of the breakthrough equation.

With Six Sigma, you make a particular effort to understand the difference between common cause and special cause variation, because the two are handled very differently: you have to identify which type is driving the variation and how it’s affecting the outcome before you can do anything about it.
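The flat-tire example can be sketched as a simple check against control limits, which is how Six Sigma practitioners typically separate the two kinds of variation. This is an illustrative simulation, not a calculation from the book: the arrival times, the seed, and the 3-sigma rule stand in for a real control chart.

```python
import random
import statistics

random.seed(1)

# Hypothetical daily mail-arrival times, in minutes after 11:00.
# Common cause: natural scatter around 30 minutes (about 11:30).
times = [random.gauss(30, 3) for _ in range(20)]
times.append(60)  # special cause: a flat tire pushes arrival to noon

# Baseline mean and spread come from the ordinary days only.
mean = statistics.mean(times[:20])
sigma = statistics.stdev(times[:20])
upper, lower = mean + 3 * sigma, mean - 3 * sigma

# A point outside the 3-sigma limits signals a special cause.
for day, t in enumerate(times, start=1):
    if not (lower <= t <= upper):
        print(f"Day {day}: {t:.1f} min -- special cause, investigate")
```

Points inside the limits are just the system being itself (common cause); the flagged point is something you can chase down and fix.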

Reduce variation through Six Sigma

In general, you should work on reducing special cause variation before trying to reduce common cause variation. When you have special cause variation, the process isn’t stable or predictable, and you can’t be sure of what is happening. But after you’ve taken the special cause variation out of a system or process, you can then improve its common cause variability.

For example, suppose a coffeehouse is getting a lot of complaints about inconsistent drink quality. If the coffeehouse first eliminates the special employee-to-employee differences in making a cup of coffee, it can then effectively work on improving the inherent, common cause quality of the coffee itself. But if the initial focus is on fixing the inherent quality of the coffee first, the special employee-to-employee differences will cloud the situation.

The goal is to understand variation, control it, and minimize its impact, while accepting that it’s part of everyday life and a part of every organization. Just as you can understand and characterize the relationship between the Xs and the Y, you can characterize variation and error in the ability to produce desired outcomes consistently over time.

This measure of control provides the foundation and framework for implementing real changes in the way you do what you do — changes that have the greatest probability of yielding positive results.

Short-term and long-term variation

Another important characteristic of variation is the way in which it changes over time. Recognizing the difference between short-term variation and long-term variation is important.

Here’s an example: “That mailman used to come at 11:30, give or take a few minutes, but lately he’s been coming later and later, and now it seems he’s here closer to 12:15, which is really annoying because we’re at lunch and he has to leave the packages out in the rain.”

In this example, the short-term variation of a few minutes was inconsequential and well within the workers’ tolerance level, but when the mean time of arrival experienced a long-term variation, drifting out by 45 minutes, it became a problem.
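The mailman's drift can be sketched numerically, too. In this illustrative simulation (seed, drift rate, and sample sizes are all arbitrary assumptions), the short-term scatter within each week stays a few minutes wide while the weekly mean slides steadily later — the long-term variation that caused the problem:

```python
import random
import statistics

random.seed(3)

# Hypothetical arrival times (minutes after 11:00) over 12 weeks.
# Week to week, the spread stays small, but the mean drifts later.
for week in range(12):
    drift = week * 4                        # mean slides ~4 min per week
    arrivals = [random.gauss(30 + drift, 3) for _ in range(5)]
    print(f"Week {week + 1:>2}: mean {statistics.mean(arrivals):5.1f} min, "
          f"spread {statistics.stdev(arrivals):4.1f} min")
```

By the last week the mean sits near 74 minutes after 11:00 — roughly 12:15 — even though no single week looked alarming on its own.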

About the book authors:

Craig Gygi is Executive VP of Operations at MasterControl, a leading company providing software and services for best practices in automating and connecting every stage of quality/regulatory compliance, through the entire product life cycle. He is an operations executive and internationally recognized Lean Six Sigma thought leader and practitioner. Bruce Williams is Vice President of Pegasystems, the world leader in business process management. He is a leading speaker and presenter on business and technology trends, and is co-author of Six Sigma Workbook for Dummies, Process Intelligence for Dummies, BPM Basics for Dummies and The Intelligent Guide to Enterprise BPM. Neil DeCarlo was President of DeCarlo Communications.
