Six Sigma–Designing Past the Five Sigma Wall


In the eighth chapter of the book Six Sigma:  The Breakthrough Management Strategy Revolutionizing the World’s Top Corporations, the authors Mikel Harry, Ph.D., and Richard Schroeder talk about moving past the Five Sigma wall.

There is only one way to do this–you can’t inspect your product past this wall, you have to design your product past the wall.   The Design for Six Sigma (DFSS) system is a set of Six Sigma principles and methods that allows a designer of products, processes, or services to create designs that are a) resource-efficient, b) capable of very high yields, and c) impervious to process variations.

Why is this important?   Because although design represents the smallest actual cost element in a product, it exerts the largest cost influence.   Simplifying the design by 30% creates a 21% overall cost savings, whereas the same 30% savings applied to labor or overhead results in only a 1.5% overall cost savings.
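The arithmetic behind that claim can be sketched in a few lines. The leverage factors below are hypothetical numbers I chose to reproduce the book’s 21% and 1.5% figures; the point is that a cost element’s share of total cost and its influence on total cost are very different things:

```python
# Hypothetical leverage factors: design influences ~70% of total product
# cost, labor/overhead only ~5%, even though design itself is a small
# cost element.  (Values chosen to match the figures quoted in the book.)
LEVERAGE = {"design": 0.70, "labor": 0.05}

def overall_savings(element: str, simplification: float) -> float:
    """Overall cost savings from simplifying one cost element."""
    return simplification * LEVERAGE[element]

print(overall_savings("design", 0.30))  # 0.21  -> 21% overall savings
print(overall_savings("labor", 0.30))   # 0.015 ->  1.5% overall savings
```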

This is why DFSS is so valuable for a company: it can eliminate parts or processes that either create defects or do not translate into critical-to-quality characteristics, and thus it can improve customer satisfaction.

Six Sigma–Process Drift


In the eighth chapter of their book Six Sigma:  The Breakthrough Management Strategy Revolutionizing the World’s Top Corporations, the authors Mikel Harry, Ph.D., and Richard Schroeder discuss “Measuring Performance on the Sigma Scale.”   In the previous chapter, the authors discussed the Breakthrough Strategy of implementing Six Sigma on the business, operational, and process level.

In this chapter they focus on the question “how does improving the Sigma level of a company’s processes improve that company’s performance?”   One of the ways this is done is by setting a control limit in order to control the variation within the specification limit.    However, even when one thinks one has controlled the variation to a certain Sigma level, it turns out that the long-term Sigma level of the process ends up, on average, 1.5 Sigma less than the short-term improvement one thought one had achieved.   Why is this?   Because of something called “process drift”.
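To see what that 1.5-sigma shift does to defect rates, here is a short calculation of my own (not from the book), using the one-sided convention of the standard Six Sigma tables: a process running at 6 sigma short-term delivers about 3.4 defects per million opportunities long-term.

```python
import math

def phi(x: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def long_term_dpmo(short_term_sigma: float, shift: float = 1.5) -> float:
    """Defects per million opportunities after the 1.5-sigma drift
    (one-sided convention used in the standard Six Sigma tables)."""
    return (1.0 - phi(short_term_sigma - shift)) * 1_000_000

for s in (3.0, 4.0, 5.0, 6.0):
    print(f"{s} sigma short-term -> {long_term_dpmo(s):,.1f} DPMO long-term")
# 3 sigma -> ~66,807 DPMO; 6 sigma -> ~3.4 DPMO
```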

The very illuminating example used by the authors is that of designing a garage to accommodate a vehicle’s width.   Let’s assume you have an architect who is going to design your garage to accommodate your vehicle, and that you only have one vehicle to park in that garage.    What the architect has to accommodate is not just the width of the vehicle, but the variation in how it will be driven into the garage.   Yes, the garage needs to be as wide as the vehicle, but what if the driver is coming into the garage slightly off center?   Just how much is the variation between individual drives, not to mention individual drivers in the household?   Also, there needs to be some accommodation not just for the width of the car, but for the width of the driver.

When the driver gets out of the vehicle, there has to be enough room for the driver to be able to fit between the vehicle and the garage wall as the driver makes his or her way towards the door leading into the house.   Depending on how much is stored in the sides of the garage, the garage will need to accommodate the width of the storage as well.

The “process drift” is analogous to the variation in an individual’s centering of the vehicle from day to day.   This can be affected by a) the amount of sleep the person has received the night before, b) the amount of alcohol the person has had before driving home, and c) the amount of light outside the garage at the time the driver approaches it.

In the world of manufacturing, the “process drift” can come from three primary sources of variation:

1)  Inadequate design margins

These need to account for natural variation.   In the case of the garage, the architect needs to account for the fact that the car will not be perfectly centered, and so a reasonable amount of variation will need to be designed in.

2)  Unstable parts and materials provided by vendors and suppliers

Vendors and suppliers will always be looking to switch parts and materials to something cheaper.   This will sometimes introduce variation that the manufacturer needs to be on the lookout for.

3)  Insufficient process capability

This means that the process is not capable of meeting the specification limits of the critical-to-quality characteristics that customers demand.   If an engineer does not take the width of the driver into account as well as the width of the automobile, the driver may complain that they can get the car into the garage, but are subsequently unable to get out of the car, which understandably would prove irksome to the driver.

These three sources of variation can occur individually or, more often than not, can overlap and happen all at once.  Six Sigma is used to tease out these three common sources of variation, and thus to help remove “process drift”.
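The garage analogy can even be turned into a back-of-the-envelope design-margin calculation. All of the numbers below are hypothetical; the point is that the design has to budget for the variation in centering (and the width of the driver), not just the nominal width of the car:

```python
# Toy numbers (all assumptions, for illustration only)
car_width_m = 1.9          # vehicle width
centering_sigma_m = 0.15   # day-to-day variation in how centered the car is parked
door_clearance_m = 0.65    # room to open a door and squeeze out, each side

# Budget the design for +/-3 sigma of centering variation, plus
# clearance on both sides for the driver to get out of the car.
garage_width_m = car_width_m + 2 * 3 * centering_sigma_m + 2 * door_clearance_m
print(f"required garage width: {garage_width_m:.2f} m")  # 4.10 m
```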

In the next post, the authors go into a little more detail regarding the relationship between a customer’s critical-to-quality characteristics (CTQs) and the specification limits a manufacturer sets in order to make sure that they are operationally satisfied in the manufacturing process.

Six Sigma–Specification vs. Control Limits


In the eighth chapter of their book Six Sigma:  The Breakthrough Management Strategy Revolutionizing the World’s Top Corporations, the authors Mikel Harry, Ph.D., and Richard Schroeder discuss “Measuring Performance on the Sigma Scale.”   In the previous chapter, the authors discussed the Breakthrough Strategy of implementing Six Sigma on the business, operational, and process level.

In this chapter they focus on the question “how does improving the Sigma level of a company’s processes improve that company’s performance?”   One of the ways this is done is by setting a control limit in order to control the variation within the specification limit.

One way to explain these two concepts is by using an analogy.   Let’s say you’re in a car that is traveling down the road, and you don’t want to leave the road because the shoulder has rocks or, even worse, a sharp drop off a cliff.   One of the ways you can do this is by focusing on staying within the guardrail.   Now, if you’re running a process that is churning out units on an assembly line, you don’t want to have defective units, which occur when the units are out of specification.   So you measure the variation from the center, and you call the point where the variation goes out of specification the specification limit.   It’s the equivalent of the guardrail in the analogy.   Stay within the specification limit, no defects.   Stray outside of the specification limit, and you’ve got a defect.

If you’re traveling down the road, rather than trying to avoid the guardrail, an even safer method of driving is to make sure that you stay in your lane (except when passing a car, for example).   The lane line on the right-hand side of your car is far enough away from the guardrail that, if you focus your effort on staying within your lane, you will almost assuredly never be in danger of hitting the guardrail.   Now, if you’re running the process mentioned in the paragraph above, and you want to stay within the specification limits, then you set up a control limit.    It’s the equivalent of the lane line in the analogy.   Stay within the control limit, and you end up staying within the specification limit, and there are no defects.
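To make the distinction concrete, here is a small sketch of the two kinds of limits. The target and limit values are hypothetical; the point is that the control limits (the lane lines) sit inside the specification limits (the guardrail):

```python
def classify(value: float, target: float, control_halfwidth: float,
             spec_halfwidth: float) -> str:
    """Control limits are the 'lane lines', set inside the
    specification limits (the 'guardrail')."""
    deviation = abs(value - target)
    if deviation > spec_halfwidth:
        return "defect (outside specification limits)"
    if deviation > control_halfwidth:
        return "warning (outside control limits, still in spec)"
    return "ok (inside control limits)"

# Hypothetical process: target 10.0 mm, spec +/-0.6 mm, control +/-0.3 mm
for v in (10.1, 10.45, 10.8):
    print(v, "->", classify(v, 10.0, 0.3, 0.6))
```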

This brings up another reason for the 1.5-sigma shift, a phenomenon in which the long-term performance of a process is 1.5 sigma less than its short-term performance.   The analogy for the car is your car’s steering capability.   If you point the car in a certain direction, and then let go of the steering wheel, are your wheels and chassis aligned in such a way that the car will still go in that direction?   Or will the car drift to the left or right?    Now, if you are driving along and keeping the wheel pointed in the same direction, the car will go straight.   That is the short-term steering capability of the car.   However, if your car’s wheel alignment is such that the car tends to steer to the left or right, it may require repeated inputs from you to keep the car aligned correctly.   In a similar way, the short-term capability of a process may be at 4 sigma, but if you measure it in the long run, there may be errors (the quality equivalent of faulty wheel alignment) that cause the long-term capability of the process to be at 2.5 sigma instead.

This is why it is vital to account for the “shift and drift” phenomenon mentioned above by dividing the total process variation into short-term and long-term components.   This is the only way to make sure the process is maintaining high quality, or in our driving analogy, to make sure the car stays on the road!
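One way to see how those two components are separated in practice is to compare within-subgroup variation (which excludes the drift) with overall variation (which includes it). This simulation is my own illustration with made-up numbers, not an example from the book:

```python
import random
import statistics

random.seed(42)

# Simulate a drifting process: within each subgroup the spread is the
# same, but the subgroup mean wanders over time (the "process drift").
subgroups = []
mean = 0.0
for _ in range(50):
    mean += random.gauss(0, 0.3)           # slow drift of the center
    subgroups.append([random.gauss(mean, 1.0) for _ in range(5)])

# Short-term sigma: average within-subgroup variation (drift excluded)
short_term = statistics.mean(statistics.stdev(g) for g in subgroups)
# Long-term sigma: overall variation of every reading (drift included)
all_readings = [x for g in subgroups for x in g]
long_term = statistics.stdev(all_readings)

print(f"short-term sigma ~ {short_term:.2f}")  # near the true within-subgroup spread of 1.0
print(f"long-term sigma  ~ {long_term:.2f}")   # larger, because of the drift
```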

Six Sigma–The Pareto Principle: The Trivial Many vs. the Vital Few


In the eighth chapter of their book Six Sigma:  The Breakthrough Management Strategy Revolutionizing the World’s Top Corporations, the authors Mikel Harry, Ph.D., and Richard Schroeder discuss “Measuring Performance on the Sigma Scale.”   In the previous chapter, the authors discussed the Breakthrough Strategy of implementing Six Sigma on the business, operational, and process level.

In this chapter they focus on the question “how does improving the Sigma level of a company’s processes improve that company’s performance?”   In the last post, I referred to the importance the authors place on treating persistent problems over sporadic ones.   The persistent problems are the ones that require the most effort, but their solution produces the most lasting benefit.

In this post, I describe how the authors stress the importance of solving the “vital few” vs. the “trivial many” problems by invoking the Pareto principle, developed by the nineteenth-century Italian economist Vilfredo Pareto.  Pareto’s law in the context of quality says that “80% of defects will be traceable to 20% of the different types of defects that can occur.”   The types of defects that account for 80% of the defects produced are called the “vital few”, and the types that account for the remaining 20% of the defects produced are called the “trivial many.”
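A Pareto analysis of a defect log is easy to sketch in code. The defect types and counts below are made up; the point is the sorting and the cumulative 80% cutoff that separates the vital few from the trivial many:

```python
from collections import Counter

# Hypothetical defect log: each entry is one observed defect, by type
defect_log = (["solder bridge"] * 57 + ["misaligned label"] * 22 +
              ["scratch"] * 9 + ["missing screw"] * 6 +
              ["wrong color"] * 4 + ["dent"] * 2)

counts = Counter(defect_log).most_common()   # sorted, biggest first
total = sum(n for _, n in counts)

cumulative = 0
for defect_type, n in counts:
    cumulative += n
    marker = "vital few" if cumulative / total <= 0.8 else "trivial many"
    print(f"{defect_type:16s} {n:3d}  cum {cumulative / total:5.1%}  ({marker})")
```

Here just two of the six defect types account for 79% of all defects, which is exactly the kind of concentration Pareto’s law predicts.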

Combining the discussion of the Pareto principle with the discussion in the last post regarding the distinction between sporadic vs. persistent problems, the authors rightly conclude that within each category the Pareto law holds.   That is, there is a Pareto distribution of sporadic problems as well as persistent problems.

So combining these two categories, the authors say that the best “bang-for-the-buck” that companies can get from their quality improvement activities is to go after the “vital few persistent problems”.  These cause the greatest headaches for companies, and going after them not only brings the greatest, well, headache relief, but the maximum results in terms of the bottom line.

The next post deals with the concept of “control limits”, which act like guardrails and lane markers on a highway to make sure the vehicle stays on the road.

Six Sigma–Sporadic vs. Persistent Problems


In the eighth chapter of their book Six Sigma:  The Breakthrough Management Strategy Revolutionizing the World’s Top Corporations, the authors Mikel Harry, Ph.D., and Richard Schroeder discuss “Measuring Performance on the Sigma Scale.”   In the previous chapter, the authors discussed the Breakthrough Strategy of implementing Six Sigma on the business, operational, and process level.

In this chapter they focus on the question “how does improving the Sigma level of a company’s processes improve that company’s performance?”   And when I say “improve that company’s performance”, I am referring to the company’s performance on the business, operational, and process level.

The first topic in the chapter is a discussion of sporadic vs. persistent problems.   It should be pretty clear what these are:  they refer to how often the problems occur.    However, in examining further those problems which occur “constantly” as opposed to “once in a while”, you also may find that you are examining the very nature of the problem.   A sporadic problem is one that the company focuses on first, for the obvious reason that it stands out against the background.  A problem that occurs constantly, however, may be invisible just because it happens all the time.

Six Sigma, due to its statistical toolkit, is especially adept at teasing out the persistent problems and solving them, thereby reducing the overall defect rate.  Here are some reasons given for persistent defects:

  • hidden design flaws
  • inadequate tolerances
  • inferior processes
  • poor vendor quality
  • lack of employee training
  • inadequate tool maintenance
  • employee carelessness
  • insufficient inspection feedback

The problems that management may have with regard to persistent defects are that a) they may not even be aware that hidden persistent problems exist, and b) if they are made aware of them, they may assume that correcting these problems is uneconomical.    This perception can be changed if management can be shown the costs of these persistent defects, and that these costs are greater than the costs of correcting them.

The real reason a company needs to get rid of persistent defects is that, as the authors put it, they are like a cancer.  A persistent defect does not only persist, it spreads throughout an otherwise healthy company.   It’s best to target this cancer with the laser-like precision of Six Sigma so that a company can truly do the best work that can be done.

Capital in the Twenty-First Century–Malthus, Ricardo, and Marx on Inequality


I just started reading the introduction to Thomas Piketty’s masterwork on the topic of economic inequality, Capital in the Twenty-First Century, and I wanted to jot down some notes from the introduction on how earlier economic theorists dealt with the subject of economic inequality.   In particular, I wanted to capture Thomas Piketty’s insights on how Malthus, Ricardo, and Marx thought about the subject.   In it, Mr. Piketty shows in essence what they got wrong in the hindsight of later economic research, but also what they got right.

1.  Malthus

Thomas Malthus published his Essay on the Principle of Population in 1798, and in it he posited that the primary threat to economic and political stability was overpopulation.   What did he get right?   Well, it is true that France was the most populous country in Europe and had achieved a steady increase in population throughout the eighteenth century, with a population in 1700 of 20 million (four times the population of England) and 30 million by 1780.   This contributed to a stagnation of agricultural wages and an increase in land rents in the decades before the French Revolution of 1789.   It was not the sole cause of the revolution, but it was definitely a contributing factor.

To combat this problem of overpopulation, he proposed the draconian measure of an immediate halt to all welfare assistance to the poor, and an institution of severe scrutiny of reproduction by the poor.

What did he get wrong?   Like others in England, such as Arthur Young, who wrote of the poverty of the French countryside based on his travels there in 1787-1788 on the eve of the revolution, Malthus based much of the extremity of his solution to inequality on the fear gripping the English, and indeed much of the European elite, in the 1790s that a similar revolution would take place at home, rather than on a sober economic analysis of the factors involved.

2.  Ricardo

David Ricardo published his Principles of Political Economy and Taxation in 1817. Like Malthus, he was interested in the issue of overpopulation and its effects on social equilibrium, but in particular through the mechanism of its effect on land prices and land rents.   His argument was that, as population increases, land, whose supply is limited relative to other goods, becomes increasingly scarce as more and more people compete for roughly the same amount of it.  So the price of land will rise continuously, as will the rents paid to landlords.   This means that the landlords will claim a growing share of national income, as the share available to the rest of the population decreases.   This growing economic power of the rentier vs. the renting segments of society could only be counterbalanced by a steadily increasing tax on land rents.

What did Ricardo get right?   Although his example was based on the price of land, his “scarcity principle” meant that certain prices might rise to very high levels, which might be enough to destabilize entire societies.   His insight of the scarcity principle could well be applied to the price of urban real estate in major world capitals or the price of oil.

What did he get wrong?   Ricardo did not anticipate the importance of technological progress or industrial growth, which changed the level of output achievable from any given piece of land.    In other words, although land is a fixed input, the productivity achieved by that land is not a fixed output.

In addition, high prices for a scarce commodity will, through the law of supply and demand, create a countervailing force that reduces the price of that commodity by lowering demand, often through the development of alternatives.  In the case of oil, for example, high oil prices have created pressure to develop alternative fuels, which are thereby becoming more competitive with oil and may someday replace it.

3.  Marx

I have to start the notes to this section by mentioning a Monty Python skit in which a new faculty member of the Philosophy Department, arriving at the University of Wallamalloo in Australia, is told he can mention Karl Marx in his lectures as long as he states clearly that he was wrong.

Mr. Piketty is here to tell us both what he got right and what he got wrong.

Marx, who published the first volume of Das Kapital in 1867, differed from Malthus and Ricardo in that he was writing during the full swing of the Industrial Revolution.   The Industrial Revolution ushered in a period from 1870-1914 in which inequality stabilized at extremely high levels, due to a long phase of wage stagnation and an increase in the share of national income devoted to capital (industrial profits, land and building rents), which occurred during a period of rapidly accelerating economic growth.  It was only the shocks rendered to the system by World War I that were powerful enough to reduce inequality.

Marx set himself the task of explaining why capital prospered while labor incomes stagnated.  What he did was to take the Ricardian principle of scarcity and apply it to a world where capital was primarily industrial (machinery, plants, etc.) rather than landed property.    However, as opposed to land, which is a fixed quantity, he saw that there was no limit to the amount of capital that could be accumulated.    His principal conclusion was the “principle of infinite accumulation”, which said that capital would accumulate in fewer and fewer hands with no natural limit to the process.    It’s like the “scarcity principle” on steroids.   This economic “singularity event” would lead to a political singularity event, namely, the communist revolution.

What did Marx get wrong?   He neglected the possibility of durable technological progress and steadily increasing productivity which can serve to some extent as a counterweight to the process of accumulation and concentration of private capital.   I use the word “can” advisedly, because the productivity gains due to technological innovation in the U.S. after the 1970s have been going almost totally to the capital side and not to the labor side of the economy, so increasing productivity does not necessarily lead to it being a counterweight to the process of accumulation.
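To get a feel for why growth acts as a counterweight to accumulation, here is a toy model (my own illustration, not Piketty’s or Marx’s, and with made-up numbers): capital accumulates out of a fixed savings rate s while output grows at rate g. With positive growth the capital/income ratio levels off; as growth approaches zero, it climbs without limit, which is the flavor of the “principle of infinite accumulation”:

```python
# Toy accumulation model: capital grows by savings out of output,
# output grows at rate g (productivity and population growth).
def capital_income_ratio(s: float, g: float, years: int) -> float:
    output, capital = 1.0, 4.0
    for _ in range(years):
        capital += s * output      # savings add to the capital stock
        output *= 1.0 + g          # growth of output
    return capital / output

print(capital_income_ratio(s=0.10, g=0.02, years=300))  # levels off near s/g = 5
print(capital_income_ratio(s=0.10, g=0.00, years=300))  # keeps climbing: no counterweight
```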

Of course, the other major flaw in Marx’s argument was what would happen after the “political singularity event” of the Revolution.   He did not put a lot of thought into the question of how a society in which private capital had been totally abolished would be organized politically and economically.

What did Marx get right?   If population and productivity growth are relatively low, then they cannot counterbalance the destabilizing effects of accumulated wealth.   Although accumulation ends in the real world at a finite level, not an infinite one as Marx had posited, it still ends at a level high enough to be destabilizing.   That insight is just as relevant to the levels of private wealth in the 1980s and 1990s in the Western world and Japan as it was in late-nineteenth-century Europe.

One thing Malthus, Ricardo, and Marx had in common was that their analyses were based on a relatively limited set of facts about recent economic conditions, combined with theoretical speculation.   It would take the twentieth century for economists to begin to apply the principles of social science research and the use of historical data to economic problems like the problem of economic inequality.

Six Sigma–The Breakthrough Strategy (Conclusion)


In the seventh chapter of their book Six Sigma:  The Breakthrough Management Strategy Revolutionizing the World’s Top Corporations, the authors Mikel Harry, Ph.D., and Richard Schroeder get to the meat of the book by explaining just what the Breakthrough Strategy entails.

It consists of three levels, the business, operations, and process level, and there are eight stages to each level:

  1. Recognize
  2. Define
  3. Measure
  4. Analyze
  5. Improve
  6. Control
  7. Standardize
  8. Integrate

The standard Six Sigma project consists of the five stages Define-Measure-Analyze-Improve-Control, with Recognize being the input from the higher level, and Standardize and Integrate being the outputs back to the higher level.    Stages 1, 7, and 8, therefore, tie the three levels together.

This allows the Six Sigma process to flow up and down across organizations, so that those at the business level communicate with those at the operation and process levels to make sure their projects are meaningful to the improvement of the overall business, and those at the operation and process levels communicate their successful results in the form of “lessons learned” which can be integrated into management’s thinking and thus increase the company’s intellectual capital.

Six Sigma–The 8 Stages of the Breakthrough Strategy at the Process Level


In the seventh chapter of their book Six Sigma:  The Breakthrough Management Strategy Revolutionizing the World’s Top Corporations, the authors Mikel Harry, Ph.D., and Richard Schroeder get to the “meat” of the book by explaining what the Six Sigma Breakthrough Strategy consists of.   Before explaining the stages of the strategy, the authors relate the three levels of the strategy:   the business level, the operations level, and the process level.   These three levels were covered in a previous post.   In this series of three posts, I go through the 8 Stages of the Breakthrough Strategy for each of the three levels.   The last post covered the Operational Level, and this post covers the Process Level.

1.  RECOGNIZE … functional problems that link to operational issues

As mentioned in the previous post, an operational issue must be recognized and defined … as a series of independent, but interrelated problems.    They may be independent of each other, but are interrelated because they are tied into the same business or support systems.

2.  DEFINE … the processes that contribute to the functional problems

Functional problems are one of three basic types:

a)  Product problems

b) Service-related problems

c) Transactional problems

No matter what type of problem, they are all created by one or more processes which themselves consist of a series of steps, events, or activities.   Rather than focus on the outcome (i.e., the problem), the focus needs to shift to the process.   How does the organization map its processes into a kind of “data flow” map?    In order to search for the source of the problem, you need a map!

3.  MEASURE … the capability of each process that offers operational leverage

The capability of each process needs to be measured, and those elements which are critical-to-quality are referred to as CTQ characteristics.   It is important to use good data to measure CTQs, because they form the crucial link between data, information (processed data in an understandable form), process metrics (processed information in a form that is easily transmitted and communicated), and finally the management decisions to improve the processes.
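One common way to express the measured capability of a CTQ characteristic is the Cpk index, which counts how many 3-sigma half-widths fit between the process center and the nearest specification limit. Here is a minimal sketch; the readings and specification limits are hypothetical:

```python
import statistics

def cpk(readings, lsl, usl):
    """Process capability index for one CTQ characteristic: distance from
    the process center to the nearest spec limit, in 3-sigma units."""
    mu = statistics.mean(readings)
    sigma = statistics.stdev(readings)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Hypothetical CTQ: a shaft diameter specified at 10.00 +/- 0.15 mm
readings = [10.02, 9.98, 10.05, 10.01, 9.97, 10.03, 10.00, 9.99]
print(f"Cpk = {cpk(readings, lsl=9.85, usl=10.15):.2f}")
```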

4.  ANALYZE … the data to assess prevalent patterns and trends.

The data are analyzed to determine the relationships between the variable factors in the process, which in turn determines the direction of improvements.   Performance metrics will show the theoretical limit of the capability of the process if all is going perfectly.    If the metrics show that there is potential for improvement, then the project can progress to the next phase…

5.  IMPROVE … the key products/services characteristics created by the key processes.

The job of the Black Belt is to

a) focus on CTQ characteristics inherent in a product or service

b) go about improving the capability of such CTQ characteristics by “screening” for variables that have the greatest impact on the processes

c) isolate the key variables, establish the limits of acceptable variation, and then control the factors that affect these limits.

Once these key variables are identified, the Black Belt can twist these “knobs” to establish new levels of performance for these CTQ characteristics.

6.  CONTROL … the process variables that exert undue influence.

Once the process has been improved, Black Belts need to control these key variables over time to make sure that the improvement stays in place, usually in the form of a statistical process control or SPC system.

7.  STANDARDIZE … the methods and processes that produce best-in-class performance.

Once Black Belts have improved their target CTQ characteristics, thereby achieving their Six Sigma project goals, they must promote and standardize those Six Sigma methods that produced the best results, as well as standardizing the results themselves.

8.  INTEGRATE … standard methods and processes into the design cycle

The company’s focus needs to shift to taking the standardized results achieved through Six Sigma projects and making equivalent changes in the company’s designs across the board.   In this way, improved components, processes, and practices that have proven to be best-in-class can be replicated throughout the company.    Design engineers need to be rewarded not just for how the product performs once manufactured, but for how easy it is to manufacture that product.

I remember at Mitsubishi Motors, there would be design changes made to accommodate the installation of a part which was engineered well, but placed in a location that was hard for assembly line workers to get at.    This manufacturing difficulty led to defects which the design engineers had not envisioned.   I heard that the design engineers were actually made to go to the assembly line and see for themselves how difficult it was to install the part.   Once they themselves witnessed the problem, they understood and went back to the proverbial drawing board to redesign not just the part, but its location within the car so that it would be more easily accessible.   This is an example of making sure that the viability of the entire product cycle is considered during the initial design phase.

Another example comes from the design of a bumper which was to be a single piece of plastic, which made manufacturing easier.  However, the analysis from the bumper crash tests showed that damage to a bumper at 5 mph would require replacement of the entire one-piece assembly as opposed to just portions of the bumper as was the case with the previous design.   The analysis was made that the repair costs would be higher in the case of the one-piece design, and that this would therefore have an adverse effect on the insurance rates for customers who bought the vehicles with that design.   It was thus changed, because the design and manufacturing teams needed to be aware of the effect of the design on the actual usage by the customer.

So the design cycle needs to be where the focus is on preventing future problems, not repairing past problems.

The final post on this chapter is a summary of the entire issue of the Breakthrough Strategy.

Six Sigma–The 8 Stages of the Breakthrough Strategy at the Operational Level


In the seventh chapter of their book Six Sigma:  The Breakthrough Management Strategy Revolutionizing the World’s Top Corporations, the authors Mikel Harry, Ph.D., and Richard Schroeder get to the “meat” of the book by explaining what the Six Sigma Breakthrough Strategy consists of.   Before explaining the stages of the strategy, the authors relate the three levels of the strategy:   the business level, the operations level, and the process level.   These three levels were covered in a previous post.   In this series of three posts, I go through the 8 Stages of the Breakthrough Strategy for each of the three levels.   The last post covered the Business Level, and this post covers the Operational Level.

1.  RECOGNIZE … operational issues that link to key business systems

An issue that comes up at the operational level needs to be broken down into its components, each of which can be dealt with separately.    However, it is important to see how these components fit together, because that will give a clue as to how they are all tied into key business systems.   You can have quality problem-solving tools that try to reduce defects after they have occurred, all measured through a quality information system or QIS at the end of the manufacturing line.   But a better approach is to install an in-process quality measurement system that connects to the end of the line, so you are able to forecast end-of-line defects before they occur.   This is the first step in preventing defects, rather than correcting them.
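A minimal sketch of that forecasting idea: fit the relationship between an in-process measurement and end-of-line defect counts, then use it to predict defects for the batch currently on the line. All of the data and the metric names here are hypothetical:

```python
# Hypothetical history: an in-process metric paired with the defects
# later found at the end of the line for the same batch.
in_process = [0.8, 1.1, 1.4, 1.9, 2.3, 2.8]   # e.g. measured process deviation
eol_defects = [2, 3, 5, 7, 9, 12]             # defects found at end of line

# Ordinary least-squares fit of defects vs. the in-process metric
n = len(in_process)
mx = sum(in_process) / n
my = sum(eol_defects) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(in_process, eol_defects))
         / sum((x - mx) ** 2 for x in in_process))
intercept = my - slope * mx

# Forecast end-of-line defects for the batch currently on the line
current = 2.0
print(f"forecast: {slope * current + intercept:.1f} defects")
```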

2.  DEFINE … Six Sigma projects to resolve operational issues

How does a company prioritize the operational issues to be resolved?   With these criteria:

a)  the extent of cost savings to be realized

b) the degree to which an operational issue is connected to larger critical-to-quality issues

c) the degree to which an operational issue is connected to the efficient and effective operation of a business support system, and

d) the expected length of time necessary to resolve a specific operational issue

3.  MEASURE … performance of the Six Sigma projects

Once the Six Sigma projects are chosen based on the criteria listed above in paragraph 2 (under DEFINE), the company’s progress metrics must be established, so that once the Six Sigma projects are initiated, data regarding their performance can be gathered.

4.  ANALYZE … project performance in relation to operational goals

Of course, comparing the actual savings of a Black Belt Six Sigma project with its projected savings is important, but the performance of the projects has to be measured against the larger operational goals of the business.

5.  IMPROVE … Six Sigma project management system

Once the operational issues recognized above are defined into Six Sigma projects, and these projects are then measured and analyzed, this tracking system needs to be improved and refined.   Maybe a business needs to track new variables (net savings, project scope, project completion time, etc.).   Maybe it has to track a different set of data.

6.  CONTROL … inputs to project management system

Once several iterations of improvement have gone forward, there should be a regular system audit to sustain the improvement.

7.  STANDARDIZE … best in-class management system practices

Once a Six Sigma project management system has achieved best-in-class status, the company should standardize it and replicate it throughout all relevant sectors within the business.   This is done by reward and recognition systems that give everyone incentives to give up their “same old” ways of doing things and adopt the practices that have been proven to work.

8.  INTEGRATE … standardized Six Sigma practices into policies and procedures

Once the best-in-class management system practices have been standardized, they need to be integrated into the operations of the business by creating policies and procedures that become the new fabric of operations for the business.

 

Six Sigma–The 8 Stages of The Breakthrough Strategy at the Business Level


In the seventh chapter of their book Six Sigma:  The Breakthrough Management Strategy Revolutionizing the World’s Top Corporations, the authors Mikel Harry, Ph.D., and Richard Schroeder get to the “meat” of the book by explaining what the Six Sigma Breakthrough Strategy consists of.   Before explaining the stages of the strategy, the authors describe the three levels of the strategy:   the business level, the operations level, and the process level.   This is covered in the previous post.

In this post, we cover the 8 stages of the implementation of the Six Sigma Breakthrough Strategy at the Business Level.  As mentioned in the previous post, the strategy is implemented at a business level by a Deployment Champion, and the application of the strategy may take place over a 3-5 year period if it is to be done in a consistent and focused manner.

1)  RECOGNIZE … the true states of your business

The “states” of a business are the global business conditions used to guide and manage a business.   These could be “levels of customer satisfaction”, for example, which impact the economics of a business.   Knowing this can help a company leverage its efforts and resources toward improving customer satisfaction, which will in turn impact the bottom line of the business in a positive way.

2)  DEFINE … what plans must be in place to realize improvement of each state

Let’s assume that “levels of customer satisfaction” is one of the “states of business” that is being considered for improvement.   What parts of the company’s organization are correlated to this state?   Is it the manufacturing system, the engineering (design) system, the delivery system or the service system?    Are there characteristics of these systems which are critical to quality (in the sense of customer satisfaction)?

3)  MEASURE … the business systems that support the plans.

The first question to ask here is “what” to measure, and the second is “how” to measure it.   The third question to ask is “is there executive (management) commitment to go after the right measurements?”

4)  ANALYZE … the gaps in system performance benchmarks.

Let us say that a company analyzes its own performance in an area and finds that it is operating at a 3.4 sigma level.   Let us also say that the company has analyzed a competitor which operates in the same or similar area at a 4.6 sigma level.   What is that other business doing that makes its performance better?
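The size of that gap can be made concrete by converting sigma levels to defects per million opportunities (DPMO).   Using the conventional assumption that long-term performance drifts 1.5 sigma from the short-term level (the “process drift” discussed in the eighth chapter), a minimal sketch:

```python
import math

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities for a given short-term sigma
    level, applying the conventional 1.5-sigma long-term shift."""
    z = sigma_level - shift
    # One-sided tail probability beyond z under a standard normal curve.
    tail = 0.5 * math.erfc(z / math.sqrt(2))
    return tail * 1_000_000

# A 3.4-sigma process yields roughly 28,700 DPMO, while a 4.6-sigma
# process yields roughly 968 DPMO, about a thirty-fold reduction in
# defects.  At 6.0 sigma the same formula gives the famous 3.4 DPMO.
```

So the competitor in this example is not marginally better; it is producing defects at only about one-thirtieth the rate, which is the kind of gap benchmarking is meant to expose.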

5)  IMPROVE … system elements to achieve performance goals.

Once the elements that comprise the system needing improvement have been identified, a company then needs to determine which of them should be improved first, most likely those that will most affect quality.   This ensures that the company’s resources spent on Six Sigma projects to improve those elements get the most “bang for the buck.”

6) CONTROL … system-level characteristics that are critical to value

Once an improvement is identified and proved with Six Sigma techniques, it is important to monitor and control this solution over a period of time.   This makes sure that the solution is permanent and that the system does not fall back into previous patterns, which would cause the gains made in the previous stage to erode over time.

7) STANDARDIZE … the systems that prove to be best-in-class

Let’s say that a system element is improved and then controlled so that the improvement is permanent.  Once the element is shown to be “best-in-class”, it can then be replicated, where applicable, in other business units to amplify the improvement throughout the entire company.

8) INTEGRATE … best-in-class systems into the strategic planning framework.

Once the best-in-class systems have been adopted on a business-wide basis, then the strategic planning framework needs to take them into account.  This business-wide improvement is then the new “state of business.”

Essentially, the process has come full circle and it is time for another iteration of the 8 stages of the breakthrough strategy, but with the results of the previous cycle being the basis for the next round of breakthroughs.

For example, if previous efforts have been made towards reducing manufacturing defects, the company may then alter the strategic planning framework so that the next phase of improvement tries to focus on reducing those design defects that produced the manufacturing defects in the first place.

This is how the business continuously improves and at the same time coordinates that improvement within all levels of the organization.    From this bird’s-eye view of the business, we next go to the operations level of the Breakthrough Strategy, which is the subject of the next post.