Six Sigma–Sporadic vs. Persistent Problems


In the eighth chapter of their book Six Sigma: The Breakthrough Management Strategy Revolutionizing the World's Top Corporations, the authors Mikel Harry, Ph.D., and Richard Schroeder discuss "Measuring Performance on the Sigma Scale."   In the previous chapter, the authors discussed the Breakthrough Strategy of implementing Six Sigma at the business, operational, and process levels.

In this chapter they focus on the question "how does improving the Sigma level of a company's processes improve that company's performance?"   And by "improve that company's performance," I mean the company's performance at the business, operational, and process levels.

The first topic in the chapter is a discussion of sporadic vs. persistent problems.   It should be fairly clear what these are: they refer to how often the problems occur.   However, in examining further those problems which occur constantly as opposed to once in a while, you may also find that you are examining the very nature of the problem.   A sporadic problem is one that the company focuses on first, for the obvious reason that it stands out against the background.   A problem that occurs constantly, however, may be invisible precisely because it happens all the time.

Six Sigma, due to its statistical toolkit, is especially adept at teasing out persistent problems and solving them, thereby reducing the overall defect rate.   Here are some reasons the authors give for persistent defects:

  • hidden design flaws
  • inadequate tolerances
  • inferior processes
  • poor vendor quality
  • lack of employee training
  • inadequate tool maintenance
  • employee carelessness
  • insufficient inspection feedback

The problems that management may have with regard to persistent defects are that a) they may not even be aware that hidden persistent problems exist, and b) if they are made aware of them, they may assume that correcting these problems is uneconomical.   This perception can be changed if management can be shown the costs of these persistent defects, and that these costs are greater than the cost of correcting them.

The real reason a company needs to get rid of persistent defects is that, as the authors put it, they are like a cancer: a persistent defect does not only persist, it spreads throughout an otherwise healthy company.   It's best to target this cancer with the laser-like precision of Six Sigma so that a company can truly do the best work it is capable of.

Capital in the Twenty-First Century–Malthus, Ricardo, and Marx on Inequality


I just started reading the introduction to Thomas Piketty's masterwork on economic inequality, Capital in the Twenty-First Century, and I wanted to jot down some notes from the introduction on how earlier economic theorists dealt with the subject.   In particular, I wanted to capture Thomas Piketty's insights on how Malthus, Ricardo, and Marx thought about it.   In essence, Mr. Piketty shows what they got wrong in the hindsight of later economic research, but also what they got right.

1.  Malthus

Thomas Malthus published his Essay on the Principle of Population in 1798, and in it he posited that the primary threat to economic and political stability was overpopulation.   What did he get right?   Well, it is true that France was the most populous country in Europe and had achieved a steady increase in population throughout the eighteenth century, growing from 20 million in 1700 (four times the population of England) to 30 million by 1780.   This contributed to a stagnation of agricultural wages and an increase in land rents in the decades before the French Revolution of 1789.   It was not the sole cause of the revolution, but it was definitely a contributing factor.

To combat this problem of overpopulation, he proposed draconian measures: an immediate halt to all welfare assistance to the poor, and the institution of severe scrutiny of reproduction by the poor.

What did he get wrong?   Well, much of the extremity of his solution to inequality was based not on a sober economic analysis of the factors involved, but on the fear gripping the English, and indeed much of the European, elite in the 1790s that a similar revolution might take place at home.   Others in England shared this fear, such as Arthur Young, who wrote of the poverty of the French countryside based on his travels there in 1787-1788, on the eve of the revolution.

2.  Ricardo

David Ricardo published his Principles of Political Economy and Taxation in 1817. Like Malthus, he was interested in the issue of overpopulation and its effects on social equilibrium, in particular through the mechanism of its effect on land prices and land rents.   His argument was that, as population increased, land, whose supply is limited relative to other goods, would become increasingly scarce as more and more people competed for roughly the same amount of it.   So the price of land would rise continuously, as would the rents paid to landlords.   This means that landlords would claim a growing share of national income, while the share available to the rest of the population decreased.   This growing economic power of the rentier vs. the renting segments of society could only be counterbalanced by a steadily increasing tax on land rents.

What did Ricardo get right?   Although his example was based on the price of land, his “scarcity principle” meant that certain prices might rise to very high levels, which might be enough to destabilize entire societies.   His insight of the scarcity principle could well be applied to the price of urban real estate in major world capitals or the price of oil.

What did he get wrong?   Ricardo did not anticipate the importance of technological progress or industrial growth, which changed the level of output achievable from any given piece of land.    In other words, although land is a fixed input, the productivity achieved by that land is not a fixed output.

In addition, the high prices for a certain scarce commodity, through the law of supply and demand, will create a countervailing force that will reduce the prices of that commodity through the lowering of demand, often through the development of alternatives.  In the case of oil, for example, the high prices of oil have created a pressure to develop alternative fuels, which are thereby becoming more competitive with oil and may someday replace it.

3.  Marx

I have to start the notes to this section by mentioning a Monty Python sketch in which a new faculty member of the Philosophy Department, arriving at the University of Wallamalloo in Australia, is told he can mention Karl Marx in his lectures as long as he states clearly that he was wrong.

Mr. Piketty is here to tell us both what Marx got right and what he got wrong.

Marx, who wrote the first volume of Das Kapital in 1867, differed from Malthus and Ricardo in that he was writing during the full swing of the Industrial Revolution.   The Industrial Revolution ushered in a period, from 1870 to 1914, in which inequality stabilized at extremely high levels due to a long phase of wage stagnation and an increase in the share of national income devoted to capital (industrial profits, land and building rents), all occurring during a period of rapidly accelerating economic growth.   It was only the shocks to the system rendered by World War I which were powerful enough to reduce inequality.

Marx set himself the task of explaining why capital prospered while labor incomes stagnated.   What he did was to take the Ricardian principle of scarcity and apply it to a world where capital was primarily industrial (machinery, plants, etc.) rather than landed property.   However, as opposed to land, which was a fixed quantity, he saw that there was no limit to the amount of capital that could be accumulated.   His principal conclusion was the "principle of infinite accumulation," which said that capital would accumulate in fewer and fewer hands with no natural limit to the process.   It's like the "scarcity principle" on steroids.   This economic "singularity event" would lead to a political singularity event, namely, the communist revolution.

What did Marx get wrong?   He neglected the possibility of durable technological progress and steadily increasing productivity, which can serve to some extent as a counterweight to the process of accumulation and concentration of private capital.   I use the word "can" advisedly, because the productivity gains due to technological innovation in the U.S. after the 1970s have gone almost totally to the capital side and not to the labor side of the economy, so increasing productivity does not necessarily act as a counterweight to the process of accumulation.

Of course, the other major flaw in Marx's argument was what would happen after the "political singularity event" of the Revolution.   He did not put a lot of thought into the question of how a society in which private capital had been totally abolished would be organized politically and economically.

What did Marx get right?   If population and productivity growth are relatively low, then they cannot counterbalance the destabilizing effects of accumulated wealth.   Although accumulation ends in the real world at a finite level, not an infinite one as Marx had posited, it still ends at a level high enough to be destabilizing.   That insight is just as relevant to the levels of private wealth in the 1980s and 1990s in the Western world and Japan as it was to late nineteenth-century Europe.

One thing Malthus, Ricardo, and Marx had in common was that their analyses were based on a relatively limited set of facts about recent economic conditions, combined with theoretical speculation.   It would take until the twentieth century for economists to begin to apply the principles of social science research and the use of historical data to economic problems like that of inequality.

Six Sigma–The Breakthrough Strategy (Conclusion)


In the seventh chapter of their book Six Sigma: The Breakthrough Management Strategy Revolutionizing the World's Top Corporations, the authors Mikel Harry, Ph.D., and Richard Schroeder get to the meat of the book by explaining just what the Breakthrough Strategy entails.

It consists of three levels (the business, operations, and process levels), and there are eight stages at each level:

  1. Recognize
  2. Define
  3. Measure
  4. Analyze
  5. Improve
  6. Control
  7. Standardize
  8. Integrate

The standard Six Sigma project consists of the five stages Define-Measure-Analyze-Improve-Control, with Recognize being the input from the higher level, and Standardize and Integrate being the outputs back to the higher level.   Stages 1, 7, and 8, therefore, tie the three levels together.

This allows the Six Sigma process to flow up and down across the organization: those at the business level communicate with those at the operations and process levels to make sure their projects are meaningful to the improvement of the overall business, while those at the operations and process levels communicate their successful results in the form of "lessons learned," which can be integrated into management's thinking and thus increase the company's intellectual capital.

Six Sigma–The 8 Stages of the Breakthrough Strategy at the Process Level


In the seventh chapter of their book Six Sigma: The Breakthrough Management Strategy Revolutionizing the World's Top Corporations, the authors Mikel Harry, Ph.D., and Richard Schroeder get to the "meat" of the book by explaining what the Six Sigma Breakthrough Strategy consists of.   Before explaining the stages of the strategy, the authors relate the three levels of the strategy: the business level, the operations level, and the process level.   These three levels were covered in a previous post.   In this series of three posts, I go through the 8 Stages of the Breakthrough Strategy for each of the three levels.   The last post covered the Operational Level, and this post covers the Process Level.

1.  RECOGNIZE … functional problems that link to operational issues

As mentioned in the previous post, an operational issue must be recognized and defined … as a series of independent, but interrelated, problems.   They may be independent from each other, but they are interrelated because they are tied into the same business or support systems.

2.  DEFINE … the processes that contribute to the functional problems

Functional problems are one of three basic types:

a)  Product problems

b) Service-related problems

c) Transactional problems

No matter the type of problem, all are created by one or more processes, which themselves consist of a series of steps, events, or activities.   Rather than focus on the outcome (i.e., the problem), the focus needs to shift to the process.   How does the organization map its processes into a kind of "data flow" map?   In order to search for the source of the problem, you need a map!

3.  MEASURE … the capability of each process that offers operational leverage

The capability of each process needs to be measured, and those elements which are critical to quality are referred to as CTQ characteristics.   It is important to use good data to measure CTQs, because they form the crucial link between data, information (processed data in an understandable form), process metrics (processed information in a form that is easily transmitted and communicated), and finally the management decisions to improve the processes.

4.  ANALYZE … the data to assess prevalent patterns and trends.

The data are analyzed to determine the relationships between the variable factors in the process, which in turn determines the direction of improvements.   Performance metrics will show the theoretical limit of the capability of the process if all is going perfectly.    If the metrics show that there is potential for improvement, then the project can progress to the next phase…

5.  IMPROVE … the key products/services characteristics created by the key processes.

The job of the Black Belt is to

a) focus on CTQ characteristics inherent in a product or service

b) go about improving the capability of such CTQ characteristics by “screening” for variables that have the greatest impact on the processes

c) isolate the key variables, establish the limits of acceptable variation, and then control the factors that affect these limits.

Once these key variables are identified, the Black Belt can twist these “knobs” to establish new levels of performance for these CTQ characteristics.

6.  CONTROL … the process variables that exert undue influence.

Once the process has been improved, Black Belts need to control these key variables over time to make sure that the improvement stays in place, usually in the form of a statistical process control or SPC system.
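The SPC system mentioned above boils down to computing control limits from baseline data and flagging points that drift outside them. Here is a minimal sketch of that idea, assuming individual measurements and the usual mean plus-or-minus three-standard-deviation limits; the function names and sample data are my own, not from the book:

```python
# Minimal statistical process control (SPC) sketch: compute 3-sigma
# control limits from baseline data, then flag out-of-control points.
from statistics import mean, stdev

def control_limits(baseline):
    """Return (lower, center, upper) control limits for individual values."""
    center = mean(baseline)
    sigma = stdev(baseline)
    return center - 3 * sigma, center, center + 3 * sigma

def out_of_control(values, limits):
    """Return indices of points falling outside the control limits."""
    lower, _, upper = limits
    return [i for i, v in enumerate(values) if v < lower or v > upper]

baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9]
limits = control_limits(baseline)
print(out_of_control([10.0, 10.1, 12.5, 9.9], limits))  # [2]
```

In a real SPC system the Black Belt would also watch for runs and trends within the limits, but the core idea is the same: the improved process variable is monitored over time against statistically derived limits.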

7.  STANDARDIZE … the methods and processes that produce best-in-class performance.

Once Black Belts have improved their target CTQ characteristics, thereby achieving their Six Sigma project goals, they must promote and standardize those Six Sigma methods that produced the best results, as well as standardizing the results themselves.

8.  INTEGRATE … standard methods and processes into the design cycle

The focus of the company then needs to shift to taking the standardized results achieved through Six Sigma projects and making equivalent changes in the company's designs across the board.   In this way, improved components, processes, and practices that have proven to be best-in-class can be replicated throughout the company.   Design engineers need to be rewarded not just for how the product performs once manufactured, but for how easy it is to manufacture that product.

I remember at Mitsubishi Motors, there would be design changes made to accommodate the installation of a part which was engineered well, but placed in a location that was hard for assembly line workers to get at.    This manufacturing difficulty led to defects which the design engineers had not envisioned.   I heard that the design engineers were actually made to go to the assembly line and see for themselves how difficult it was to install the part.   Once they themselves witnessed the problem, they understood and went back to the proverbial drawing board to redesign not just the part, but its location within the car so that it would be more easily accessible.   This is an example of making sure that the viability of the entire product cycle is considered during the initial design phase.

Another example comes from the design of a bumper which was to be a single piece of plastic, which made manufacturing easier.  However, the analysis from the bumper crash tests showed that damage to a bumper at 5 mph would require replacement of the entire one-piece assembly as opposed to just portions of the bumper as was the case with the previous design.   The analysis was made that the repair costs would be higher in the case of the one-piece design, and that this would therefore have an adverse effect on the insurance rates for customers who bought the vehicles with that design.   It was thus changed, because the design and manufacturing teams needed to be aware of the effect of the design on the actual usage by the customer.

So the focus of the design cycle needs to be on preventing future problems, not repairing past ones.

The final post on this chapter is a summary of the entire issue of the Breakthrough Strategy.

Six Sigma–The 8 Stages of the Breakthrough Strategy at the Operational Level


In the seventh chapter of their book Six Sigma: The Breakthrough Management Strategy Revolutionizing the World's Top Corporations, the authors Mikel Harry, Ph.D., and Richard Schroeder get to the "meat" of the book by explaining what the Six Sigma Breakthrough Strategy consists of.   Before explaining the stages of the strategy, the authors relate the three levels of the strategy: the business level, the operations level, and the process level.   These three levels were covered in a previous post.   In this series of three posts, I go through the 8 Stages of the Breakthrough Strategy for each of the three levels.   The last post covered the Business Level, and this post covers the Operational Level.

1.  RECOGNIZE … operational issues that link to key business systems

An issue that comes up at the operational level needs to be broken down into its components, each of which can be dealt with separately.   However, it is important to see how these components fit together, because that will give a clue as to how they are all tied into key business systems.   You can have quality problem-solving tools that try to reduce defects after they have occurred, all measured through a quality information system (QIS) at the end of the manufacturing line.   But a better approach is to install an in-process quality measurement system that connects to the end of the line, so you are able to forecast end-of-line defects before they occur.   This is the first step toward preventing defects, rather than correcting them.

2.  DEFINE … Six Sigma projects to resolve operational issues

How does a company prioritize the operational issues to be resolved?   With these criteria:

a)  the extent of cost savings to be realized

b) the degree to which an operational issue is connected to larger critical-to-quality issues

c) the degree to which an operational issue is connected to the efficient and effective operation of a business support system, and

d) the expected length of time necessary to resolve a specific operational issue

3.  MEASURE … performance of the Six Sigma projects

Once the Six Sigma projects are chosen based on the criteria listed above in paragraph 2 (under DEFINE), the company’s progress metrics must be established, so that once the Six Sigma projects are initiated, data regarding their performance can be gathered.

4.  ANALYZE … project performance in relation to operational goals

Of course, comparing the actual savings of a Black Belt Six Sigma project with its projected savings is important, but the performance of the projects also has to be measured against the larger operational goals of the business.

5.  IMPROVE … Six Sigma project management system

Once the operational issues recognized above are defined into Six Sigma projects, and these projects are then measured and analyzed, this tracking system needs to be improved and refined.   Maybe a business needs to track new variables (net savings, project scope, project completion time, etc.).   Maybe it has to track a different set of data.

6.  CONTROL … inputs to project management system

Once several iterations of improvement have gone forward, there should be a regular system audit to sustain the improvement.

7.  STANDARDIZE … best in-class management system practices

Once a Six Sigma project management system has achieved best-in-class status, the company should standardize it and replicate it throughout all relevant sectors of the business.   This is done through reward and recognition systems that give everyone incentives to give up their "same old" ways of doing things and adopt the practices that have been proven to work.

8.  INTEGRATE … standardized Six Sigma practices into policies and procedures

Once the best-in-class management system practices have been standardized, they need to be integrated into the operations of the business by creating policies and procedures that become the new fabric of operations for the business.

 

Six Sigma–The 8 Stages of The Breakthrough Strategy at the Business Level


In the seventh chapter of their book Six Sigma: The Breakthrough Management Strategy Revolutionizing the World's Top Corporations, the authors Mikel Harry, Ph.D., and Richard Schroeder get to the "meat" of the book by explaining what the Six Sigma Breakthrough Strategy consists of.   Before explaining the stages of the strategy, the authors relate the three levels of the strategy: the business level, the operations level, and the process level.   These levels are covered in the previous post.

In this post, we cover the 8 stages of the implementation of the Six Sigma Breakthrough Strategy at the Business Level.  As mentioned in the previous post, the strategy is implemented at a business level by a Deployment Champion, and the application of the strategy may take place over a 3-5 year period if it is to be done in a consistent and focused manner.

1)  RECOGNIZE … the true states of your business

The “states” of a business are the global business conditions used to guide and manage a business.   These could be “levels of customer satisfaction”, for example, which impact the economics of a business.   Knowing this can help a company leverage its efforts and resources to improving customer satisfaction, which will in turn impact the bottom line of the business in a positive way.

2)  DEFINE … what plans must be in place to realize improvement of each state

Let’s assume that “levels of customer satisfaction” is one of the “states of business” that is being considered for improvement.   What parts of the company’s organization are correlated to this state?   Is it the manufacturing system, the engineering (design) system, the delivery system or the service system?    Are there characteristics of these systems which are critical to quality (in the sense of customer satisfaction)?

3)  MEASURE … the business systems that support the plans.

The first question here is "what" to measure; the second is "how" to measure it; and the third is "is there executive (management) commitment to go after the right measurements?"

4)  ANALYZE … the gaps in system performance benchmarks.

Let us say that a company analyzes its own performance in an area and finds that it is operating at a 3.4 sigma level.   Let’s  say that the company has analyzed a competitor which operates in the same or similar area at a 4.6 sigma level.  What is that other business doing that makes its performance better?
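To make a benchmark gap like this concrete, sigma levels are conventionally translated into defects per million opportunities (DPMO) using the normal distribution with the customary 1.5-sigma long-term shift. The chapter doesn't spell out this conversion, so treat the following as my own illustrative sketch of the standard convention:

```python
# Convert a sigma level to defects per million opportunities (DPMO),
# using the conventional 1.5-sigma long-term shift (so 6 sigma maps
# to the famous 3.4 DPMO).
from math import erfc, sqrt

def dpmo(sigma_level, shift=1.5):
    # One-sided tail area beyond (sigma_level - shift) standard deviations.
    tail = 0.5 * erfc((sigma_level - shift) / sqrt(2))
    return tail * 1_000_000

print(round(dpmo(3.4)))  # on the order of 28,000-29,000 defects per million
print(round(dpmo(4.6)))  # under 1,000 defects per million
```

Seen this way, the gap between a 3.4-sigma and a 4.6-sigma operation is not 1.2 "points" but roughly a thirtyfold difference in defect rates, which is what makes the benchmark worth closing.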

5)  IMPROVE … system elements to achieve performance goals.

Once the elements that comprise the system to be improved have been identified, a company then needs to determine which elements should be improved first: most likely, those that most affect quality.   This ensures that the company's resources spent on Six Sigma projects to improve those elements get the most "bang for the buck."

6) CONTROL … system-level characteristics that are critical to value

Once an improvement is identified and proved with Six Sigma techniques, then it is important to monitor and control this solution over a period of time to make sure that it is a permanent solution and that the system does not fall back into previous patterns, which could cause the gains made in the previous stage to erode over time.

7) STANDARDIZE … the systems that prove to be best-in-class

Let’s say that a system element is improved and then controlled so that the improvement is permanent.  Once the element is shown to be “best-in-class”, it can then be replicated, where applicable, in other business units to amplify the improvement throughout the entire company.

8) INTEGRATE … best-in-class systems into the strategic planning framework.

Once the best-in-class systems have been adopted on a business-wide basis, then the strategic planning framework needs to take them into account.  This business-wide improvement is then the new “state of business.”

Essentially, the process has come full circle and it is time for another iteration of the 8 stages of the breakthrough strategy, but with the results of the previous cycle being the basis for the next round of breakthroughs.

For example, if previous efforts have been made towards reducing manufacturing defects, the company may then alter the strategic planning framework so that the next phase of improvement tries to focus on reducing those design defects that produced the manufacturing defects in the first place.

This is how the business continuously improves and at the same time coordinates that improvement within all levels of the organization.    From this bird’s-eye view of the business, we next go to the operations level of the Breakthrough Strategy, which is the subject of the next post.

Six Sigma–The 3 Levels of the Breakthrough Strategy


In the first six chapters of their book Six Sigma:  The Breakthrough Management Strategy Revolutionizing the World’s Top Corporations, the authors Mikel Harry, Ph.D., and Richard Schroeder explain the fundamentals of Six Sigma.   In chapter 7, they reveal the “meat” of the book, which is what the Six Sigma breakthrough strategy actually entails.

In this post and the next, I introduce the levels and the stages of the breakthrough strategy implementing Six Sigma.  However, before explaining the eight stages of the strategy, in this post I would like to explain the three levels of implementing the strategy according to the authors of this book.

Here are the three levels: how they are applied within an organization, what they are used for, who ends up implementing them, and how long the implementation period typically takes.   These three levels need to be coordinated within an organization so that they mesh together like gears.

1)  Business Level–applying the Six Sigma Breakthrough Strategy in a methodical and disciplined way throughout the corporation.   Used to improve market share, increase profitability, and ensure long-term viability.   Implementation, led by a Deployment Champion, can take 3-5 years.

2)  Operations Level–applying the Six Sigma Breakthrough Strategy through projects which are correctly defined and executed, and incorporating the results of these projects into running the day-to-day business of the corporation (the focus is more tactical, as opposed to the strategic focus at the Business Level).   Used to improve yield, eliminate "hidden factories" (the rework and/or scrap of units found to have defects), and reduce labor and material costs.   Implementation, led by a Project Champion, can take 12-18 months.

3)  Process Level–applying the Six Sigma Breakthrough Strategy to the individual processes that make up the day-to-day operations of the corporation.   Used to reduce defects and variation and to improve process capability, in order to improve profitability and customer satisfaction.   Implementation, led by a Black Belt, can take 6-8 weeks.

The next post shows the eight stages of implementation of the Breakthrough Strategy.

Six Sigma–Rolled Throughput Yield and Normalized Yield


In the last post, the authors Mikel Harry, Ph.D., and Richard Schroeder of the book Six Sigma:  The Breakthrough Management Strategy Revolutionizing the World’s Top Corporations, discussed the fact that first-time yield (the percentage of units that are defect-free) is a crude measure of quality, whereas throughput yield (the percentage of defects per defect opportunity) is a better measure of quality.

However, both first-time yield and throughput yield apply to a single step of the manufacturing process.   What are the equivalent concepts for multiple-step manufacturing processes?   The final yield is the multiple-step version of first-time yield.   If, out of 100 units that go into the assembly line, 90 units come out of the final step of the assembly process defect-free, then the final yield is 90%.   The multiple-step version of throughput yield is called the rolled throughput yield.

If a product goes through four steps in the manufacturing process, and at each step the throughput yield is 50%, then the rolled throughput yield will be 50% x 50% x 50% x 50% = 6.25%.   However, it is unlikely that every step of the process will have the same throughput yield; they will likely all differ.   Four throughput yields of 100%, 50%, 25%, and 50%, for example, also produce a rolled throughput yield of 6.25%.
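Since the rolled throughput yield is simply the product of the per-step yields, the calculation above can be sketched in a few lines (the function name is mine; the step yields are the example's):

```python
# Rolled throughput yield: the product of the throughput yields of
# each step in a multi-step manufacturing process.
from functools import reduce

def rolled_throughput_yield(step_yields):
    return reduce(lambda a, b: a * b, step_yields, 1.0)

print(rolled_throughput_yield([0.5, 0.5, 0.5, 0.5]))   # 0.0625, i.e. 6.25%
print(rolled_throughput_yield([1.0, 0.5, 0.25, 0.5]))  # also 0.0625
```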

If you have a rolled throughput yield and you want to find the "normalized" yield of each process, you are computing the throughput yield that each step would have to have, on average, to create that rolled throughput yield.   If there are n steps in the manufacturing process, the normalized yield is the nth root of the rolled throughput yield.   In the example given on pp. 88-89 of the book, a rolled throughput yield of 36.8% for a process with 10 steps has a normalized yield of 90.5%.   This is because 90.5% raised to the 10th power yields a rolled throughput yield of 36.8%.
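The nth-root computation can be sketched the same way, using the book's pp. 88-89 example (again, the function name is my own):

```python
# Normalized yield: the nth root of the rolled throughput yield,
# i.e. the average per-step yield that would produce it.
def normalized_yield(rolled_yield, n_steps):
    return rolled_yield ** (1.0 / n_steps)

# A 10-step process with a rolled throughput yield of 36.8%
# has a normalized yield of about 90.5%, as in the book.
print(round(normalized_yield(0.368, 10), 3))  # 0.905
```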

In the same way that throughput yield is a more accurate metric than first-time yield, rolled throughput yield is a more accurate metric than final yield.   Using the more accurate metric is a way of really getting a handle on the quality of one's products.   So a metric is essentially a type of mathematical tool for creating change.   But creating change that improves quality takes more than the right tool; it takes the right strategy, which is the subject of the next chapter, "The Breakthrough Strategy."   That chapter is the subject of the next series of posts.

Six Sigma–Unmasking the Hidden Factory (4)


Placeholder for 11.02.2014 post

Six Sigma–First-Time Yield vs. Throughput Yield


In the last post, the authors Mikel Harry, Ph.D., and Richard Schroeder of the book Six Sigma:  The Breakthrough Management Strategy Revolutionizing the World’s Top Corporations, discussed the fact that first-time yield is a crude measure of quality.

To recap their argument, first-time yield takes the number of units that come out of an inspection point defect-free and divides it by the number of units that go into that inspection point.   So if 100 units go in, and 50 are defect-free, the first-time yield is 50%.   The problem with this metric is that, if you have two product assembly lines, and both have a first-time yield of 50%, can you say they have equal levels of quality?   Well, that would depend on the complexity of the parts involved.

If the units on assembly line A are very simple parts that have only 2 ways each part could be defective, and the units on assembly line B are very complex parts that have 200 ways each part could be defective, then a 50% first-time yield for product B is actually a lot more impressive than a 50% first-time yield for product A.   Because the metric concentrates on the units that are defect-free, it doesn't tell you how many defects are in each unit that isn't defect-free: are there 2 defects, 8 defects, or 28?   The first-time yield metric doesn't give you this information.

The more accurate measure is throughput yield, which is based on the number of defects per defect opportunity.   Let's take product A and product B from the previous example.   Both have a first-time yield of 50%.   What are their respective throughput yields?   Let's assume each defective unit has only one defect.   Product A has 2 ways each part could be defective, so one defect works out to a rate of 1 in 2, or 50% per opportunity.   Product B has 200 ways each part could be defective, so one defect works out to a rate of only 1 in 200, or 0.5% per opportunity.   Product B is of higher quality than product A, and this is readily borne out by the throughput yield metric.
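The per-opportunity arithmetic above can be sketched as follows. The function and example values are mine, and the figure computed here is the defect-per-opportunity rate that the post uses to compare the two products:

```python
# Defects per opportunity: the fraction of defect opportunities
# that actually became defects.
def defects_per_opportunity(defects, opportunities):
    return defects / opportunities

# One defect in a unit of product A (2 defect opportunities) vs.
# one defect in a unit of product B (200 defect opportunities).
print(defects_per_opportunity(1, 2))    # 0.5   -> 50% per opportunity
print(defects_per_opportunity(1, 200))  # 0.005 -> 0.5% per opportunity
```

The same first-time yield thus hides a 100-fold difference in per-opportunity defect rates between the simple and the complex product.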

However, as you probably can guess, there are very few manufacturing processes that have only one step.    Most manufacturing processes have several steps.   How do you calculate the yield for multiple-step manufacturing processes?   That is the subject of the next post.