DPPM Calculation

Chart, graph, sampling, plan, calculation, z1.4

Question

Recently, there has been a debate in my organization about the Defective Parts Per Million (DPPM) computation.
Camp 1 – DPPM = (No of parts rejected / No of parts inspected) * 1,000,000
Camp 2 – DPPM = (No of parts rejected / No of parts received) * 1,000,000
We perform sampling inspection based on AQL.
Camp 1 insists they are correct, and likewise for Camp 2. Which is correct, or more appropriate to reflect supplier quality?

Answer

This is not an uncommon question. The standard defines the percent nonconforming as (number of parts nonconforming / number of parts inspected) x 100. For DPPM, you multiply by 1,000,000 instead of 100. This means that by your definitions, Camp 1 is correct. This is also what was intended by the creators of the sampling scheme.
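A minimal sketch of the Camp 1 formula (the `dppm` helper name is mine, not from the standard):

```python
def dppm(rejected: int, inspected: int) -> float:
    """Defective parts per million, per the standard's definition:
    rejects over parts INSPECTED (not parts received)."""
    return rejected / inspected * 1_000_000

# e.g. 3 rejects found in a 315-piece sample:
print(dppm(3, 315))  # ~9523.8 DPPM
```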

Jim Bossert
Sr Performance Improvement Specialist
JPS Hospital
ASQ Fellow, CQE, CQA, CMQ/OE, CSSBB, CSSMBB
Fort Worth, TX

Acceptance Sampling Inspection

Automotive inspection, TS 16949, IATF 16949

Question

We have an acceptance sampling inspection in place where we use the ANSI/ASQ Z1.4-2013 standard under normal inspection, using General Inspection Level II to drive our sample sizes and accept/reject criteria. We do not use switching rules, as we have always found them too difficult to manage. I have two questions.

If I have one lot that fails acceptance sampling and I am trying to bound the issue, is it suitable to bound it to the one affected lot if the lots before and after pass, or do I need to carry out additional sampling?

My second question: if I have a batch that passes acceptance sampling, but a defect covered by that upstream acceptance sampling inspection is found at a subsequent downstream process, how do I determine whether the lot is acceptable? Do I trust the acceptance sampling inspection or react?

Answer

The first question is not an uncommon one, and it is actually good practice to isolate the lot and do 100% inspection of it.  That way you can estimate the % defective, and if another failure occurs in the next 5 lots, increase the sampling until you have some confidence that the supplier has fixed the problem.  Once that confidence is restored, you can go back to your original inspection level.

For the second question, you have to understand how well you follow the acceptance sampling process.  Every sampling plan carries built-in risk: even when the process is working as designed, a small percentage of the time a bad batch will be accepted as good (this is the plan's consumer's risk).  If this failure falls within that risk, your process is working, and while you sort through the lot and notify the supplier, it is not something to overreact to.
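The size of that risk can be quantified with an operating-characteristic (OC) calculation: for an attributes plan that inspects n pieces and accepts on c or fewer rejects, the chance of accepting a lot at a given true fraction nonconforming is a binomial tail probability. The plan values below (n = 125, c = 3) are illustrative, not taken from the question:

```python
from math import comb

def p_accept(n: int, c: int, p: float) -> float:
    """P(accept) = P(X <= c) for X ~ Binomial(n, p),
    where p is the lot's true fraction nonconforming."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# A 1%-nonconforming lot is accepted about 96% of the time;
# a 5%-nonconforming lot only about 12% of the time.
print(p_accept(125, 3, 0.01))
print(p_accept(125, 3, 0.05))
```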

I hope this helps.

Jim

James Bossert, PhD, MBB, CQA, CQE, CMQ/OE
Sr Performance Improvement Consultant

Switching Rules

Manufacturing, inspection, exclusions

Question 

We are planning to implement the ANSI/ASQ Z1.4-2003 (R2013) sampling inspection plan for our finished products, which are currently 100% inspected by QC inspectors.  I read about the importance of the switching rules on a continuing stream of lots and have the following questions:
1. Is it acceptable to select a specific plan (tightened, normal, or reduced) and use it without the switching rules?
2. Are there any exceptions which allow us to use a specific plan without applying the switching rules?

Answer

  1. You can use any plan without the switching rules, but you run the risk of not meeting the stated alpha risk in the end. These plans were developed to be used as documented. A normal plan is generally used, and the switching rules come in once the clearance number has been obtained.  Some processes may never switch.  If you choose a plan that is tightened or reduced to start with, you will potentially either spend too much on inspection (tightened) or risk sending bad product to the customer (reduced).  It is a business decision for you to make if your customer is not demanding it.  The switching rules are there to adjust inspection when the product is running very well or when it has problems.
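The normal-to-tightened movement described above can be sketched as a small state machine. This is a deliberately simplified illustration of the idea, not the full Z1.4 rule set (the standard also covers reduced inspection and further conditions such as steady production and approval by the responsible authority):

```python
def next_state(state: str, results: list) -> str:
    """results: one entry per lot, True = accepted, newest last.
    Simplified switching: normal -> tightened on 2 rejections within
    the last 5 lots; tightened -> normal after 5 straight acceptances."""
    if state == "normal" and results[-5:].count(False) >= 2:
        return "tightened"
    if state == "tightened" and len(results) >= 5 and all(results[-5:]):
        return "normal"
    return state

print(next_state("normal", [True, False, True, False, True]))   # tightened
print(next_state("tightened", [True, True, True, True, True]))  # normal
```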
  2. If your customer is not requiring a particular plan, you can use what you want. It is a business decision; there is no reason for any exceptions.

I hope this helps.

Jim Bossert
Sr Performance Improvement Specialist
JPS Hospital
ASQ Fellow, CQE, CQA, CMQ/OE, CSSBB, CSSMBB
Fort Worth, TX

Inspection Sample Size

Question

  1. The customer expects certain levels of inspection: pull 157 bottles for visual testing, but they also want 20 pulled for dimensional testing. Can’t the 20 additional bottles be a subset of the original sample?
  2. When calculating the lot, do you pull the samples before or after your calculations? Do the samples get included in the produced quantity or not?  For example: if the customer orders 10,000 bottles and the level II inspection pulls 200 bottles, that drops the total shipped to the customer to 9,800 pieces.  If 10,200 bottles are produced, then the inspection level increases so that 315 bottles need to be pulled for testing.  What is the correct sample size and production number?

Answer

Hello,

Here are the responses to your questions:

  1. Yes. Since the first inspection is visual, you can use a subset for the additional testing.
  2. The lot size is 10,000. You should put the samples back into the lot if they are not destroyed by the testing; you send what is contracted for.  You are sampling with replacement.

Jim Bossert

SVP Process Design Manager, Process Optimization
Bank of America
ASQ Fellow, CQE, CQA, CMQ/OE, CSSBB, CSSMBB
Fort Worth, TX

DMAIC Guidelines

Question

Are there general guidelines or target durations for each phase of DMAIC?

Knowing that we have fruit salad in our portfolio (apples, oranges, grapes, melons, plus 10 more), are there recommendations for generating meaningful duration guidelines when the projects being categorized may have a considerable number of characteristics and be quite different?

Answer

DMAIC process

Duration is a good metric, but you need to add a complexity component to the operational definition.  You are seeing what happens when it is not added.  I have defined complexity as the number of groups/departments that need to be engaged in the project.  A simple example: when IT gets involved, there may be additional time lags so they can do their due diligence.  Likewise, if a project involves finance or legal, it will add time.  Organization-wide projects take more time than departmental ones, so when scoping a project, consider how much longer it will take with more groups involved.  This may help in your estimates and tracking.

As for general guidelines for DMAIC, what I have told executives and belt candidates is that Define should take about 3 weeks; Measure about 8-12 weeks (depending on how good your data is); Analyze 3-4 weeks, depending on complexity and on having data flowing consistently; Improve 3-4 weeks, depending on how quickly you can get the improvement in place, training completed, and the process stabilized. Control is generally 4 weeks, just to make sure that everything is running as expected and you can show the magnitude of the improvement.

These are my guidelines, but everything depends on the Measure phase: getting the baseline well defined and the data flowing.  That is the most critical phase in DMAIC, and shortcuts there will impact the project.

I hope this helps.

Jim Bossert

Sr Performance Improvement Specialist
JPS Hospital
Fort Worth, TX

Sampling

Question

Is there a sampling plan for determining the number of cases to pull from a batch, from which you then perform the ANSI/ASQ sampling of individual products?  For example: you receive 550 cases with 145 product vials per case.  Is it proper to sample a total of 500 vials from 25 cases (using square root of n + 1), or would applying the ANSI/ASQ single sampling, level II be more appropriate?  We would then need to pull 500 vials from 80 cases.  Or is there a better statistical method?

Answer

There are two ways to answer this. One is to follow the standard and take samples from 80 cases until you get 500. It is assumed that the samples are random, so that you do not always take them from the same location in the case.  That is following the standard.

The second is to take a sample from 25 cases in a random manner.  That is fine also.  There are no standards for sampling from cases, so either way will work.  Years ago, I developed a sampling scheme similar to the one you propose at the employer I was working for at the time.  Sometimes you have to be creative.
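Either approach comes down to opening a set of cases at random and pulling vials from random positions until the 500-vial sample is filled. A sketch following the 80-case option (the `draw_sample` helper and its defaults are mine, built from the numbers in the question):

```python
import random

def draw_sample(n_cases=550, vials_per_case=145,
                sample_size=500, cases_to_open=80):
    """Pick which cases to open at random, then spread the sample
    across them, choosing vial positions without replacement
    within each case so positions are not reused."""
    cases = random.sample(range(n_cases), cases_to_open)
    base, extra = divmod(sample_size, cases_to_open)  # 6 per case; 20 cases get 7
    picks = []
    for i, case in enumerate(cases):
        k = base + (1 if i < extra else 0)
        for vial in random.sample(range(vials_per_case), k):
            picks.append((case, vial))
    return picks

picks = draw_sample()
print(len(picks))  # 500 vials drawn from 80 distinct cases
```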

Jim Bossert

SVP Process Design Manager, Process Optimization
Bank of America
ASQ Fellow, CQE, CQA, CMQ/OE, CSSBB, CSSMBB
Fort Worth, TX

Six Sigma Black Belt

Question

I am currently an Executive Chef who has been taking online Green and Black Belt Six Sigma classes.  I am about halfway through my Black Belt classes and would like to pursue my certifications.  However, my company does not have a Six Sigma department, and I seem to be getting nowhere in finding a Six Sigma project that would qualify me for Black Belt certification.  Do you have any advice or guidance that could help?

Answer

This is not an uncommon issue.  What you could look into is working as a volunteer with a non-profit organization on a Black Belt improvement project.  These organizations are always looking for help, and this is a win-win for both you and the organization.  You will need to talk to them about what Six Sigma is and the type of project you are interested in doing.

Another possibility is to look at your own workplace for a part of the job that has to be done but that no one likes doing. If it is a process, you could follow DMAIC and show improvement.  This could also serve as a BB project if you can show a time savings greater than 50%.

Jim

Jim Bossert
SVP Process Design Manager, Process Optimization
Bank of America
ASQ Fellow, CQE, CQA, CMQ/OE, CSSBB, CSSMBB
Fort Worth, TX

Control Chart to Analyze Customer Satisfaction Data

Control chart, data, analysis

Q: Let’s assume we have a process that is under control and we want to monitor a number of key quality characteristics expressed through small subjective scales, such as: excellent, very good, good, acceptable, poor and awful. This kind of data is typically available from customer satisfaction surveys, peer reviews, or similar sources.

In my situation, I have full historical data available and the process volume average is approximately 200 deliveries per month, giving me enough data and plenty of freedom to design the control chart I want.

What control chart would you recommend?

I don’t want to reduce my small-scale data to pass/fail, since I would lose insight into the underlying data. Ideally, I’d like a chart that both provides control limits for process monitoring and gives insight into the distribution of scale items (i.e., “poor,” “good,” “excellent”).

A: You can handle this analysis a couple of ways.  The most obvious choice, and probably the one that would give you the most information, is a Q-chart, sometimes called a quality score chart.

The Q-chart assigns a weight to each category. Using the criteria presented, values would be:

  • excellent = 6
  • very good = 5
  • good = 4
  • acceptable = 3
  • poor = 2
  • awful = 1.

You calculate the subgroup score by taking the weight of each category, multiplying it by its count, and then adding all of the products to get the subgroup score.

If 100 surveys were returned with results of 20 excellent, 25 very good, 25 good, 15 acceptable, 12 poor, and 3 awful, the calculation is:

6(20)+5(25)+4(25)+3(15)+2(12)+1(3) = 417

This is your score for this subgroup.  If you have more subgroups, you can calculate a grand mean by adding all the subgroup scores and dividing by the number of subgroups.

If you had 10 subgroup scores of 417, 520, 395, 470, 250, 389, 530, 440, 420, and 405, the grand mean is simply:

((417+ 520+ 395+ 470+ 250+ 389+ 530+ 440+ 420+ 405)/10) = 4236/10 =423.6

The control limits would be the grand mean +/- 3√grand mean.  Again, in this example, 423.6 +/- 3√423.6 = 423.6 +/- 3(20.58).  The lower limit is 361.86 and the upper limit is 485.34. This gives you a chance to see whether things are stable.  If there is an out-of-control situation, you need to investigate further to find the cause.
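The subgroup score, grand mean, and limits from this example can be checked with a short script:

```python
# Weights and counts from the worked example above.
weights = {"excellent": 6, "very good": 5, "good": 4,
           "acceptable": 3, "poor": 2, "awful": 1}
counts = {"excellent": 20, "very good": 25, "good": 25,
          "acceptable": 15, "poor": 12, "awful": 3}

# Subgroup score: sum of weight * count over all categories.
score = sum(weights[k] * counts[k] for k in weights)
print(score)  # 417

# Grand mean over 10 subgroups, with 3-sigma limits at
# grand mean +/- 3 * sqrt(grand mean).
subgroups = [417, 520, 395, 470, 250, 389, 530, 440, 420, 405]
grand_mean = sum(subgroups) / len(subgroups)           # 423.6
sigma = grand_mean ** 0.5                              # ~20.58
print(grand_mean - 3 * sigma, grand_mean + 3 * sigma)  # ~361.86, ~485.34
```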

The other choice is similar, but the weights have to total to 1. Using the criteria presented, the values would be:

  • excellent = .30
  • very good = .28
  • good = .25
  • acceptable = .10
  • poor = .05
  • awful = .02.

You would calculate the numbers the same way for each subgroup:

.3(20)+.28(25)+.25(25)+.1(15)+.05(12)+.02(3) = 6+7+6.25+1.5+.6+.06 = 21.41

If you had 10 subgroup scores of 21.41, 19.3, 20.22, 25.7, 21.3, 17.2, 23.3, 22, 19.23, and 22.45, the grand mean is simply ((21.41+ 19.3+ 20.22+ 25.7+ 21.3+ 17.2+ 23.3+ 22+ 19.23+ 22.45)/10) = 212.11/10 = 21.211.

The control limits would be the grand mean +/- 3√grand mean.  Therefore, the limits would be 21.211 +/- 3√21.211 = 21.211 +/- 3(4.605).  The lower limit is 7.39 and the upper limit is 35.03.

The method is up to you.  The weights I used were arbitrary for this example; you would have to create your own weights for the analysis to be meaningful in your situation.  In the first example, the categories are somewhat equally weighted; in the second, the weighting is biased to the high side.

I hope this helps.

Jim Bossert
SVP Process Design Manager, Process Optimization
Bank of America
ASQ Fellow, CQE, CQA, CMQ/OE, CSSBB, CSSMBB
Fort Worth, TX

Related Resources from the ASQ Knowledge Center:

Find more open access articles and resources about control charts in ASQ Knowledge Center search results:

Learn About Quality: Control Charts

The control chart is a graph used to study how a process changes over time. Data  are plotted in time order. A control chart always has a central line for the  average, an upper line for the upper control limit and a lower line for the lower control limit. These lines are determined from historical data. Read the full overview and download a free control template here.

Should Observations Be Grouped for Effective Process Monitoring? Journal of Quality Technology

During process monitoring, it is assumed that a special cause will result in a sustained shift in a process parameter that will continue until the shift is detected and the cause is removed.

In some cases, special causes may produce a transient shift that lasts only a short time. Control charts used to detect these shifts are usually based on samples taken at the end of the sampling interval d, but another option is to disperse the sample over the interval. For this purpose, combinations of two Shewhart or two cumulative sum (CUSUM) charts are considered. Results demonstrate that the statistical performance of the Shewhart chart combination is inferior compared with the CUSUM chart combination. Read more.

The Use of Control Charts in Health-Care and Public-Health Surveillance (With Discussion and Rejoinder), Journal of Quality Technology

Applications of control charts in healthcare monitoring and public health surveillance are introduced to industrial practitioners. Ideas that originate in this venue that may be applicable in industrial monitoring are discussed. Relevant contributions in the industrial statistical process control literature are considered. Read more.

Browse ASQ Knowledge Center search results for more open access articles about control charts.

Find featured open access articles from ASQ magazines and journals here.

Sampling Employee Tasks

Q: We are collecting data on what tasks our employees in various departments do each day. We hope to eventually get a representation of what each employee does all year long.  Randomly, throughout the day, employees record the tasks they are doing.  We are not sure how to calculate an appropriate sample size and we are not sure how many data points to collect.

A: I wish there were a simple answer.  We need to consider:

  • Does how long an employee has been performing a job make a difference?
  • Are the departments equivalent in terms of what they are doing?
  • What is the difference that you want to detect?

The simple rule is that the smaller the difference, the larger the sample size. By smaller, I mean a difference of less than one standard deviation in the data that has been collected.

Random records are O.K., but really, wouldn’t you want a record from everyone for at least a week? That would give you an idea of what is done across the board, and then, if you are trying to readjust the workloads, the logs give you some basis for it.  My concern with the current method is that you may have a lot of extra paperwork to account for everyone over a certain time.

Additional information provided by the questioner:

The goal of this project is to establish a baseline of activities that occur in the department and to answer the question “What does the department do all day?”

The amount of time an employee has been performing a job does not make a difference. The tasks performed in each department are considered equivalent.  We are not accounting for the amount of time it takes to complete a task — we are more interested in how frequently that task is required/requested.

The results will be used to identify enhancement opportunities for our database and improvements to the current (and more frequent) processes.  The team will use a system (form in Metastorm) to capture activities throughout the day.  Frequency is approximately 5 entries an hour, at random times within the hour.

I have worked with the department’s manager to capture content for the following fields using the form:

  1. Department (network management or dealer relation)
  2. Task (tier 1)
  3. Why (tier 2 – dependent on selection of task)
  4. Lessee/client name
  5. Application
  6. Country
  7. Source of request (department)

We are looking for a reasonable approach to calculate the sample size required for a 90-95% confidence level.  The frequency of hourly entries and the length of the data capture period can be adjusted to accommodate the resulting sample size.

A: The additional information helps.  Since you have no previous data and you are getting 5 samples an hour from each employee (assuming a 7-hour workday, after lunch and two breaks), that gives you approximately 35 samples a day. With a five-day week, that is approximately 175 data points per employee, which should be enough information to estimate what is done in a week.

Now, you will probably want to extend this out another three weeks so that you have an idea of what happens over a month.  If you can assume that the data collected are representative of all months, then you should be O.K.  If you feel that some months are different, then you may want to take another sample during the months where you anticipate different volumes.  You can use the sample size calculation for discrete data with the information you have already collected, and rather than looking at all employees, target your average performers.
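The sample size calculation for discrete data mentioned above is commonly the normal-approximation formula for a proportion, n = z²·p(1-p)/E². A sketch under that assumption (the `sample_size` helper name is mine):

```python
import math

def sample_size(z: float, p: float, margin: float) -> int:
    """n = z^2 * p * (1 - p) / margin^2, rounded up.
    z: z-score for the confidence level (1.645 for 90%, 1.96 for 95%),
    p: expected proportion (0.5 is the conservative worst case),
    margin: acceptable margin of error."""
    return math.ceil(z * z * p * (1 - p) / margin ** 2)

print(sample_size(1.96, 0.5, 0.05))   # 385 data points at 95% confidence
print(sample_size(1.645, 0.5, 0.05))  # 271 data points at 90% confidence
```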

Jim Bossert
SVP Process Design Manager, Process Optimization
Bank of America
ASQ Fellow, CQE, CQA, CMQ/OE, CSSBB, CMBB
Fort Worth, TX

Learn more about sampling with open access articles from ASQ publications:

Explore more in the ASQ Knowledge Center.

 

Guidance on Z1.4 Levels

Chart, graph, sampling, plan, calculation, z1.4

Q: My company is using ANSI/ASQ Z1.4-2008 Sampling Procedures and Tables for Inspection by Attributes, and we need some clarification on the levels and the sampling plans.

We are specifically looking at Acceptable Quality Limits (AQLs) of 1.5, 2.5, 4.0, and 6.5 for post-manufacturing inspection of apparel, footwear, home products, and jewelry.

Do you have any guidelines to determine when and where to use levels I, II, and III? I understand that level II is the norm and used most of the time. However, we are not clear on levels I and III versus normal, tightened, and reduced.

Are there any recommended guidelines that correlate between levels I, II, III and single sampling plans, normal, tightened, and reduced?

The tables referenced in the standard show single sampling plans for normal, tightened, and reduced; can you confirm that these are for level II (pages 11, 12, 13)?

Do you have any tables showing the levels I and III for normal, tightened, and reduced?

A: Level I is used when you need less discrimination, or when you are not as critical on the acceptance criteria. It is usually used for cosmetic defects where you may have color differences that are not noticeable in a single unit. Level III is used when you want to be very picky.  It is a more difficult level to get acceptance with, so it needs to be used sparingly or it can cost you a lot of money.

Each level has a normal, tightened, and reduced scheme.  I am not sure what you are asking with respect to correlation between levels I, II, and III and normal, tightened, and reduced.  The goal is simply to inspect the minimum amount needed to reach an accept or reject decision. Since inspection costs money, we do not want to do too much. Likewise, we do not want to reject too much, since that also costs money, both in product availability and extra shipping.

Yes, the tables on pages 11, 12, and 13 are for normal, tightened, and reduced, but if you look at the sample size code letters, you will note that in most cases the letters differ for levels I, II, and III.  Accept and reject numbers are based on the defect level and the sample size. The switching rules tell you when to switch to reduced or tightened inspection. The tables handle not just levels I, II, and III, but also the special levels.

Jim Bossert
SVP Process Design Manager, Process Optimization
Bank of America
ASQ Fellow, CQE, CQA, CMQ/OE, CSSBB, CMBB
Fort Worth, TX