Z1.4 2008: AQL, Nonconformities, and Defects Explained

Q: My question is regarding nonconformities per hundred units and percent nonconforming. This topic is discussed in ANSI/ASQ Z1.4-2008 Sampling Procedures and Tables for Inspection by Attributes under sections 3.2 and 3.3 on page 2. Regardless of the explanations provided, I find myself puzzled as to what the following numbers refer to in “Table II-A: Single sampling plans for normal inspection (Master table).”

Specifically, I am having trouble understanding the numbers just above the Acceptance and Rejection numbers (for example, 0.010, 0.015, 0.025, 1000). Do these represent percent nonconformities? If so, does 0.010 = 0.01%, and conversely, how can 1000 = 1000%?

As you may see, I am very confused by these numbers, and I was hoping to have some light shed on this subject. Thank you for your answers in advance.

A: The numbers along the top of the table are just what the questioner stated: 0.010 = 0.010% defective. That is the acceptance quality limit (AQL). Generally, most companies want 1% or less, but as the table shows, the values run up to 1000. The key is that AQL values of 10.0 or less may be read as either percent nonconforming or nonconformities per hundred units, while values above 10.0 are expressed only as nonconformities per hundred units. So 1000 is not 1000%; it means an average of 10 nonconformities per unit. That sounds extreme, but consider a minor or cosmetic defect that does not affect function; the product simply does not look good. Scratch-and-dent sales are a common result of these higher numbers.

The AQL is the worst quality level you would consider acceptable as a process average. The thing to remember is that these plans work best when quality is clearly very good or very bad. If the process runs right at the limit, you can end up taking more samples and spending a lot of time in tightened inspection.
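
To see why the plans behave this way, it helps to compute the probability of acceptance at different quality levels. Below is a minimal Python sketch; the plan n = 125, c = 3 is for illustration only, so read the actual values from Table II-A for your lot size, inspection level, and AQL.

```python
from math import comb

def prob_accept(n, c, p):
    """P(accept the lot) = P(at most c nonconforming units in a sample
    of n), with the count modeled as Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Plan for illustration only; read n and Ac from Table II-A for your
# actual lot size, inspection level, and AQL.
n, c = 125, 3

for p in (0.01, 0.02, 0.04):
    print(f"{p:.0%} nonconforming: P(accept) = {prob_accept(n, c, p):.3f}")
```

With this plan, a lot that is 1% nonconforming passes roughly 96% of the time, while one that is 4% nonconforming passes only about a quarter of the time; lingering right at the limit is what drives the extra sampling and tightened inspection mentioned above.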

Many people say percent nonconforming instead of percent defective simply because of the connotation of “defective.” No one wants to say they shipped a defective product. They may have shipped a nonconforming product that one customer could not use because that customer’s requirements were very strict, while another customer with less stringent requirements could use the same item.

Jim Bossert
SVP Process Design Manager, Process Optimization
Bank of America
ASQ Fellow, CQE, CQA, CMQ/OE, CSSBB, CMBB
Fort Worth, TX

For more on this topic, please visit ASQ’s website.

Is C=0 in Z1.4?

Q: I have ANSI/ASQ Z1.4-2008 Sampling Procedures and Tables for Inspection by Attributes. I have looked through it quickly and still can’t find the C=0 plan directly, so I am a little confused. I thought C=0 was included in Z1.4. Is the spirit/concept of the C=0 plan contained in Z1.4, or does C=0 need to be calculated from the standard’s tables (and if so, which tables)?

A: Z1.4:2008 is a general attribute sampling standard. It is tabled by AQL with varying accept/reject numbers, and it gives a framework for attribute inspection plans. Though Z1.4 does contain some plans where c = 0, they are not optimal for minimizing Type II error. For C=0 plans specifically, I would recommend purchasing Zero Acceptance Number Sampling Plans, Fifth Edition. The real value of the Z1.4 standard is the switching rules it provides for incoming inspection.
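
For intuition, the arithmetic behind a C=0 plan is simple: the lot is accepted only when the sample contains zero nonconforming units, so the probability of acceptance is (1 − p)^n. Here is a minimal sketch with illustrative sample sizes, not values taken from either standard:

```python
# A C=0 plan accepts the lot only if the sample contains zero
# nonconforming units, so P(accept) = (1 - p)**n.
# Sample sizes below are illustrative, not taken from either standard.
for n in (29, 125):
    for p in (0.005, 0.01, 0.05):
        print(f"n = {n:3d}, p = {p:.1%}: P(accept) = {(1 - p) ** n:.3f}")
```

Because a single nonconforming unit rejects the lot, the acceptance probability falls steeply as p grows, which is why C=0 plans can typically get by with smaller sample sizes than comparable Z1.4 plans.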

Steven Walfish
Secretary, U.S. TAG to ISO/TC 69
ASQ CQE
Statistician, GE Healthcare
http://statisticaloutsourcingservices.com/

For more on this topic, please visit ASQ’s website.

Z1.9 Sigma for Variability Known Method

Q: I have a question about Z1.9-2008: Sampling Procedures and Tables for Inspection by Variables for Percent Nonconforming. I see there is a “Variability Known” method, but I don’t know how to obtain a sigma, so I don’t know how to use this method. Could you please explain how to get a sigma?

A: To get a sigma for the Variability Known method, take data that have been collected over a period of time and calculate the standard deviation. The rule of thumb is at least six months of data with at least 50 data points. Depending on the process, if more than 1,000 data points have been collected, the time requirement goes away, since you have an extremely large data set to work with.
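
As a minimal sketch (the measurements below are hypothetical), the calculation itself is just the sample standard deviation of the pooled historical data:

```python
import statistics

# Pooled historical measurements (hypothetical values; in practice you
# would want at least 50 points collected over roughly six months):
history = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.2, 10.1]

sigma = statistics.stdev(history)  # sample standard deviation (n - 1)
print(f"sigma estimate: {sigma:.3f}")
```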

Q: During those six months, the process should be in control, right? And the data should be normally distributed, right? Is any process control needed? And how do I maintain this process and its sigma?

A: Yes, the assumption is that the process is normally distributed and stable, which means some type of process control is being used. Ideally this would be an X-bar and R or an X-bar and S chart. If an out-of-control situation occurs and you can bring the process back into control, then you are OK.

Q: Could you tell me the meaning of “data point”? During the six months we will produce many batches. For each batch we will have a certificate of analysis (COA) and many measurements. I am not sure how you combine the data from different batches. How do you calculate this?

A: A data point, in the simplest terms, could be the statistics associated with a batch, such as a mean and a standard deviation or range. Each batch gives you a new set of data points. You can combine the time-based data in a couple of ways:

1. You can plot the batch means on an X-bar and R or an X-bar and S chart.
2. You can pool the raw data into one large distribution.

The preferred way is the control chart approach: because the data are already plotted, you can see at a glance whether the process is stable.
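
Here is a minimal sketch of the chart calculations, using hypothetical batch data and the standard X-bar and S chart constants for subgroups of size 5:

```python
import statistics

# Hypothetical batches: each inner list is one batch of raw measurements.
batches = [
    [10.1, 10.3, 9.9, 10.0, 10.2],
    [9.8, 10.0, 10.1, 10.2, 9.9],
    [10.2, 10.4, 10.0, 10.1, 10.3],
]

xbars = [statistics.mean(b) for b in batches]   # one point per batch
sds = [statistics.stdev(b) for b in batches]

xbarbar = statistics.mean(xbars)   # grand mean (X-bar chart centerline)
sbar = statistics.mean(sds)        # average std dev (S chart centerline)

# Standard X-bar/S control chart constants for subgroup size n = 5:
A3, B3, B4 = 1.427, 0.0, 2.089

print(f"X-bar chart: CL = {xbarbar:.3f}, "
      f"UCL = {xbarbar + A3 * sbar:.3f}, LCL = {xbarbar - A3 * sbar:.3f}")
print(f"S chart:     CL = {sbar:.3f}, "
      f"UCL = {B4 * sbar:.3f}, LCL = {B3 * sbar:.3f}")
```

Plot each new batch’s mean and standard deviation against these limits; as long as the points stay within the limits and show no trends, the sigma estimate remains usable.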

Jim Bossert
SVP Process Design Manager, Process Optimization
Bank of America
ASQ Fellow, CQE, CQA, CMQ/OE, CSSBB, CMBB
Fort Worth, TX

For more on this topic, please visit ASQ’s website.

Z1.4 or Z1.9 Sampling Plan for IT Tickets

Q: I need to purchase a sampling standard. However, I notice there are a few options for sampling plans, such as attributes vs. variables, and I am not sure which one best fits my needs. I need help in determining this.

I need to determine the best sample size for recurring IT operations. For example: if my server team closes 500 tickets a month and I want to pick a sample to review for quality purposes, what is the best chart to use to determine the industry-recommended sample size? My understanding is that light, normal, and heavy charts are offered.

Please help.  Thanks!

A: The answer is “it depends.” What does it depend on? What is being reviewed for quality purposes. If the inspection classifies each item as “good quality” or “poor quality,” then Z1.4-2008: Sampling Procedures and Tables for Inspection by Attributes would be appropriate. If something is being measured, “time to close,” for example, then Z1.9-2008: Sampling Procedures and Tables for Inspection by Variables for Percent Nonconforming might be appropriate, although Z1.9 is really only suitable if the data are normally distributed, which waiting times generally are not.

With more information, I could provide a more definitive answer.

Q: Our intention right now is to evaluate tickets closed (or work processed in other forms, such as items logged in a log sheet to check service statuses) to determine whether the quality of the work performed meets our quality standards. We are still determining what “quality” means to us. For example, we want to look at closed tickets to determine whether each ticket was escalated properly from our tier 1 to our tier 2 team, and whether the ticket’s work log contains the correct data in the correct amount of detail; that is, a tech didn’t just write “resolved user issue,” but documented the relevant details of what they did to resolve the issue. All of the work performed is service delivery in an operations environment, so the evaluations will assess how well our processes are followed and the quality of our resources. The number of tickets we close per month varies slightly up or down. I want to look at a table to determine what our sample size should be.

However, in addition to the above, I am very interested in learning the other plan too, because we do have service level objectives and agreements (SLOs and SLAs) in this environment (for example: time to close, first call resolution, call abandonment rate, etc.). If I can understand that other table and how to use it, both may be valuable and I may purchase both.

I didn’t understand the comment that “Z1.9 is really only good if the data are normally distributed, which waiting times are generally not.”  What does normally distributed mean?  I would like that explained.
Can your expert answer and provide information on both sampling plans for me?

Thanks again and I look forward to the response.

A: Normally distributed means the data follow a bell-shaped curve, with values clustering around some average and tailing off in frequency both above and below that average. Many processes in real life follow the normal distribution. Time to close is an exception: it is more likely to follow an exponential distribution, meaning many tickets close at short durations while some tail out to very long durations. Also, a ticket can’t be closed in less than zero time, while the normal distribution extends, in theory, to plus or minus infinity. Rates (percentages, I’m assuming) can often be approximated by the normal distribution as long as they aren’t too near 0% or 100%; near those edges, a square root transformation often helps make the data more nearly normal.
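
A small simulation makes this concrete (the 4-hour mean is hypothetical). Exponential times-to-close pile up at short durations with a long right tail, so the mean sits well above the median, which cannot happen with a symmetric normal distribution:

```python
import random
import statistics

random.seed(1)

# Simulated times-to-close in hours, exponential with a 4-hour mean:
times = [random.expovariate(1 / 4) for _ in range(10_000)]

print(f"mean   = {statistics.mean(times):.2f} h")
print(f"median = {statistics.median(times):.2f} h")  # far below the mean
print(f"95th percentile = {sorted(times)[int(0.95 * len(times))]:.2f} h")
```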

Most of the quality characteristics you described are of the pass/fail variety, which implies that Z1.4 would be appropriate.

I strongly recommend that you take a course and/or read a book on statistical process control or acceptance sampling before attempting this. 

Brenda Bishop
US Liaison to TC 69/WG3
CQE, CQA, CMQ/OE, CRE, SSBB, CQIA
Belleville, Illinois

For more on this topic, please visit ASQ’s website.

Random Sampling

Q: When inspecting components on tape and reel, pulling parts at random can present a problem in a pick-and-place operation. Also, once removed, the samples would have to be put back on the tape for use.

Is there a practical or common sense procedure to follow?

A: This is not an uncommon problem, and I have been in a similar situation. What we did was inspect at the beginning and the end of each tape. That way we did not disrupt the process, and it worked pretty well with the suppliers we had. But before doing that, we certified our suppliers by going to their facilities and performing a process audit to make sure their process met our requirements.

Jim Bossert
ASQ Fellow

For more on this topic, please visit ASQ’s website.

Z1.4:2008, Using Acceptance Quality Limit (AQL)

Q: I have a question about how to use ANSI/ASQ Z1.4-2008 Sampling Procedures and Tables for Inspection by Attributes.

I am looking to achieve a 99.5% production yield.  How do I calculate that using the Acceptance Quality Limit (AQL) in this standard?  Is it as simple as taking (100-AQL) to calculate the expected yield?

A: The ANSI Z1.4-2008 standard is not intended for calculating production yield or expected production yield. The AQL is the maximum percent nonconforming that can be considered acceptable as a process average. Typically, we set this as the percent defective that would be accepted at 95% confidence. If you want to sample such that you have 95% confidence that the average production yield is 99.5%, you can find a sampling plan with an AQL of 0.5%. Also, please understand that the tables in the standard do not give exact values for the AQL; using the binomial distribution (or the hypergeometric distribution for sampling without replacement), you can calculate the exact probability of acceptance.
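
As a sketch of that calculation (the plan n = 80, c = 1 and the lot figures below are illustrative, not values taken from the standard’s tables):

```python
from math import comb

def pa_binomial(n, c, p):
    """P(accept): number of defectives in the sample ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

def pa_hypergeometric(N, D, n, c):
    """P(accept) when drawing n units without replacement from a lot of
    N units that contains exactly D defectives."""
    return sum(comb(D, k) * comb(N - D, n - k)
               for k in range(c + 1)) / comb(N, n)

# Illustrative plan and lot (not values from the standard's tables):
print(f"binomial:       {pa_binomial(80, 1, 0.005):.4f}")
print(f"hypergeometric: {pa_hypergeometric(1000, 5, 80, 1):.4f}")
```

The binomial model treats each sampled unit as defective with a fixed probability; the hypergeometric model accounts for the finite lot, which matters most when the sample is a large fraction of the lot.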

Steven Walfish
Secretary, U.S. TAG to ISO/TC 69
ASQ CQE
Statistician, GE Healthcare
http://statisticaloutsourcingservices.com/

For more on this topic, please visit ASQ’s website.