Acceptance Sampling Inspection

Automotive inspection, TS 16949, IATF 16949

Question

We have an acceptance sampling inspection process in place using the ANSI/ASQ Z1.4-2013 standard under normal inspection, with General Inspection Level II driving our sample size and accept/reject criteria. We do not use the switching rules, as we have always found them too difficult to manage. I have two questions.

If one lot fails acceptance sampling and I am trying to bound the issue, is it suitable to bound it to the one affected lot if the lots before and after pass, or do I need to carry out additional sampling?

My second question: if a batch passes acceptance sampling, but at a subsequent downstream process we find a defect that the upstream acceptance sampling inspection was checking for, how do I determine whether the lot is acceptable? Do I trust the acceptance sampling inspection or react?

Answer

The first question is not an uncommon one, and it is in fact good practice to isolate the lot and 100% inspect it. That way you can estimate the percent defective. If another failure occurs in the next five lots, increase the sampling until you have some confidence that the supplier has fixed the problem. Once that confidence is restored, go back to your original inspection level.

For the second question, you have to understand how well your acceptance sampling process performs. Sampling always carries risk: even a lot at exactly the AQL is accepted only about 95% of the time (the 5% producer's, or alpha, risk), and a lot that contains defects can still pass whenever the sample happens to miss them (the consumer's, or beta, risk). If this failure falls within that sampling risk, your process is working as designed; sort through the lot and notify the supplier, but it is not something to overreact to.
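As an illustration of that sampling risk, the hypergeometric probability that a random sample contains none of a lot's defectives can be computed directly. The lot size, defect count, and sample size below are hypothetical (80 is roughly the General Level II sample size for a lot of 1,000):

```python
from math import comb

def prob_sample_misses_all(lot_size: int, defectives: int, sample_size: int) -> float:
    """Hypergeometric probability that a random sample drawn from the lot
    contains zero of its defective units, so a zero-defect check would pass."""
    return comb(lot_size - defectives, sample_size) / comb(lot_size, sample_size)

# Hypothetical lot: 1,000 units, 10 of them defective (1%), sample of 80.
p_miss = prob_sample_misses_all(1000, 10, 80)
print(f"Chance the sample shows no defectives: {p_miss:.1%}")  # about 43%
```

Even with 1% of the lot defective, a clean sample occurs more than 40% of the time here, so a defect surfacing downstream after a passed inspection is entirely consistent with the plan working as designed.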

I hope this helps.

Jim

James Bossert, PhD, MBB, CQA, CQE, CMQ/OE
Sr Performance Improvement Consultant

Sample Size and Z1.4

Data review, data analysis, data migration

Question

My question is if I’m trying to determine the sample size of migrated data to see if it migrated correctly to the target database, is the Z1.4 table applicable to that?

The scenario: data is being transferred from an old system to a new system, and I want to do a quality check on the data in the new database to make sure everything was transferred correctly. I’m hoping to use the Z1.4 table to determine the sample size, if it’s applicable. Is it applicable, and if not, do you know of other standards I should be looking into that are more applicable?

Answer

Moving a database from one system to another certainly may introduce errors, and it may also carry over errors that already exist. In some cases the move may also find and repair errors, though that is generally done by design.

So, let’s say it’s just a move and you are checking for any new errors that are introduced.

Since you have access to the entire population (the database) both before the move (old system) and after it (new system), and since I assume you do not want to check every entry but only a sample, I would recommend a hypothesis test approach rather than a lot sampling approach.

A hypothesis test based on the binomial distribution may be appropriate as you are checking field entries to determine if they are correct or not (pass/fail).

You can set a threshold defect rate and check that the new system is at least that good or better, or you can measure the old system and compare it to the new system, with “the new system equals the old system” as the null hypothesis.

You can find a bit more information about a p-test in a good stats book or online at a short tutorial I wrote at https://creprep.wordpress.com/2013/06/01/hypothesis-tests-for-proportion/
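As a sketch of that proportion-test approach (the counts below are hypothetical, and the 1% threshold is just an example), an exact one-sided binomial test needs nothing beyond Python's standard library:

```python
from math import comb

def binom_sf(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the one-sided p-value for seeing
    k or more bad records in a sample of n if the true error rate were p."""
    return sum(comb(n, d) * p**d * (1 - p) ** (n - d) for d in range(k, n + 1))

# Hypothetical check: 500 migrated records sampled, 9 found with errors.
# H0: error rate <= 1%.  A small p-value is evidence the migration is worse.
p_value = binom_sf(9, 500, 0.01)
print(f"one-sided p-value = {p_value:.3f}")
```

Here the p-value comes out above 0.05, so a sample like this would not be strong evidence that the error rate exceeds 1%. Statistical packages (e.g., scipy.stats.binomtest) give the same result with less code.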

The Z1.4 standard would require you to artificially define a lot, or to treat the entire database as one lot. The standard lot testing approach does not provide the control and statistical power of hypothesis testing, hence my recommendation. With the p-test you can set the confidence level, the defect rate to detect, and the sample size to fit your needs with respect to measurement effort, cost, and risk.

Cheers,

Fred

Fred Schenkelberg
Reliability Engineering and Management Consultant
FMS Reliability
(408) 710-8248
fms@fmsreliability.com
www.fmsreliability.com
@fmsreliability

Z1.4 Split Sampling

Chemistry, micro testing, chemical analysis, sampling

Q: I have two questions about Z1.4-2008: Sampling Procedures and Tables for Inspection by Attributes.

1. Does the plan allow one to “split” sampling plans among multiple items, or is only one item per plan intended?

2. The plan states a 95% confidence level, which we understand to mean that the findings of the sampling (the number of defects found) will be statistically consistent with those of the entire inspected lot. So, if we split the sampling, how can we determine what happens to the confidence level?

A: Thank you for submitting your question to ASQ’s Ask the Experts Program. Answers to your inquiries follow.

1. In attempting to answer any given question, one needs to understand the question with respect to its gist and terms used.

Z1.4 uses the term “unit” to represent an individual “product” entity. A unit can be a fairly simple discrete product (such as a bolt or nut), a complex product (such as a computer or a large piece of machinery), or even a quantity of material (a square meter of cloth, a length of wire, etc.).

It is assumed here that the use of the term “item” in the question refers to a “unit.” It might, however, refer to a quality characteristic, and the explanation given here will attempt to explain either case.

Now, units can have a single principal quality characteristic or they can have many different quality characteristics.

Z1.4 allows for some of these quality characteristics to be of greater importance (severity for example, with respect to quality and/or economic effects) than others, whereby separate sampling is applied to each group with different sampling parameters (such as sample size, acceptance number, lot size). Hence, units with a single quality characteristic can be checked by sampling via Z1.4 and units with multiple quality characteristics can be checked by sampling via Z1.4.

In each case, the chosen Acceptable Quality Limit (AQL) and what it stands for applies to whatever is included in the inspection made on each unit. It is also assumed that this separate handling of units and quality characteristics is what the question means with respect to the term “split.”

Furthermore, it should also be understood that sampling inspection can be conducted with respect to two distinctly different statistics. One is the number of nonconforming units found in the sample. These are sometimes referred to as “defectives.” The second is the number (sum) of nonconformities found on all units in the sample, where any given single unit can have multiple nonconformities. These are often referred to as “defects.”

A “nonconforming unit” is defined as a unit with one or more nonconformities (defects), but it is counted as only one “defective” unit. A “nonconformity” is any departure from requirements for any quality characteristic being considered in the inspection of each unit. In Z1.4, one can use either statistic as desired. The choice is largely dependent on the nature of the product units and the reason for doing the sampling inspection: whether it is to control or oversee defective units, or to control or oversee defects.

In the tables of Z1.4, note the top line above the range of AQLs: “Acceptance Quality Limits (AQLs), Percent Nonconforming Items and Nonconformities per 100 Items”. It should also be pointed out that Z1.4 is intended to be a sampling scheme or system, not just a selection of a given sampling plan. Please review the standard and any of the excellent books available on sampling inspection covering Z1.4, ISO 2859, etc.

2. If one examines the Z1.4 standard from cover to cover, one will not encounter the term “confidence level.” Z1.4 contains no confidence intervals (or levels) related to any of its features.

Furthermore, the 95% figure is a very general figure associated with the expected “probability of acceptance” at the designated (selected) AQL. This is NOT a confidence level! In fact, the AQL is NOT a statistic!

Setting an AQL is generally an agreement/negotiation process between the customer and supplier. It is more of an index. Essentially, it refers to a level of nonconformity that is generally “acceptable” — a value of 0 being desired of course — but otherwise, a compromise figure.

And it is not by any means a constant, as can be seen by examining the Operating Characteristic (OC) Curves for the various code letters A through R using the same AQL in every table.

For example, for an AQL of 2.5% with the code letter C plan, incoming quality p must be 1.03% for Pa to be 95%, and Pa at 2.5% is less than 90%; for the code letter F plan, p must be 1.80% for Pa to be 95% and Pa at 2.5% is between 90% and 95%, etc.
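Those Pa figures can be checked directly from the binomial form of the OC curve. A minimal sketch, assuming the single sampling, normal inspection plans n=5, c=0 (code letter C) and n=20, c=1 (code letter F) at an AQL of 2.5%:

```python
from math import comb

def accept_prob(n: int, c: int, p: float) -> float:
    """Pa for a single sampling plan (n, c): the lot is accepted when the
    sample of n contains at most c nonconforming units."""
    return sum(comb(n, d) * p**d * (1 - p) ** (n - d) for d in range(c + 1))

print(f"Letter C (n=5, c=0),  p=1.03%: Pa = {accept_prob(5, 0, 0.0103):.3f}")  # ~0.950
print(f"Letter C (n=5, c=0),  p=2.50%: Pa = {accept_prob(5, 0, 0.025):.3f}")   # below 0.90
print(f"Letter F (n=20, c=1), p=1.80%: Pa = {accept_prob(20, 1, 0.018):.3f}")  # ~0.950
print(f"Letter F (n=20, c=1), p=2.50%: Pa = {accept_prob(20, 1, 0.025):.3f}")  # between 0.90 and 0.95
```

Treat this as an illustrative check of the quoted numbers, not a substitute for the standard's tables, which also embody the scheme's switching rules.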

If confidence intervals at chosen levels are desired for any given sampling plan, one must resort to the theory and methodologies of statistical inference, using the information provided by the sample statistics.

Kenneth Stephens
ASQ Fellow
ASQ Quality Press Author

Related Resources:


Acceptance Sampling With Rectification When Inspection Errors Are Present, Journal of Quality Technology

In this paper the authors consider the problem of estimating the number of nonconformances remaining in outgoing lots after acceptance sampling with rectification when inspection errors can occur.

Zero Defect Sampling, World Conference on Quality and Improvement

Zero defect sampling is an alternative method to the obsolete Mil Std 105E sampling scheme previously used to accept or reject products, and the remaining ANSI Z1.4-1993 which is still in use. This paper discusses the development of zero defect sampling and compares it to Mil Std 105E.


Sampling Plan for Pharmaceuticals

Pharmaceutical sampling

Q: We are a U.S. dietary supplements manufacturer operating under c-GMP conditions set by the U.S. Food & Drug Administration (FDA).

As such, we perform analyses of incoming raw materials (finished product ingredients), intermediate products (during manufacturing), and finished products. Analyses include identity testing (incoming raw materials), and other types of analysis (e.g. microbiological, heavy metals, some quantitative assays on specific compounds). These tests would be the attributes we wish to assess.

Basically, we are refining our sampling procedures and need to ascertain an acceptable number of samples to be taken for the various testing purposes outlined above.

The World Health Organization’s (WHO) Technical Report Series No. 929,  Annex 4, “WHO Guidelines for sampling of pharmaceutical products and related materials” references ANSI/ISO/ASQ 2859-1:1999 Sampling procedures for inspection of attributes – Part 1: Sampling schemes indexed by acceptance quality limit (AQL) for lot-by-lot inspection in reference to the selection of a statistically-valid number of samples for testing purposes.

I note from your website that there are a number of other sampling standards available. I am seeking some guidance as to the most appropriate standard(s) for our particular purposes.

Any assistance you can offer would be much appreciated.

A: Though many of the sampling plans are similar, many standards organizations have published different interpretations of sampling schemes.  Since WHO recommends using ISO 2859-1 as the guidance document, I suggest selecting that plan.

There are similar documents that could be used as an alternative, if necessary:

1. ANSI/ASQ Z1.4-2008: Sampling Procedures and Tables for Inspection by Attributes

2. BS 6001-1:1999/ISO 2859-1:1999+A1:2011 Sampling procedures for inspection by attributes. Sampling schemes indexed by acceptance quality limit (AQL) for lot-by-lot inspection

3. MIL-STD-105E – Sampling Procedures and Tables for Inspection by Attributes*

4. JIS Z9015-0-1999 Sampling procedures for inspection by attributes — Part 0 Introduction to the JIS Z 9015 attribute sampling system

A few points to consider:

  • Usually for FDA-regulated products, a c=0 sampling plan is appropriate. See H1331 Zero Acceptance Number Sampling Plans, Fifth Edition, by Nicholas L. Squeglia
  • Based on risk, an Acceptable Quality Level (AQL) should be selected
  • Your sample size usually scales with lot size rather than being strictly proportional to it.  If you are doing testing on bulk raw materials, the sample size will be set based on the variability of the lot as well as the variability of the method.
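For intuition on where a c=0 sample size comes from: the smallest n at which a zero-acceptance plan passes a lot at defect rate p with probability no more than a consumer's risk beta satisfies (1 - p)^n <= beta. This is the generic zero-acceptance calculation only, not a reproduction of Squeglia's published c=0 tables, which also index n by lot size:

```python
from math import ceil, log

def c0_sample_size(p_reject: float, beta: float = 0.10) -> int:
    """Smallest n such that a c=0 plan (accept only on zero defectives)
    accepts a lot at defect rate p_reject with probability <= beta."""
    return ceil(log(beta) / log(1 - p_reject))

# Hypothetical target: at most a 10% chance of passing a 5%-defective lot.
print(c0_sample_size(0.05))   # 45
print(c0_sample_size(0.01))   # 230
```

Tightening either the defect rate to detect or the consumer's risk drives the required sample size up quickly, which is why c=0 plans are usually chosen from published tables matched to lot size and risk.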

Steven Walfish
Secretary, U.S. TAG to ISO/TC 69
ASQ CQE
Principal Statistician, BD
http://statisticaloutsourcingservices.com/

Note:

 *Military standard, cancelled and superseded by MIL-STD-1916, “DoD Preferred Methods for Acceptance of Product”, or ANSI/ASQ Z1.4:2008, according to the Notice of Cancellation


Guidance on Z1.4 Levels

Chart, graph, sampling, plan, calculation, z1.4

Q: My company is using ANSI/ASQ Z1.4-2008 Sampling Procedures and Tables for Inspection by Attributes, and we need some clarification on the levels and the sampling plans.

We are specifically looking at Acceptable Quality Limits (AQLs) 1.5, 2.5, 4.0, and 6.5 for post manufacturing of apparel, footwear, home products, and jewelry.

Do you have any guidelines to determine when and where to use levels I, II, and III? I understand that level II is the norm and used most of the time. However, we are not clear on levels I and III versus normal, tightened, and reduced.

Are there any recommended guidelines that correlate between levels I, II, III and single sampling plans, normal, tightened, and reduced?

The tables referenced in the standard show single sampling plans for normal, tightened, and reduced inspection; can you confirm that these are for level II (pages 11, 12, 13)?

Do you have any tables showing the levels I and III for normal, tightened, and reduced?

A: Level I is used when you need less discrimination or are less critical on the acceptance criteria. It is usually used for cosmetic defects, where you may have color differences that are not noticeable on a single unit. Level III is used when you want to be very picky. It is a more difficult level to get acceptance with, so it needs to be used sparingly or it can cost you a lot of money.

Each level has a normal, tightened and reduced scheme.  I am not sure about what you are asking for with respect to correlation to levels I, II and III and normal, tightened and reduced.  The goal is to simply inspect the minimum amount to get an accept or reject decision. Since inspection costs money, we do not want to do too much. Likewise, we do not want to reject much since that also costs money both in product availability and extra shipping.

Yes, the tables on pages 11, 12, and 13 are for normal, tightened, and reduced inspection, but if you look at the sample size code letters, you will note that in most cases the letters differ across levels I, II, and III. Accept and reject numbers are based on the AQL and the sample size. The switching rules tell you when you can switch to a reduced or tightened plan. The tables cover not just levels I, II, and III, but also the special levels.

Jim Bossert
SVP Process Design Manger, Process Optimization
Bank of America
ASQ Fellow, CQE, CQA, CMQ/OE, CSSBB, CMBB
Fort Worth, TX

Is C=0 in Z1.4?

Chart, graph, sampling, plan, calculation, z1.4

Q: I have ANSI/ASQ Z1.4-2008 Sampling Procedures and Tables for Inspection by Attributes. I looked through it rapidly, and I still can’t find the C=0 plan directly, so I am a little confused. I thought C=0 is included in Z1.4. Is the C=0 plan spirit/concept contained in Z1.4 or does C=0 need to be calculated from the several tables in Z1.4? (if yes, which tables?).

A: Z1.4:2008 is a general sampling standard for attributes, tabled by AQL with varying accept/reject numbers; it gives a framework for attribute inspection plans. Though Z1.4 does include some plans where C=0, they are NOT optimal for minimizing the Type II error. For C=0 plans specifically, I would recommend purchasing Zero Acceptance Number Sampling Plans, Fifth Edition. The real value of the Z1.4 standard is its switching rules for incoming inspection.

Steven Walfish
Secretary, U.S. TAG to ISO/TC 69
ASQ CQE
Statistician, GE Healthcare
http://statisticaloutsourcingservices.com/

Z1.4 or Z1.9 Sampling Plan for IT Tickets

Data review, data analysis, data migration

Q: I need to purchase a sampling standard. However I notice there are a few options for sampling plans, such as attributes vs. variables.  I am not sure which one will best fit my needs.  I need help in determining this.

I need to determine what the best sample size would be for recurring IT operations.  For example:  If my server team closes 500 tickets a month and I want to pick a sample size to review for quality purposes, what is the best chart to use to determine what the industry standards say are the recommended sample size?  My understanding is there is a light, normal and heavy chart that can be offered.

Please help.  Thanks!

A: The answer is “it depends,” specifically on what you are reviewing for quality purposes. If the inspection classifies each item as either “good quality” or “poor quality,” then Z1.4-2008: Sampling Procedures and Tables for Inspection by Attributes would be appropriate. If you are measuring something, “time to close,” for example, then Z1.9-2008: Sampling Procedures and Tables for Inspection by Variables for Percent Nonconforming might be appropriate, although Z1.9 is really only suitable if the data are normally distributed, which waiting times generally are not.

With more information, I could provide a more definitive answer.

Q: Our intention right now is to evaluate tickets closed (or other work processed, which could take forms other than tickets, such as items logged in a log sheet to check service statuses) to determine whether the quality of the work performed meets our quality standards. We are still determining what “quality” means to us. For example, we want to look at closed tickets to determine whether the ticket was escalated properly from our tier 1 to our tier 2 team AND whether the work log of that ticket documented the correct data, and the correct amount of data; meaning a tech didn’t just write “resolved user issue,” but documented more relevant detail about what they did to resolve the issue. All of the work performed is service delivery in an operations environment, so the evaluations will assess both how well we follow our processes and the quality of our resources. The number of tickets we close per month varies slightly up or down. I want to look at a table to determine what our sample size should be.

However, in addition to the above, I am very interested in learning about the other plan too, because we do have Service Level Objectives and Agreements (SLOs and SLAs) in this environment (for example: time to close, first call resolution, call abandonment rate, etc.). If I can understand that other table and how to use it, both may be valuable and I may purchase both.

I didn’t understand the comment that “Z1.9 is really only good if the data are normally distributed, which waiting times are generally not.”  What does normally distributed mean?  I would like that explained.
Can your expert answer and provide information on both sampling plans for me?

Thanks again and I look forward to the response.

A: Normally distributed means that the data follow a bell-shaped curve, with the most frequent values falling near some average and the frequencies tailing off both above and below it. Many real-life processes follow the normal distribution; time to close is an exception. It is more likely to follow the exponential distribution, meaning many tickets are closed at shorter durations, with some tailing out very far into longer durations. Also, a ticket can’t be closed in less than zero duration, while the normal distribution extends, in theory, to +/- infinity. Rates (percentages, I’m assuming) can often be approximated using the normal distribution as long as they aren’t too near 0% or 100%; if they are near the edges, a square root transformation often helps make the data more approximately normal.
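A quick simulation (hypothetical numbers: 10,000 tickets with a 4-hour average time to close) shows the asymmetry that makes waiting times a poor fit for a normality-based plan like Z1.9:

```python
import random
from statistics import mean, median

random.seed(42)  # fixed seed so the illustration is reproducible
# Exponentially distributed "time to close" values, average 4 hours.
times = [random.expovariate(1 / 4.0) for _ in range(10_000)]

print(f"mean   = {mean(times):.2f} h")
print(f"median = {median(times):.2f} h")
print(f"min    = {min(times):.2f} h")
# For a normal distribution the mean and median coincide; here the mean
# sits well above the median because of the long right tail, and no
# value can ever fall below zero.
```

A histogram of such data piles up near zero and trails far to the right, the opposite of a symmetric bell curve.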

Most of the quality characteristics you described are of the pass-fail variety which implies Z1.4 would be appropriate.

I strongly recommend that you take a course and/or read a book on statistical process control or acceptance sampling before attempting this.  There are many potential gotchas that can lead to erroneous analysis and therefore decision making.  ASQ offers some that are quite good.  A comprehensive book would be:

Process Quality Control: Troubleshooting and Interpretation of Data, Fourth Edition
by Ellis R. Ott, Edward G. Schilling, and Dean V. Neubauer.

Brenda Bishop
US Liaison to TC 69/WG3
CQE,CQA,CMQ/OE,CRE,SSBB,CQIA
Belleville, Illinois