Z1.4:2008 Sampling


Sterile, Lab, Clean Room, Requirements, Standard


We are having an interpretation issue regarding the ANSI/ASQ Z1.4:2008 standard with some of our component vendors. We have a number of different defects that fall into an AQL of 1.0.

Please note that the same question applies to all AQL levels, as our critical and minor categories can also include multiple defect types.

Our interpretation of the standard is that if the sampling plan table (based on sample size and inspection level) shows Accept 7 / Reject 8, then all defects in this major category are cumulative for the accept/reject criteria (e.g., 3 that fail outer diameter, 3 that fail height of the bottle finish, and 3 that fail weight, for a total of 9, would constitute rejection of the lot). The vendor's interpretation is that each of the items within the major category should have its own accept/reject allowance of 7/8 (so potentially, in this case, 56 defects would still be accepted).


In this case, it depends on the question the lot sampling is trying to answer. If the goal is to know whether individual units within the lot are acceptable against all of the criteria, then tallying all defects found is correct. This is consistent with the fact that any item with even one of its many specifications out of range would be deemed a failure.

On the other hand, if the lot sampling is meant to detect lots with specific faults, isolated to a specific specification, then the defect types would be considered separately. Even if AQL 1.0 is suitable for each specific defect, judging the 8 criteria separately would no longer provide overall AQL 1.0 protection; the protection would be much weaker.

Your example of 56 defects being accepted underscores the point that the AQL protection is no longer 1.0.

I'm assuming the specifications and causes of the defects are independent, though that may not be the case. When they are not independent, I'm not sure how to adjust the sample size to preserve the same AQL protection. When they are independent, you would need a separate draw of samples for each defect of interest, then apply the Accept 7 / Reject 8 criteria judging only that one specification.

In practice, if you want to inspect for isolated specifications, you should allocate the acceptable AQL and LTPD points and develop your sampling plan from there. Instead of a 1.0% defect rate for the AQL, each of the Reject 8 specifications would need a smaller allocation; try 0.125% each, so that the failure rates tallied across the various specifications of interest sum to 1.0% (assuming each specification is equally likely to fail). This will lead to much larger sample sizes, which may be useful when troubleshooting specific faults.
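The arithmetic behind that warning can be sketched with a simple binomial model. The sample size of 200 (a plausible pairing with Accept 7 / Reject 8 at AQL 1.0) and the 8 equally likely, independent defect types are illustrative assumptions, not values taken from the Z1.4 tables:

```python
from math import comb

def prob_accept(n: int, c: int, p: float) -> float:
    """Probability of c or fewer defects in a sample of n, binomial model."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

n, c = 200, 7          # illustrative sample size paired with Accept 7 / Reject 8
total_rate = 0.08      # a poor lot: 8% total defect rate, spread over 8 types

# Cumulative interpretation: tally every defect type against one Ac 7 / Re 8.
pa_pooled = prob_accept(n, c, total_rate)

# Per-type interpretation: each of the 8 types gets its own Ac 7 / Re 8
# (assumes the types are independent and equally likely, 1% each).
pa_per_type = prob_accept(n, c, total_rate / 8) ** 8

print(f"Pa, cumulative tally: {pa_pooled:.3f}")
print(f"Pa, per-type limits:  {pa_per_type:.3f}")
```

The cumulative tally rejects such a lot nearly every time, while the per-type reading waves it through, which is exactly why the protection is no longer AQL 1.0.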


For more on this topic, please visit ASQ’s website.

ISO 9001 Quality Policy

Audit, audit by exception


ISO 9001:2008 clause 5.3, regarding the quality policy, requires that it include a commitment to continually improve the effectiveness of the quality management system. Our registrar is saying that for compliance these same words should be included in the quality policy. Our opinion is that our policy includes a commitment to continually improve the standard of services to the client, which in real terms is how the effectiveness of the QMS would be measured. We feel that copying words from the standard will not add any value. Do you have any suggestions on how we should respond to the registrar?


Good Morning,

I read your question and can understand why you might be somewhat confused. Please notice that the words in the standard say that you are to “continually improve the effectiveness of the quality management system.” I’m sure that your quality management system (QMS) covers all parts of your organization, not just your ‘standard of service.’ The intent of the standard is not to insist that your quality policy is copied word-for-word from the standard itself. Nonetheless, the word “shall” at the beginning of 5.3 indicates a requirement. You are required to include those main points in your policy, which will help your entire organization remain compliant.

Consider this: it is common practice for companies to generate their corporate quality policy first. Everything after that (the procedures, the work instructions, etc.) falls in line under the main points delivered in that policy (5.3c: "provides a framework for establishing and reviewing quality objectives"). If your organization's quality policy only suggests improving your 'standard of service', is the rest of your QMS to be left on its own as "good enough"? That question is rhetorical, but I hope it makes the point. Your registrar can be a valuable member of your team. You would be wise to consider what that particular teammate has to contribute.

Thank you very much for sending your good question to Ask The Experts.

Bud Salsbury, CQT, CQI


TS 16949 Conformance for a Non Value Add Company

Automotive inspection, TS 16949, IATF 16949

We're a fabless semiconductor company (Tier 2) in the process of designing and developing an automotive product to deliver through our TS 16949 certified subcontractor (Tier 3) to a Tier 1 auto supplier, for an OEM.

We know and understand that we cannot get TS 16949 certified, but we are still working to bring our ISO 9001 processes, certified for 14 years, up to a level that would withstand a TS 16949 audit.
As we do our internal process audits in preparation for our ISO 9001/14001 surveillance audit in June 2013, we're looking for TS gaps, which we'll document and work to close.
We're looking for a registrar who would audit us to TS 16949 and give us a report that basically states that we have withstood the audit and that, if we were a manufacturer eligible to be a TS 16949 certified company, we would pass a TS 16949 audit.
Are you aware of any other companies who have done this or of any registrars who provide this type of service?

We're either setting a precedent for other fabless semiconductor companies designing to deliver for automotive, or it's already been done. If it has, then what is this type of audit called, and do you know anyone who has done it?

Thanks for any input you may provide.

Thank you for your question. There are two issues here. Firstly, contact your existing registrar with this question and see if they can comply with your request.

Secondly, this is about your obligation to provide a proper PPAP submission for these parts, whether they are manufactured by you or by a supplier. If you are the supplier for these parts, there are likely terms and conditions in your Purchase Order that require you to submit a level 3 PPAP. If these requirements are present, they are auditable as a customer-specific requirement whether you are registered to TS 16949 or not.

I hope this answers your question.

Denis J. Devos, P.Eng.
A Fellow of the American Society for Quality
Devos Associates Inc.
Advisors to the Automotive Industry


Gap Analysis Vs. Pre-assessment for a Standards Audit

Audit, audit by exception

Can you clarify the difference between a gap analysis and a pre-assessment in relation to an activity that takes place prior to the full compliance audit? It is my understanding that a gap analysis compares something against a set performance level or standard requirement, while an assessment is the collection and analysis of information to determine the projected compliance of an organization to a standard. Both provide the answer of what is missing, but the gap analysis also provides information on where an organization wants to be, without going so far as telling the organization how to get there (consulting).

Thanks for contacting ASQ's Ask the Experts program. With regard to your question, the primary difference between a gap analysis and a pre-assessment is that a gap analysis applies to management systems such as ISO 9001:2008, ISO/TS 29001 or others. A gap analysis is typically the initial step in the QMS certification process. It is used to identify areas within a quality management system that do not meet defined requirements for certification. This can include processes, persons or products. The results of the gap analysis are based upon objective evidence, such as records reviewed, interviews conducted and observations made, to evaluate an Auditee's conformance with requirements.
A pre-assessment is usually the initial phase of the accreditation process. A pre-assessment, or practice assessment, is conducted prior to a conformity assessment to identify areas that must be improved or corrected before accreditation can be obtained. Unlike a compliance audit, where the Auditor verifies conformance based upon objective evidence as mentioned earlier, an Assessor is also focused on assessing an organization's competencies and performance of required tasks, such as measurement uncertainty (MU), metrological traceability and proficiency testing (PT) as defined by ISO 17025:2005 and referred to by some as the "big three".
A commonality shared by a gap analysis and a pre-assessment is that they both identify nonconformities or gaps between what exists and what is required by the standard or other defined criteria.
As you are aware, “gap analysis” and “pre-assessment” are not interchangeable terms. A gap analysis is associated with QMS certification or registration as issued by a Registrar and pre-assessment or practice assessment is associated with an activity performed prior to conducting a conformance assessment for accreditation. ISO 9000:2005 and ISO 17000:2004 provide vocabulary and terms for ISO 9001:2008 and ISO 17025:2005 quality management systems, respectively. Additional vocabulary and terms, as applicable to ISO 17025:2005, are provided in ISO/IEC Guide 99:2007, International Vocabulary of Metrology.

I hope this helps.

Best regards,


Bill Aston, Managing Director
Aston Technical Consulting Services, LLC
Kingwood, TX 77339
Website: www.astontechconsult.com


MSA Location Variation

Gage R&R, Torque Wrench


Is it acceptable to use traceable standards (VLSI, NIST, etc.) to complete stability, bias and linearity studies if these standards cover the operating range of the gage? For stability and bias, the AIAG MSA 4th edition (p. 85, p. 87) states, "Obtain a sample and establish its reference value(s) relative to a traceable standard…" For linearity, the AIAG MSA 4th edition (p. 96) states, "select g>= 5 parts whose measurements, due to process variation, cover the operating range of the gage." Specifically for designating a master sample (from production) to assess stability, we have an issue with degradation or oxide growth on the master sample that introduces known variation in thickness measurements. In this case, would it be justifiable to use VLSI standards to assess stability over time? Thanks for your help and guidance!


The quick answer is “that depends”.

The purpose of measurement system studies is to evaluate the entire measurement system, which includes the equipment, method, appraiser and within-part variability. The problem with using standards is that they are too good: their reference value will usually fall on an incremental (discrimination) point rather than between points, which would require either truncation or rounding. Using standards can end up with average ranges of zero, which is interpreted as the measurement system not having sufficient discrimination for the task.

Consequently, I am disinclined to support the use of standards in measurement system studies.

However, it appears that this question is motivated by the need to look at the stability of the measurement system, and actual parts will degrade over time. In this case we are not interested in the common cause variation but in the existence or occurrence of a special cause. If the measurement system shows itself to be acceptable using normal parts, then the stability of the system can be monitored using standards (at least 2 at the limits, preferably 3 with one in the middle).

If a bias study on the standards has an overall range of 0, then ANY reading in a stability study using these standards other than the reference value indicates that the measurement system has changed.

If a bias study on the standards does not show zero variability, then things get a little tricky. Statistical control limits will probably not work unless this variability is substantial (and then the question becomes understanding why). You may have to resort to using the "precision" limits provided by the equipment manufacturer. Remember, the objective is to be able to identify whether something has changed (a special cause is affecting the measurement system).
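As a minimal sketch of that monitoring idea, assuming periodic readings of a traceable standard with a known reference value (the readings, reference value, function name and precision limit below are all hypothetical):

```python
def stability_check(readings, reference, precision=0.0):
    """Return the readings that deviate from the standard's reference value
    by more than the allowed precision. With precision=0 (a bias study with
    zero range), any deviation at all signals that the system has changed."""
    return [r for r in readings if abs(r - reference) > precision]

# Hypothetical weekly readings of a thickness standard with reference 100.0
readings = [100.0, 100.0, 100.1, 100.0, 99.7]
print(stability_check(readings, reference=100.0, precision=0.2))  # flags 99.7
```

This is a simple tolerance check, not a full control chart; with nonzero bias-study variability, the manufacturer's precision limit stands in for statistical control limits.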



Pesticide Residues Surveillance Program Sampling

ISO 14004, Environmental Management System, EMS


I am a plant production specialist working in the government sector. I manage a pesticide residue surveillance program in which we target local commodities of fresh fruits and vegetables (F&V), sampling the targeted numbers and types of F&V on a regular basis throughout the year; we analyze the samples and results and produce an annual report. I have checked many similar programs in other countries, including the USDA program, but I did not find a methodology or statistical approach for identifying the sample size to be targeted in the year, taking into account the type and number of crops, crop production, etc., to develop an annual sampling plan. My question is: how can I develop a sampling plan for this program considering all relevant factors?

Your cooperation is highly appreciated.


Sampling is a method to estimate population parameters. For example, if the goal is to determine the amount of unacceptable residue on store-bought apples, and testing every individual apple is impractical, then we use a sample to estimate the proportion with unacceptable residue.

The sampling plan must focus on the goal and balance it with the resources and technology available. If the goal is to accurately detect a very low proportion with residue, say 1 in 1 million, then the sample size will be larger than if the goal is to detect 1 in 100 with unacceptable residue. The goal to detect 1 in 100 is easier to accomplish (fewer apples tested), yet it does not reveal whether there is a 1 in 1,000 level or not.
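The gap between those two goals can be made concrete with a binomial model: the smallest sample size that gives, say, 95% confidence of finding at least one contaminated item at a true rate p. This sketch assumes independent random draws; the function name and confidence level are my choices, not part of any program:

```python
from math import ceil, log

def detection_sample_size(p: float, confidence: float = 0.95) -> int:
    """Smallest n such that P(at least one positive among n independent
    random draws) >= confidence, when the true contamination rate is p."""
    return ceil(log(1.0 - confidence) / log(1.0 - p))

print(detection_sample_size(0.01))   # a 1-in-100 rate: roughly 300 samples
print(detection_sample_size(1e-6))   # 1 in a million: roughly 3 million samples
```

The sample size scales roughly inversely with the rate to be detected, which is why the detection goal must be fixed before the plan is sized.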

A key element is to define the specific detection goal and design a sampling plan capable of detecting at or better than that level. Capability here includes the measurement system errors and an understanding of the nature of how failures occur.

Another consideration is the nature of the measurement and the goal. If the test is only pass/fail for the presence of residue, then we have to use the relatively inefficient sampling plans based on the binomial distribution. If the data is a variable value, such as parts per million of residue, then we can use more efficient sampling plans based on the appropriate continuous distribution. If the testing is destructive to the item being tested, that limits the sampling techniques available.

How is the lot defined? If this is an annual report, then the lot may be the annual production of a specific fruit or vegetable, say a specific variety of apples. Define the population clearly, along with any relevant subgroups of interest. If the data is only for an annual report, the sampling plan is markedly different than if the goal is a monthly monitoring and warning system.

Another consideration is the thresholds, along with confidence. For sampling plan creation we use two specific points of interest. The Producer's Risk Point (PRP) is made up of the Acceptable Quality Level (AQL) and the producer's risk (Type I risk, or alpha: the probability of rejecting a good lot, or in this case stating the residue level is above a specific AQL value when it actually is not). The second point is the Consumer's Risk Point (CRP), made up of the Lot Tolerance Percent Defective (LTPD) and the consumer's risk (Type II risk, or beta: the probability of accepting a bad lot, or in this case stating the residue level is below the LTPD when it actually is not).

The closer the AQL and LTPD are to each other, the more difficult (more samples) it is to determine an accurate estimate of the population. Likewise, the less risk either the producer or consumer is willing to incur, the higher the sample sizes.
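Those two points are enough to construct a plan. Here is a brute-force sketch under the binomial (attribute) model, searching for the smallest single-sampling plan (n, c) that satisfies both risk points; the AQL, LTPD and risk values below are illustrative, not a recommendation for any residue program:

```python
from math import comb

def prob_accept(n, c, p):
    """Probability of accepting a lot: c or fewer defectives among n samples."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

def single_sampling_plan(aql, alpha, ltpd, beta, n_max=2000):
    """Smallest (n, c) with Pa(aql) >= 1 - alpha and Pa(ltpd) <= beta."""
    for n in range(1, n_max + 1):
        for c in range(n + 1):
            if prob_accept(n, c, aql) >= 1 - alpha:
                # Smallest c meeting the producer's risk point; now test
                # the consumer's risk point before accepting the plan.
                if prob_accept(n, c, ltpd) <= beta:
                    return n, c
                break  # a larger c only raises Pa(ltpd); try a bigger n
    return None

# Example: AQL = 1% with 5% producer's risk, LTPD = 5% with 10% consumer's risk
plan = single_sampling_plan(aql=0.01, alpha=0.05, ltpd=0.05, beta=0.10)
print(plan)
```

Tightening either risk, or moving the LTPD closer to the AQL, forces the search to larger n, which is the trade-off described above.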

One more consideration, which is often overlooked, is the selection of samples for testing. Most sampling plans are based on the assumption that the samples are taken randomly from the entire population. For example, with say 50 million apples of a specific variety, we would create a system to select samples so that each specific apple has an equal chance of being selected. This is not a trivial matter in most cases. The availability and distribution of apples, along with the storage, shipping and display of apples, all contribute to limiting or biasing the selection of a random sample. If it is not possible to select test items randomly, then study the impact of non-random sampling on the study and the means to account for it.
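A minimal sketch of the "equal chance" requirement, assuming the population (or at least its containers) can be enumerated; the crate counts and identifiers here are hypothetical:

```python
import random

def draw_random_sample(population_ids, n, seed=None):
    """Simple random sample without replacement: every enumerated unit has
    the same chance of selection, avoiding convenience-sample bias."""
    rng = random.Random(seed)  # seed only to make a draw reproducible
    return rng.sample(population_ids, n)

# Hypothetical: 10,000 numbered crates of one apple variety; test 25 of them
crates = list(range(10_000))
selected = draw_random_sample(crates, 25, seed=1)
print(sorted(selected))
```

In practice the hard part is the enumeration itself: if only the crates at the front of the warehouse can be reached, the draw is no longer random no matter how the identifiers are shuffled.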

In summary, for any sample plan:

  • Define the population
  • Define the desired goal of the study
  • Understand the measurement system
  • Use variables data if at all possible
  • Define PRP and CRP
  • Determine capable sampling plan
  • Design method to select random sample

This quick summary covers what I consider the essential elements, yet other factors may also affect the sampling plan: for example, seasonal variations in production, the point in the supply chain where measurements are made, supply chain effects on the presence of residue, the differing nature of the residues commonly found on different fruits or vegetables, and probably a few more. Understanding the goal, the measurement system and random sampling will help determine which areas require consideration.


Fred Schenkelberg


ISO/TS 29001:2010 Standard in Oil and Gas Production

Oil and gas industry, petroleum industry

We are an Oil & Gas production testing, frac flow back, and trucking company, and while in the beginning stages of instituting ISO 9001:2008 standards, we ran across the Oil & Gas industry-specific standard ISO/TS 29001:2010. We are curious as to whether or not we have to apply TS 29001:2010, ISO 9001:2008, and maybe some ISO standards for trucking to receive our ISO certification.

All of the technical specifications (TS) include ISO 9001 as the backbone, with industry specifics added. The customers dictate which is required: for the auto industry it is TS 16949, and for aerospace it is AS9100. Each technical specification includes ISO 9001, and the company is registered to ISO 9001 with the TS. A little confusing, but it eliminates a vast set of international standards. The QMS is ISO 9001. I will always come down on the side of using the industry specifics if that is the only industry the company works within, as most TS requirements require the use of the core tools. If you have these particular TS requirements available, I will review them, but I am quite sure about this answer.

Ron Berglund
Global Quality Coach
