Z1.4 and Z1.9 in Micro Testing and API Chemical Analysis

Chemistry, micro testing, chemical analysis, sampling

Q: I work at a cosmetics manufacturing company that produces sunscreen in bulk amounts. When we make 3,000 kg of sunscreen, we use it to fill 10,000 units of final sunscreen product weighing 300 g each.

How many samples do I need to collect from the 10,000 units to pass the qualification?

The products need to pass both attribute and variable sampling tests, covering container damage, coding errors, micro testing, and Active Pharmaceutical Ingredient (API) failure. Nearly 100 percent of final products are inspected for appearance errors, but only a small number of them can be measured for micro testing and API chemical analysis.

For Z1.4-2008: Sampling Procedures and Tables for Inspection by Attributes, we have to collect a sample of 200 (lot size of 3,201-10,000; general inspection level II; acceptable quality level [AQL] 4.0; code letter L), and more than 179 units should pass for qualification.

For Z1.9-2008: Sampling Procedures and Tables for Inspection by Variables for Percent Nonconforming, we have to collect a sample of 25 (lot size of 3,201-10,000; general inspection level II; AQL 4.0; code letter L) to meet the requirement of 1.12 percent nonconforming.

Which sampling plan should we follow for micro testing and API chemical analysis?

A: If the micro test is pass/fail, then you should use Z1.4. The API chemical test probably yields a numerical result for which you can calculate the average and standard deviation. Then, the proper standard to use is Z1.9. If the micro test gives you a numerical result, then you can use Z1.9 for it as well.
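
For the attribute (Z1.4) side, the operating characteristic of a single-sampling plan can be computed directly from the binomial distribution. Below is a minimal sketch using the questioner's figures (n = 200, at least 179 passing, i.e., at most 21 nonconforming); the acceptance number should be verified against the Z1.4 tables before use:

```python
from math import comb

def accept_prob(n, c, p):
    """P(lot accepted) = P(at most c nonconforming units in a sample of n),
    assuming each unit is independently nonconforming with probability p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Plan as read from the question: n = 200, accept if at most 21 nonconforming.
for p in (0.04, 0.08, 0.15):
    print(f"true nonconforming rate {p:.0%}: P(accept) = {accept_prob(200, 21, p):.3f}")
```

Sweeping the true nonconforming rate like this traces the plan's operating characteristic (OC) curve: lots at the 4 percent AQL are accepted almost always, while clearly bad lots (15 percent nonconforming) are almost always rejected.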

One thing to consider is the fact that the materials are from a batch. If the batch can be assumed to be completely mixed without settling or separation prior to loading into final packaging, then the API chemical test may only need to be done on the batch, not on the final product. Micro testing, which can be affected by the cleanliness of the packaging equipment, probably needs to be done on the final product.
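
To illustrate the variables (Z1.9) idea for the API assay, here is a rough sketch: estimate the fraction nonconforming from the sample mean and standard deviation under a normality assumption. The assay values and spec limits below are hypothetical, and the actual Z1.9 acceptance decision uses table-based constants (the M values), not this direct calculation:

```python
from statistics import mean, stdev
from math import erf, sqrt

def estimated_percent_nonconforming(data, lsl, usl):
    """Estimate the percent of product outside the spec limits,
    assuming the measurements follow a normal distribution."""
    m, s = mean(data), stdev(data)
    phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF
    below = phi((lsl - m) / s)       # fraction below the lower spec limit
    above = 1 - phi((usl - m) / s)   # fraction above the upper spec limit
    return 100 * (below + above)

# Hypothetical API assay results (percent of label claim), 95-105% spec limits.
assays = [99.2, 100.4, 98.7, 101.1, 99.9, 100.8, 99.5, 100.1, 98.9, 100.6]
print(f"{estimated_percent_nonconforming(assays, 95.0, 105.0):.4f}% estimated nonconforming")
```

If the estimated percent nonconforming exceeds the allowed maximum (the 1.12 percent figure quoted in the question), the lot fails.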

Brenda Bishop
U.S. Liaison to TC 69/WG3
ASQ CQE, CQA, CMQ/OE, CRE, SSBB, CQIA
Belleville, Illinois

Related Resources:

Getting the Right Data Up Front: A Key Challenge, Quality Engineering, open access

Rational decisions require transforming data into useful information by appropriate analyses. Such analyses, however, can be only as good as the data upon which they are based. In this article, the authors urge that careful consideration be given, up front, to procuring the right data and provide some guidelines.

A Graphical Tool for Detection of Outliers in Completely Randomized, Unreplicated 2^k and 2^(k-p) Factorials, Quality Engineering, open access

With the increased awareness of statistical methods in industry today, many non-statisticians are implementing statistical studies and conducting statistically designed experiments (DOEs). With this increased use of DOEs by non-statisticians in applied settings, there is a need for more graphical methodologies to support both analysis and interpretations of DOE results.

Z1.4 or Z1.9 Sampling Plan for IT Tickets

Data review, data analysis, data migration

Q: I need to purchase a sampling standard. However, I notice there are a few options for sampling plans, such as attributes vs. variables. I am not sure which one will best fit my needs. I need help determining this.

I need to determine what the best sample size would be for recurring IT operations. For example: if my server team closes 500 tickets a month and I want to pick a sample size to review for quality purposes, what is the best chart to use to determine the industry-recommended sample size? My understanding is that there are light, normal, and heavy charts available.

Please help. Thanks!

A: The answer is “it depends.” What it depends on is what she is reviewing for quality purposes. If the inspection is for either “good quality” or “poor quality,” then Z1.4-2008: Sampling Procedures and Tables for Inspection by Attributes, would be appropriate. If she is measuring something, “time to close,” for example, then Z1.9-2008: Sampling Procedures and Tables for Inspection by Variables for Percent Nonconforming, might be appropriate, although Z1.9 is really only good if the data are normally distributed, which waiting times generally are not.

With more information, I could provide a more definitive answer.

Q: Our intention right now is to evaluate tickets closed (or work processed, which could take forms other than tickets, such as items logged in a log sheet to check service statuses) to determine if the quality of work performed meets our quality standards. We are determining what “quality” means to us. For example: we want to look at tickets closed to determine if the ticket was escalated properly from our tier 1 to tier 2 team AND if the work log of that ticket had the correct data and the correct amount of data documented. Meaning a tech didn’t just say “resolved user issue,” but rather documented more relevant detail about what they did to resolve the issue. All of the work performed is service delivery in an operations environment, so the evaluations will be performed on the quality of following our processes and the quality of our resources. The number of tickets closed per month varies, slightly up or slightly down. I want to look at a table to determine what our sample size should be.

However, in addition to the above, I am very interested in learning the other plan too, because we do have Service Level Objectives (SLOs and SLAs) in this environment (examples: time to close, first call resolution, call abandonment rate, etc.). If I can understand that other table and how to use it, both may be valuable and I may purchase both.

I didn’t understand the comment that “Z1.9 is really only good if the data are normally distributed, which waiting times are generally not.” What does normally distributed mean? I would like that explained.
Can your expert answer and provide information on both sampling plans for me?

Thanks again and I look forward to the response.

A: Normally distributed means that the data follow a bell-shaped curve, with values clustering around some average and tailing off in frequency both above and below that average. Many processes in real life follow the normal distribution. Time to close is an exception: it is more likely to follow the exponential distribution, which means there will be lots of tickets closed at shorter durations, with some tailing out very far into longer durations. Also, a ticket can’t be closed in less than zero time, while the normal distribution extends, in theory, to plus or minus infinity. Rates (percentages, I’m assuming) can often be approximated using the normal distribution as long as they aren’t too near 0% or 100%. If they are near those edges, a square root transformation often helps make the data more approximately normal.
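
A quick way to see the skew described above is to compare the mean and median: in an exponential-like distribution the mean sits well above the median, while in a normal distribution the two nearly coincide. A small simulation sketch, with hypothetical ticket close times:

```python
import random
from statistics import mean, median

random.seed(7)  # reproducible draw

# Hypothetical ticket close times (hours): exponential with mean 4 hours,
# i.e., many quick closures and a long tail of slow ones.
times = [random.expovariate(1 / 4) for _ in range(5000)]

# For a right-skewed distribution the mean is pulled well above the median;
# for normal data they would be nearly equal.
print(f"mean = {mean(times):.2f} h, median = {median(times):.2f} h")
```

Because of this skew (and because durations can never be negative), a Z1.9 analysis that assumes normality can misestimate the percent nonconforming for time-to-close data.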

Most of the quality characteristics you described are of the pass/fail variety, which implies Z1.4 would be appropriate.
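
To illustrate how the Z1.4 lookup would work for a recurring lot of roughly 500 tickets per month, here is a sketch of the table structure (lot-size range to code letter to single sample size, general inspection level II, normal inspection). The values shown reflect commonly published figures, but confirm them against the purchased standard before relying on them:

```python
# Fragment of the Z1.4 structure around the 500-unit lot size
# (general inspection level II, normal inspection, single sampling).
CODE_LETTER = [
    (151, 280, "G"),
    (281, 500, "H"),
    (501, 1200, "J"),
]
SAMPLE_SIZE = {"G": 32, "H": 50, "J": 80}

def plan_for_lot(lot_size):
    """Return (code letter, sample size) for a lot size in the sketched range."""
    for lo, hi, letter in CODE_LETTER:
        if lo <= lot_size <= hi:
            return letter, SAMPLE_SIZE[letter]
    raise ValueError("lot size outside the ranges sketched here")

print(plan_for_lot(500))  # 500 tickets/month falls in the 281-500 band
```

The acceptance and rejection numbers then come from the AQL column you choose in the standard's master table, which is a quality-policy decision rather than a statistical one.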

I strongly recommend that you take a course and/or read a book on statistical process control or acceptance sampling before attempting this. There are many potential gotchas that can lead to erroneous analysis and, therefore, erroneous decision making. ASQ offers some courses that are quite good. A comprehensive book would be:

Process Quality Control: Troubleshooting and Interpretation of Data, Fourth Edition
by Ellis R. Ott, Edward G. Schilling, and Dean V. Neubauer.

Brenda Bishop
U.S. Liaison to TC 69/WG3
ASQ CQE, CQA, CMQ/OE, CRE, SSBB, CQIA
Belleville, Illinois