Sampling Employee Tasks

Question

We are collecting data on what tasks our employees in various departments do each day. We hope to eventually build a picture of what each employee does over the course of a year. At random times throughout the day, employees record the tasks they are doing. We are not sure how to calculate an appropriate sample size or how many data points to collect.

Answer

I wish there were a simple answer. We need to consider:

  • Does it make a difference how long an employee has been performing the job?
  • Are the departments equivalent in terms of what they are doing?
  • What is the difference that you want to detect?

The simple rule is that the smaller the difference you want to detect, the larger the sample size. By smaller, I mean a difference of less than one standard deviation in the data you have collected.

Random records are O.K., but really, shouldn’t you want a record for everyone for at least a week? That would give you an idea of what is done across the board, and then, if you are trying to readjust the workloads, the logs give you some basis for doing so.  My concern with the current method is that you may create a lot of extra paperwork to account for everyone for a certain period.

Additional information provided by the questioner:

The goal of this project is to establish a baseline of activities that occur in the department and to answer the question “What does the department do all day?”

The amount of time an employee has been performing a job does not make a difference. The tasks performed in each department are considered equivalent.  We are not accounting for the amount of time it takes to complete a task — we are more interested in how frequently that task is required/requested.

The results will be used to identify enhancement opportunities for our database and improvements to the current (and more frequent) processes.  The team will use a system (a form in Metastorm) to capture activities throughout the day.  The frequency is approximately 5 entries an hour, at random times within the hour.

I have worked with the department’s manager to capture content for the following fields using the form:

  1. Department (network management or dealer relation)
  2. Task (tier 1)
  3. Why (tier 2 – dependent on selection of task)
  4. Lessee/client name
  5. Application
  6. Country
  7. Source of request (department)

We are looking for a reasonable approach to calculate the sample size required for a 90 – 95% confidence level.  The frequency of hourly entries and length of period to capture the data can be adjusted to accommodate the resulting sample size.

Answer

The additional information helps.  Since you have no previous data and you are getting 5 samples an hour from each employee (assuming a 7-hour workday after taking out lunch and two breaks), that will give you approximately 35 samples a day. Assuming a five-day week, that gives you approximately 175 data points per employee.  This should be enough to estimate what is done in a week.

Now, you will probably want to extend this out another three weeks so that you have an idea of what happens over a month.  If you can assume that the data collected are representative of all months, then you should be O.K.  If you feel that some months are different, then you may want to take another sample during the months where you anticipate volumes different from the one you measured. You can use the sample size calculation for discrete data with the information you have already collected, and rather than looking at all employees, target your average performers.
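As a rough illustration, here is a minimal sketch of the standard sample-size formula for a proportion (discrete data), using the normal approximation. The function name and the example values are only illustrative; the z-values correspond to the 90% and 95% confidence levels mentioned above.

```python
import math

def sample_size_proportion(p_est, margin, confidence_z=1.645):
    """Approximate sample size for estimating a proportion (discrete data).

    p_est        -- rough estimate of the proportion of time a task occurs
                    (use 0.5 if unknown; it gives the largest, most conservative n)
    margin       -- half-width of the desired confidence interval, e.g. 0.05
    confidence_z -- z-value for the confidence level (1.645 ~ 90%, 1.96 ~ 95%)
    """
    n = (confidence_z ** 2) * p_est * (1 - p_est) / margin ** 2
    return math.ceil(n)

# Example: estimate task frequency to within +/- 5 percentage points
print(sample_size_proportion(0.5, 0.05))          # about 271 observations at 90%
print(sample_size_proportion(0.5, 0.05, 1.96))    # about 385 observations at 95%
```

These counts are totals of observations, so at roughly 175 entries per employee per week, pooling a few employees or a few weeks reaches them quickly.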

Jim Bossert
SVP Process Design Manager, Process Optimization
Bank of America
ASQ Fellow, CQE, CQA, CMQ/OE, CSSBB, CMBB
Fort Worth, TX

For more on this topic, please visit ASQ’s website.

Guidance on Z1.4 Levels

Q: My company is using ANSI/ASQ Z1.4-2008 Sampling Procedures and Tables for Inspection by Attributes, and we need some clarification on the levels and the sampling plans.

We are specifically looking at Acceptable Quality Limits (AQLs) of 1.5, 2.5, 4.0, and 6.5 for post-manufacturing inspection of apparel, footwear, home products, and jewelry.

Do you have any guidelines to determine when and where to use levels I, II, and III? I understand that level II is the norm and used most of the time. However, we are not clear on levels I and III versus normal, tightened, and reduced.

Are there any recommended guidelines that correlate between levels I, II, III and single sampling plans, normal, tightened, and reduced?

The tables referenced in the standard show single sampling plans for normal, tightened, and reduced inspection. Can you confirm that these are for level II (pages 11, 12, 13)?

Do you have any tables showing the levels I and III for normal, tightened, and reduced?

A: Level I is used when you need less discrimination or when you are not as critical about the acceptance criteria. It is usually used for cosmetic defects, where you may have color differences that are not noticeable in a single unit. Level III is used when you want to be very picky.  It is a more difficult level to gain acceptance with, so it needs to be used sparingly or it can cost you a lot of money.

Each level has a normal, tightened, and reduced scheme.  I am not sure what you are asking with respect to the correlation between levels I, II, and III and normal, tightened, and reduced.  The goal is simply to inspect the minimum amount needed to reach an accept or reject decision. Since inspection costs money, we do not want to do too much of it. Likewise, we do not want to reject much, since that also costs money, both in product availability and extra shipping.

Yes, the tables on pages 11, 12, and 13 are for normal, tightened, and reduced inspection, but if you look at the sample size code letters, you will note that in most cases there are different letters for levels I, II, and III.  Accept and reject numbers are based on the defect level (AQL) and the sample size. The switching rules tell you when you can switch to either reduced or tightened inspection. The tables handle not just levels I, II, and III, but also the special levels.
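To make the accept/reject mechanics concrete, here is a minimal sketch of how a single sampling plan disposes of a lot and how a binomial model gives the probability of acceptance. The plan values n and c below are purely illustrative; the real sample size code letter, sample size, and acceptance number come from the Z1.4 master tables for your lot size, inspection level, and AQL.

```python
from math import comb

def prob_accept(n, c, p):
    """Probability that a lot with fraction nonconforming p is accepted by a
    single sampling plan: accept if the number of nonconforming units found
    in a sample of n is <= c (binomial model)."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

def disposition(defects_found, c):
    """Accept/reject decision for one inspected sample."""
    return "accept" if defects_found <= c else "reject"

# Illustrative plan only -- not taken from the Z1.4 tables.
n, c = 125, 7
print(disposition(5, c))                    # accept
print(round(prob_accept(n, c, 0.025), 3))   # chance of accepting a 2.5% nonconforming lot
```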

Jim Bossert
SVP Process Design Manager, Process Optimization
Bank of America
ASQ Fellow, CQE, CQA, CMQ/OE, CSSBB, CMBB
Fort Worth, TX

Operational Qualification (OQ) Challenges; Cpk vs. AQL

Q: We’re completing a validation of a plastic extrusion process, which has raised a few questions with me.

This validation exercise encompasses the installation qualification (IQ), operational qualification (OQ), and the performance qualification (PQ). The IQ is self-explanatory, but the OQ is challenging. The process is dependent on the batch resin properties, which vary enough that the extrusion processing parameters cannot be set up so that good parts are always produced. One resin batch can use processing parameters that will not work with the next batch. A justification will be written and included in the documentation package to explain this. Does the inability to define an operating window void or limit the validation?

My second question has to do with PQ acceptance criteria. The PQ will be three production runs using at least two different material resins (the largest source of variation). While production acceptance will be on an AQL=1.0, C=0 basis, these initial validation lots will be accepted on a process capability index (Cpk) level. While on the surface the acceptance difference may seem benign, it is causing some changes. The tolerance is such that the process routinely passes the Acceptable Quality Limit (AQL) test criteria but fails a Cpk requirement. Is it possible to accept PQ runs as they would be accepted in production?

A related question is the power of a Cpk vs. an AQL sampling plan. A Cpk value can be calculated using the same number of samples on a 100-foot run vs. a 10,000-foot run, while an AQL sampling plan is size dependent. Is there a criterion on sample size, or a rule of thumb, as to when one plan should be used over the other?

A: First, the plastic extrusion process is always a tricky one to qualify, simply because each new batch of resin requires adjustments no matter how controlled the storage conditions are. So yes, you will have to define what adjustments your organization has to make and how big an operating window you need to transition from batch to batch.  If you can demonstrate that it can be resolved within a certain time (say, 15-30 minutes), then it should be OK for validation.  This assumes that the customer is in agreement with what your company is doing.

The second question is a bit more difficult, in that Cpk assumes the process is in control and performing at a steady rate.  Cpk is a long-term measure and requires the use of control charts to really control the process.  You may be able to work with your customer for help getting validated to the Cpk requirement, but you have to show the plan to get there.  In the past, some customers have been willing to provide an extended period to attain validation. You may want to talk to your customer representative to find out what help they can provide.

The third question gets to the heart of the matter: the question of using Cpk vs. AQL.  Cpk is a measure of process capability, and AQL is a measure of long-term outgoing quality.  Are they the same?  In some studies I did early on with Cpk and specifications, it was not always clear.  I have not seen any criterion on sample size for when to use Cpk vs. AQL.
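For comparison, here is a minimal sketch of how Cpk is computed from sample data; the specification limits and measurements are made up for illustration. It also reflects the caveat above: the index is only meaningful if the process is stable and roughly normal.

```python
import statistics

def cpk(data, lsl, usl):
    """Process capability index from a sample.

    Assumes the data come from a stable (in-control), roughly normal process;
    otherwise the index is not meaningful.
    """
    mean = statistics.mean(data)
    sigma = statistics.stdev(data)  # sample standard deviation
    return min(usl - mean, mean - lsl) / (3 * sigma)

# Illustrative numbers only: tubing wall thickness in mm
measurements = [1.02, 0.98, 1.01, 0.99, 1.03, 1.00, 0.97, 1.02, 1.01, 0.99]
print(round(cpk(measurements, lsl=0.90, usl=1.10), 2))
```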

Jim Bossert
SVP Process Design Manager, Process Optimization
Bank of America
ASQ Fellow, CQE, CQA, CMQ/OE, CSSBB, CMBB
Fort Worth, TX

For more on this topic, please visit ASQ’s website.

Z1.4 2008: AQL, Nonconformities, and Defects Explained

Q: My question is regarding nonconformities per hundred units and percent nonconforming.  This topic is discussed in ANSI/ASQ Z1.4-2008 Sampling Procedures and Tables for Inspection by Attributes under sections 3.2 and 3.3 on page 2.  Regardless of the explanations provided, I find myself puzzled as to what the following numbers refer to in Table II-A, “Single sampling plans for normal inspection (Master table).”

Specifically, I am having problems understanding the numbers just above the acceptance and rejection numbers (for example, 0.010, 0.015, 0.025, 1000).  Do these represent percent nonconformities, and if so, does 0.010 = 0.01%? And how can 1000 = 1000%?

As you may see, I am very confused by these numbers, and I was hoping to have some light shed on this subject. Thank you for your answers in advance.

A: The numbers across the top of the table are just as the questioner stated: 0.010 = 0.01% defective.  That is the acceptable quality limit (AQL) number.  Generally, most companies want 1% or less, but as noted in the table, the scale does go up to 1000. It is extreme to think of something being more than 100%, but AQLs above 10 are expressed as nonconformities per hundred units rather than percent defective, so a single unit can carry more than one nonconformity. Consider a minor or cosmetic defect that does not affect the function but just does not look good; scratch-and-dent sales are a common result of these higher numbers.
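A small arithmetic sketch may make the scale at the top of the table concrete; the cutoff at 10 reflects the standard's convention that larger AQLs are expressed only in nonconformities per hundred units.

```python
# Reading the AQL scale at the top of Table II-A (illustrative arithmetic only)
for aql in [0.010, 1.5, 6.5, 1000]:
    if aql <= 10:
        # Small AQLs can be read as percent nonconforming:
        # an AQL of 0.010 means about 1 nonconforming unit in 10,000.
        print(f"AQL {aql}: {aql}% nonconforming, i.e. {aql / 100:g} per unit")
    else:
        # AQLs above 10 are nonconformities per hundred units:
        # an AQL of 1000 means an average of 10 nonconformities per unit.
        print(f"AQL {aql}: {aql / 100:g} nonconformities per unit on average")
```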

The AQL number is the worst quality level you would expect to find at this level.  The thing to remember is that these plans work best when the quality is very good or very bad.  If you are right at the limit, you could end up taking more samples and spending a lot of time in tightened inspection.

Many people use percent nonconforming instead of percent defective, simply because of the connotation of “defective.” No one wants to say they shipped a defective product.  They may have shipped a nonconforming product that the customer could not use simply because their requirements were too strict, where another customer may be able to use the same thing because they have less stringent requirements.

Jim Bossert
SVP Process Design Manager, Process Optimization
Bank of America
ASQ Fellow, CQE, CQA, CMQ/OE, CSSBB, CMBB
Fort Worth, TX

For more on this topic, please visit ASQ’s website.

Six Sigma Green Belt Projects

Q: I teach a course called “Statistical Methods of Six Sigma” at an engineering college. I’m preparing students to take the ASQ Certified Six Sigma Green Belt exam if they are interested (it is not a mandatory requirement of my class).

Here’s my question — most of my students already have jobs lined up after graduation. Some of them are going to places where Six Sigma programs are already fully established. I do have one particular student who is expected to implement a Six Sigma program at the company that she is going to. It’s a small company, and they don’t already have a Six Sigma program in place.

If she passes the ASQ Green Belt exam and receives her Six Sigma Green Belt Certification, how does she go about getting a project approved if she’s working for a company that doesn’t already have existing Belts?

A: To ask a Green Belt to implement a Six Sigma program is not only ambitious, but also somewhat risky.  Green Belts have the least amount of experience in Six Sigma. Regardless, what this person should do is look at the company and, with an executive mentor, decide what a good first project would be.  The candidate should look for something that is important to the company and has an impact on the business.  It should be something that requires some work and whose answer is not obvious to just anyone looking at the project.

Q: I think the expert more accurately posed my real question: how does a new grad working for a company that doesn’t currently have a Six Sigma Black Belt program find an executive mentor to approve or qualify her project?

I agree that she will need a Black Belt, but who will (or can) certify her project if there is not an existing Black Belt or Master Black Belt at her place of work? (It is a small consulting firm for medical hospitals.)

A: I recommend that she approach her local ASQ Section and inquire about mentors.

Jim Bossert
SVP Process Design Manager, Process Optimization
Bank of America
ASQ Fellow, CQE, CQA, CMQ/OE, CSSBB, CMBB
Fort Worth, TX

For more on this topic, please visit ASQ’s website.

Z1.9 Sigma for Variability Known Method

Q: I have a question about Z1.9-2008: Sampling Procedures and Tables for Inspection by Variables for Percent Nonconforming. I have seen there is a “Variability Known” method. However, I don’t know how to get a sigma, so I don’t know how to use this method. Could you please share how to get a sigma?

A: To get a sigma to use for the Variability Known method, you need data that have been collected over a period of time; the sigma is the standard deviation calculated from those data. The rule of thumb is at least six months of data with at least 50 data points.  Depending on the process, if more than 1,000 data points have been collected, the time limitation goes away, since you have an extremely large data set to work with.

Q: During the 6 months, the process should be under control, right? And the data should be normally distributed, right? Is any process control needed? And how do I maintain this process and sigma?

A: Yes, the assumption is that the process is normally distributed and stable.  That means some type of process control is being used.  Ideally this would be an X-bar and R or an X-bar and S chart. If an out-of-control situation occurs and you can bring the process back into control, then you are OK.

Q: Could you tell me the meaning of “data point”? As you know, during the 6 months we will get many batches. For each batch, we will have a certificate of analysis (COA) and a lot of data. I am not sure how you combine data from different batches. How do you calculate this?

A: A data point, in the simplest sense, could be the statistics associated with a batch: a mean and a standard deviation or range. Each batch gives you a new set of data points. You can combine the time-based data in a couple of different ways:

1. You can take each batch and use the means and plot them on an X-bar and R or an X-bar and S-chart.
2. You can take the raw data and combine it into one large distribution.

The preferred way is the control chart approach, since the plot tells you whether the process is stable.
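Here is a minimal sketch of that control chart approach, assuming equal-size subgroups (batches) of five measurements each; the data are made up, and the chart constants shown are the standard X-bar and S chart factors for subgroups of n = 5. If the chart shows no out-of-control points, the resulting sigma estimate is a reasonable candidate for the “known” sigma in the Z1.9 variability-known plans.

```python
import statistics

# Illustrative batch data: each sublist is one batch's measurements (n = 5 each)
batches = [
    [10.1, 9.9, 10.0, 10.2, 9.8],
    [10.0, 10.1, 9.9, 10.0, 10.1],
    [9.8, 10.0, 10.2, 9.9, 10.1],
    [10.1, 10.0, 9.9, 10.2, 10.0],
]

xbars = [statistics.mean(b) for b in batches]   # batch means
sds   = [statistics.stdev(b) for b in batches]  # batch standard deviations

xbar_bar, s_bar = statistics.mean(xbars), statistics.mean(sds)

# X-bar and S chart constants for subgroups of n = 5
A3, B3, B4, c4 = 1.427, 0.0, 2.089, 0.9400

x_ucl, x_lcl = xbar_bar + A3 * s_bar, xbar_bar - A3 * s_bar
s_ucl, s_lcl = B4 * s_bar, B3 * s_bar

stable = (all(x_lcl <= x <= x_ucl for x in xbars)
          and all(s_lcl <= s <= s_ucl for s in sds))

# If no points fall outside the limits, use the pooled estimate as "known" sigma
sigma_known = s_bar / c4
print(f"stable: {stable}, estimated sigma: {sigma_known:.3f}")
```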

Jim Bossert
SVP Process Design Manager, Process Optimization
Bank of America
ASQ Fellow, CQE, CQA, CMQ/OE, CSSBB, CMBB
Fort Worth, TX

For more on this topic, please visit ASQ’s website.

Random Sampling

Q: When inspecting components on tape and reel, pulling parts at random can present a problem in a pick-and-place operation.  Also, once removed, the samples would have to be put back on the tape for use.

Is there a practical or common sense procedure to follow?

A: This is not an uncommon problem, and I have been in a similar situation. What we did was inspect at the beginning and the end of each tape, so we were not disrupting the process.  It worked pretty well with the suppliers we had. But before doing that, we certified our suppliers by going to their facilities and performing a process audit to make sure the process met our requirements.

Jim Bossert
ASQ Fellow

For more on this topic, please visit ASQ’s website.