Measurement System Analysis

ISO/IEC 17025:2017 General requirements for the competence of testing and calibration laboratories

Question:

Is there ever an exception to the rule about needing full Measurement System Analysis for any instrument placed in the Evaluation/Measurement Technique column on the control plan? If an instrument is listed on the control plan, does it HAVE to have gage repeatability and reproducibility (GR&R) studies done, in addition to having to prove stability? Please base your answer on ISO 9001 and TS 16949 requirements, and note whether there is a difference between them for this requirement.

Answer:

Thank you for this interesting question. Clause 7.6 of ISO 9001:2008 makes most of this fairly clear. Any monitoring and measuring equipment used to verify conformity of product must “be calibrated or verified, or both, at specified intervals, or prior to use. . .” Note the phrase “at specified intervals,” which highlights the importance of calibration cycles. Your organization can determine what those cycles will be based on the stability of the measuring tool, frequency of use, working conditions, and so on. For example, if you were using a micrometer to check close-tolerance parts and you found it a good practice to measure the parts frequently, that would be a contributing factor in the decision process. If the working conditions also included a lot of cutting fluids, or perhaps a good deal of metal dust, another factor is added to the decision process.

What I am driving at is this: once you have determined that the product conformity you are checking is good and/or consistent and that your sample frequency is satisfactory, you have no definite requirement for GR&R studies on the measuring equipment. The calibrations and/or verifications you do must be performed with equipment that is traceable to international or national measurement standards. If you use working standards as gages to check measuring equipment throughout production, and those standards are traceable, then you are doing fine. The processes you use to verify the tools, and any in-process measuring practices, should be documented in work instructions, or even with photographs or flow charts.

In the second part of your question, you ask if there is a difference between ISO 9001 and TS 16949. I reference section 7.6.1 of TS 16949, where it is stated plainly:

7.6.1  Measurement System Analysis 

Conduct statistical studies to analyze the variation present in the results of each type of measuring and monitoring device (MMD) that is referenced in the control plan.

Use analytical methods and acceptance criteria that:

  • conform to the methods and criteria in customer reference manuals (such as the AIAG MSA manual), or
  • use other analytical methods and acceptance criteria, if approved by the customer.

This is an automotive sector specific QMS standard. Herein it is necessary to consider safety and liability in everything you do. So, Gage R&R’s are a common practice. Nonetheless, the necessity for these is dictated by individual processes. Some may need them, some may not.

So, if an instrument is listed on YOUR control plan, GR&R studies become a requirement based on all the criteria I’ve noted above. A gage with proven stability is most often safe from that requirement under ISO 9001, but TS 16949 has more extensive requirements.

Bud Salsbury, CQT, CQI

Difference Between ISO/IEC 17025 and ISO 10012

Q: I am updating the instrumentation section of a product fabrication specification to replace a cancelled military specification (MIL-STD-45662) that specified calibration systems requirements. I am looking for an industry standard that provides requirements and guidance for documenting our established schedules and procedures for all of our measuring and test equipment and measurement standards.

I am looking into ANSI/ISO/ASQ Q10012-2003: Measurement management systems — Requirements for measurement processes and measuring equipment and ISO/IEC 17025:2005: General requirements for the competence of testing and calibration laboratories, and I would like guidance on the usage and application of these standards.

A: The two standards in question, ISO 10012 and ISO/IEC 17025, have different scopes.

While the scope of both documents includes language that can perhaps cause confusion, what follows is the salient text from both that illuminates the difference between the two.

From the scope of ISO 10012:

“It specifies the quality management requirements of a measurement management system that can be used by an organization performing measurements as part of the overall management system, and to ensure metrological requirements are met.”

From scope of ISO 17025:

“This International Standard is for use by laboratories in developing their management system for quality, administrative and technical operations.”

ISO 10012 focuses on the requirements of the measurement management system. You can consider it a system within the quality management system. It defines requirements relevant to the measurement management system in language that may illustrate interrelations to other parts of an overall quality management system.

ISO 10012 is a guidance document and is not intended for certification. An organization, for example, could have a quality management system that is certified to ISO 9001:2008. Even if the organization chooses to adhere to the requirements of ISO 10012, certification to ISO 9001 does not imply certification to the requirements of ISO 10012.

ISO 17025 describes the requirements for a quality management system that can be accredited (a process comparable to, but different from, certification). It encompasses all aspects of the laboratory.

The competence referred to in the title of the standard relates to the competence of the entire system – not just training of personnel. It addresses such factors as contracts with customers, purchasing, internal auditing, and management review of the entire quality management system – ISO 10012 does not.

In summary, ISO 10012 is a guidance document that addresses one element (namely, management of a measurement system) of a quality management system. ISO 17025 defines requirements for an entire quality management system that can be accredited.

Denise Robitaille
Vice Chair, U.S. TAG to ISO/TC 176 on Quality Management and Assurance
SC3 Expert – Supporting Technologies

Related Content:

Expert Answers: Metrology Program 101, Quality Progress

Question and answer related to defining an organization’s metrology program.

Measure for Measure: Managing the Measurement System, Quality Progress

Discussion related to the importance and timing of equipment calibration.

10 Quality Basics, Quality Progress

Correctly applied measurement, wherever and however it occurs, is an essential element of a successful business QMS.

Standards Column: Using the Whole ISO 9000 Family of Quality Management System Standards, Quality Engineering

There is a great deal of richness in the ISO 9000 family of documents and it is a shame for users to not know about and take advantage of the full range of possibilities.

Gage R&R Study on a Torque Wrench

 


Q: I need information on performing a Gage R&R on a torque wrench. We are using the wrench to check customer parts.

A: For reference on both variable and attribute Gage R & R techniques, a good source is the Automotive Industry Action Group (AIAG) Measurement Systems Analysis (MSA) publication.

The traditional torque wrench is a “generate” device in the sense that it generates a torque to tighten or loosen a fastener (a nut or a bolt, etc.), so in a strict sense it is not a “measurement” device. Both preset and settable torque wrenches are set to a torque value and then used to tighten or loosen a fastener. When loosening a fastener, the wrench can indicate how much torque is required to break the fastener loose. Usually, clockwise motion is for tightening and counterclockwise motion is for loosening.

To conduct a variable Gage R & R study on a torque wrench, we would need a “measurement” device which would be a torque checker with a capability to register peak (or breaking) torque. Many such devices are commercially available and if a facility is using torque wrenches, it is a good idea to have one of these to verify performance of torque wrenches. Such a device is usually calibrated (ensure traceable accredited calibration) and provides reference for proper working of torque wrenches.

Now, one would conduct a Gage R&R study using the typical format:

  • Two or more appraisers.
  • 5 to 10 repeat measurements at a preset torque by each appraiser, replicated 2 to 3 or more times.

A word of caution on torque wrenches and setting up the Gage R&R:

  • The measurement is operator dependent, so operators need to be trained in proper torque wrench usage techniques.
  • Ensure that the torque is set between every measurement on a settable torque wrench, to simulate actual usage between repeated readings.
  • Ensure the number of repeated readings and replicated readings is the same for all appraisers.

Templates for data collection are available in spreadsheet format from commercial providers. Alternatively, one can design the template from the MSA publication referenced above. The data would be analyzed using the guidelines from the MSA publication.
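As a rough illustration (my own sketch, not taken from the MSA manual itself), the average-and-range calculation that such templates automate looks like this. The k1 and k2 constants shown are the published d2*-based factors for three trials and two appraisers, and the data layout is an assumption:

```python
# Sketch of the average-and-range Gage R&R method (AIAG MSA style).
# data: {appraiser: [[trial1, trial2, trial3], ...one list per part]}
# k1, k2: d2*-based constants; values shown assume 3 trials, 2 appraisers.
def gage_rr(data, k1=0.5908, k2=0.7071):
    # Repeatability (EV): average within-part range, scaled by k1.
    ranges = [max(t) - min(t)
              for trials in data.values() for t in trials]
    r_bar = sum(ranges) / len(ranges)
    ev = r_bar * k1

    # Reproducibility (AV): range of appraiser averages, scaled by k2,
    # with the repeatability contribution subtracted out.
    means = [sum(sum(t) for t in trials) / sum(len(t) for t in trials)
             for trials in data.values()]
    x_diff = max(means) - min(means)
    n_parts = len(next(iter(data.values())))
    n_trials = len(next(iter(data.values()))[0])
    av_sq = (x_diff * k2) ** 2 - ev ** 2 / (n_parts * n_trials)
    av = av_sq ** 0.5 if av_sq > 0 else 0.0

    # Combined gage R&R: root-sum-square of EV and AV.
    grr = (ev ** 2 + av ** 2) ** 0.5
    return ev, av, grr
```

If both appraisers produce identical readings, AV collapses to zero and GRR is pure repeatability, which is a quick sanity check on any template you build.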

Good luck with the Gage R&R! It is a very useful and worthwhile exercise in understanding your measurement process.

Dilip A Shah
ASQ CQE, CQA, CCT
President, E = mc3 Solutions
Chair, ASQ Measurement Quality Division (2012-2013)
Secretary and Member of the A2LA Board of Directors (2006-2014)
Medina, Ohio
www.emc3solutions.com/

Related Content:

Explore the open access resources below for more information.

Comparing Variability of Two Measurement Processes Using R&R Studies, Journal of Quality Technology

Quality Quandaries – A Gage R&R Study in a Hospital, Quality Engineering

Improved Gage R&R Measurement Studies, Quality Progress

ISO 9001 7.6a Calibration and Traceability


Q: ANSI/ISO/ASQ Q9001-2008 Quality management systems — Requirements, clause 7.6a states, in part:

“Where necessary to ensure valid results, measuring equipment shall

a) be calibrated or verified, or both, at specified intervals, or prior to use, against measurement standards traceable to international standards or national measurement standards…”

Does this sub clause require that the calibration process be performed in accordance with international or national calibration procedures? Or does it require that the measurement standards (hardware) used for calibration be traceable to international or national measurement standards (hardware)?

A: The standard is clear that it is the traceability of the measurement standards used for calibration that is required.

Note: By definition, the traceability chain must eventually lead to an accredited laboratory that follows procedures such as those set forth in ISO/IEC 17025:2005 General requirements for the competence of testing and calibration laboratories.

Your internal calibration processes can best be guided by acquiring a copy of ANSI/NCSL Z540.3.

I hope this helped answer your questions.

Bud Salsbury
ASQ Senior Member, CQT, CQI

Related Content:

Open access articles from ASQ

Measure for Measure: Improved Gage R&R Measurement Studies, Quality Progress

Back to Basics: Assessing Failure — The effect of faulty measurement on previously produced products, Quality Progress

The Prediction Properties of Classical and Inverse Regression for the Simple Linear Calibration Problem, Journal of Quality Technology


Variation in Continuous and Discrete Measurements

Q: I would appreciate some advice on how I can fairly assess process variation for metrics derived from “discrete” variables over time.

For example, I am looking at “unit iron/unit air” rates for a foundry cupola melt furnace in which the “unit air” rate is derived from the “continuous” air blast, while the unit iron rate is derived from input weights made at “discrete” points in time every 3 to 5 minutes.

The coefficient of variation (CV) for the air rate is exceedingly small (good) due to its “continuous” nature, but the CV for the iron rate is quite large because of its “discrete” nature, even when I use moving averages over extended periods of time. Hence, that seemingly large variation for the iron rate carries over when computing the unit iron/unit air rate.

I think the discrete nature of some process variables results in unfairly high assessments of process variation, so I would appreciate some advice on any statistical methods that would more fairly assess process variation for metrics derived from discrete variables.

A: I’m not sure I fully understand the problem, but I do have a few assumptions and possibly a reasonable answer for you. As you know, when making a measurement using a discrete scale (red, blue, green; on/off; or similar), the item being measured is placed into one of the “discrete” buckets. For continuous measurements, we use some theoretically infinite scale to place the unit’s location on that scale. For this latter type of measurement, we are often limited by the accuracy of the equipment in the level of precision the measurement can achieve.

In the question, you mention measurements of air from the “continuous” air blast. The air may be moving without interruption (continuously), yet the measurement is probably recorded periodically unless you are using a continuous chart recorder. Even so, matching up the reading with the unit iron readings every 3 to 5 minutes does create individual readings for the air value. The unit iron reading is a weight-based reading (I am not sure what is meant by “derived,” but let’s assume the measurement comes from a weight scale of some sort). Weight, like mass or length, is an infinite-scale measurement, limited by the ability of the specific measurement system to differentiate between sufficiently small units.

I think you see where I’m heading with this line of thought. The variability with the unit iron reading may simply reflect the ability of the measurement process. I do not think either air rate or unit iron (weight based) is a discrete measurement, per se. Improve the ability to measure the unit iron and that may reduce some measurement error and subsequent variation. Or, it may confirm that the unit iron is variable to an unacceptable amount.

Another assumption I could make is that the unit iron is measured for the batch that then has unit air rates regularly measured. The issue here may just be the time scales involved. Not being familiar with the particular process involved, I’ll assume some manner of metal forming, where a batch of metal is created then formed over time where the unit air is important. And, furthermore, assume the batch of metal takes an hour for the processing. That means we would have about a dozen or so readings of unit air for the one reading of unit iron.

If you recall, the standard error of an average is the standard deviation divided by the square root of n (the number of samples). In this case, there is about a 10-to-1 difference in n (roughly 10 readings of unit air for each reading of unit iron). Over many batches of metal, the ratio of readings remains at or about 10 to 1, thus affecting the relative stability of the two coefficients of variation. Get more readings for unit iron, or reduce the unit air readings, and it may just even out. Or, again, you may discover the unit iron readings and the underlying process are just more variable.
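To illustrate the point numerically (a hypothetical simulation, not data from the actual process), averaging readings in batches shrinks the coefficient of variation of the resulting metric by roughly the square root of the batch size:

```python
# Illustration: the standard error of an average falls with sqrt(n),
# so a metric built from fewer (or unaveraged) readings looks more variable.
import random
import statistics

def cv_of_batch_means(readings, batch_size):
    """Coefficient of variation of means taken over consecutive batches."""
    means = [statistics.mean(readings[i:i + batch_size])
             for i in range(0, len(readings) - batch_size + 1, batch_size)]
    return statistics.stdev(means) / statistics.mean(means)

random.seed(42)
# Simulated process readings: mean 100, standard deviation 5.
readings = [random.gauss(100, 5) for _ in range(10_000)]

cv1 = cv_of_batch_means(readings, 1)    # each reading stands alone
cv10 = cv_of_batch_means(readings, 10)  # averaged, like frequent air readings
```

With batches of 10, the CV drops by a factor of about sqrt(10), even though the underlying process variation is unchanged; the same effect inflates the apparent variation of the less frequently sampled unit iron rate.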

From the scant information provided, I think this provides two areas to conduct further exploration. Good luck.

Fred Schenkelberg
Voting member of U.S. TAG to ISO/TC 56
Voting member of U.S. TAG to ISO/TC 69
Reliability Engineering and Management Consultant
FMS Reliability
www.fmsreliability.com

Accuracy of Measurement Equipment


Q: I work in an incoming quality assurance department. In our recent audits, the auditor claimed that high-precision machines such as coordinate measuring machines (CMM) and touchless measurement systems should have higher Gage Repeatability and Reproducibility (GR&R) values compared to less precise equipment such as hand-held calipers and gages. If this is the case, does Measurement System Analysis (MSA) cater to this by providing guidance on the recommended values for each type of measuring equipment in general? If not, should we still stick to the general MSA rules, regardless of the equipment’s precision?

A: When you noted “higher GR&R values,” that in itself can be a bit confusing because the GR&R value is a percentage of errors caused by repeatability and reproducibility variation. The higher the number, the more variation present — and the worse the measurement method is.

As far as I know, MSA doesn’t give specific guidance for recommended values depending on the measuring equipment. Also, I’m not sure of the validity of saying that a CMM is consistently more accurate than other equipment, such as calipers. Although the equipment may theoretically be more accurate, how you stage the part to be measured will also affect the amount of variability, as will the feature being measured.  Consequently, even though the CMM is theoretically more accurate, there may be 20 percent GR&R, mainly due to the holding fixture or the feature being measured. I’m sure you get the point here.

As far as I know, MSA manuals do discuss what the major inputs should be when deciding the amount of acceptable variation. They strongly recommend looking at each application individually to verify what is required and how the measurement is going to be used.

Another thing to consider is whether you are looking at the GR&R based on total variation or on the specified tolerance. Tolerance-based is more commonly used than total variation, but that may depend on the type of industry.

One thing I would like to mention is that if you have three people take 10 measurements each and then dump the information into one of the common software programs, it will not matter whether they take the 10 measurements with a dial caliper or with a CMM. The instrument’s “accuracy” should not be the deciding factor; the tolerance basis should be.
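As a small illustration of the two denominators mentioned above (my own sketch with hypothetical numbers; the 6-sigma spread on the tolerance basis follows the convention in current MSA editions):

```python
# %GRR can be reported against total observed variation or against the
# specified tolerance width; the same gage can score differently on each.
def grr_percentages(grr_sd, total_sd, usl, lsl):
    """grr_sd: gage R&R standard deviation; total_sd: total variation."""
    pct_total_variation = 100 * grr_sd / total_sd
    # Tolerance basis: 6-sigma gage spread as a fraction of tolerance width.
    pct_tolerance = 100 * 6 * grr_sd / (usl - lsl)
    return pct_total_variation, pct_tolerance
```

For example, a gage R&R standard deviation of 0.01 against a total variation of 0.05 gives 20% of total variation, but only 10% of a 0.6-wide tolerance, which is why the basis must be stated when quoting a %GRR.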

Also, ISO standards do not dictate GR&R values. If you do what your quality management system says you do, most auditors will not push such an issue. While some auditors may offer “opinions” and suggestions, such items are rarely cause for nonconformance findings.

I hope this helps answer your question.

Bud Salsbury
ASQ Senior Member, CQT, CQI

Editor’s picks: Read open access content on measurement from the ASQ Knowledge Center:

Measure for Measure: Improved Gage R&R Measurement Studies, Quality Progress

Comparing Variability of Two Measurement Processes Using R&R Studies, Journal of Quality Technology

Confidence Intervals for Misclassification Rates in a Gauge R&R Study, Journal of Quality Technology

Quality Quandaries – A Gage R&R Study in a Hospital, Quality Engineering

Visual Fill Requirements

Q: I work for a consumer products company where more than 60% of our products have a visual fill requirement. This means, aside from meeting label claim, we must ensure the fill level meets a visual level.

What is the industry standard for visual fills?

We just launched Statistical Process Control (SPC), and we notice that our products requiring visual fills show significant variability.

A: This is an interesting question. The NIST SP 1020-2 Consumer Package Labeling Guide and the Fair Packaging and Labeling Act, along with any other applicable industry standards, regulate how you must label a product “accurately.” However, it appears you have been burdened with a separate, and somewhat conflicting, requirement — a visual fill requirement.

In most cases, you probably cannot satisfy both requirements without variability. The laws and standards will direct labeling requirements with regard to accuracy, and your company is liable for that. If you choose to use visual fill standards for “in-process” quality assurance, then you would need a fairly broad range between the upper and lower acceptance limits.

Personally, I would use weights and measures as needed to meet customer and legal requirements. These are the data I would use for SPC records.

If your company has a need (or a desire) to use visual fill levels as a gage, then generating a work instruction telling employees where a caution level is would be a way to start. In other words, “If the visual level is above point A or below point B, immediately notify management.” If you are to remain compliant with what you put on a label, visual levels will change from run to run. Using them as a guide for production personnel can be a helpful tool, but they are not a viable SPC input.

Bud Salsbury
ASQ Senior Member, CQT, CQI

Editor’s Pick: Hear how Procter & Gamble developed a solution for setting appropriate targets for product filling processes in Setting Appropriate Fill Weight Targets—A Statistical Engineering Case Study from the April 2012 Issue of Quality Engineering.

Calibration of AutoCAD Software


Q: To what extent must an engineering firm, specializing in railway infrastructure and transportation, have its AutoCAD software “calibrated” or verified?

Also, what about software designed to calculate earthwork quantities for railway alignments laid out on topographic mapping for all levels of studies – pre-feasibility through preliminary engineering (not for final design, operation simulation and design dynamic system models)? This type of software is utilized by competent draft persons and engineers, but it is not verified prior to use or periodically calibrated.

We don’t confirm “the ability of computer software to satisfy the intended application…”

Your assistance or reference is appreciated

A: AutoCAD is considered “commercial off-the-shelf” (COTS) software. It is purchased without modification and cannot be modified by the end user. A similar example would be Excel spreadsheet software. COTS software by itself should be considered validated and used as is, provided it is configured per the software manufacturer’s instructions.

The functionality of the software (distance, volume, formulae and other functions) is fit to be used as intended. If an application is created using COTS software (Excel Templates, AutoCAD applications), then it must be validated and records of validation must be kept.
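One minimal way to produce such a validation record (a hypothetical sketch, not a prescribed method) is to run the COTS-based application against independently hand-calculated reference cases and keep the results:

```python
# Validate an application built on COTS software by comparing its output
# to independently calculated reference values, and keep the record.
def validate(app_fn, reference_cases, tol=1e-9):
    """Return a validation record: inputs, expected vs. actual, pass/fail."""
    record = []
    for inputs, expected in reference_cases:
        actual = app_fn(*inputs)
        record.append({"inputs": inputs, "expected": expected,
                       "actual": actual, "pass": abs(actual - expected) <= tol})
    return record

# Hypothetical example: an earthwork-volume template checked against a
# hand calculation using the average-end-area method.
def avg_end_area(area1, area2, length):
    return (area1 + area2) / 2 * length

cases = [((10.0, 12.0, 50.0), 550.0)]   # end areas 10 and 12, length 50
record = validate(avg_end_area, cases)
```

The retained record (inputs, expected values, actual values, pass/fail) is exactly the kind of validation evidence the answer says must be kept for applications built on COTS software.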

It should also be noted that the definitions of verification and validation are often not clearly understood, so I am repeating them here:

ISO/IEC Guide 99:2007—International vocabulary of metrology—Basic and general concepts and associated terms, defines these terms as:

Verification: provision of objective evidence that a given item fulfills specified requirements

Validation: verification, where the specified requirements are adequate for an intended use

Further explanation:

Validation is a quality assurance process of establishing evidence that provides a high degree of assurance that a product, service, or system accomplishes its intended requirements. This often involves acceptance of fitness for purpose with end users and other product stakeholders.

It is sometimes said that validation can be expressed by the query “Are you building the right product?” and verification by “Are you building it right?”

“Building the right thing” refers back to the user’s needs, while “Are we building the product right?” checks that the specifications are correctly implemented by the system. In some contexts, it is required to have written requirements for both as well as formal procedures or protocols for determining compliance.

Dilip A Shah
ASQ CQE, CQA, CCT
President, E = mc3 Solutions
Chair, ASQ Measurement Quality Division (2012-2013)
Secretary and Member of the A2LA Board of Directors (2006-2014)
Medina, Ohio
www.emc3solutions.com/

ISO 17025; Rounding Measurements


Q: At the lab I work for, certified to ISO 17025:2005 General requirements for the competence of testing and calibration laboratories, the documented quality assurance system does not allow the rounding of numbers. For example, the requirement for the weight of an adhesive material is 25 to 35 grams, and the actual weight is 24.6 grams.

The engineering member of the team feels this is acceptable because 25 grams is specified with two significant figures; 24.6 grams, expressed as two significant figures is 25 grams. If the intent was not to round off in the tenths place, the document would read “25.0” and rounding would be in the hundredths.

A: If the requirement (specification) is 25 to 35 grams, the need to specify accurately (24.6 grams) is not as critical and the number can be rounded to 25 grams. We would assume that the nominal desired value would be 30 grams. (Personal opinion: the 25 to 35 gram requirement is a fairly loose tolerance, but I do not know the application).

But, this raises more questions:

How was the weight measured? Was the reported value an average of repeated measurements? Was the measuring instrument capable of reading two or three significant digits? What was the measurement uncertainty of the measurement? Was the measurement uncertainty higher than the 25 to 35 grams requirement?

If the reported measurement is an average of n measurements made with a two-significant-digit measuring scale, the reported average is conventionally carried to one extra significant digit (three digits); with a three-significant-digit scale, carry the average to four significant digits.

If the measurement uncertainty were +/- 7 grams, the reported value of 24.6 grams could actually fall anywhere between 17.6 and 31.6 grams. This scenario would require a better measurement process with a smaller measurement uncertainty.

For general number rounding conventions, NIST offers Publication SP811 (appendix B.7 on page 43) which provides a good reference. It can be downloaded as a free PDF.
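For what it is worth, the round-half-to-even convention described in SP811 appendix B.7 can be applied directly with Python’s decimal module (values are passed as strings to avoid binary floating-point artifacts; the example values are mine):

```python
# Round half to even ("banker's rounding"), the convention recommended
# in NIST SP811 appendix B.7.
from decimal import Decimal, ROUND_HALF_EVEN

def round_sp811(value, places):
    """Round a numeric string to `places` decimal places, half to even."""
    quantum = Decimal(1).scaleb(-places)  # e.g. places=1 -> Decimal('0.1')
    return Decimal(value).quantize(quantum, rounding=ROUND_HALF_EVEN)
```

So 24.6 rounds to 25 grams, while 0.25 rounds to 0.2 and 0.35 rounds to 0.4: when the discarded digit is exactly 5, the result is taken to the nearest even digit, which avoids a systematic upward bias.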

Dilip A Shah
ASQ CQE, CQA, CCT
President, E = mc3 Solutions
Chair, ASQ Measurement Quality Division (2012-2013)
Secretary and Member of the A2LA Board of Directors (2006-2014)
Medina, Ohio
www.emc3solutions.com/

Using the 10:1 Ratio Rule and the 4:1 Ratio Rule

Q: Can you explain when I should be using the 10:1 ratio rule and the 4:1 ratio rule within my calibration lab? We calibrate standards as well as manufacturing gages.

A: First, let us use the right nomenclature. What the user means is a 10:1 or 4:1 test accuracy ratio (TAR), that is, using standards 4 or 10 times as accurate as the unit under test (UUT) to calibrate it.

Unfortunately, if we follow newer, metrologically accepted practices, the answer to the user’s question is never.

The TAR has been replaced by the test uncertainty ratio (TUR). The ANSI/NCSL Z540.3-2006 definition of TUR is:

“The ratio of the span of the tolerance of a measurement quantity subject to calibration, to twice the 95% expanded uncertainty of the measurement process used for calibration.”

*NOTE: This applies to two-sided tolerances.

The TUR can be represented as an equation:

TUR = (USL − LSL) / (2 × U95)

where USL − LSL is the span of the tolerance of the measurement quantity subject to calibration, and U95 is the 95% expanded uncertainty of the measurement process used for calibration.

Because advances in technology allow end users to purchase highly precise and accurate instrumentation, it becomes challenging to find standards 4 or 10 times as precise with which to calibrate that instrumentation while maintaining metrological traceability (defined in ISO Guide 99:2007 as the property of a measurement result whereby the result can be related to a reference through a documented unbroken chain of calibrations, each contributing to the measurement uncertainty).

Proper measurement uncertainty analysis of the UUT (including standards used with its uncertainty) identifies all the errors associated with the measurement process and ensures confidence that calibration is within the specification desired by the end user.

ISO/IEC 17025:2005 General requirements for the competence of testing and calibration laboratories, clause 5.10.4.2, third paragraph, also states that “when statements of compliance are made, the uncertainty of measurement shall be taken into account.”

This would also ensure confidence in the calibration employing the metrological and statistical practices recommended.

The other rule of thumb, not to be confused with this discussion, is to measure and calibrate with the right resolution. In the March 2011 Measure for Measure column in Quality Progress, I wrote more about resolution with respect to specification and measurement uncertainty. The general rule of thumb is that to measure or calibrate a device with two-decimal-place resolution, you need a device with at least three-decimal-place resolution.

This is a very good question, and it is unfortunately also one of the most misunderstood practices among many people performing calibration.

Dilip A Shah
ASQ CQE, CQA, CCT
President, E = mc3 Solutions
Chair, ASQ Measurement Quality Division (2012-2013)
Secretary and Member of the A2LA Board of Directors (2006-2014)
Medina, Ohio
www.emc3solutions.com/

Related Content: 

Measure for Measure: Avoiding Calibration Overkill, Quality Progress

History and overview of calibration science.

Evolution of Measurement Acceptance Risk Decisions, World Conference on Quality and Improvement

TAR, TUR, and GUM are examined.

Measure for Measure: Calculating Uncertainty, Quality Progress

Understanding test accuracy and uncertainty ratios.
