Calibration Questions

Automotive inspection, TS 16949, IATF 16949

Question

I work at a hydraulic cylinder manufacturer. The company has homemade thread and ring gages in house that we use for production. They are not sent out for calibration; instead, they are checked once a year against a homemade master, which does not get sent out either. I have been here six months and am thinking these gages and masters are a violation of ISO. Am I correct?

Answer

The short answer is yes. The intent of ISO 9001:2015, subclause 7.1.5, is to ensure that your company determines and provides suitable resources for valid and reliable monitoring and measuring results when evaluating the conformity of your products. The intent of subclause 7.1.5.2 is to ensure that your company provides measurement traceability when it is a requirement, or when your company determines it to be necessary to have confidence in the validity of the measurement results. It appears that your current practice for controlling the homemade thread and ring gages does not fully meet those purposes. This is how I would address the situation:

  1. Assign a unique identifier to each homemade thread and ring gage. Maybe you can do that through your Document Control process.
  2. Ensure that those gages are protected from deterioration or damage when they are not in use.
  3. Have the homemade master measured by a calibration service able to provide you with reliable, certified measurements. That will make the master traceable to national or international standards and will allow you to demonstrate that it is fit for its intended purpose. You will then be able to use it as the standard during the in-house calibration of the rest of the gages.
  4. Conduct an in-house calibration of each gage you use in production. Issue an in-house calibration certificate for each piece, indicating on that document how you achieve traceability to NIST or equivalent. If possible, identify the error for each individual measurement you perform during the calibration. Do not forget to include a statement indicating whether the gage was found suitable or unsuitable for use; that will demonstrate that each gage is fit for its intended purpose. (A minimal sketch of such a record follows this list.)
  5. Include ALL the gages in your calibration program. Make them subject to all the applicable provisions of your Quality System.
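
As an illustration only, here is a minimal Python sketch of what an in-house calibration record for one gage might capture. The function name, gage identifier, readings, tolerance, and traceability wording are all hypothetical assumptions, not a required format.

```python
# Hypothetical sketch of an in-house calibration record for one gage,
# checked against the externally calibrated master. Names, readings,
# and the tolerance are illustrative assumptions only.

def calibrate_gage(gage_id, readings, master_value, tolerance):
    """Compare readings taken on the certified master with its certified value."""
    errors = [round(r - master_value, 4) for r in readings]   # error of each measurement
    suitable = all(abs(e) <= tolerance for e in errors)       # fitness-for-use decision
    return {
        "gage_id": gage_id,
        "traceability": "Master calibrated by an accredited lab, traceable to NIST",
        "errors": errors,
        "statement": "suitable for use" if suitable else "unsuitable for use",
    }

# Example: thread gage "TG-001" checked three times against a master certified at 12.700 mm
print(calibrate_gage("TG-001", [12.701, 12.699, 12.702], master_value=12.700, tolerance=0.005))
```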

This approach will allow you to demonstrate that your thread and ring gages are properly controlled and maintained. Even if controlling those gages has not been an issue in the past, there is no guarantee that the situation will remain the same in the future. That is managing risk 😉

Aura Stewart

For more on this topic, please visit ASQ’s website.

Using the 10:1 Ratio Rule and the 4:1 Ratio Rule

Q: Can you explain when I should be using the 10:1 ratio rule and the 4:1 ratio rule within my calibration lab? We calibrate standards as well as manufacturing gages.

A: First, let me use the right nomenclature. What is being described here is a 10:1 or 4:1 Test Accuracy Ratio (TAR); that is, one uses a standard 4 or 10 times as accurate as the Unit Under Test (UUT) to calibrate it.

Unfortunately, if we follow the newer, metrologically accepted practices, the answer to the question of when to use those ratios is never.

The TAR has been replaced by the Test Uncertainty Ratio (TUR). The ANSI/NCSLI Z540.3:2006 definition of TUR is:

“The ratio of the span of the tolerance of a measurement quantity subject to calibration, to twice the 95% expanded uncertainty of the measurement process used for calibration.”

*NOTE: This applies to two-sided tolerances.

Expressed as an equation:

TUR = (Upper tolerance limit - Lower tolerance limit) / (2 × U95)

where U95 is the 95% expanded uncertainty of the measurement process used for the calibration.
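
As a worked example with hypothetical numbers, a UUT with a two-sided tolerance of ±0.005 mm, calibrated by a process whose 95% expanded uncertainty is 0.001 mm, yields a 5:1 TUR:

```python
# Worked example with hypothetical numbers: a two-sided tolerance of +/-0.005 mm
# calibrated by a process whose 95% expanded uncertainty (U95) is 0.001 mm.

upper_limit = 0.005    # mm
lower_limit = -0.005   # mm
u95 = 0.001            # mm, expanded uncertainty (k = 2) of the calibration process

tur = (upper_limit - lower_limit) / (2 * u95)
print(f"TUR = {tur:.1f}:1")   # 5.0:1, which would satisfy a 4:1 requirement
```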

Because advances in technology allow end users to purchase highly precise and accurate instrumentation, it becomes challenging to find standards 4 or 10 times as precise with which to calibrate it while still maintaining metrological traceability (defined in ISO Guide 99:2007 as the property of a measurement result whereby the result can be related to a reference through a documented unbroken chain of calibrations, each contributing to the measurement uncertainty).

A proper measurement uncertainty analysis of the UUT (including the uncertainty contributed by the standards used) identifies all the errors associated with the measurement process and provides confidence that the calibration is within the specification desired by the end user.
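
As a simplified illustration of what such an analysis might look like, the sketch below combines a few hypothetical uncertainty components by root-sum-of-squares and expands the result with a coverage factor of k = 2. The component names and values are assumptions, not a complete uncertainty budget.

```python
# Minimal uncertainty-budget sketch. The components and values are hypothetical;
# a real budget would list every significant contributor for the specific process.
import math

components_mm = {
    "reference standard (from its certificate)": 0.0004,
    "UUT resolution": 0.0003,
    "repeatability": 0.0002,
    "temperature effects": 0.0001,
}

u_combined = math.sqrt(sum(u ** 2 for u in components_mm.values()))  # root-sum-of-squares
u95 = 2 * u_combined                                                 # coverage factor k = 2 (~95%)
print(f"Combined standard uncertainty: {u_combined:.5f} mm")
print(f"Expanded uncertainty U95 (k=2): {u95:.5f} mm")
```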

ISO/IEC 17025:2005, General requirements for the competence of testing and calibration laboratories, clause 5.10.4.2, third paragraph, also states that “When statements of compliance are made, the uncertainty of measurement shall be taken into account.”

Doing so also ensures confidence in the calibration by employing the recommended metrological and statistical practices.
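
One common way to take the uncertainty into account when stating compliance is a simple decision rule: report “in tolerance” only when the measured value, widened by U95 on both sides, lies entirely within the tolerance limits. The sketch below uses hypothetical values and is only one of several possible decision rules.

```python
# Hypothetical decision rule: report "in tolerance" only when the measured value,
# widened by U95 on both sides, lies entirely within the tolerance limits.

def compliance_statement(measured, lower_limit, upper_limit, u95):
    if (measured + u95) <= upper_limit and (measured - u95) >= lower_limit:
        return "In tolerance (uncertainty taken into account)"
    if (measured - u95) > upper_limit or (measured + u95) < lower_limit:
        return "Out of tolerance (uncertainty taken into account)"
    return "Indeterminate: the uncertainty interval spans a tolerance limit"

print(compliance_statement(measured=0.003, lower_limit=-0.005, upper_limit=0.005, u95=0.001))
```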

The other rule of thumb that should not be confused with this discussion is to measure or calibrate with the right resolution. In the March 2011 Measure for Measure column in ASQ’s Quality Progress, I wrote more on resolution with respect to specification and measurement uncertainty. The general rule of thumb is that to measure or calibrate a device with two decimal places of resolution, you need a device with at least three decimal places of resolution.
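
A quick illustration of that rule with assumed numbers:

```python
# Hypothetical example of the resolution rule of thumb: a device reading to
# two decimal places should be calibrated with a reference reading to at
# least three decimal places (ten times finer resolution).

uut_resolution = 0.01         # unit under test resolves to two decimal places
reference_resolution = 0.001  # reference resolves to three decimal places

ratio = uut_resolution / reference_resolution
print(f"Resolution ratio: {ratio:.0f}:1 -> {'adequate' if ratio >= 10 else 'inadequate'}")
```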

This is a very good question, and unfortunately it also concerns one of the most misunderstood practices among many people performing calibration.

Dilip A Shah
ASQ CQE, CQA, CCT
President, E = mc3 Solutions
Chair, ASQ Measurement Quality Division (2012-2013)
Secretary and Member of the A2LA Board of Directors (2006-2014)
Medina, Ohio
emc3solutions.com

Related Content: 

Measure for Measure: Avoiding Calibration Overkill, Quality Progress

Evolution of Measurement Acceptance Risk Decisions, World Conference on Quality and Improvement

Measure for Measure: Calculating Uncertainty, Quality Progress

Measurement Tolerances and Techniques

ISO/IEC 17025:2017 General requirements for the competence of testing and calibration laboratories

Q: I am looking for some information regarding blueprint tolerances and measurement tools used to measure certain features.

For example, can the same type of tolerance be applied over the length of 450 mm as it could be for a distance of 3 mm?  Is there additional measurement error or gage error that needs to be applied for longer distances?  If one uses a 1” micrometer for measuring a feature, does it make a difference in the measurement error by using the top end of the instrument versus using it to measure just very small features?

A: Thank you for your questions about measurement tolerances. First of all, since your questions were multi-layered, my answers will be as well. Nonetheless, I think I should be able to help you.

As for using the same type of tolerance for a dimension of 450 mm and a dimension of 3 mm, there is more than one answer. We’re talking about 17.7165 inches vs. .118 inches. The 3 F’s must first be considered.  That is Form, Fit, and Function.  In other words, where will this product be used?  If this will be for a medical product or for anything whatsoever where safety is a factor, the design engineer will most likely use a tighter tolerance. So both dimensions could be ± .001 or a more liberal ± .010.  The difference between the two sizes would just change the way they are manufactured.  For example: a 17.717 inch product with a tolerance of ± .030 could probably be rough machined or even made on a burn table.  If the size or location of the smaller dimension is critical, you would machine it with appropriate equipment and hold a tighter tolerance.  OK, enough Manufacturing 101 lingo.

With regard to measurement error, larger/longer dimensions can introduce the possibility of increased measurement error. However, if a “qualified” and experienced individual is doing the measurement, that should not be a major factor. The same basic skills and standards would apply. The type of measurement equipment can make a difference. In other words, if you use a dial caliper, you can probably rely on it to be accurate within .001-.002 inches. If you use a 0-1 inch micrometer, you should be able to trust its accuracy within .0001 inch.

A qualified metrologist and/or a quality technician would know to check a micrometer at numerous points over its measuring range.  Measurement error should not increase significantly from one end to the other.  If it does, there is something wrong with the calibration or with the tool itself.
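
As an illustration of such a check, here is a short sketch covering several points across the range of a 0-1 inch micrometer. The gage block sizes, readings, and accuracy limit are hypothetical.

```python
# Hypothetical check of a 0-1 inch micrometer at several points across its range,
# using gage blocks as references. Block sizes, readings, and the accuracy limit
# are illustrative assumptions.

check_points = {     # nominal gage block size (in) : micrometer reading (in)
    0.1000: 0.1000,
    0.2500: 0.2501,
    0.5000: 0.5000,
    0.7500: 0.7499,
    1.0000: 1.0001,
}
max_allowed_error = 0.0001   # in, per the instrument's stated accuracy

for nominal, reading in check_points.items():
    error = reading - nominal
    status = "OK" if abs(error) <= max_allowed_error else "INVESTIGATE"
    print(f"{nominal:.4f} in: error {error:+.4f} in -> {status}")
```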

I know the above may read as general answers, but I am confident you will find the specifics you need in them as well.

Bud Salsbury
ASQ Senior Member, CQT, CQI

For more on this topic, please visit ASQ’s website.