Using the 10:1 Ratio Rule and the 4:1 Ratio Rule

Q: Can you explain when I should be using the 10:1 ratio rule and the 4:1 ratio rule within my calibration lab? We calibrate standards as well as manufacturing gages.

A: First, let us use the right nomenclature. What the user means is a 10:1 or 4:1 Test Accuracy Ratio (TAR); that is, calibrating the Unit Under Test (UUT) with standards 4 or 10 times as accurate as the UUT itself.

Unfortunately, under newer, metrologically accepted practices, the answer to the user’s question is NEVER.

The TAR has been replaced by the Test Uncertainty Ratio (TUR). The ANSI/NCSL Z540.3-2006 definition of TUR is:

“The ratio of the span of the tolerance of a measurement quantity subject to calibration, to twice the 95% expanded uncertainty of the measurement process used for calibration.”

*NOTE: This applies to two-sided tolerances.

The TUR is represented mathematically as:

$$\mathrm{TUR} = \frac{\text{Span of the UUT tolerance}}{2 \times U_{95}}$$

where $U_{95}$ is the 95% expanded uncertainty of the measurement process used for calibration.
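As a concrete illustration, here is a minimal Python sketch of the calculation; the tolerance and uncertainty values are hypothetical, chosen only to show the arithmetic:

```python
def test_uncertainty_ratio(upper_tol: float, lower_tol: float, u95: float) -> float:
    """TUR for a two-sided tolerance, per the definition above:
    tolerance span divided by twice the 95% expanded uncertainty."""
    span = upper_tol - lower_tol
    return span / (2.0 * u95)

# Hypothetical example: a UUT toleranced at +/-0.010 mm, calibrated by a
# process with a 95% expanded uncertainty of 0.002 mm.
print(test_uncertainty_ratio(0.010, -0.010, 0.002))  # 5.0 -> a 5:1 TUR
```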

Because of advances in technology, highly precise and accurate instrumentation can be purchased at the end-user level. It then becomes challenging to find standards 4 or 10 times as precise with which to calibrate that instrumentation while maintaining metrological traceability (defined in ISO Guide 99:2007 as the property of a measurement result whereby the result can be related to a reference through a documented unbroken chain of calibrations, each contributing to the measurement uncertainty).

Proper measurement uncertainty analysis of the UUT (including the uncertainty of the standards used) identifies all the errors associated with the measurement process and ensures confidence that the calibration is within the specification desired by the end user.
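To make that concrete, below is a minimal, hypothetical sketch of one common way to build such an analysis: combining standard uncertainty contributors by root-sum-of-squares and expanding with a coverage factor of k = 2 for approximately 95% confidence. The contributor names and values are illustrative assumptions, not from the column:

```python
import math

# Hypothetical standard uncertainties (one standard deviation each,
# all in mm) -- illustrative values only:
contributors = {
    "reference standard (from its certificate)": 0.0008,
    "repeatability of the process": 0.0005,
    "UUT resolution of 0.001 (rectangular)": 0.001 / math.sqrt(12),
    "temperature effects": 0.0003,
}

# Combine by root-sum-of-squares, then expand with coverage factor k = 2.
u_c = math.sqrt(sum(u ** 2 for u in contributors.values()))
U95 = 2.0 * u_c
print(f"u_c = {u_c:.5f} mm, U95 = {U95:.5f} mm")
```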

ISO/IEC 17025:2005, General requirements for the competence of testing and calibration laboratories, clause 5.10.4.2, third paragraph, also states that “When statements of compliance are made, the uncertainty of measurement shall be taken into account.”

Doing so also ensures confidence in the calibration by employing the recommended metrological and statistical practices.
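As a hypothetical illustration of taking uncertainty into account when stating compliance, one simple approach is a guard-banded acceptance check like the sketch below; this is a generic example, not the specific decision rule of any standard:

```python
def clearly_within_tolerance(measured: float, nominal: float,
                             tol: float, u95: float) -> bool:
    """Declare compliance only if the observed error plus the expanded
    uncertainty still fits inside a symmetric two-sided tolerance."""
    return abs(measured - nominal) + u95 <= tol

# Hypothetical reading: 10.007 mm against a 10.000 +/- 0.010 mm spec,
# with U95 = 0.002 mm -> 0.007 + 0.002 = 0.009 <= 0.010, so compliant.
print(clearly_within_tolerance(10.007, 10.000, 0.010, 0.002))  # True
```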

The other rule of thumb not to be confused with this discussion is to measure/calibrate with the right resolution. In the ASQ Quality Progress March 2011 Measure for Measure column, I wrote more on resolution with respect to specification and measurement uncertainty. The general rule of thumb is that to measure or calibrate a device with 2-decimal-place resolution, you need a device with at least 3-decimal-place resolution.
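A minimal sketch of that rule of thumb, assuming resolution is expressed as a count of decimal places:

```python
def adequate_resolution(uut_decimals: int, ref_decimals: int) -> bool:
    """The column's rule of thumb: the reference should read at least
    one more decimal place than the unit under test."""
    return ref_decimals >= uut_decimals + 1

print(adequate_resolution(uut_decimals=2, ref_decimals=3))  # True
print(adequate_resolution(uut_decimals=2, ref_decimals=2))  # False
```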

This is a very good question, and it also touches on what is unfortunately one of the most misunderstood practices among those performing calibration.

Dilip A Shah
ASQ CQE, CQA, CCT
President, E = mc3 Solutions
Chair, ASQ Measurement Quality Division (2012-2013)
Secretary and Member of the A2LA Board of Directors (2006-2014)
Medina, Ohio
emc3solutions.com

Related Content: 

Measure for Measure: Avoiding Calibration Overkill, Quality Progress

Evolution of Measurement Acceptance Risk Decisions, World Conference on Quality and Improvement

Measure for Measure: Calculating Uncertainty, Quality Progress

ISO/IEC 17025:2017 General requirements for the competence of testing and calibration laboratories

Measurement Tolerances and Techniques

Q: I am looking for some information regarding blueprint tolerances and the measurement tools used to measure certain features.

For example, can the same type of tolerance be applied over the length of 450 mm as it could be for a distance of 3 mm?  Is there additional measurement error or gage error that needs to be applied for longer distances?  If one uses a 1” micrometer for measuring a feature, does it make a difference in the measurement error by using the top end of the instrument versus using it to measure just very small features?

A: Thank you for your questions about measurement tolerances. First of all, since your questions were multi-layered, my answers will be as well. Nonetheless, I think I should be able to help you.

As for using the same type of tolerance for a dimension of 450 mm and a dimension of 3 mm, there is more than one answer. We’re talking about 17.7165 inches vs. .118 inches. The 3 F’s must first be considered: Form, Fit, and Function. In other words, where will this product be used? If this will be for a medical product or for anything whatsoever where safety is a factor, the design engineer will most likely use a tighter tolerance. So both dimensions could be ± .001 in. or a more liberal ± .010 in. The difference between the two sizes would just change the way they are manufactured. For example, a 17.717-inch product with a tolerance of ± .030 could probably be rough machined or even made on a burn table. If the size or location of the smaller dimension is critical, you would machine it with appropriate equipment and hold a tighter tolerance. OK, enough Manufacturing 101 lingo.

With regard to measurement error, larger/longer dimensions can introduce the possibility of increased measurement error. However, if a “qualified” and experienced individual is doing the measurement, that should not be a major factor. The same basic skills and standards would apply. The type of measurement equipment can make a difference. In other words, if you use a dial caliper, you can probably rely on it to be accurate within .001-.002 inches. If you use a 0-1 inch micrometer, you should be able to trust its accuracy within .0001 inch.
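As a hypothetical illustration tying those accuracy figures to gage selection, the sketch below screens instruments against a tolerance using a simple accuracy-ratio test; the 4:1 default and the accuracy values are assumptions drawn from the figures quoted above:

```python
# Hypothetical screen using the accuracies quoted above (inches):
instruments = {
    "dial caliper": 0.002,         # conservative end of .001-.002 in.
    "0-1 in. micrometer": 0.0001,
}

def adequate_gages(tolerance: float, ratio: float = 4.0) -> list:
    """Instruments whose accuracy is at least `ratio` times finer
    than the tolerance being checked."""
    return [name for name, accuracy in instruments.items()
            if tolerance / accuracy >= ratio]

print(adequate_gages(0.001))  # ['0-1 in. micrometer']
print(adequate_gages(0.030))  # both instruments qualify
```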

A qualified metrologist and/or a quality technician would know to check a micrometer at numerous points over its measuring range.  Measurement error should not increase significantly from one end to the other.  If it does, there is something wrong with the calibration or with the tool itself.
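A hypothetical sketch of such a multi-point check follows; the block sizes, readings, and the .0001 in. threshold are invented for illustration:

```python
# Hypothetical check of a 0-1 in. micrometer against gage blocks:
# (nominal block size, micrometer reading), all in inches.
checkpoints = [
    (0.1000, 0.10005),
    (0.2500, 0.25005),
    (0.5000, 0.50005),
    (0.7500, 0.75000),
    (1.0000, 1.00005),
]

stated_accuracy = 0.0001  # the .0001 in. trust level mentioned above

errors = [reading - nominal for nominal, reading in checkpoints]
spread = max(errors) - min(errors)
print(f"spread of error across the range: {spread:.5f} in.")
if spread > stated_accuracy:
    print("Error changes significantly across the range -- suspect the")
    print("calibration or the tool itself.")
```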

I know the above can be perceived as general answers, but I am confident you will see the specifics there as well.

Bud Salsbury
ASQ Senior Member, CQT, CQI

For more on this topic, please visit ASQ’s website.

Coordinate Measuring Machines (CMMs) and Digital Bore Gages


Q: When inspecting diameters with tolerances of .0005 and below, are there any studies relating to the accuracy of different inspection methods, such as a coordinate measuring machine (CMM) versus a digital bore gage with setting ring combination?

A: The answer to this question can often be one of opinion and/or personal preference.  What I will present are my opinions, along with some known facts.

Non-contact measurement systems, such as optical and laser equipment, are bulky, expensive, and often impractical. With these systems, the part must be taken to the instrument, which is not much good in a production environment.

While CMMs are without a doubt very accurate, they are also slow. Like optical or laser equipment, the parts must be taken to the system, and in many production situations it is more practical to check the part in the machine. Also, even though CMMs come with articulating heads, measuring at awkward angles or various depths is not always an option. Keep in mind, too, that deeper bores require longer stylus probes; longer styli can introduce error, and rapid machine movement can generate false contact readings simply due to the motion.

A final thing to keep in mind is the high initial price of a CMM, as well as the maintenance costs.

Two- and three-point contact measurement is readily available. Popular digital bore gages are calibrated to a master ring, and the rings themselves can be verified with a CMM or sent out for certification traceable to national standards. Most digital bore gages can also be set up to interface with a statistical process control system, which is important when process control is vital.

Cylinder bore gages (generally two-point contact) can sometimes have problems with linear accuracy, and analog versions can be more prone to operator error.

While two-point systems will more readily detect ovality, where ovality is not a major concern, three-point digital systems are, in this quality technician’s opinion, the best all-around option.

When I am inspecting parts in which ovality could be an issue, and if the parts are readily portable, I will check a percentage with a CMM to verify their roundness. However, for speed, accuracy, practicality, and price, a three-point digital bore gage would be the way I would go to verify product with tight tolerances.

A final note: If parts are relatively small and can be in contact with other materials, robotics is often used with air gage instruments. This is another expense, but one that can be justified in high-volume manufacturing.

I hope this will help.

Bud Salsbury
ASQ Senior Member, CQT, CQI

For more on this topic, please visit ASQ’s website.

Customer Satisfaction and Loyalty


Q: Can you give me more information about how organizations gain, measure, and retain customer satisfaction and loyalty?

A: The Quality Improvement Glossary, by Donald L. Siebels, defines customer loyalty/retention as “the result of an organization’s plans, processes, practice, and efforts designed to deliver their services or products in ways which create customer satisfaction so customers are retained and committed to remain loyal.”

ASQ has published extensively in this area. For more on this topic, please visit ASQ’s website.