Tips & Tricks

Micro-Vu

“Auto Generate Drive Points” Reduces Program Run Time In 3 Easy Steps

Most programs written in the InSpec software can be improved by utilizing this feature. The programs that benefit most are those for parts with small features. To use the “Auto Generate Drive Points” feature, follow these steps:

Step 1:

Click on the Tools menu and select “Auto Generate Drive Points”

Step 2: 

When “Auto Generate Drive Points” is selected, we’re presented with two options:

1. All Features
2. Selected Features

To utilize only Selected Features, highlight those program steps in the Features list of the program.

Step 3:

At this point we’re presented with the results. In this example, the number of snapshots has been reduced from 407 to 195, roughly a 52% reduction that cuts the run time nearly in half!

Click Finish, then save the program.

Temperature: The Element Of Difference 

So often it happens that we seem to forget the effect that temperature has on the measurement process. All too often a disparity arises between the shop floor and the inspection lab, or between the supplier and the customer’s receiving inspection department. In many cases the reason for that disparity is measurements that were taken and recorded under different temperature environments.

Feature characteristics that are measured at the machine contain thermal memory from the machining process and the production environment. Then the inspection department that is monitoring the process measures that product in a controlled environment and discovers a different set of numbers.

Why? The culprit is temperature. The Coefficient of Expansion of a body, over a range of temperature from 68 degrees F, is defined as the ratio of the fractional change in length to the change in temperature.

According to ANSI B89.6.2, the standard for “Temperature and Humidity Environment for Dimensional Measurement”, a device used to compare the part against a master, called a comparator, should be utilized to quantify the amount of change that takes place due to thermal expansion and contraction.

Differential Expansion is defined as the difference between the expansion of the part and the expansion of the master from 68 degrees to their time-mean temperatures at the time of the measurement.
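To put rough numbers on these definitions, here is a minimal sketch in Python, assuming a nominal handbook coefficient for steel and hypothetical part and master temperatures; it shows how a quarter-thousandth of disparity can appear between the shop floor and the lab:

    # Hypothetical worked example (values assumed, not from the article):
    # differential expansion of a 4.0000" steel part measured against a
    # steel gage-block master, per the definitions above.

    ALPHA_STEEL = 6.4e-6   # in/in/degF, nominal handbook value (assumed)

    def expansion_from_68(length_in, alpha, temp_f):
        """Growth of a body from the 68 degF reference to temp_f."""
        return length_in * alpha * (temp_f - 68.0)

    length = 4.0000                                        # nominal size, inches
    part = expansion_from_68(length, ALPHA_STEEL, 78.0)    # part still warm from machining
    master = expansion_from_68(length, ALPHA_STEEL, 68.5)  # master near lab temperature

    print(f"part grew   {part:+.5f} in")                        # +0.00026 in
    print(f"master grew {master:+.5f} in")                      # +0.00001 in
    print(f"differential expansion = {part - master:+.5f} in")  # +0.00024 in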

The “fix” for these types of temperature-related difficulties is to perform an experiment called a “Drift Test”. This test will provide data that may be used to correlate measurements and correct the differences associated with temperature. The test is described as a recommended procedure in section 20.3 of the Standard ANSI B89.6.2.

There are, of course, other methods that may be utilized to control thermally induced measurement disparities, such as applying mathematical compensation or stating those differences as part of the “Measurement Uncertainty”. Another way is to take all measurements in a controlled environment of 68 degrees F after allowing the parts to normalize in that environment, under normal circumstances for a period of 24 hours.
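As a minimal sketch of the mathematical-compensation option just mentioned (coefficient and temperatures assumed, not from the article), a reading taken at shop temperature can be corrected back to the 68 degree F reference:

    # Correct a shop-floor reading back to the 68 degF reference size.

    ALPHA_STEEL = 6.4e-6   # in/in/degF (assumed)

    def corrected_to_68(reading_in, temp_f, alpha=ALPHA_STEEL):
        """Remove the thermal growth between 68 degF and temp_f."""
        return reading_in / (1.0 + alpha * (temp_f - 68.0))

    # A 2" diameter read on the shop floor at 88 degF:
    print(f"{corrected_to_68(2.00051, 88.0):.5f} in at 68 degF")   # -> 2.00025 in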

This obviously causes great difficulties in the production process. Response times are often critical to machining processes that require immediate adjustments, and this becomes even more of an issue for those who are involved with real-time SPC tracking. Therefore, it requires a lot of investigation during the set-up phase of the operation. For some, this is an issue that will be addressed during PPAP (Production Part Approval Process).

The main idea here is to become aware that temperature, as it relates to the measurement process, is an issue that needs to be addressed by all. Help is available from The American Society of Mechanical Engineers, which publishes the standard that deals with correcting the temperature issue, ANSI B89.6.2.

Surface finish & Process Control

Surface finish has historically been an indicator of process stability. As such, it has been monitored for just such a purpose. If something in the machining process is about to go haywire, surface finish is one of the first places in the process that will reflect the change.

Surface finish is a fairly stable and predictable condition: when the materials, cutting tools, speeds and feeds, and coolant are known, a certain surface finish can be expected and determined by design and manufacturing engineers. Therefore, in the data collection and evaluation of your process control measures, it is in your best interest to include surface finish analysis of critical dimensions in the process control plan.

Surface finish will be an indicator of problems that are about to occur due to such things as tool wear, unauthorized adjustments to the speeds and feeds of the cutting tools, wide temperature fluctuations, coolant failure, etc. Surface finish describes the texture of a part feature’s surface. Texture can be further broken down into its component parts of roughness, waviness, and form.

Roughness essentially describes the tool marks left in the wake of a machining pass. These are affected by speeds and feeds and the type of cutting tool used. Each pass of the cutter leaves some indication of its passing, and that marring of the surface is described as the roughness.

Waviness is the result of minute fluctuations in the distance between the cutting tool and the surface of the work-piece during machining. These are caused by cutting tool instability and by vibration. This vibration may be caused by various things such as tow motors passing by, other machines in the vicinity operating, attempting to remove too much material at a time, etc.

Form errors are most often due to a lack of flatness or straightness in the machine tool ways. They are repeatable from part to part because the machine irregularity always follows the same paths, transferring the form error to the machined part.

What frequently occurs is that all three of these surface finish components, or combinations of them, are present simultaneously. The evaluation process suggests that each condition be addressed one at a time, and some assumptions must be made to tackle each of them: roughness has a shorter wavelength than waviness, and waviness has a shorter wavelength than form error. Note the following:

Roughness: wavelength < 0.030”
Waviness: wavelength 0.030” to 0.300”
Form error: wavelength > 0.300”

These values are determined through the use of measuring devices such as a profilometer. These instruments separate surface finish components by adjusting the cut-off lengths of the measurements. They work similarly to a phonograph stylus, translating the magnitude of the surface’s peaks and valleys into a measurement in millionths of an inch. The machine has various filtering devices built into its design that permit the operator to choose the cut-off length in order to isolate roughness, waviness, form, or the “total profile”, which combines all three in one reading.
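As a quick illustration of those wavelength bands (a hypothetical Python helper, not actual profilometer firmware):

    # Classify a surface component by wavelength, in inches, per the
    # bands listed above.

    def classify_component(wavelength_in):
        if wavelength_in < 0.030:
            return "roughness"
        elif wavelength_in <= 0.300:
            return "waviness"
        return "form error"

    for w in (0.005, 0.100, 0.500):
        print(f'{w:.3f}" -> {classify_component(w)}')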

Ten to One or One to Ten Rule

Many companies experience difficulties regarding product acceptance to specifications simply because they fail to utilize the ten to one rule when making the choice of a measurement instrument to determine adherence to specifications.

Simply stated, the “Rule of Ten” (or “one to ten”) is that the discrimination (resolution) of the measuring instrument should divide the tolerance of the characteristic to be measured into ten parts. In other words, the gage or measuring instrument should be 10 times as accurate as the characteristic to be measured. Many believe that this only applies to the instruments used to calibrate a gage or measuring instrument, when in reality it applies to the choice of instrument for any measuring activity. The whole idea is to choose an instrument that is capable of detecting the amount of variation present in a given characteristic.
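Here is a minimal sketch of that arithmetic, with a hypothetical tolerance:

    # The Rule of Ten as stated above: the instrument's resolution
    # should divide the characteristic's total tolerance into ten parts.

    def required_resolution(upper_tol, lower_tol):
        """Coarsest resolution allowed by the 10:1 rule for a tolerance band."""
        return (upper_tol + lower_tol) / 10.0

    # A +/- .005" characteristic has a .010" band, so the gage must
    # resolve .001" or finer:
    print(required_resolution(0.005, 0.005))   # -> 0.001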

If we were to plot on a run chart the values achieved with a gage whose resolution is one-to-one, or even two-to-one, relative to the part tolerance, the graph would show almost a straight line. This is because the instrument is not capable of detecting the inherent normal variation that exists in the parts.

In order to achieve reliable measurement, the instrument needs to be accurate enough to accept all good parts and reject all bad parts; put another way, the gage should neither reject good parts nor accept bad ones. The real problem arises when your company uses an instrument that is only accurate enough to measure in thousandths and accepts parts on that basis, while the customer uses gages that discriminate to ten-thousandths and rejects the parts sent to them for being .0008” over the specification limit.

Any company that controls its processes through the use of statistical tools will have a very difficult time meeting acceptable SPC indices if the data it collects is based upon numbers achieved with gages that do not reflect the normal variation present in the process.

One statistical tool that is used to test the worthiness of a gage to control the production process is called a Gage R&R Study (Repeatability & Reproducibility). Repeatability is the ability of one operator to achieve the same results when measuring the same dimension after repeated trials. Reproducibility is the ability of multiple operators to achieve the same results when using the same gage to measure the same dimension on the same parts after repeat trials.
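The two components combine into a single GR&R figure; the following sketch (hypothetical values) shows the root-sum-of-squares relationship used in the MSA average-and-range method:

    import math

    # Repeatability (equipment variation, EV) and reproducibility
    # (appraiser variation, AV) combine into one GR&R number.

    EV = 0.0009   # one operator, same parts, repeated trials (hypothetical)
    AV = 0.0005   # spread between operators on the same parts (hypothetical)

    GRR = math.sqrt(EV**2 + AV**2)
    print(f"combined GR&R = {GRR:.4f}")   # -> 0.0010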

Acceptance of the gage for the task at hand is determined by whether the results of the test (study) meet the following criteria (a short sketch of the arithmetic follows the list):

10% of the total tolerance (or process variation) or less = the gage is acceptable.
11-30% = acceptable only based upon the application, and must be closely monitored.
31% or over = the gage is unacceptable for use on this application.
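Here is that sketch, with hypothetical study numbers; grr is the combined figure from the study, and basis is the total tolerance (or total process variation) it is judged against:

    # Apply the acceptance bands listed above.

    def classify_grr(grr, basis):
        pct = 100.0 * grr / basis
        if pct <= 10.0:
            return pct, "acceptable"
        if pct <= 30.0:
            return pct, "acceptable only for some applications; monitor closely"
        return pct, "unacceptable for this application"

    pct, verdict = classify_grr(grr=0.0012, basis=0.010)
    print(f"GR&R = {pct:.1f}% of tolerance: {verdict}")   # -> 12.0% ... monitor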

It is often quite difficult to pass the gage R&R study even when the ten to one rule is used. So, in order to give yourself the advantage to begin with, start by choosing a gage that is accurate to 1/10th of the tolerance of the characteristic to be measured.

Measurement Error

Don enters the QA room, walks over to me, and says,

“How come you rejected my part? I checked that thing right before I brought it in and it was right.” Now I am in the position of having to defend my method and results. My first instinct is to become defensive, but if I put myself in his shoes, believing he produced a quality part, then it is only logical that he should question the inspector; after all, he or she is capable of making a mistake. Maybe it’s just a method problem.

Okay, it’s time to do some analyzing. First, let’s see what kind of gage produced the questionable results. Did the inspector utilize the 10:1 rule? Was the stated accuracy of the gage 1/10 of the allowable tolerance? Yes! I did the check on the CMM (Coordinate Measuring Machine), which has an accuracy statement of 0.0001” and is repeatable within 0.0003”. Was the part clamped to the table without distorting it or allowing it to move during the probings? No issue there; that seems to be okay.

“Well, let’s check it again to see if we can repeat the results.” Okay, that’s done, and the measurements are slightly different, but by a very small amount. “Well, that makes sense, since you first brought it to me hot off the machine, and now it has had time to cool down, so I would expect a little bit of difference.”

“Let’s see if we can duplicate the results on the surface plate using a V Block and clamp and a 0.0001” reading indicator.”

“Well, there it is, big as life: the same result within 2/10ths of the reading on the CMM. Do you agree that it’s wrong?”

“Yeah, I just don’t understand what happened. I’m gonna pull out that drill and check it on the scope; maybe there’s a chip or something.”

Here is the key element: if your measurement results come into question, follow these steps to eliminate measuring-error questions:

1) Don’t become defensive; either you or your method may be in error.
2) Redo the measurement to see if the results can be repeated.
3) Invite the machinist, customer, or vendor to stay and watch.
4) Use another method to see if the results can be duplicated.
5) If not, call in the QA Manager or QA Engineer to help discover the “truth”.
6) Lastly, use team problem-solving techniques to solve the problem.

Isolate the reason for variation and eliminate the variables. Arguing about who is right or wrong is counterproductive; remove the egos and let’s produce quality parts. Remember, nobody produces bad parts on purpose, and we all want to do a good job. So, let’s approach the truth by achieving correlation of results.

Removing Linearity From Measurements

I find that more and more people are using digital equipment for measurement, to the extent that many can no longer read Vernier scales, or even a pair of dial calipers, with any degree of accuracy. Since we seem to be moving in the direction of the digital readout, here is a method that will remove linear measurement error from the gage you employ to inspect your parts.

Linearity error can be removed from the measurement being taken by turning your digital equipment into a direct comparator gage. This is accomplished very easily by making a gage block stack to the exact size of the measurement to be inspected, that is, to the nominal dimension given on the blueprint. This technique has several advantages.

First, by “zeroing” the digital gage on the nominal dimension represented by the stack of gage blocks, all readings will be plus or minus deviations from zero. An example would be as follows:

Nominal Dimension = 2.2255
Gage block stack = 2.2255
Zero the gage on the stack = 0.0000

1st part measured = + 0.0002
2nd part measured = – 0.0001
3rd part measured = – 0.0002
4th part measured = + 0.0003

Second, the deviation from nominal is very easy to recognize, and the range and mean of those deviations are also easily seen. Consider the difficulty of trying to decipher the same dimensions when written as actual sizes:

1st part = 2.2257
2nd part = 2.2254
3rd part = 2.2253
4th part = 2.2258

Third, it is much easier to record deviation from nominal and to enter the data when that data is in the format of the former example.

A fourth benefit is that by zeroing the gage at the nominal dimension, any linearity error contained within the gage is minimized, because you are no longer measuring from the instrument’s origin.
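As a minimal sketch of this comparator technique, using the example values from the text:

    # Zero the digital gage on a gage-block stack at the print nominal;
    # every reading then becomes a signed deviation.

    NOMINAL = 2.2255   # gage block stack = blueprint nominal, inches

    absolute_readings = [2.2257, 2.2254, 2.2253, 2.2258]

    # What the zeroed gage displays for each part:
    deviations = [round(r - NOMINAL, 4) for r in absolute_readings]
    print(deviations)   # -> [0.0002, -0.0001, -0.0002, 0.0003]

    # The range and mean of the deviations are immediately obvious:
    print(round(max(deviations) - min(deviations), 4))    # spread = 0.0005
    print(round(sum(deviations) / len(deviations), 5))    # mean offset = 0.00005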

Who Do You Trust To Calibrate Your Gages?

There are many calibration laboratories out there competing for your business. But do they have your best interest in mind? Some are “lick and stick” operations that perform the bare minimum of evaluations on your equipment and provide you with a calibration sticker and Certificate of Conformance just to satisfy you and keep you coming back.

Many laboratories are in business to profit from the opportunities created by the increasing demand to comply with ISO requirements to calibrate and show traceability to a national or international standard. Even if the lab is ISO/IEC accredited, some of these laboratories perform work beyond the scope of their own capabilities and will often subcontract work out without informing you. Yes, they may have the lowest price tag, and perhaps all you want is a label and a certificate at the lowest price you can get.

However, those companies which are truly interested in doing quality business, delivering a quality product to consumers, will send their measuring and test equipment to a calibration laboratory of integrity, one accredited by a national accreditation body. Currently, that means ACLASS, A2LA, or NVLAP. These bodies are specific and deliver accreditation to calibration and testing laboratories only.

Labs accredited by these organizations undergo stringent evaluation for adherence to scope, procedures, traceability, and training in calibration techniques and equipment. This sometimes causes these labs to charge slightly more for their services, but the old adage applies: “you get what you pay for”.

So, in conclusion, the decision you’re left with is this: “are economic considerations more important to you than being provided with the best possible service and traceability for the measuring and test equipment you rely upon to produce quality products for your customers?”

Coordinate Measuring Machines (CMM)

RFM has assembled a document on the basics of coordinate measuring. The table of contents is detailed below, and the 27-page document can be received for free by contacting RFM.

Table of Contents

1. Understanding the CMM
a. The Coordinate System
b. The Machine Coordinate System
c. The Part Coordinate System

2. What is Alignment

3. What is a Datum

4. What is a Translation

5. What is Rotation

6. Measured and Constructed Features

7. What is Volumetric Compensation

8. Qualifying Probe Tips

9. Projections

10. Using Effective Probe Techniques

11. Geometric Dimensioning and Tolerancing

Contact RFM to receive a free copy of the Basics of Coordinate Measuring (PDF).

Why Do I Have To Do Gage Repeatability And Reproducibility?

The standard tells me that I have to do Gage R&R Studies on each type of gage I have. This seems like a lot of work to do for no reason other than to comply with the standard. What is the purpose of doing this study anyway?

ISO and QS-9000 requirements state that:

     “The supplier must provide evidence that appropriate statistical studies have been performed on all measurement systems referenced in the customer-approved control plan. Methods used to gather analytical evidence should be based on the guidelines spelled out in the “Measurement Systems Analysis” reference manual. Alternative methods can be used with customer approval.”

Isn’t it interesting that these standards tell you what you must do, but all too often offer no explanation as to the reason for doing so? It’s kinda like when, as a child, you asked your mom “why?” and her reply was “because I said so.”

Doing all the work necessary to comply with the standards can be very labor intensive and, as such, costly. Doing Gage R&R studies on all your types of gages just for the sake of doing them is, at the very least, ridiculous. There IS, however, a very good reason to do Gage R&R studies. “Ah ha! And what, pray tell, might that be?” you might ask.

Gages and measurement systems are at the heart of collecting the data by which the acceptance criteria of the manufacturing process are controlled! The ability of the gage to detect variation in your manufacturing process is of the utmost importance for product quality and customer satisfaction. And where does the R&R study fit into this puzzle?

The very nature of “Quality Control” is to control the manufacturing process so that non-conforming products are never produced by the natural causes of variation in the process. By utilizing the GR&R study as one of the tools to isolate, and thereby eliminate, variation in your processes, you can determine whether the gage is capable of detecting variation in the acquired measurements, and how much of the variation present in the measurement process is attributable to the gage itself.

It is extremely helpful to understand the driving forces in the GR&R study in order to make intelligent decisions about the measurement process. Anyone who has performed GR&R studies eventually begins to question the reason for doing them, and in doing so begins to understand the relationships the calculations provide and the driving forces behind those calculations.

Tolerance and range are the two factors that ultimately become the values studied to determine where measurement variation comes from. They are the drivers.

To achieve a GR&R of less than 10%, the main consideration is: 10% of what? 10% of the tolerance is the primary concern. However, the guideline set forth in the MSA Manual suggests that the measurement system should be evaluated against 10% of the total process variation.

This brings up some interesting considerations. If the gage chosen to control a process is first selected to meet the 10:1 rule, in other words, the gage has a discrimination (resolution) of 10% of the blueprint tolerance, and the characteristic to be controlled must also meet a 1.33 Cpk, then the working tolerance has effectively been reduced to 75% of the drawing tolerance.

SPC tracking should be done on processes based on the order of importance. Feature characteristics should be classified on the drawing by the design engineers as Critical, Major, or Minor. GR&R studies should be performed on all gages that control all critical dimensions, and some selected major characteristics. How do SPC and GR&R tie in together?

Consider the following:

EX: Suppose we are measuring an outside diameter that has a tolerance of +/- .005”.
Following the 10:1 rule, the .010” total tolerance calls for a gage that resolves .001” or finer; in practice, an outside micrometer reading to .0001” is the typical choice.
However, the SPC indices require a 1.33 Cpk for this characteristic.
This in effect reduces the allowable tolerance by 25%.

Even at 1/10 of the original tolerance, an outside micrometer frequently performs at or near the 10% GR&R level. If that total tolerance is now reduced to .0075”, the variation due to the measurement system climbs beyond the acceptable limit of 10%.
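Here is a minimal sketch of that arithmetic, using the example’s values and a hypothetical micrometer GR&R figure:

    # A Cpk requirement of 1.33 means the process may consume only
    # 1/1.33 = 75% of the drawing tolerance, shrinking the basis for
    # the 10% GR&R criterion.

    drawing_tolerance = 0.010                        # the +/- .005" diameter
    effective_tolerance = drawing_tolerance / 1.33   # about .0075"

    grr = 0.001   # hypothetical micrometer GR&R, ~10% of the drawing tolerance
    print(f"vs. drawing tolerance:   {100 * grr / drawing_tolerance:.1f}%")    # 10.0%
    print(f"vs. effective tolerance: {100 * grr / effective_tolerance:.1f}%")  # 13.3%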

Another factor to consider: if the process is indeed running at a Cpk of 1.33 or better, and the gage system is required to meet a GR&R of 10% or less expressed as a percentage of the total process variation, the gage being used must meet this requirement as well. As Continuous Process Improvement reduces the amount of variation present in the process, the percentage of variation attributable to the measurement system rises, eventually climbing beyond the required 10% level.

The bottom line is that your procedures should address these issues when attempting to comply with the requirements set forth in the ISO and QS standards, and that GR&R studies should be used to help evaluate the amount of variation in your controlled process attributable to the measuring system, not performed just for the sake of doing the work.