Ultrasonic Flaw Detection Tutorial

3.6 Calibration Concepts

The term “calibration” has come to be used for three different processes associated with ultrasonic flaw detectors: velocity/zero calibration, which must be performed whenever a new test material or transducer is used; reference calibration, which sets up a test with respect to a reference standard; and calibration certification, which periodically verifies that the instrument is measuring properly.

Velocity/Zero calibration

An ultrasonic flaw detector measures thickness, depth or distance by very precisely timing echoes. In order to turn these time measurements into distance measurements, the instrument must be programmed with the speed of sound in the test material as well as any necessary zero offset that is required by the instrument, transducer type, or echo shape. This process is commonly referred to as velocity/zero calibration. The accuracy of any ultrasonic thickness, depth or distance measurement is only as good as the accuracy and care with which this calibration has been performed. Incorrect calibration will result in inaccurate readings. Fortunately, calibration is usually a simple process, and calibrations for different materials and transducers can be stored and quickly recalled.

In velocity calibration, the flaw detector measures the speed of sound in a reference sample of the test material and then stores that value for use in calculating thickness from measured time intervals. Major factors that affect sound velocity are material density and elasticity, material composition, grain structure, and temperature. In zero calibration, the flaw detector uses a measurement of a material sample of known thickness to calculate a zero offset value that compensates for the portion of the total pulse transit time representing factors other than the actual sound path in the test piece. The major factor affecting the zero value in common flaw detection applications is wedge delay, the amount of time it takes for the sound wave to exit the probe. Other factors include electronic switching delays, cable delays, and couplant delays.
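The relationship described above can be sketched in a few lines. This is a minimal illustration of the arithmetic, not any instrument's actual firmware; the function name, units, and example values are assumptions for the sketch.

```python
def thickness_from_transit_time(transit_time_us, velocity_mm_per_us, zero_offset_us):
    """Thickness = velocity * (measured transit time - zero offset) / 2.

    The division by 2 accounts for the round trip: the pulse travels
    through the material and back. The zero offset removes wedge delay,
    cable delay, and other time not spent in the test piece.
    """
    return velocity_mm_per_us * (transit_time_us - zero_offset_us) / 2.0

# Longitudinal velocity in steel is roughly 5.9 mm/us; with a 0.10 us
# zero offset, a 3.49 us round trip corresponds to about 10 mm of steel.
print(thickness_from_transit_time(3.49, 5.9, 0.10))
```

Any error in the stored velocity or zero offset feeds directly into the computed distance, which is why careful calibration matters.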

The recommended procedure for velocity and zero calibration is a "two-point calibration", which requires samples of the test material of different thicknesses whose dimensions are precisely known. In flaw detection applications, two-point calibration is frequently performed with an IIW reference block, which provides several different sound path lengths. The transducer is coupled to long and short sound paths of known length, the instrument measures the pulse transit time at each, and the operator enters the known thickness or distance. From those four data points, the two entered thickness or distance values plus the measured transit time associated with each, the instrument calculates the unique velocity and zero values that satisfy both measurements. Those values are then used for measurements and can be stored as part of a setup.
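The arithmetic behind a two-point calibration can be sketched as follows: each measurement satisfies transit time = 2 × distance / velocity + zero, and two such equations determine the two unknowns. The function name and the synthetic example values below are illustrative assumptions, not an instrument's actual interface.

```python
def two_point_calibration(d1, t1, d2, t2):
    """Given two known sound path lengths (d1, d2, in mm) and their
    measured round-trip transit times (t1, t2, in us), solve

        t = 2 * d / velocity + zero

    for the unique velocity and zero offset that fit both points.
    """
    velocity = 2.0 * (d2 - d1) / (t2 - t1)   # mm/us
    zero = t1 - 2.0 * d1 / velocity          # us
    return velocity, zero

# Synthetic check: if the true velocity were 5.9 mm/us with a 0.1 us
# zero offset, 25 mm and 100 mm paths would produce these transit times,
# and the solver recovers the original values.
v, z = two_point_calibration(25.0, 2.0 * 25 / 5.9 + 0.1,
                             100.0, 2.0 * 100 / 5.9 + 0.1)
print(v, z)
```

Using two points rather than one is what separates the velocity from the zero offset: a single known thickness cannot distinguish time spent in the material from fixed delays in the wedge and electronics.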

Most contemporary digital flaw detectors have software prompts designed to guide the user through the initial velocity/zero calibration process. This process will be described in further detail in Section 4.

Reference calibration

Reference calibration is the process of setting up a specific test with respect to appropriate test blocks or similar reference standards. Typically this involves establishing a signal amplitude level from a reference standard for comparison to indications from the test piece. The details of required reference calibrations are normally found in the procedures established by the user for each specific test.
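Amplitude comparisons of this kind are conventionally expressed in decibels relative to the reference echo. The sketch below shows that conversion; it is a generic illustration, not tied to any particular code, procedure, or instrument.

```python
import math

def amplitude_ratio_db(indication, reference):
    """Express an indication's amplitude relative to a reference echo
    in dB. Positive means the indication exceeds the reference."""
    return 20.0 * math.log10(indication / reference)

# An indication at half the reference amplitude is about -6 dB:
print(round(amplitude_ratio_db(40.0, 80.0), 1))
```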

Calibration Certification

Calibration certification is the process of documenting the measurement accuracy and linearity of an ultrasonic instrument under specific test conditions. In the case of flaw detectors, both horizontal (depth or distance) and vertical (amplitude) certifications are provided. Frequently this certification is performed in accordance with a recognized standard or code, such as ASTM E-317 or EN 12668. Measurement accuracy under documented test conditions is typically compared with the manufacturer's established tolerance for a given instrument. In the case of older analog instruments, calibration certification must be done manually, with an operator collecting data, but digital instruments are often certified in an automated process using computer software that verifies the relevant parameters.
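The core of such a certification check, whether manual or automated, is comparing readings at known reference values against a stated tolerance. The sketch below illustrates that comparison; the function name, readings, and tolerance are hypothetical examples, not values from any standard.

```python
def within_tolerance(readings, references, tol_mm=0.05):
    """Return True if every reading is within tol_mm of its known
    reference value (a simple horizontal-linearity style check)."""
    return all(abs(r - ref) <= tol_mm for r, ref in zip(readings, references))

# Hypothetical readings at 25, 50, and 100 mm reference thicknesses:
print(within_tolerance([25.02, 49.97, 99.96], [25.0, 50.0, 100.0]))
```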

Because measurement accuracy in flaw detection applications is highly dependent on proper setup as well as the integrity of the instrument itself, it is the responsibility of the user to verify measurement accuracy to whatever level a given test requires. This is usually easy to do by simply checking readings with appropriate reference standards.
