Measurement Glossary: Essential Terms Every Professional Should Know

Measurement terminology can be confusing, with terms that sound similar but have distinct meanings and applications. This glossary provides clear, practical definitions of the most important measurement terms, with examples to illustrate how each concept applies in real-world situations. Bookmark this page as a reference — you'll use it often.

A–B Terms

Accuracy
The degree to which a measurement agrees with the true or accepted reference value. If a scale reads 100.0g when the true mass is 100.0g, it is accurate. Accuracy is about correctness, not repeatability.
Bias
A systematic error that causes measurements to deviate consistently in one direction from the true value. If a scale always reads 0.5g high, it has a positive bias of 0.5g.
BIPM
The International Bureau of Weights and Measures (Bureau International des Poids et Mesures). The organization that maintains the SI unit definitions in Sèvres, France.
Burette
A graduated glass tube with a tap (stopcock) at the bottom, used for delivering precise volumes of liquid in titrations. Typically readable to ±0.02mL.

C Terms

Calibration
The process of comparing an instrument's readings to a known reference standard and determining the correction needed. Calibration does not change the instrument — it quantifies its error.
Candela (cd)
The SI base unit of luminous intensity — how bright a light source appears to the human eye. One candela is roughly the brightness of a candle flame.
Certified Reference Material (CRM)
A material with certified values for specific properties, accompanied by a certificate documenting its traceability. NIST produces thousands of CRMs for calibration verification.
Coefficient of Variation (CV)
The standard deviation expressed as a percentage of the mean: CV = (σ/μ) × 100%. Allows comparison of variability between measurements on different scales. A CV of 2% means the standard deviation is 2% of the average value.
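
As a quick sketch of the formula above (the readings are invented for illustration, and Python's statistics module is just one way to obtain σ and μ):

    import statistics

    readings = [10.1, 10.3, 9.9, 10.0]         # invented repeat readings, in grams

    mu = statistics.mean(readings)              # mean, μ
    sigma = statistics.pstdev(readings)         # population standard deviation, σ
    cv_percent = (sigma / mu) * 100             # CV = (σ/μ) × 100%

    print(f"mean = {mu:.3f} g, σ = {sigma:.3f} g, CV = {cv_percent:.2f}%")

For small samples the sample standard deviation (statistics.stdev) is often used instead; whichever is chosen should be stated alongside the CV.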

D–L Terms

Drift
Slow change in an instrument's readings over time, even when the measurand remains constant. Drift can be caused by component aging, temperature changes, or mechanical wear. A digital thermometer that reads 0.2°C higher after 2 hours has positive drift.
Error (Measurement Error)
The difference between a measured value and the true value. Error = measured value − true value. Errors can be systematic (consistent, producing a bias) or random (unpredictable scatter).
Error Propagation
The process of determining how uncertainties in input quantities affect the uncertainty of a calculated result. If you calculate area from length and width, both length and width uncertainties contribute to area uncertainty.
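
A minimal sketch of the area example, assuming the length and width uncertainties are uncorrelated so their relative uncertainties add in quadrature (the values are invented):

    import math

    length, u_length = 100.3, 0.2       # invented length and its standard uncertainty, mm
    width, u_width = 50.1, 0.1          # invented width and its standard uncertainty, mm

    area = length * width
    # relative uncertainties of a product combine as a root sum of squares
    u_area = area * math.sqrt((u_length / length) ** 2 + (u_width / width) ** 2)

    print(f"area = {area:.0f} mm^2 ± {u_area:.0f} mm^2")
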
Gauge
A device used to check a dimension against a standard without necessarily providing a numerical reading. A plug gauge checks whether a hole is larger than a minimum size; a ring gauge checks whether a part is smaller than a maximum size.
Hysteresis
The phenomenon where an instrument gives different readings for the same input depending on whether the input is increasing or decreasing. A pressure gauge might read 101.2 kPa when pressure is rising but 100.8 kPa when it's falling — the 0.4 kPa difference is hysteresis error.
Indicator
A measurement instrument designed to detect and display deviation from a set point or nominal dimension. A dial indicator shows how much a machined surface varies from a reference plane, typically to 0.001mm resolution.
Least Count
The smallest division on a measurement scale — the smallest increment that can be read directly. A ruler with millimeter divisions has a least count of 1mm; any measurement must be estimated to the nearest fraction of that division.

M Terms

Mean
The arithmetic average of a set of measurements: sum of all values divided by the number of values. The mean of [10.1, 10.3, 9.9, 10.0] is 40.3/4 = 10.075. The mean summarizes central tendency but not variability.
Measurement Uncertainty
A quantitative expression of the doubt about a measurement result. Uncertainty is not the same as error — it's the range within which the true value is believed to lie, with a stated level of confidence. A length reported as 100.3mm ± 0.2mm (k=2) has an uncertainty of 0.2mm.
Median
The middle value when data is arranged in order. For [9.9, 10.0, 10.1, 10.3], the median is 10.05 (the average of the two middle values). The median is less sensitive to outliers than the mean.
Metrology
The scientific study of measurement. Metrology covers the theory and practice of all aspects of measurement, from fundamental standards development to practical calibration and verification.
Mode
The most frequently occurring value in a dataset. In [10.1, 10.2, 10.2, 10.3], the mode is 10.2. A dataset may have no mode (all values unique), one mode (unimodal), or multiple modes (bimodal, multimodal).
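
A small sketch tying Mean, Median, and Mode together with Python's standard statistics module (the dataset is invented):

    import statistics

    data = [10.1, 10.2, 10.2, 10.3, 9.9]        # invented readings

    print("mean:  ", statistics.mean(data))      # arithmetic average
    print("median:", statistics.median(data))    # middle value of the sorted data
    print("mode:  ", statistics.mode(data))      # most frequent value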

N–R Terms

NIST
The National Institute of Standards and Technology, a US federal agency that develops and maintains measurement standards for the United States. Provides calibration services traceable to SI definitions.
Normal Tension
The specified measuring force applied by a test instrument, as opposed to excessive or inconsistent force. Hardness testers, for example, apply a defined force to ensure consistent readings.
Parallax
The apparent shift in position of an object when viewed from different angles. In measurement, parallax error occurs when reading a scale from an angle rather than directly perpendicular to it, causing an offset reading. Align your eye directly with the scale to avoid parallax error.
Precision
The degree to which repeated measurements under unchanged conditions agree with each other. Closely related to repeatability (defined below). Precision is about consistency, not correctness; precise measurements may all be wrong by the same amount (biased).
Primary Standard
A reference standard of the highest metrological quality, from which measurements are derived. Primary standards are maintained by national metrology institutes such as NIST and, internationally, by the BIPM.
Range
The interval between the minimum and maximum values a measuring instrument can reliably measure. A thermometer with a range of -10°C to 110°C can measure anywhere in that interval but not outside it.
Reference Standard
A standard used as a comparison for calibrating other instruments. Reference standards are maintained with higher accuracy than working standards but below primary standards.
Repeatability
The closeness of agreement between successive measurements of the same measurand made under the same conditions. A measurement is repeatable when repeated measurements made the same way give similar results.
Reproducibility
The closeness of agreement between measurements of the same measurand made under changed conditions — different operators, different instruments, different locations. Reproducibility is a broader assessment of measurement consistency than repeatability.
Resolution
The smallest change in a quantity that can be detected by an instrument. A digital caliper with 0.01mm resolution can detect changes of 0.01mm or larger. Resolution limits how fine a distinction an instrument can make, regardless of accuracy.

S–Z Terms

Scale Division
The spacing between consecutive marks on a measurement scale. A ruler with 1mm divisions has a scale division of 1mm. The smaller the scale division, the finer the resolution available to the user.
SI Units
The International System of Units (Système International d'Unités): the globally accepted system of measurement based on seven base units (meter, kilogram, second, ampere, kelvin, mole, candela).
Significant Figures
The digits in a reported measurement that carry meaning contributing to its precision. In 12.70 cm, all four digits are significant — the "0" indicates precision to the hundredth of a centimeter. Zeros between non-zero digits are significant; leading zeros are not.
Standard Deviation (σ)
A measure of the dispersion or spread of a dataset. Standard deviation quantifies how much individual measurements vary from the mean. For normally distributed data, about 68% of measurements fall within ±1σ of the mean and about 95% within ±2σ.
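
A brief sketch with invented data, computing the sample standard deviation and counting how many readings actually fall within one standard deviation of the mean:

    import statistics

    data = [10.1, 10.3, 9.9, 10.0, 10.2, 10.1, 9.8, 10.0]   # invented readings

    mean = statistics.mean(data)
    s = statistics.stdev(data)                   # sample standard deviation

    within = sum(1 for x in data if abs(x - mean) <= s)
    print(f"mean = {mean:.3f}, s = {s:.3f}")
    print(f"{within} of {len(data)} readings lie within ±1 standard deviation")
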
Tolerance
The permissible limit or limits of variation in a dimension or measured quantity. A shaft specified as 25.00mm ± 0.05mm has a tolerance of 0.10mm total — measurements must fall between 24.95mm and 25.05mm to be acceptable.
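
Using the shaft example above, a bare-bones conformity check might look like this (a sketch only; it ignores measurement uncertainty and guard bands):

    nominal, tol = 25.00, 0.05                   # 25.00 mm ± 0.05 mm
    lower, upper = nominal - tol, nominal + tol  # 24.95 mm to 25.05 mm

    for measured in (24.97, 25.04, 25.06):       # invented readings, mm
        verdict = "within tolerance" if lower <= measured <= upper else "out of tolerance"
        print(f"{measured:.2f} mm: {verdict}")
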
Traceability
The property of a measurement result whereby it can be related to appropriate reference standards through an unbroken chain of calibrations, each with documented uncertainty. Traceable measurements can be trusted because they are connected to national standards.
Type A Evaluation
Uncertainty evaluation by statistical analysis of repeated measurements. Type A evaluation applies when you have data from multiple measurements and calculate uncertainty from their statistical distribution.
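
A minimal Type A sketch, assuming the standard uncertainty is taken as the standard deviation of the mean, s/√n, of a set of invented repeat readings:

    import math
    import statistics

    readings = [100.31, 100.29, 100.33, 100.30, 100.32]   # invented repeat readings, mm

    mean = statistics.mean(readings)
    s = statistics.stdev(readings)               # sample standard deviation
    u_a = s / math.sqrt(len(readings))           # standard uncertainty of the mean

    print(f"mean = {mean:.3f} mm, Type A standard uncertainty = {u_a:.4f} mm")
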
Type B Evaluation
Uncertainty evaluation using sources other than statistical analysis — such as manufacturer specifications, calibration certificates, published data, or professional judgment. Type B evaluation quantifies uncertainty without repeated measurements.
Uncertainty Budget
A tabulation of all uncertainty sources in a measurement and their contributions to the combined standard uncertainty. The uncertainty budget identifies which sources dominate total uncertainty and guides efforts to reduce it.
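
A toy uncertainty budget with two invented contributions: a Type A component from repeat readings and a Type B component from a manufacturer's ± specification treated as a rectangular distribution (half-width divided by √3). The combined standard uncertainty is the root sum of squares, and multiplying by k = 2 gives the expanded uncertainty:

    import math

    u_type_a = 0.008                             # mm, from repeat measurements (invented)
    spec_half_width = 0.02                       # mm, manufacturer's ± limit (invented)
    u_type_b = spec_half_width / math.sqrt(3)    # rectangular distribution assumed

    u_combined = math.sqrt(u_type_a**2 + u_type_b**2)   # root sum of squares
    expanded = 2 * u_combined                            # coverage factor k = 2

    print(f"Type A:   {u_type_a:.4f} mm")
    print(f"Type B:   {u_type_b:.4f} mm")
    print(f"combined: {u_combined:.4f} mm")
    print(f"expanded (k=2): {expanded:.4f} mm")
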
Zero Error
An error in an instrument where the zero reading is not at true zero. A micrometer with a zero error reads 0.003mm when its jaws are fully closed. All subsequent measurements include this offset and must be corrected.
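
A short illustration of the correction, using the micrometer example above with an invented reading:

    zero_error = 0.003          # mm, reading with the jaws fully closed
    raw = 12.475                # mm, an invented measurement
    corrected = raw - zero_error
    print(f"corrected reading: {corrected:.3f} mm")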

Why These Terms Matter

These terms form the vocabulary of measurement science, and knowing them precisely enables clear communication with colleagues, accurate interpretation of specifications, and correct application of measurement techniques. Whether you're writing a quality report, interpreting a calibration certificate, or specifying tolerances for a component, these terms are the language you work in.