*By: Hannah Pell*

Two years ago, on November 16th, 2018, representatives from more than 60 member nations of the Bureau International des Poids et Mesures (International Bureau of Weights and Measures) convened in Versailles, France to make a very important decision. Representatives in attendance at the 26th General Conference on Weights and Measures (CGPM) unanimously voted to redefine the International System of Units (SI) according to fundamental constants of nature, including the speed of light in a vacuum, the elementary charge, and the Planck, Avogadro, and Boltzmann constants. The change, which appropriately took effect on World Metrology Day (May 20) the following year, would have ripple effects around the world with regard to how scientific measurements are determined.

The resolution noted that an “essential requirement” for the SI system is that it is “uniform and accessible world-wide,” and that the units themselves “must be stable in the long term, internally self-consistent, and practically realizable.” By adopting these revisions, first proposed in 2011, the International Bureau of Weights and Measures was ensuring that measurements made using SI units would be precise and reproducible anywhere in the world.

Interestingly enough, measurements have not always been consistent. In fact, the meter has been updated five times since its original definition in the 1790s. The “English inch” was once defined as three barleycorns laid end to end. And at one point there were more than 250,000 different measures in use in France. Considering that almost everything around us relies on precise measurements to function, how we define those measurements is incredibly important. Standards have to come from somewhere.

So how have distance measurements changed over time? (While we’re at it, we could even ask how time itself has changed over time). And what can unit redefinitions teach us about the dynamic history of scientific measurements?

Let’s dive into the lengthy history of the meter.

The history of standardized length measurements stretches back thousands of years. Ancient Egyptians defined the “Egyptian Royal Cubit” as the length of the ruling Pharaoh’s forearm and hand in order to design and build the pyramids. The Greek and Roman civilizations built complicated networks of roads and aqueducts with crews working simultaneously along the routes, which could only have succeeded with standard measurements in place. People have long recognized the importance of standardized measuring systems.

A copy of the prototype meter in Paris, installed 1796-1797. Image Credit: Wikipedia.

Following the French Revolution, the meter was defined as one ten-millionth of the distance from the North Pole to the equator along the meridian passing through Paris. A platinum bar was made as the prototype meter for this new official distance and became known as the “mètre des Archives” (Archives meter) because it was stored in the French National Archives. However, it was not until more than 75 years later, at the Metre Convention of 1875 — the treaty that also created the International Bureau of Weights and Measures — that the meter was officially established as an international unit of measurement.

International Bureau of Weights and Measures in Sèvres, France. Image Credit: NIST historical collection.

At the end of the 1800s, physicists experimented with using interferometry — a process of using interfering waves (usually electromagnetic) to extract information — on the prototype meter in order to get an even more precise definition. Albert A. Michelson (of Michelson-Morley fame) won the 1907 Nobel Prize in Physics in part for his contribution to measuring the prototype meter to within one-tenth of a wavelength. These results led to the definition of the ångström (one ten-billionth of a meter), a standard unit in spectroscopy.

As a consequence of Michelson’s work and additional advances in interferometry, the 11th CGPM agreed to a new definition of the meter in 1960: “The metre is the length equal to 1 650 763.73 wavelengths in a vacuum of the radiation corresponding to the transition between the levels 2p10 and 5d5 of the krypton 86 atom.” A spectral line of krypton-86 had been chosen as the new wavelength standard.
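The arithmetic behind this definition is simple to check: fixing the meter at exactly 1,650,763.73 wavelengths also pins down the wavelength of that krypton-86 line. A quick sketch in Python (the variable names are ours, for illustration):

```python
# The 1960 definition fixes the meter as exactly 1,650,763.73
# krypton-86 wavelengths, which implies an exact wavelength value.
N_WAVELENGTHS = 1_650_763.73        # wavelengths per meter (exact, by definition)

wavelength_m = 1 / N_WAVELENGTHS    # implied wavelength in meters
wavelength_nm = wavelength_m * 1e9  # convert to nanometers

print(f"{wavelength_nm:.2f} nm")    # ≈ 605.78 nm, an orange-red line of Kr-86
```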

The Krypton-86 lamp. Image Credit: NIST Digital Archives.

The meter was again redefined in 1983, but this time in terms of the speed of light. Developments in electronics and the invention of lasers made it possible to independently measure the frequency and wavelength of a given light source. Knowing c = fλ, the speed of light could be experimentally determined. In 1975, the 15th General Conference on Weights and Measures recommended an official value for the speed of light: 299 792 458 meters per second. In 1983, the 17th General Conference on Weights and Measures updated the definition of the meter as the path travelled by light in a vacuum in 1/299,792,458 of a second.

Notice the units. The meter (a length) was — and is still — defined in terms of seconds (time). This is useful for a few reasons: (1) time can be measured more accurately than length, and (2) because the speed of light is constant, the meter can be realized using any known source of frequency (c = fλ).
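That second point can be made concrete with a few lines of Python. With c fixed exactly, any source of known frequency yields a known wavelength via λ = c/f, which is how laboratories realize the meter in practice. The laser frequency below is illustrative (an iodine-stabilized helium-neon laser, a common length standard, operates near 473.612 THz):

```python
# A minimal sketch of realizing the meter under the 1983 definition:
# the speed of light is exact, so wavelength follows from frequency.
C = 299_792_458  # speed of light in m/s (exact, by definition)

def wavelength_m(frequency_hz: float) -> float:
    """Wavelength in meters for a light source of known frequency."""
    return C / frequency_hz

f_hene = 473.612e12  # illustrative: iodine-stabilized HeNe laser frequency, Hz
print(f"{wavelength_m(f_hene) * 1e9:.1f} nm")  # ≈ 633 nm, the familiar red HeNe line
```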

This decision did not pass without comment:

Image Credit: New Scientist, 27 October 1983.

The current definition of the meter was reworded in 2019: “The metre, symbol m, is the SI unit of length. It is defined by taking the fixed numerical value of the speed of light in a vacuum c to be 299 792 458 when expressed in the unit m⋅s⁻¹, where the second is defined in terms of the caesium frequency Δν_Cs.”

We’ve seen how length standards have been redefined from forearms to meridians to the speed of light. Tracing how standards have changed is itself a useful way to “measure” the history of science; the increasingly precise definitions of the meter represent, in a sense, snapshots of our scientific capabilities and ingenuity at each moment in time. We can only wonder: what future scientific discoveries will characterize the next redefinition of the meter?