Additional information can be found in the Dictionary of Units
Length is probably the oldest and most commonly used category of measurement in the world (though there is an argument for 'time' having a prior claim to both). This is reflected in the number of different units (both by name and by definition) which can be found. Including historical as well as current usage, the total count - worldwide - must come to a few hundred.
All early measures of length were based on the human body, as seen by their names: "nail", "digit", 'palm', 'hand', 'span', 'pace'; with multiples of those to make bigger units.
Perhaps one of the best-known examples of an historical measure is the 'cubit' used in Ancient Egypt and other countries. Research has identified 8 examples of this which range in length from 44 to 64 centimetres. Imagine the trouble this must cause anyone wishing to translate an old text which uses them!
In England, a major move towards standardisation was made by about the 13th century. But there was quite a lot of confusion (as we can now see when looking back). For instance, there was a statute of those times which declared
"It is ordained that 3 grains of barley, dry and round, make an inch, 12 inches make a foot, 3 feet make an Ulna, 5 and a half Ulnae make a rod...."
The Ulna later became the yard.
This looks like a reasonable attempt to build from the smallest to the biggest. But, at the same time there were also references to "the iron yard of our Lord the King"; and that came from the very first "Assize of Measures" ordered by Richard the First in 1196, which defined all other measures in terms of the yard!
However, what is certain is that over the following centuries it was the yard which was the 'standard' measure, (defined as the distance between two marks scratched in a metal bar held by the Exchequer), and all the other measures - from inches to miles were based on that.
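As a quick check of the arithmetic in the statute quoted above, here is a short sketch in Python (the constant names are modern labels for convenience, not the statute's own wording):

```python
# The mediaeval chain of length units from the statute:
# 3 barleycorns = 1 inch, 12 inches = 1 foot,
# 3 feet = 1 Ulna (yard), 5.5 Ulnae = 1 rod.
BARLEYCORNS_PER_INCH = 3
INCHES_PER_FOOT = 12
FEET_PER_YARD = 3       # the statute's "Ulna"
YARDS_PER_ROD = 5.5

inches_per_rod = INCHES_PER_FOOT * FEET_PER_YARD * YARDS_PER_ROD
barleycorns_per_rod = inches_per_rod * BARLEYCORNS_PER_INCH

print(inches_per_rod)       # 198.0 inches (16.5 feet) to the rod
print(barleycorns_per_rod)  # 594.0 barleycorns
```

So the rod works out at 16 and a half feet - which is indeed the traditional value.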
This standard for the yard lasted until The Weights and Measures Act of 1963, when it was decreed that 1 yard = 0.9144 metres exactly.
The metre is the SI standard of length and is defined as the distance a beam of light travels, in a vacuum, in 1/299 792 458 of a second.
The above definition of the yard resulted in the imperial foot being 0.3048 metres exactly.
However, in the USA, in 1866 the metre was declared to be 39.37 inches.
This made the US foot 0.3048006096... metres (approx.)
Not a lot of difference? It was to scientists and engineers, especially as measuring instruments became more and more accurate. It was not until the 1950s that agreement was reached, when the imperial definition was adopted by the USA.
In the meantime, most of the USA had been surveyed using the 1866 definition, which became identified as the US survey foot.
Metrication of the survey, undertaken towards the end of the 1900s, used, for conversion purposes, the fact that 1 metre = 39.37 inches exactly (making the US survey foot equal to 1200/3937 metres).
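The difference between the two feet is easy to show with exact fractions. A sketch in Python (the two definitions used are the ones quoted above):

```python
from fractions import Fraction

# International foot (the imperial definition adopted by the USA
# in the 1950s): exactly 0.3048 m.
intl_foot = Fraction(3048, 10000)

# US survey foot, from the 1866 declaration that 1 metre = 39.37 inches:
# 1 foot = 12/39.37 m = 1200/3937 m.
survey_foot = Fraction(1200, 3937)

diff_per_foot = survey_foot - intl_foot   # in metres
diff_per_mile = diff_per_foot * 5280      # 5280 feet to the mile

print(float(survey_foot))    # 0.3048006096... as quoted above
print(float(diff_per_mile))  # about 0.0033 m, i.e. roughly 3 mm per mile
```

Tiny per foot, but over a state-wide survey those millimetres per mile add up, which is why surveyors cared which foot was meant.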
Astronomers (and science fiction writers) need much larger units, such as the light year, which is the distance travelled by light in one year.
All of the units used to measure area are based on those for length and, in nearly all cases their names reflect this. So, one square yard means the area covered by a square measuring one yard by one yard. Of course, that is only a matter of defining the size. In practical work the shape is usually anything but a square. And we work out things like half a yard by two yards = one square yard, and then move on to 'nastier' shapes and numbers.
One interesting measure of area which can be found in many different countries and cultures over time is one which does not 'declare' its size in the above manner, but which is based on how much land can be tilled in one day - usually by one man and a team of oxen (or horses) drawing a plough. Not surprisingly, wide variations are found in the size of this unit. After all there must be a big difference between working in hilly countryside with tough rocky ground, and being on a level plain with good quality 'crumbly' soil. Then again, maybe some people were more 'workish' than others?
In English-speaking countries it is the acre which is the 'work-related' unit.
This unit is well covered by various documents going back to before 1000 AD (one is dated at 732). It was generally understood to be the amount of land that could be tilled in one day by a team of eight oxen. In practice this meant one morning, since the team was usually rested in the afternoon. This compares with an early Germanic unit, the morgen, which was similarly defined as 'the land ploughable between dawn and noon'.
The actual size of the acre was officially given (in a statute of 1305) as a piece of land measuring 40 perches in length by 4 in breadth - equivalent to 4840 square yards.
A 'circular inch' is the area of a circle having a diameter of one inch, while a 'circular mil' refers to a circle with a diameter of one-thousandth of an inch. These units were once used by engineering trades when working with circular objects such as pipes and wires.
Visualising land areas is not easy without experience and practice. But if you need to, it can be a help to think in terms of football pitches.
In English Association Football the rules state that the pitch must be between 50 and 100 yards wide, and between 100 and 130 yards long (with length greater than breadth, so that it is oblong in shape). This big variation means that (by coincidence) the smallest possible pitch is roughly 1 acre in size, while the largest is roughly 1 hectare. Small pitches are those usually crowded onto school playing-fields, while the large pitches are those on which international matches are played.
Of course you can do something similar with any existing piece of land with which you might be familiar (baseball diamond, hockey pitch etc.) but will need to find the official sizes and do a few sums.
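For anyone who wants to check the football-pitch comparison, here is the arithmetic sketched in Python, using the exact yard-to-metre conversion and the pitch limits given above:

```python
YARD_M = 0.9144  # metres per yard (exact, by definition)

def pitch_area_m2(length_yd, width_yd):
    """Area of a rectangular pitch, in square metres."""
    return (length_yd * YARD_M) * (width_yd * YARD_M)

ACRE_M2 = 4840 * YARD_M ** 2   # an acre is 4840 square yards
HECTARE_M2 = 10_000.0          # a hectare is 10,000 square metres

smallest = pitch_area_m2(100, 50)    # smallest pitch the rules allow
largest = pitch_area_m2(130, 100)    # largest pitch the rules allow

print(round(smallest / ACRE_M2, 2))    # about 1.03 acres
print(round(largest / HECTARE_M2, 2))  # about 1.09 hectares
```

So the match is approximate rather than exact: the smallest legal pitch is about 3% over an acre, and the largest about 9% over a hectare - close enough for visualising, if not for surveying.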
Volume or capacity? They measure the same thing - three-dimensional space - but they are slightly different in usage: capacity refers to a containing space and the room available to hold something, while volume is the space actually occupied by an object or the bulk of some substance. For example, it might be said that a bucket has a capacity of 20 litres, which means that the volume of water needed to fill the bucket is 20 litres.
With the imperial system a distinction was made in the units used to measure them. Volume was measured in cubic measures (cubic feet etc.) while measures of capacity used pints, gallons etc. With the metric system it is a little more blurred, since cubic metres and related sizes are so closely linked to the litre and its multiples in a way that the imperial system never was. (How many people ever knew how many gallons there were in a cubic foot?)
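That parenthetical question can be answered with the modern exact definitions (inch = 2.54 cm, imperial gallon = 4.54609 litres). A sketch in Python:

```python
INCH_CM = 2.54         # centimetres per inch (exact)
UK_GALLON_L = 4.54609  # litres per imperial gallon (exact, current definition)

# A cubic foot is (12 inches) cubed; convert cm^3 to litres.
cubic_foot_litres = (12 * INCH_CM) ** 3 / 1000

gallons_per_cubic_foot = cubic_foot_litres / UK_GALLON_L

print(round(cubic_foot_litres, 3))       # about 28.317 litres
print(round(gallons_per_cubic_foot, 2))  # about 6.23 gallons
```

About 6 and a quarter gallons to the cubic foot - an awkward figure, which rather makes the point about why nobody knew it.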
In the old imperial system the gallon had its own definition, independent of any cubic measures. It was defined (in 1824) as "the space occupied by 10 pounds of pure water at 62°F". (This was just over 277 cubic inches.)
In the 1970s, when all units were re-defined in terms of metric measures, it was decreed that the gallon should have an exact metric value; it stands today at 4.54609 litres exactly.
It is important to remember that the UK (imperial) gallon is NOT the same size as the US gallon. It is bigger by about three-quarters of a litre.
The US gallon was originally defined to be 231 cubic inches. This was in fact the Queen Anne 'wine gallon' (of 1707), which has a traceable history back to 1493. With metrication, the US gallon is now defined as 3.785411784 litres exactly (which is just 231 cubic inches evaluated with the inch at exactly 2.54 centimetres).
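The comparison between the two gallons follows directly from their exact metric definitions. A sketch in Python:

```python
INCH_CM = 2.54                           # centimetres per inch (exact)

# US gallon: 231 cubic inches, converted to litres.
US_GALLON_L = 231 * INCH_CM ** 3 / 1000

# UK (imperial) gallon: exact metric definition.
UK_GALLON_L = 4.54609

print(round(US_GALLON_L, 9))                # 3.785411784 litres
print(round(UK_GALLON_L - US_GALLON_L, 3))  # 0.761 litres difference
```

The difference of about 0.76 litres is the "about three-quarters of a litre" mentioned above.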
Though hardly used now at all in the UK, and increasingly less so in the USA, there was for many centuries a parallel system of pints, gallons and bushels used for dry measures. And, though sharing the same names, they were not the same sizes as the liquid measures! They were used to measure things like corn, flour, beans, small fruit or fish, and similar articles. For 'accuracy' in these measures it was generally required that the measuring container be over-filled, and then a straight-edge scraped across the top to make it level. This was referred to as 'struck measure'. The alternative was a 'heaped measure'. Oh, there were as many sharp practices in those (good old) days as we have now - they were just different, that's all!
Or is it weight? And does it matter? Well it matters if you are a scientist or a space engineer, otherwise - don't bother.
However, for those who want to know, try this.
The mass of an object is a measure of how much matter it contains, and it is the property of that object that controls the way it will behave under the action of a force.
The weight of an object is a measure of the force the object itself produces when it is in a gravitational field.
(No, it isn't easy!)
The mass of any given object does not change wherever it is in the Universe, whereas the weight depends upon the gravitational force at the place it is being weighed. We are generally all familiar with the idea that an astronaut in space can experience 'weightlessness' (if there is no gravitational pull acting), but there is just as much of the astronaut as there ever was - his/her mass has not changed.
Does it matter here on Earth? Well (since the Earth is not a true sphere) there is a slight variation in its gravitational field at different places. However, the most extreme difference is between the Equator and the North Pole, and that is about one-half of one per cent. So, do your shopping at the Equator where, for a given weight, you will get a greater mass (of whatever) than you will at the North Pole. Mind, it does depend on how the weighing is done!
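The size of that Equator-to-Pole difference is easy to estimate. A sketch in Python, using commonly quoted sea-level values of g (the exact figures vary slightly depending on which model of the Earth is used):

```python
# Commonly quoted sea-level values of free-fall acceleration:
g_equator = 9.7803  # m/s^2 at the Equator
g_pole = 9.8322     # m/s^2 at the poles

mass_kg = 1.0                            # the mass never changes...
weight_equator_n = mass_kg * g_equator   # ...but its weight (in newtons) does
weight_pole_n = mass_kg * g_pole

variation = (g_pole - g_equator) / g_pole
print(f"{variation:.2%}")  # about 0.53%
```

So a spring balance (which measures force) would read about half a per cent more at the Pole; a beam balance (which compares masses) would notice no difference at all - which is what "it depends on how the weighing is done" means.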
Note that the SI unit of mass, which is the kilogram, is the only base unit defined with a prefix (kilo) already in place and, also, the only one which is defined by reference to a physical object - a mass of platinum-iridium held at Sèvres in France. Work is being done on developing a workable definition based on the laws of physics, like those of all the other base units, but making it practicable, so that it can be replicated independently in any other (properly equipped) laboratory, is difficult.
'Troy ounces' and 'metric carats' are used in the weighing of precious stones and metals.
The 'slug' is a technical unit of mass in the (now obsolete) foot-pound-second system.
Not surprisingly, over time there have been various attempts at, and proposals for, the measurement of temperature. Only the five most important - those that had some real degree of usage or practicality - are dealt with here.
One of the earliest was that devised by Gabriel Daniel Fahrenheit (1686-1736), a German physicist. He introduced the use of mercury as the measuring fluid, which brought a higher degree of accuracy and stability to thermometers than had previously been known. He also used a mixture of salt and water, which has a lower freezing-point than pure water, to mark the zero point on his scale - by doing this he hoped to avoid having negative temperatures. He wanted the temperature of the human body to be about 100° on his scale, but then adjusted things so that the freezing- and boiling-points of pure water were whole numbers with a 'nice' value (180°) between them. These values became 32° and 212° respectively, making body temperature 98.6°. The scale was adopted very quickly (about 1724) and has been used ever since.
Another early attempt was by R A F de Réamur (1683-1757) a French scientist. He knew nothing of Fahrenheit's work and did not use mercury, but did produce a good working thermometer. He used the freezing-point of water as his zero mark, and put the boiling-point at 80 degrees. This scale was widely used (especially in France) for some time but is now obsolete. He has a greater claim to fame for much of the other scientific work he did.
Another well-known name in thermometry is that of Anders Celsius (1701-1744), a Swedish astronomer. In his proposal he set the boiling-point of water as his zero mark, and put the freezing-point at 100 degrees! It was reversed only a few years later (possibly not by him). The scale gained wide popularity, especially with scientists, but became known as the Centigrade scale because of its 100 divisions. Perhaps this would not have mattered but, unfortunately, there was also a unit called a 'grade', sub-divided into 'centigrades', used for measuring angles. So, in 1948 the General Conference on Weights and Measures ruled that the name 'Celsius' and the symbol °C were to be used. But these things take time - even in the year 2000, a leading manufacturer of clinical thermometers was still labelling them as 'centigrade'!
For scientists the most important name in thermometry is that of Kelvin. He was William Thomson, Baron Kelvin (1824-1907), a Scottish physicist and mathematician. It was he who discovered the principal laws governing the behaviour of matter in relation to energy and heat, and this led to the idea of an 'absolute zero', a temperature below which it is impossible to go since matter has zero energy at that point. He used the Celsius degree for making his measurements, so a change of 1 K equals a change of 1 °C. (Note that the degree sign is not used with the Kelvin scale.) It is hardly likely to be used for 'ordinary' work - imagine stating a room temperature as 293 K (about 20 °C) or a normal body temperature as 310.15 K (= 37 °C)!
Finally, W J M Rankine (1820-1872) a Scottish engineer created his scale, which was merely the Kelvin scale using the Fahrenheit degree instead of the Celsius.
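The relationships between the five scales described above are all simple linear conversions. Here they are sketched in Python, with Celsius as the common reference (the function names are chosen for readability; the formulas are the standard ones):

```python
def fahrenheit_to_celsius(f):
    # 32 °F and 212 °F map to 0 °C and 100 °C: 180 F-degrees per 100 C-degrees.
    return (f - 32) * 5 / 9

def reaumur_to_celsius(re):
    # Réaumur put boiling water at 80 degrees, so 80 Ré = 100 °C.
    return re * 5 / 4

def kelvin_to_celsius(k):
    # The Kelvin scale uses Celsius-sized degrees counted from absolute zero.
    return k - 273.15

def rankine_to_celsius(r):
    # The Rankine scale uses Fahrenheit-sized degrees counted from absolute zero.
    return (r - 491.67) * 5 / 9

print(fahrenheit_to_celsius(212))           # 100.0 (boiling point of water)
print(reaumur_to_celsius(80))               # 100.0
print(round(kelvin_to_celsius(310.15), 2))  # 37.0  (body temperature, as above)
print(rankine_to_celsius(491.67))           # 0.0   (freezing point of water)
```

Each conversion is just a shift of zero point, a change in the size of the degree, or both - which is all that ever distinguished the five scales.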