Money is a commodity accepted by general consent as a medium of economic exchange. It is the medium in which prices and values are expressed; as currency, it circulates anonymously from person to person and country to country, thus facilitating trade, and it is the principal measure of wealth.
The subject of money has
fascinated people from the time of Aristotle to the present day. The piece of
paper labeled 1 dollar, 10 euros, 100 yuan, or 1,000 yen is a little different,
as paper, from a piece of the same size torn from a newspaper or magazine, yet
it will enable its bearer to command some measure of food, drink, clothing, and
the remaining goods of life while the other is fit only to light the fire.
Whence the difference? The easy answer, and the right one, is that modern money
is a social contrivance. People accept money as such because they know that
others will. This common knowledge makes the pieces of paper valuable because
everyone thinks they are, and everyone thinks they are because in his or her
experience money has always been accepted in exchange for valuable goods,
assets, or services. At bottom money is, then, a social convention, but a
convention of uncommon strength that people will abide by even under extreme
provocation. The strength of the convention is, of course, what enables governments
to profit by inflating (increasing the quantity of) the currency. But it is not
indestructible. When great increases occur in the quantity of these pieces of
paper, as has happened during and after wars, money may be seen to be, after all, no
more than pieces of paper. If the social arrangement that sustains money as a
medium of exchange breaks down, people will then seek substitutes, such as the
cigarettes and cognac that for a time served as the medium of exchange in
Germany after World War II. New money may substitute for old under less extreme
conditions. In many countries with a history of high inflation, such as
Argentina, Israel, or Russia, prices may be quoted in a different currency,
such as the U.S. dollar, because the dollar has a more stable value than the
local currency. Furthermore, the country’s residents accept the dollar as a
medium of exchange because it is well-known and offers more stable purchasing
power than local money.
The basic function of money is to
enable buying to be separated from selling, thus permitting trade to take place
without the so-called double coincidence of barter. In principle, credit could
perform this function, but, before extending credit, the seller would want to
know about the prospects of repayment. That requires much more information
about the buyer and imposes costs of information and verification that the use
of money avoids.
If a person has something to sell
and wants something else in return, the use of money avoids the need to search
for someone able and willing to make the desired exchange of items. The person
can sell the surplus item for general purchasing power—that is, “money”—to
anyone who wants to buy it and then use the proceeds to buy the desired item
from anyone who wants to sell it.
The importance of this function of money is dramatically illustrated by the experience of Germany just after World War II when paper money was rendered largely useless because of price controls that were enforced effectively by the American, French, and British armies of occupation. Money rapidly lost its value. People were unwilling to exchange real goods for Germany’s depreciating currency. They resorted to barter or to other inefficient money substitutes (such as cigarettes). Price controls reduced incentives to produce. The country’s economic output fell by half. Later the German “economic miracle” that took root just after 1948 reflected, in part, a currency reform instituted by the occupation authorities that replaced depreciating money with money of stable value. At the same time, the reform eliminated all price controls, thereby permitting a money economy to replace a barter economy.
These examples illustrate the
“medium of exchange” function of money. Separation of the act of sale from the
act of purchase requires the existence of something that will be generally
accepted in payment. But there must also be something that can serve as a
temporary store of purchasing power, in which the seller holds the proceeds in
the interim between the sale and the subsequent purchase or from which the
buyer can extract the general purchasing power with which to pay for what is
bought. This is called the “asset” function of money.
Anything can serve as money that
habit or social convention and successful experience endow with the quality of
general acceptability, and a variety of items have so served—from the wampum
(beads made from shells) of American Indians, to cowries (brightly colored
shells) in India, to whales’ teeth among the Fijians, to tobacco among early
colonists in North America, to large stone disks on the Pacific island of Yap,
to cigarettes in post-World War II Germany and in prisons the world over. In
fact, the wide use of cattle as money in primitive times survives in the word
pecuniary, which comes from the Latin pecus, meaning cattle. The development of
money has been marked by repeated innovations in the objects used as money.
Metals have been used as money
throughout history. As Aristotle observed, the various necessities of life are
not easily carried about; hence people agreed to employ in their dealings with
each other something that was intrinsically useful and easily applicable to the
purposes of life—for example, iron, silver, and the like. The value of the
metal was at first measured by weight, but in time governments or sovereigns
put a stamp upon it to avoid the trouble of weighing it and to make the value
known at sight.
The use of metal for money can be
traced back to Babylon before 2000 BC, but standardization and
certification in the form of coinage did not occur except perhaps in isolated
instances until the 7th century BC. Historians generally ascribe the first use
of coined money to Croesus, king of Lydia, a state in Anatolia. The earliest
coins were made of electrum, a natural mixture of gold and silver, and were
crude, bean-shaped ingots bearing a primitive punch mark certifying either
weight or fineness or both.
The use of coins enabled payment
to be by “tale,” or count, rather than weight, greatly facilitating commerce.
But this in turn encouraged “clipping” (shaving off tiny slivers from the sides
or edges of coins) and “sweating” (shaking a bunch of coins together in a
leather bag and collecting the dust that was thereby knocked off) in the hope
of passing on the lighter coin at its face value. The resulting economic
situation was described by Gresham’s law (that “bad money drives out good” when
there is a fixed rate of exchange between them): heavy, good coins were held
for their metallic value, while light coins were passed on to others. In time
the coins became lighter and lighter and prices higher and higher. As a means
of correcting this problem, payment by weight would be resumed for large
transactions, and there would be pressure for recoinage. These particular
defects were largely ended by the “milling” of coins (making serrations around
the circumference of a coin), which began in the late 17th century.
A more serious problem occurred
when the sovereign would attempt to benefit from the monopoly of coinage. In
this respect, the Greek and Roman experience offers an interesting contrast. Solon,
on taking office in Athens in 594 BC, did institute a partial debasement of the
currency. For the next four centuries (until the absorption of Greece into the
Roman Empire) the Athenian drachma had an almost constant silver content (67
grains of fine silver until Alexander, 65 grains thereafter) and became the
standard coin of trade in Greece and in much of Asia and Europe as well. Even after
the Roman conquest of the Greek peninsula in roughly the 2nd century
BC, the drachma continued to be minted and widely used.
The Roman experience was very different. Not long after the silver denarius, patterned after the Greek drachma, was introduced about 212 BC, the prior copper coinage (aes, or libra) began to be debased until, by the onset of the empire, its weight had been reduced from 1 pound (about 450 grams) to half an ounce (about 15 grams). By contrast the silver denarius and the gold aureus (introduced about 87 BC) suffered only minor debasement until the time of Nero (AD 54), when almost continuous tampering with the coinage began. The metal content of the gold and silver coins was reduced, while the proportion of alloy was increased to three-fourths or more of its weight. Debasement in Rome (as ever since) used the state’s profit from money creation to cover its inability or unwillingness to finance its expenditures through explicit taxes. But the debasement in turn raised prices, worsened Rome’s economic situation, and contributed to the collapse of the empire.
Paper money
Carrying large quantities of gold,
silver, or other metals proved inconvenient
and risked loss or theft. The first use of paper money occurred in China more
than 1,000 years ago. By the late 18th and early 19th centuries, paper money and
banknotes had spread to other parts of the world. The bulk of the money in use
came to consist not of actual gold or silver but of fiduciary money—promises to
pay specified amounts of gold and silver. These promises were initially issued
by individuals or companies as banknotes or as transferable book entries
that came to be called deposits. Although deposits and banknotes began as
claims to gold or silver on deposit at a bank or with a merchant, this later
changed. Knowing that not everyone would claim his or her balance at once, the
banker (or merchant) could issue more claims to the gold and silver than the
amount held in safekeeping. Bankers could then invest the difference or lend it
at interest. In periods of distress, however, when borrowers did not repay
their loans or in case of overissue, the banks could fail.
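The arithmetic behind this fractional-reserve practice can be sketched briefly. The reserve ratio and quantities below are hypothetical illustrations, not historical figures:

```python
# Illustrative fractional-reserve arithmetic for the banknote issue described
# above. The 25% reserve ratio and 100-ounce holding are assumed for the example.

def claims_issuable(gold_held: float, reserve_ratio: float) -> float:
    """Claims (notes or deposit balances) a banker can keep in circulation
    while holding gold_held ounces of metal against them."""
    return gold_held / reserve_ratio

gold_in_vault = 100.0                          # ounces actually held in safekeeping
notes = claims_issuable(gold_in_vault, 0.25)   # claims circulating against that gold
loanable = notes - gold_in_vault               # the difference, lent or invested at interest
print(notes)     # 400.0
print(loanable)  # 300.0
```

As long as redemptions stay below the reserve, the banker earns interest on the difference; a rush of redemptions or a wave of loan defaults is exactly the failure mode the paragraph describes.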
Gradually, governments assumed a
supervisory role. They specified legal tender, defining the type of payment
that legally discharged a debt when offered to the creditor and that could be
used to pay taxes. Governments also set the weight and metallic composition of
coins. Later they replaced fiduciary paper money—promises to pay in gold or
silver—with fiat paper money—that is, notes that are issued on the “fiat” of
the sovereign government, are specified to be so many dollars, pounds, or yen,
etc., and are legal tender but are not promises to pay something else.
The first large-scale issue of
paper money in a Western country occurred in France in the early 18th century.
Subsequently, the French Revolutionary government issued assignats from 1789 to
1796. Similarly, the American colonies and later the Continental Congress
issued bills of credit that could be used in making payments. Yet these and
other early experiments gave fiat money a deservedly bad name. The money was
overissued, and prices rose drastically until the money became worthless or was
redeemed in metallic money (or promises to pay metallic money) at a small
fraction of its initial value.
Subsequent issues of fiat money
in the major countries during the 19th century were temporary departures from a
metallic standard. In Great Britain, for example, the government suspended
payment of gold for all outstanding banknotes during the Napoleonic Wars
(1797–1815). To finance the war, the government issued fiat paper money. Prices
in Great Britain doubled as a result, and gold coin and bullion became more
expensive in terms of paper. To restore the gold standard at the former gold
price, the government deflated the price level by reducing the quantity of
money. In 1821 Great Britain restored the gold standard. Similarly, during the
American Civil War, the U.S. government suspended the convertibility of Union
currency (greenbacks) into specie (gold or silver coin), and resumption did not
occur until 1879 (see specie payment). At its peak in 1864, the greenback price
of gold reached more than $250 per $100 of gold.
Episodes of this kind, which were repeated in many countries, convinced the public that war brings inflation and that the aftermath of war brings deflation and depression. This sequence is not inevitable. It reflected 19th-century experience under metallic money standards. Typically, wars require increased government spending and budget deficits. Governments suspended the metallic (gold) standard and financed their deficits by borrowing and printing paper money. Prices rose.
Throughout history, the price of
gold would be far above its prewar value when wartime spending and inflation
ended. To restore the metallic standard to the prewar price of gold in paper
money, prices quoted in paper money had to fall. The alternative was to accept
the increased price of gold in paper money by devaluing the currency (that is,
reducing money’s purchasing power). After World War I, the British and the
United States governments forced prices to fall, but many other countries
devalued their currencies against gold. After World War II, all major countries
accepted the higher wartime price level, and most devalued their currencies to
avoid deflation and depression.
The widespread use of paper money
brought other problems. Since the cost of producing paper money is far lower
than its exchange value, forgery is common (it cost about 4 cents to produce
one piece of U.S. paper currency in 1999). Later the development of copying
machines necessitated changes in paper and the use of metallic strips and other
devices to make forgery more difficult. In addition, the use of machines to
identify, count, or change currency increased the need for tests to identify
genuine currency.
Standards of value
In the Middle Ages, when money
consisted primarily of coins, silver and gold coins circulated simultaneously.
As governments came increasingly to take over the coinage and especially as
fiduciary money was introduced, they specified their nominal (face value)
monetary units in terms of fixed weights of either silver or gold. Some adopted
a national bimetallic standard, with fixed weights for both gold and silver
based on their relative values on a given date—for example, 15 ounces of silver
equal 1 ounce of gold (see bimetallism). As the relative market prices of the metals changed, the phenomenon
associated with Gresham’s law ensured that the bimetallic standard degenerated
into a monometallic standard. If, for example, the quantity of silver
designated as the monetary equivalent of 1 ounce of gold (15 to 1) was less
than the quantity that could be purchased in the market for 1 ounce of gold
(say 16 to 1), no one would bring gold to be coined. Holders of gold could
instead profit by buying silver in the market, receiving 16 ounces for each
ounce of gold; they would then take 15 ounces of silver to the mint to be
coined and accept payment in gold.
Continuing this profitable exchange drained gold from the mint, leaving the mint with silver coinage. In this example, silver, the cheaper metal in the market, “drove out” gold and became the standard. This happened in most of the countries of Europe so that by the early 19th century all were effectively on a silver standard. In Britain, on the other hand, the ratio established in the 18th century on the advice of Sir Isaac Newton, then serving as master of the mint, overvalued gold and therefore led to an effective gold standard. In the United States, a ratio of 15 ounces of silver to 1 ounce of gold was set in 1792. This ratio overvalued silver, so silver became the standard. Then in 1834, the ratio was altered to 16 to 1, which overvalued gold, so gold again became the standard.
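The arbitrage described above can be sketched as a simple calculation, using the 15-to-1 mint ratio and 16-to-1 market ratio from the text's example:

```python
# Sketch of the bimetallic arbitrage cycle described above (illustrative only).
# Mint ratio: 15 oz silver coined as the equivalent of 1 oz gold.
# Market ratio: 1 oz gold buys 16 oz silver.

MINT_RATIO = 15
MARKET_RATIO = 16

def arbitrage_cycle(gold_oz: float) -> tuple[float, float]:
    """Run one cycle: sell gold for silver in the market, take enough silver
    to the mint to recover the gold's coined value, keep the surplus silver."""
    silver_bought = gold_oz * MARKET_RATIO          # silver obtained in the market
    silver_to_mint = gold_oz * MINT_RATIO           # silver coined to recover the gold value
    profit_silver = silver_bought - silver_to_mint  # surplus silver retained
    return gold_oz, profit_silver

gold_recovered, profit = arbitrage_cycle(1.0)
print(gold_recovered)  # 1.0 oz of gold value recovered from the mint
print(profit)          # 1.0 oz of silver profit per cycle
```

Each cycle leaves the arbitrageur with a riskless ounce of silver, which is why gold stopped arriving at the mint and silver became the effective standard.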