ZERO
Recall your Roman numerals:
I = 1
II = 2
III = 3
IV = 4
V = 5
...
X = 10
...
L = 50
...
C = 100
and so on.
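If you'd like to see the system in action, here is a small Python sketch (ours, not part of any ancient manuscript) that converts a positive integer to Roman numeral form; the table of values and the name to_roman are our own choices.

    # Value/symbol pairs in descending order, including the subtractive
    # forms (IV, IX, XL, ...) used in standard Roman notation.
    ROMAN_VALUES = [
        (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
        (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
        (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
    ]

    def to_roman(n: int) -> str:
        if n <= 0:
            # The Romans had no symbol for zero, so neither do we.
            raise ValueError("no Roman numeral exists for zero or negatives")
        digits = []
        for value, symbol in ROMAN_VALUES:
            count, n = divmod(n, value)   # how many copies of this symbol fit
            digits.append(symbol * count)
        return "".join(digits)

    print(to_roman(2024))  # prints MMXXIV

Try calling to_roman(0) and the function simply balks: there is no symbol to emit.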
Notice that there is no symbol for "zero" in the Roman numerals.
This is typical of the ancient methods of numeration (even those that employed positional notation,
such as the Babylonian system). The failure to recognize the concept of zero and the absence of a symbol
representing the number zero had their drawbacks. In particular, college professors of those days
were unable to make grade book entries indicating that a student had missed every possible point
on a test (or hadn't even taken the test). The only practical solution was to have such students
executed. Although this approach did result in higher academic standards, some parents complained.
Although the concept of zero was apparently familiar to some ancient Greek astronomers, the first clearly documented study of the number zero comes from India
in the 7th century AD. The author Brahmagupta defined zero to be the result of subtracting a number from itself.
He also identified several other properties of zero, although he got a few things wrong (for instance, he
didn't recognize that division by zero is impossible).
A few centuries after Brahmagupta, Islamic and Arabic scholars such as al-Khwarizmi brought Indian mathematical knowledge to the West. Even so, it wasn't
until the 14th century and afterward that the use of zero as a number became commonplace among Europeans.
For further reading:
R. Kaplan, The Nothing That Is: A Natural History of Zero (London, 1999).
R. Mukherjee, Discovery of Zero and Its Impact on Indian Mathematics (Calcutta, 1991).