The concept of the number zero is an indispensable part of calculus, algebra, computation and our ability to express large numbers clearly. However, using zero as an actual number is a relatively recent idea in mathematical history. At some point during the first three millennia BC, Babylonian and Sumerian scribes began to use a positional number system – meaning that the value of a digit depended on its position (in the same way that 1 has a larger value when it is the first digit of a two-digit number such as 12). However, rather than writing a symbol in an empty position (as we would write the zero in 205), they left a space or marked the space with two dashes.
It was not until about the fifth century AD that Indian mathematicians began to treat zero as a number in its own right. This meant that, in their system, the empty position could be marked with a zero. It also made it far easier to express and imagine large numbers. Writing a number like a million in Greek or Roman numerals was extremely cumbersome, whereas the Indians could now simply write a one followed by six zeros. This revolutionized the way calculations were performed and laid the groundwork for the arithmetic we all now learn in school.