It’s easy to forget that, for most of its existence, the English word “computer” referred not to machines, but to people who performed calculations. First used in the seventeenth century, the term arrived via French from the Latin computare, meaning to count or add up. Computare itself derived from the combination of the words com, meaning “with,” and putare, which originally meant “to prune” in the sense of trimming something down to size, and which came to imply “reckoning” by analogy with mentally pruning something down to a manageable estimate.
Long before eminent Victorians like Charles Babbage had even dreamed of calculating machines, human computing had been vital to such feats as the ancient Egyptians’ understanding of the motion of the stars and planets, with mathematicians like Ptolemy laboriously determining their paths (he also managed to calculate pi accurately to the equivalent of three decimal places: no mean feat for the second century AD).
As mathematics developed, the opportunities for elaborate and useful calculations increased—not least through the development of tables of logarithms, among the earliest of which were those compiled by the English mathematician Henry Briggs in 1617, building on John Napier’s invention of logarithms three years earlier. Such tables immensely simplified the complex calculations vital to tasks like navigation and astronomy by reducing laborious multiplication and division to the far easier business of adding and subtracting precomputed values. Constructing the tables in the first place, however, demanded prodigious feats of human calculation, both from mathematicians and from the growing ranks of trained assistants they required.
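To see why such tables were prized, consider a small worked example using modern rounded values (the figures are chosen purely for illustration, not drawn from Briggs’s own tables). To multiply 347 by 5,812, a human computer would look up each number’s logarithm, add the two, and then look up the antilogarithm of the sum:

\[
\log_{10} 347 \approx 2.5403, \qquad \log_{10} 5{,}812 \approx 3.7643
\]
\[
2.5403 + 3.7643 = 6.3046, \qquad 10^{6.3046} \approx 2{,}016{,}500
\]

The exact product is 2,016,764; the small discrepancy comes from rounding the logarithms to four decimal places. Two look-ups and one addition thus replace a long multiplication by hand, and the same trick turns division into subtraction and the extraction of roots into simple division of a logarithm.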
Even as recently as the Second World War, when Alan Turing and his colleagues were establishing the revolutionary foundations of modern computing, the word “computer” still referred to dedicated teams of human experts, like those working around Turing at Bletchley Park in England.
According to the Oxford English Dictionary, it wasn’t until 1946 that the word computer itself was used to refer to an “automatic electronic device.” This was, of course, only the beginning: since then, both the senses and the compound forms of the word have multiplied vastly. From microcomputers to personal computers and, more recently, tablet computers, we live in an age defined by Turing’s digital children.
It’s important to remember, though, just how recently machines surpassed men and women in the computation stakes. As late as the 1960s, teams of hundreds of trained human computers, housed in dedicated offices, were still being used to produce tables of numbers: a procedure honed to a fine art over the first half of the twentieth century, with leading mathematicians specializing in breaking down complex problems into easily repeatable steps.
It’s a sign of how fast and how completely times have changed since then that human computation is now almost forgotten. And yet, in different forms, its principles remain alive in the twenty-first century—not least under the young banner of crowdsourcing, a word coined in 2006 in an article for Wired magazine by the writer Jeff Howe to describe the outsourcing of a task to a large, scattered group of people.9
From identifying the contents of complex photographs to answering fuzzy questions or deciphering poorly printed words, there remain plenty of tasks in a digital age at which people still outperform most machines. We may not call it “human computation” anymore, but the tactical deployment of massed brainpower to solve certain problems remains more potent than ever.