In the nineteenth century and even in the early twentieth, a theory in vogue among anthropologists had it that, in the early days of humanity, women had the upper hand in familial and social affairs. Various evidence of that supposed primitive matriarchy was put forward, to wit: female sculptures and the frequent figuration of female symbols in prehistoric art; a preponderant place granted to “mother goddesses” in the protohistoric period, in the Mediterranean basin and beyond; so-called primitive peoples observed in our own time, whose names and social status were passed down from the mother to her children; and, finally, many myths collected nearly everywhere in the world, all of which provide variations on a single theme. In ancient times, they say, women ruled over men. Men’s subjugation lasted until they managed to seize the sacred objects—often musical instruments—from which women drew their power. Having become the sole possessors of these means for communicating with the supernatural world, the men could definitively establish their domination.
Those who granted historical verisimilitude to myths misunderstood their principal function, which is to explain why things are the way they are at present. That obliges myths to posit that things were once different. In short, myths reason in the same way as those nineteenth-century thinkers in the thrall of evolutionism, who strove to arrange in a unilinear series the institutions and customs observed in the world. Starting from the postulate that our civilization is the most complex and the most evolved, they saw the institutions of so-called primitive peoples as a reflection of those that may have existed at the beginnings of humankind. And since the Western world is governed by paternal law, they concluded that uncivilized peoples must have had, and sometimes still had, a radically different law.
The advances of ethnographic observation put an end to the illusions of matriarchy, and for a time it was possible to believe that end was definitive. We came to realize that, in a maternal legal regime as in a paternal legal regime, authority belongs to the men. The only difference is that it is exercised by the mother’s brothers in one case and by her husband in the other.
Under the influence of the feminist movements and what is called “gender studies” in the United States, hypotheses of matriarchal inspiration are returning in force. But they are based on a very different, much more ambitious argument. It was in making the decisive leap from nature to culture that humanity supposedly separated itself from animality and that human societies were born. That leap would remain a mystery were it not possible to identify one or another distinctive capacity of humankind that allowed it to get off the ground. Two of these capacities were already known: the making of tools and articulate language. A third was now proposed—one, it was claimed, that was far superior. Not limited to the intellectual faculties presupposed by the first two, it lay at the very heart of organic life. The first appearance of culture would no longer be a mystery but would be rooted in physiology.
Of all mammals, the human animal is the only one, according to a traditional formulation (whose significance, however, has gone unappreciated), that can make love in all seasons. Human females do not have one or several rutting seasons. Unlike other animals, they do not signal to males, by changes in coloring and the emission of odors, their periods of estrus, that is, those favorable for fertilization and gestation. And they do not reject males at other times.
We are invited to see that major difference as the factor that made possible and even determined the transition from nature to culture.
How is this thesis demonstrated? This is where things get complicated: for lack of any possible proof, the imagination enjoys free rein. Some mention the behavior of wild chimpanzees: females in heat obtain more animal-based foods from the males than do the other females. By bold extrapolation it is inferred that, when hunting became a specifically male occupation among humans, women who made themselves sexually available at all times received a larger share of the game. These women, better nourished, more robust, and as a result more fertile, were advantaged by natural selection. And there was an additional benefit: by concealing ovulation, these women would have constrained the males (in these primitive times, motivated only by the need to propagate their genes) to devote more time to them than the reproductive act by itself would have required. The women thus assured themselves lasting protection, which became increasingly useful as, over the course of evolution, the children they produced became larger and their development occurred later.
Other authors, taking the opposite view, claim that in not “advertising” (as the Americans say) their periods of estrus, women made supervision by their husbands more difficult and less effective. These men were not always the best procreators; the interest of the species, therefore, allowed women to increase their chances of being fertilized by other males.
Here, already, are two diametrically opposed interpretations of a single phenomenon: the key to monogamous marriage in one case; the remedy to its disadvantages in the other. In a highly respectable French scholarly journal (since ideas from across the Atlantic are gaining influence here as well), I found a third, no less fanciful theory, presented in all seriousness. The loss of estrus would supposedly be the origin of the prohibition of incest, which we know is practically universal in its various forms in human societies. That loss, it is claimed, and the constant availability that results, would have attracted too many men to each woman. The social order and the stability of marriage would have been compromised if every woman, via the prohibition of incest, were not made inaccessible to those who, because of a domestic life in common, were the most susceptible to temptation.
It is not explained how, in very small societies, the prohibition of incest would have protected women, made more desirable by the absence of estrus, from what is called a “generalized sexual commerce” with those males who surrounded them daily but were not close relatives. Above all, the proponents of that theory seem unaware of the fact that the exact opposite theory could be supported just as plausibly (or rather, implausibly).
We were told that the disappearance of estrus threatened the peace of marriages and that the prohibition of incest had to be instituted to ward off that threat. But according to other authors, it is on the contrary the existence of estrus that proved incompatible with social life. When humans began to form true societies, the ensuing danger was that every female in heat would attract all the males. The social order would not have been able to withstand it. Estrus thus had to disappear for society to come into being.
At least that last theory rests on a seductive argument. Sexual odors did not disappear entirely. In ceasing to be natural, they could become cultural. Such would be the origin of perfume, whose chemical structure even now is similar to that of organic pheromones since the ingredients composing it come from animals.
With that theory, a path opened up that some have rushed to take, once again turning the basic facts of the problem on their head. Far from positing the total loss of estrus, they assert that women could not conceal it completely because their menses, heavier than those of other mammals, often betrayed them, signaling to all that they were entering a period of fertility. Women, in competition for the males, invented a stratagem. Those who were not fertile at the time and who therefore did not attract the men’s attention tried to deceive them by daubing themselves with blood or with a red pigment imitating blood. That is supposedly the origin of makeup (after that of perfume, as we have seen).
In that scenario, women are clever calculators. Another scenario denies them any talent of that kind, or rather, turns stupidity to the advantage of women who, having remained ignorant of their periods of ovulation, would have more opportunities to propagate their genes. Natural selection would favor them at the expense of more intelligent women who, understanding the link between copulation and conception, would be able to avoid copulating during estrus in order to spare themselves the difficulties of gestation.
Depending on the whims of the theory makers, the loss of estrus thus appears sometimes as an advantage, sometimes as a disadvantage. Some say that loss made it possible to strengthen marriages, others that it mitigated the biological risks of monogamous unions. It exposed humans to the social perils of promiscuity, or it warded those perils off. We are overcome by vertigo in the face of these contradictory, mutually annihilating interpretations. And when you can make the facts say anything at all, it is pointless to attempt to base an explanation on them.
For the last century, and in the United States itself, anthropologists have made every effort to introduce a bit of caution, seriousness, and rigor into their discipline. How could they not be saddened to see their field of study invaded, engulfed even (especially in the United States, which is quick to repudiate the old masters, but already in Great Britain, and soon, we may fear, throughout Europe), by these genital Robinson Crusoe tales? Even supposing they really took place, these revolutions, which are discussed as if they happened yesterday, date back hundreds of thousands if not millions of years. We can say nothing about such a remote past. As a result, to find a meaning for the loss of estrus, to invent a role for it that sheds light on the social life we now lead, proponents of these theories surreptitiously shift that loss to an era that, though still unknown to us, is not so distant as to prevent them from projecting its supposed effects onto the present.
It is significant that these theories about estrus developed in the United States in the wake of another theory whose aim was also to shorten time spans. According to that theory, Neanderthal man, the immediate predecessor of Homo sapiens (and his contemporary for a few millennia), could not have possessed articulate language because of the conformation of his larynx and pharynx. The advent of language, therefore, supposedly dates back little more than fifty thousand years.
Behind these futile attempts to ascribe simple organic causes to complicated intellectual activities, we recognize a mode of thought blinded by naturalism and empiricism. When observations that could ground a theory are lacking—which is almost always the case—that mode of thought invents them. This propensity for misrepresenting gratuitous assertions as empirical data takes us back several centuries, since it characterized anthropological thought at its beginnings.
Although the anatomical structure of Neanderthal man’s throat kept him from emitting certain phonemes, it is beyond dispute that he could emit others. And phonemes of all kinds are equally suitable for differentiating meanings. The origin of language is not linked to the conformation of the speech organs. It must be sought in the neurology of the brain.
And brain neurology demonstrates that language could already have existed in remote times, long before the first appearance of Homo sapiens some hundred thousand years ago. Endocranial casts made from remains of Homo habilis, one of our distant predecessors, show that the left frontal lobe and what is called Broca’s area, the seat of language, were already formed more than two million years ago. As the name given to him emphasizes, Homo habilis made tools, rudimentary to be sure, but corresponding to standardized forms. It is not immaterial in that respect that the component of the brain that commands the right hand is contiguous with Broca’s area and that the two zones developed in concert. Nothing allows us to claim that Homo habilis could speak, but he possessed the first faculties for doing so.
By contrast, there can be no doubt about Homo erectus, our direct predecessor, who half a million years ago carved stone tools with a careful symmetry requiring more than a dozen successive operations. It is unimaginable that these complex techniques could have been transmitted from generation to generation without instruction.
All these considerations, then, push back the first appearance of conceptual thought, articulate language, and life in society to times so remote that we cannot concoct hypotheses about them without displaying a naïveté bordering on gullibility. If we seek to place the loss of estrus at the origin of culture, we must admit that it had already occurred with Homo erectus, perhaps even with Homo habilis, species about whose physiology we know nothing except that, in terms of understanding human evolution, the really interesting things occurred in the brain, not the uterus or larynx.
To anyone who would let himself be tempted by the little estrus game, I would therefore suggest that, all in all, the least absurd hypothesis would be to place the loss of estrus in a direct relationship with the appearance of language. When women could signal their moods with words, even if they chose to express themselves in veiled terms, they no longer needed the physiological means by which they had previously made themselves understood. These old means—with their cumbersome mechanics of swelling, sweatiness, flushing, and the emission of odors—having lost their primary function and become useless, would have gradually atrophied. Culture would have shaped nature, not vice versa.