CHAPTER THREE

Freedom in the Liquid-Modern Era

THE GAME GOES ON, whatever we do, as Günther Anders first noted in 1956 and kept repeating until the end of the century in successive editions of Die Antiquiertheit des Menschen: “Whether we play the game or not, it is being played with us. Whatever we do or abstain from doing, our withdrawal will change nothing.”1

Half a century later, we hear the same concerns expressed by leading minds of our times. Pierre Bourdieu, Claus Offe, and Ulrich Beck may differ considerably in their descriptions of that world which plays games with us, compelling us by the same token to play the make-believe game of “free” players—but what they all struggle to grasp in their descriptive efforts is the same paradox: the greater our individual freedom, the less it is relevant to the world in which we practice it. The more tolerant the world becomes of the choices we make, the less the game, our playing it, and the way we play it are open to our choice. No longer does the world appear amenable to kneading and molding; instead, it seems to tower above us—heavy, thick, and inert, opaque, impenetrable and impregnable, stubborn and insensitive to any of our intentions, resistant to our attempts to render it more hospitable to human coexistence. The face it shows us is mysterious and inscrutable, like the faces of the most seasoned poker players. To that world, there seems to be no alternative. No alternative, at any rate, that we the players, by our deliberate efforts, singly, severally, or all together, could put in its place.

Amazing. Baffling. Who would have expected it? One can only say that for the past two or three centuries since that great leap to human autonomy and self-management variously called “Enlightenment” or “the advent of the modern era,” history has run in a direction no one planned, no one anticipated, and no one wished it to take. What makes this course so astonishing and such a challenge to our understanding is that these two to three centuries started with the human resolve to take history under human administration and control—deploying for that purpose reason, believed to be the most powerful among human weapons (indeed, a flawless human faculty to know, to predict, to calculate, and so to raise the “is” to the level of the “ought”)—and were filled throughout with zealous and ingenious human effort to act on that resolve.

In the April 1992 issue of the Yale Review, Richard Rorty remembered Hegel’s melancholy confession that philosophy is, at its utmost, “its time held in thought.” I might add: this is at any rate what philosophy tries hard to do—to hold its time, to contain its restless and capricious jolts in a riverbed carved in rock with the sharp chisel of logic firmly held in the grip of reason. “With Hegel,” Rorty suggested, “the intellectuals began to switch over from fantasies of contacting eternity to [the] fantasy of constructing a better future.” I might add: they hoped first to learn where the river of time was flowing, and they called it “discovery of the laws of history.” Disappointed and impatient with the slowness of the current and the twists and turns of the river, they later resolved to take the decision into their own hands: to straighten the course of the river, to encase the riverbanks in concrete to prevent overflow, to select the estuary and lay out the trajectory that the river of time should follow. They called it “designing and building a perfect society.” Even when pretending humility, philosophers could hardly hide their self-confidence. From Plato to Marx, Rorty suggests, they believed that “there just must be large theoretical ways of finding out how to end injustice, as opposed to small experimental ways.”2

We believe this no longer, and few of us would be prepared to swear that we still do, though many seek desperately to cover up the humiliating discovery that we, the intellectuals, may after all be no better than our fellow citizens at holding our time in thought. The discovery that time stubbornly refuses to stay obediently in the riverbed carved by reason, that it would surely tear to pieces any thought container in which it was supposed to be held, that no map has been charted nor is likely to be charted showing its direction, and that there is no lake called “perfect society” at the far end of its flow—if, that is, there is an end to that flow.

Rorty, for one, rejoices in that loss of the intellectuals’ self-assurance and welcomes the new modesty that is bound to follow. He wishes the intellectuals to admit—to others and to themselves—that there is “nothing in particular that we know that everybody else doesn’t know.” He wants them “to rid themselves of the idea that they know, or ought to know, something about deep, underlying forces—forces that determine the fates of human communities.” And he wants them to recall Kenneth Burke’s remark that “the future is really disclosed by finding out what people can sing about”—but also to remember Václav Havel’s sober, salutary warning that in any given year one will probably not be able to guess which songs will be on people’s lips in the year to come.

If there ever was, as Jürgen Habermas insists, a “project of modernity,” it was an intention to replace collective and individual human heteronomy with collective and individual autonomy (autonomy of the human species in the face of the hazards and contingencies of nature and history, and autonomy of human persons in the face of external, manmade pressures and constraints). That double-pronged autonomy was hoped and anticipated to produce and ensure a similarly two-level freedom of self-assertion, simultaneously species-wide and individual. The two front lines in the war for autonomy were meant to be closely interdependent. The autonomy of humanity was to secure and protect the autonomy of the individuals, while the individuals, once they became truly autonomous and free to deploy their powers of reason, would see to it that humanity jealously guarded its newly acquired autonomy and exploited it to promote and safeguard autonomy of the individuals.

If ever there was a project of Enlightenment, it was wrapped around the idea of emancipation. Before freedom had the chance to usher humankind and all its members into the world of autonomy and self-assertion, humanity needed to be liberated from tyranny. To untie its hands and enable it to celebrate the match of human reason and human history, humanity had to be liberated from physical and spiritual slavery—from the physical slavery that prevented humans from doing what they would otherwise do, if allowed to wish freely and to freely follow their wishes, and from the spiritual slavery that prevented humans from being guided in their wishing by reason, and from wishing, therefore, what they should have wished (that is, wishing for what served best their interests and human nature). Have the courage to use your own understanding! This is the motto of the Enlightenment, wrote Kant. The maxim of thinking on one’s own—this is Enlightenment. For Denis Diderot, the ideal human was someone who dared to think for himself, trampling over prejudice, tradition, antiquity, popular beliefs, authority—in short, over everything that enslaves the spirit. And Jean-Jacques Rousseau called his readers to act according to the maxims of their own judgment. It was thought that once these calls to spiritual freedom were heard, listened to, and obeyed, the demise of physical slavery would follow, but that the condition of listening to and obeying the calls to spiritual autonomy was the abolition of physical slavery. And so the fight against the infamy of prejudice and superstition must proceed hand in hand with the struggle against the outrage of political despotism.

On that second front, citizenship, republic, and democracy are the main weapons. In Alexis de Tocqueville’s summary of the political chapter of Enlightenment-inspired emancipation, liberating individuals from the arbitrary rule of a despot while leaving them to their own, private concerns and devices (a condition described by Isaiah Berlin as negative freedom) simply won’t do; what is needed more than anything else is positive freedom: their right and willingness to associate with their co-citizens, to participate in the affairs of their shared polity, in particular in lawmaking. Collective autonomy means obeying no rules except those that have been decided upon and made binding by those who are expected to obey them. The double victory on both fronts would usher in—or at least this is what all the quoted spiritual fathers of modernity believed—a transparent, predictable, manageable, and user-friendly world, one hospitable to the humanness of humanity.

This is not, however, what actually happened. Two to three centuries later, the world we inhabit is still anything but transparent and predictable. Nor is it a secure home for the human species, let alone its humanity. One is ready to agree with Habermas that the project of Enlightenment remains unfinished. But the incompleteness of the project is not, to be sure, a novel discovery. A genuine novelty is that today we no longer believe that project to be finishable. And yet another novelty is that many of us, perhaps most of us, do not particularly care. It is because of these two novelties that some of us worry that freedom, understood as the autonomy of a society of autonomous individuals, has fallen on hard times—uncomfortable and unprepossessing times.

Half a century ago, Anders worried that, quite possibly, his contemporaries were busy building a world from which they would find no exit, and a world no longer within their power to comprehend, imagine, and emotionally absorb. It is now possible that what half a century ago could be treated as an inordinately, and probably also excessively, dark premonition has since acquired the rank of a statement of fact and commands ever wider, if not universal, support.

When first proclaimed amid the gathering revolutionary excitement in France, the slogan Liberté, Égalité, Fraternité was a succinct statement of a life philosophy, a declaration of intent, and a war cry, all rolled into one. Happiness is a human right, whereas the pursuit of happiness is a natural and universal human inclination—so went the tacit, matter-of-fact assumption of the philosophy—and to achieve happiness, humans needed to be free, equal, and indeed brotherly, since for brothers, mutual sympathy and the succor and help of siblings are birth rights, not perquisites that need to be earned and shown to have been earned before being granted. As John Locke memorably argued,3 even if “there is only one” path to eternal happiness that may be chosen and pursued by men (the path of piety and virtue, leading to eternity in heaven, as the centuries of memento mori groomed people to believe), “in [the] great variety of ways that men follow, it is still doubted which is the right one. Now neither the care of the commonwealth, nor the right of enacting laws, does discover this way that leads to heaven more certainly to the magistrate, than every private man’s search and study discovers it unto himself.”

Locke’s insistence on the pursuit of happiness as the principal purpose, simultaneously, of individual life efforts and of association of individuals in a commonwealth was hardly ever questioned throughout the modern era. For most of that time, mankind did not question, either, the idea that freedom, equality, and brotherhood were all that men needed in order to be able to pursue their happiness unhampered and undisturbed. To pursue happiness, that is—though not necessarily to attain it; Locke’s vision was to a great extent an earthly, mundane, this-worldly version of Luther’s or Calvin’s uncertainty of the ultimate resolution of the salvation-versus-damnation dilemma. But whether in its otherworldly or this-worldly rendition, it was the pursuit of felicity itself, rather than a certain summum bonum (the greatest good) lurking at the far—and for all we know stubbornly underdetermined—end of the road that gave genuine happiness. Happiness equaled freedom of experimentation: liberty to take right and wrong steps, freedom to succeed and to fail, to invent, try, and test ever new varieties of pleasurable and gratifying experience, to choose and to take the risk of erring. Unhappiness meant being barred from that freedom; being deprived of the right to choose freely, and instead being, by hook or by crook, by force or by deceit, “protected from” wrong choices.

Two tacit (since viewed as self-evident), axiomatic assumptions underpinned the tripartite design. The program of freedom, equality, and brotherhood implied matter-of-factly that it was the duty of the commonwealth to provide and to guard the conditions favorable to the pursuit of happiness so understood. Pursuit of happiness was an individual affair, concern, destiny, and duty; it was to be conducted individually, by each and any individual deploying individually possessed and managed resources. But the call to seek happiness was addressed to individuals and society alike; whether it would be answered properly depended on the shape of the “commonwealth”—society understood as the shared home and joint concern and product of les hommes et les citoyens, humans/citizens. The other unspoken yet accepted axiomatic assumption was the necessity to conduct the battle for happiness on two fronts. While individuals needed to acquire and develop the art of living a happy life, the powers that shaped the conditions under which that art could be effectively practiced had to be themselves reshaped into something more “practitioner-friendly.” Pursuit of happiness stood no chance of rising to the rank of a genuinely universal right unless those powers took proper care of the parameters of “good society”—equality and fraternity being the most prominent and decisive among them.

It is those assumptions of the intimate and unbreakable link between the quality of the commonwealth and the chances of individual happiness that have lost, or are fast losing, their axiomatic hold on the popular thinking as well as on the products of its intellectually sublimated recycling. And it is perhaps for that reason that the assumed conditions of individual happiness are being shifted away from the supraindividual sphere of Politics with a capital P and toward the domain of individual life-politics, postulated as the field of primarily individual undertakings in which individually commanded and managed resources are mainly, if not exclusively, deployed. The shift reflects the changing living conditions resulting from liquid-modern processes of deregulation and privatization (that is, “subsidiarizing,” “outsourcing,” “contracting out,” or otherwise renouncing the successive functions previously assumed and performed by the commonwealth institutions). The presently emerging formula for the (unchanged) purpose of pursuing happiness may be best expressed as shifting from Liberté, Égalité, Fraternité to Sécurité, Parité, Réseau (Security, Parity, Network).

The trade-off called “civilization” has come full circle since 1929, when Sigmund Freud, in Das Unbehagen in der Kultur, first noted the tug-of-war and logrolling between the two equally indispensable and cherished values, which were vexingly resistant to reconciliation. In less than a century, the continuous progress toward individual freedom of expression and choice has reached the point at which the price of that progress, the loss of security, has begun to be seen by a rising number of liberated individuals (or individuals forced loose without being asked for consent) as exorbitant—unendurable and unacceptable. Risks involved in the individualization and privatization of the pursuit of happiness, coupled with the gradual yet steady dismantling of the societally designed, built, and serviced safety nets and societally endorsed insurance against misfortune, have proved to be enormous, and the resulting fear-excreting uncertainty daunting. Security is now the value that elbows out liberty. A life imbued with a bit more certainty and safety, even if paid for by somewhat less personal freedom, has suddenly gained in attractiveness and seductive power.

“Modern Time,” as Albert Camus pointed out, “begins with the crash of falling ramparts.”4 And as Dostoyevsky’s Ivan Karamazov suggested (following and bringing to a summation the legacy of a long chain of thinkers, starting at least from Pico della Mirandola, the Renaissance herald of the divine omnipotence of Man), with the Divine creation declared faulty and immortality a nebulous notion, the “new man” is permitted, exhorted, and nudged “to become God.” Rehearsals of that new role proved to be inconclusive, however, and above all much less enjoyable than expected. Groping in the dark with no reliable compass or authoritatively endorsed map appeared to be fraught with acute discomforts hardly recompensed by sporadic, brief, and brittle joys of self-assertion. And so the Grand Inquisitor of Ivan’s own tale found out that men prefer freedom from responsibility to the freedom to tell good from evil. The further that human freedom, with its requisites of risk and responsibility, progressed, the more intense grew human resentment of rising insecurity and indetermination; and as security gained in perceived attraction and ascribed value, the perks of freedom lost much of their luster. Freud would probably reverse his century-old verdict and ascribe the present psychological ailments and disorders to the consequences of trading away an excessively large measure of security for the sake of greater freedom.

In the constellation of conditions (and so also the hoped-for prospects) for decent and agreeable life today, the star of parity shines ever brighter, while that of equality fades. “Parity” is, most emphatically, not “equality”; or rather it is an equality stripped down to equal entitlement to recognition, to the right to be and the right to be left alone. The idea of leveling up wealth, well-being, life comforts, and life prospects, and, even more, the idea of having equal shares in the running of life in common and in the benefits that life in common has to offer, are disappearing from the political agenda of realistic postulates and objectives. All varieties of liquid-modern society are increasingly reconciled to the permanence of economic and social inequality. The vision of uniform, universally shared life conditions is being replaced by that of in principle unlimited diversification, and the right to become equal is being replaced by the right to be and remain different without being denied dignity and respect for that reason.

While the vertical disparities in access to the universally approved and coveted values tend to grow at a constantly accelerating pace, encountering little resistance and triggering at best only sporadic, narrowly focused, and marginal remedial action, horizontal differences multiply, vociferously lauded, celebrated, and all too often actively promoted by the political and commercial, as well as the ideational, powers that be. Wars for recognition take the place once occupied by revolutions; at stake in ongoing struggles is no longer the shape of the world to come but having a tolerable and tolerated place in that world; no longer are the rules of the game at stake but solely admission to the table. This is what “parity,” the emergent avatar of the idea of fairness, is ultimately about: recognition of the right to partake of the game, quashing a verdict of exclusion, or staving off the chance of such a verdict’s being carried in future.

Finally, the network. If “brotherhood” implied a preexisting structure that predetermined and predefined the rules binding conduct, attitudes, and principles of interaction, “networks” have no previous history: networks are born in the course of action and are kept alive (or rather continually, repetitively recreated/resurrected) solely thanks to successive communicative acts.

Unlike a group or any other kind of “social whole,” a network is individual-ascribed and individually focused—the focal individual, the hub, being its sole permanent and irremovable part. Each individual is presumed to carry his or her unique network on or around his or her own body, as a snail carries its home. Person A and person B may both belong to the network of C, though A does not belong to B’s network and B does not belong to A’s—a circumstance disallowed in the case of totalities, such as nations, churches, or neighborhoods.

The most consequential feature of networks is, however, the unusual flexibility of their reach and the extraordinary facility with which their composition may be modified: individual items are added or removed with no greater effort than it takes to type in or delete a telephone number in a cellular phone’s directory. Eminently breakable bonds connect the network units, as fluid as the identity of the network’s “hub,” its sole creator, owner, and manager. Through networks, “belonging” becomes a (soft and shifting) sediment of identification. Belonging is transferred from the “before” to the “after” of identity and follows promptly, and with little resistance, the identity’s successive renegotiations and redefinitions. By the same token, relations set by and sustained by network-type connectedness come close to the ideal of a “pure relationship”: one based on easily dissolvable one-factor ties, with no determined duration, no strings attached, and unburdened by long-term commitments. In sharp opposition to the “groups of belonging,” whether ascribed or joined, a network offers its owner/manager the comforting (even if ultimately counterfactual) feeling of total and unthreatened control over his or her obligations and loyalties.

One of the most acute and insightful observers and analysts of intergenerational change and particularly of the emergent life-styles, Hanna Swida-Ziemba, has noted that “people of past generations situated themselves in the past as much as in the future.” For the new, contemporary young person, however, she says, only the present exists: “The young people to whom I talked during the research conducted in 1991–1993 asked: why is there so much aggression in the world? Is it possible to achieve full happiness? Such questions are no longer important.”5

Swida-Ziemba was speaking of Polish youth. But in our fast-globalizing world she would find very similar trends in whatever land or continent she focused her inquiry on. The data collected in Poland, a country just emerging from long years of authoritarian rule that had artificially conserved modes of life elsewhere left behind, only condensed and telescoped the worldwide trends, making them steeper and therefore more salient and somewhat easier to note.

When you ask, “Where does aggression come from?” what probably prompts you to ask is an urge to do something about it; it is because you feel strongly about it, wish to stem aggression or fight it back, that you desire to learn where the roots of aggression lie. Presumably, you are keen to reach those places where the impulses of aggression or aggressive schemes breed and flourish, in order to incapacitate and destroy them. And if this guess is correct, then you must resent the fact that the world is infused with aggression, and view it as uncomfortable or downright unfit for human life, and for that reason iniquitous and undesirable; but you must also believe that such a world could be made more hospitable and friendly to humans—and that if you tried, as try you should, you might become a part of that force destined and able to make it into such a world. Also, when you ask whether full happiness can be attained, you probably believe in attaining, singly or severally, a more agreeable, worthy, and satisfying way of living your life—and are willing to undertake such effort (perhaps even bear such sacrifice) as any worthy cause calls for. In other words, when you ask such questions you imply that, rather than accepting things meekly, since they seem at present to be showing little or no sign of changing, you are inclined to measure your strength and ability by the standards, tasks, and goals you’ve set for your life yourself, not the other way around.

You surely must have held, and followed, such assumptions. Otherwise you wouldn’t be bothered by such questions. For such questions to occur to you, you first need to believe that the world around you is not “given” once and for all, that it can be changed, and that you yourself can be changed while applying yourself to the job of changing it. You must assume that the state of the world could be different than it is now, and that how different it may eventually become depends on what you do; that no less than the state of the world—past, present, and future—may depend on what you do or desist from doing. In other words: you believe that you are, simultaneously, an artist able to create and shape things, and the product of such creation and shaping.

As Michel Foucault suggested, only one conclusion follows from the proposition that identity is not given: we need to create it, just as works of art are created. For all practical intents and purposes, the question, “Can the life of every human individual become a work of art?” is rhetorical; we can do without an elaborate argument. Assuming the positive answer to be a foregone conclusion, Foucault asks: If a lamp or a house can be a work of art, why not a human life?6 I surmise that both the “new young” and the “past generations” that Swida-Ziemba compares would have agreed wholeheartedly with Foucault’s suggestions, but I also guess that members of each of the two age cohorts would have something else in mind when thinking of “works of art.”

Those of the past generations would probably think of an artwork as something of lasting value and imperishable, resistant to the wear of time and caprices of fate. Following the habits of old masters, they would meticulously prime their canvas before applying the first brushstroke, and would equally carefully select the solvents—to make sure that the layers of paint wouldn’t crumble when drying and would retain their freshness of color over many years to come, if not for eternity. The younger generation, though, would seek to imitate the patterns and practices of currently celebrated artists—in the art world’s popular “happenings” and “installations.” With happenings, one knows only that no one (not even their producers and prime actors) can be sure what course they will eventually take, that their trajectories are hostage to (“blind,” uncontrollable) fate, that as they unfold, anything may happen but nothing is certain to happen. And with installations—put together of brittle and perishable, preferably self-degradable elements—everyone knows that the works won’t survive the close of the exhibition, that to fill the gallery with the next batch of exhibits, it will need to be cleared of the (now useless) bits and pieces—remnants of the old. The young may also associate works of art with the posters and other prints they put up all over the wallpaper in their rooms. They know that the posters, like the wallpaper, are not meant to adorn their rooms forever. Sooner or later they will need to be “updated”—torn down to make room for the likenesses of the next latest idols.

Both generations (past and new) imagine works of art after the patterns of their particular world, the true nature and meaning of which one presumes and hopes the arts will lay bare and make available to scrutiny. One expects the world to be made more intelligible, perhaps even fully understood, thanks to the labors of artists; but well before that happens, the generations that “live through” that world know it from “autopsy,” so to speak: from examining their personal experience and from the stories commonly told to report their experience and make it meaningful. No wonder, then, that in stark opposition to previous generations, the new young believe that one can’t really navigate one’s life along a route designed before the voyage started, and that random fate and accident decide, in the final account, its itinerary. Of some of the Polish young people interviewed by Swida-Ziemba, for instance, she says that “they note that a mate climbed high in the firm, was repeatedly promoted and reached the top, until the company went bankrupt and he lost everything he had gained. It is for that reason that they may quit the studies that went very well and go to England to work on a building site.” The others don’t think of the future at all (“It’s a waste of time, isn’t it?”) and don’t expect life to reveal any logic but instead look for the occasional stroke of luck (possibly) and banana skins on the sidewalk (equally probably)—and for that reason “want every moment to be pleasurable.” Indeed: every moment. An unpleasurable moment is a moment wasted. Since it is impossible to calculate which sort of future profits, if any, a present sacrifice may bring, why should one surrender the instant pleasure that could be squeezed out of the here and now and enjoyed on the spot?

The “art of life” may mean different things for the members of older and younger generations, but they all practice it and can’t possibly not. The course of life and the meaning of its every successive episode, as well as life’s “overall purpose” or “ultimate destination,” are nowadays presumed to be do-it-yourself jobs, even if they consist only in selecting and assembling the right type of flat-packed IKEA-style kit. Each and any practitioner of life is expected, just as the artists are, to bear full responsibility for the outcome of the job, and to be praised or blamed for its results. These days each man and each woman is an artist not so much by choice as, so to speak, by the decree of universal fate.

“Being artists by decree” means that nonaction also counts as action; swimming and navigating as much as allowing oneself to drift with the waves are, a priori, assumed to be acts of creative art and will be retrospectively recorded as such. Even people who refuse to believe in the logical succession, continuity, and consequentiality of choices, decisions, and undertakings, and in the feasibility and plausibility of taming fate—in overruling providence or destiny and keeping life on a predesigned and preferred course—even they do not sit on their hands; they still need to “assist fate” by seeing to the endless little tasks that fate decreed they will perform (as if following the drawings attached to the ready-to-assemble kit). Just like those who see no point in delaying satisfaction and decide to live “for the moment,” people who care about the future and are wary of undermining their chances yet to come are convinced of the volatility of life’s promises. They all seem to be reconciled to the impossibility of making foolproof decisions, of predicting exactly which one of the successive steps will prove to be the right one, or which of the scattered seeds of the future will bring plentiful and tasty fruit and which flower buds will wilt and fade before a sudden gust of wind or a wasp on an accidental visit can pollinate them. And so whatever else they believe, they all agree that one needs to hurry; that doing nothing, or doing something slowly and lackadaisically, is a grave mistake.

This is particularly true for the young: as Swida-Ziemba noted, they collect experiences and credentials “just in case.” The young Poles say może; the English of their age would say “perhaps,” the French peut-être, the German vielleicht, the Italians forse, the Spanish tal vez—but they would all mean much the same thing: who can know, while there is no knowing, whether one or the other ticket will win in the next drawing in life-lotto?

Myself, I belong to one of those “past generations.”

As a young man, together with most of my contemporaries, I read attentively the instructions of Jean-Paul Sartre concerning the choice of a “life project”—that choice meant to be the “choice of choices,” the metachoice that would determine, once and for all, from beginning to end, all the rest of our (subordinate, derivative, executive) choices. To every project (so we learned reading Sartre’s instructions), there would be a road map attached and a detailed briefing on how to follow the itinerary. We had no difficulty understanding Sartre’s message and found it compatible with what the world around us appeared to announce or imply. In Sartre’s world, as in the world shared by my generation, maps aged slowly if at all (some of them even claimed to be “definitive”), roads were laid once and for all (they could, though, be resurfaced from time to time, to enable yet greater speed), and they promised to lead to the same destination each time they were taken; the signs at the crossroads were time and again repainted, but their messages never changed.

I (though again in the company of other young people my age) also listened patiently, with no murmur of protest let alone rebellion, to the lectures in social psychology that were founded on laboratory experiments with hungry rats in a maze, seeking the one and only correct and proper succession of turns—that is, the one and only itinerary with a coveted morsel of lard at the end—in order to learn and memorize it for the rest of their lives. We did not protest at this because, in the plight and concerns of the laboratory rats, as much as in Sartre’s advice, we heard echoes of our own life experiences.

Most young people of today, however, are likely to view the need to memorize the track out of the maze as the rats’ worry but not their own. They would shrug their shoulders if advised by Sartre to fix their life’s destination and to plot in advance the moves ensuring that it will be reached. Indeed, they would object: How do I know what the next month, let alone next year, will bring? I can be certain of one thing only, that the next month or year, and most certainly the years that will follow them, will be unlike the time I am living in now; being different, they will invalidate much of the knowledge and know-how that I am currently exercising (though there is no guessing which of its many parts); much of what I’ve learned I’ll have to forget, and I’ll have to get rid of many (though there is no guessing which) things and inclinations I now display and boast about having; choices deemed today to be most reasonable and praiseworthy will be decried tomorrow as silly and disgraceful blunders. What follows is that the sole skill I really need to acquire and exercise is flexibility—the skill of promptly getting rid of useless skills, the ability to quickly forget and to dispose of the past assets that have turned into liabilities, the skill of changing tacks and tracks at short notice and without regret, and of avoiding oaths of life-long loyalty to anything and anybody. Good turns of fortune, after all, tend to appear suddenly and from nowhere, and equally abruptly they vanish; woe to the suckers who by design or default behave as if they could hold on to them forever.

It seems these days that though one can still dream of scripting a full-life scenario in advance, and even try to make the dream come true—to hold on to any scenario, even to the most glorious, seductive, and apparently foolproof of scenarios, is risky and may prove suicidal. The scenarios of yore can become dated even before the play goes into rehearsals, and if they survive at all to the opening night, the run of the play may prove abominably brief. And then, having the stage of life committed to such a scenario for a considerable time ahead will be tantamount to forfeiting the chance for many (there is no knowing how many) more up-to-date and for that reason successful productions. Opportunities, after all, keep knocking, and there is no telling on which door and when they will knock.

Take the case of Tom Anderson. Having studied art, he probably did not acquire much engineering know-how and had little notion of how the technological wonders work. Like most of us, he was just a user of modern electronics, and like most of us, he must have spent little time meditating on what is inside the computer box and why this rather than something else popped up on the screen when he pressed this and not that key. And yet all of a sudden, probably to his own great surprise, Tom Anderson was acclaimed in the computing world as the creator and pioneer of “social networking” and the originator of what was promptly dubbed “the second Internet revolution.” His blog, perhaps mostly a private pastime in its intention, in less than a couple of years evolved into the company MySpace, swarmed by young and very young Internauts (older Web users, if they heard of the company at all, probably played it down or derided it as another passing fad or another silly idea with the life expectancy of a butterfly). The “company” was still bringing in no profit to speak of, and Anderson had no idea how (and probably no strong intention, either) to make it financially profitable. Then in July 2005, Rupert Murdoch, unsolicited, offered $580 million for MySpace, which was then operating on not much more than a shoestring. A decision by Murdoch to buy serves as an “open sesame” in this world, much more surely than the magic of the most ingenious and sophisticated spells. No wonder that fortune hunters invaded the Web in search of more uncut diamonds. Yahoo bought another Web site of the social-networking category for a billion dollars, and in October 2006 Google set aside $1.6 billion to obtain yet another, called YouTube—started up just a year and a half earlier, in purely cottage-industry fashion, by another pair of amateur enthusiasts, Chad Hurley and Steve Chen. On 8 February 2007 the New York Times reported that for their felicitous idea, Hurley was paid in Google shares worth $345 million, while Chen received shares with a market value of $326 million.

“Being found” by fate, embodied in the person of a high and mighty protector or a resourceful patron searching for as-yet-unrecognized or just not duly appreciated talents, has been since the late Middle Ages and early Renaissance a popular motif in the biographical folklore about painters, sculptors, and musicians. (This was not true in the ancient world, though, where art was seen as the way to obediently and faithfully depict the magic of Divine creation: the Greeks “could not reconcile the idea of creation under the auspices of divine inspiration with monetary reward for the work created.”7 Being an artist was then associated more with renunciation and poverty, “being dead to the world,” than with any kind of worldly, let alone pecuniary, success.) An etiological myth of being discovered by the high and mighty was probably invented at the threshold of the modern era, to account for the (still few and far between) unprecedented cases of individual artists who suddenly rose to fame and riches in a society that made birth a no-appeal-allowed life sentence and had no room for the idea of “self-made men” (and even less, of course, for self-made women)—and to account for such extraordinary cases in a way that would reaffirm rather than undermine the norm—the mundane order of power, might, and the right to glory. Being of lowly origin, if not downright outcasts, future masters of the arts found as a rule (at least this was what the myth insinuated) that even the greatest talent coupled with uncommonly dogged determination and genuinely extraordinary and inexhaustible missionary zeal was still not enough to fulfill their destiny, unless a benevolent and powerful hand was stretched forth to fetch them into the otherwise unreachable land of fame, wealth, and admiration.

Before the advent of modernity, the legend of “meeting with fate” was confined almost exclusively to artists; and no wonder, given that the practitioners of fine arts, like painters or composers, were almost the only people who managed to rise above their original lowly station and end up supping with princes and cardinals, if not kings and popes. As modernity progressed, however, the ranks of class-barrier-breakers swelled. As the numbers of parvenus multiplied, the stories inspired by their meetings with fate were democratized. These stories now inform the expectations of any and all “life artists,” the mundane practitioners of the mundane art of mundane life; and this means all of us, or almost all. We have all been decreed owners of the right to “meet with fate,” and through that fateful meeting to taste success and enjoy a life of happiness. And once a right is decreed to be universal, in no time it turns into a universal duty.

True, it is mainly the artists (or more precisely, the people whose practices, courtesy of their sudden acquisition of celebrity status, have been with no further argument classified as fine art) whose trials and tribulations are plotted in the fables of a miraculous rise from rags to riches, and who are promptly cast in the limelight and publicly celebrated. (For instance, we have the notorious story of the girl who was selling, for two pounds apiece, fifty-pence glass ashtrays adorned with pop idols’ photographs cut from newspapers and glued, slapdash, on the bottom. She was biding her time in a little shop on a drab little street in East London—until one day in front of that shop stopped a limousine carrying a great patron of art, destined to transform the girl’s long-unmade bed into a priceless work of high art in the manner of Cinderella’s fairy godmother, who conjured up a gold-dripping carriage out of a pumpkin.) Stories of successful artists (or more precisely of boys and girls magically transformed into such) have the advantage of falling onto ground well prepared by the centuries-old storytelling tradition; they also, however, fit particularly well the mood of our liquid-modern times, because unlike the early-modern stories—for instance the legend of a shoeshine boy who became a millionaire—they omit the thorny and rather off-putting issues of patience, hard work, and self-sacrifice that success in life most commonly requires. Stories of celebrated visual and performing artists play down the issues of which kind of activity one should choose and pursue to become worthy of public attention and esteem, and how one should make this choice (anyway, in a liquid-modern world one expects, and with good reason, that few if any worthy activities are likely to retain their worthiness for long). It is, rather, a general principle on which the typical liquid-modern stories focus: that in a compound with benevolent fate, any ingredient may cause the glittering crystals of success to sediment from the murky solution called life. Any ingredient: not necessarily the drudgery, self-denial, and self-sacrifice that the classic-modern stories of success suggested.

Considering such conditions, the invention of computerized networks came in eminently handy. One of the many virtues of the Internet (and one of the principal causes of its mind-boggling rate of growth) is that it puts paid to the awkward necessity of taking sides when faced with the ancient, now out-of-fashion and barely comprehended opposition between work and leisure, exertion and rest, purposeful action and idleness, or indeed application and sloth. The hours spent in front of your computer when zapping your way through the thicket of Web sites—what are they? Work or entertainment? Labor or pleasure? You cannot tell, you do not know, yet you must be absolved of your sin of ignorance, since the reliable answer to this dilemma won’t come and can’t come before fate shows its cards.

There is little wonder, therefore, that by 31 July 2006, 50 million blogs had been counted on the World Wide Web, and that by the latest calculations their numbers have since grown, on average, by 175,000 a day. Of what do those blogs inform the “Internet public”? They inform us of everything that may occur to their owners/authors/operators, whatever may enter their heads—since there is no knowing what, if anything, may attract the attention of the Rupert Murdochs or Charles Saatchis of this world.

Creating a “personal site,” a blog, is just another variety of the lottery: you go on, as it were, buying tickets just in case, with or without the illusion that there are rules that enable you (or anyone else, for that matter) to predict the winning ones—at least the kind of rules you could learn and remember to observe faithfully and effectively in your own practice. As John Lanchester, who examined a large number of blogs, reported, one blogger recorded in great detail what he had consumed for breakfast, another described the joys he got from the previous evening’s game, a she-blogger complained of the intimate bedroom shortcomings of her partner, another blog contained an ugly photograph of the author’s pet dog, yet another meditated on the discomforts of a policeman’s life, and another still collated the tastier sexual exploits of an American in China.8 And yet one trait was found to be shared by all blogs: an unashamed sincerity and straightforwardness in displaying, in public, the most private experiences and most intimate adventures—brutally speaking, a burning zeal and evident lack of inhibition in putting oneself (or at least some parts or aspects of one’s self) on the market. Perhaps one bit or another would prod the interest and inflame the imagination of prospective “buyers”—perhaps even some rich and powerful buyers—or if not, just some ordinary folks, but numerous enough to attract the attention of the powerful few, to inspire them to make the blogger an offer he or she wouldn’t refuse and push sky-high his or her market price. Public confession (the juicier the better) of the most personal and meant-to-be-secret affairs is a sort of substitute currency, even if an inferior one: a currency to which we may resort when we can’t afford the currencies routinely used by more “serious” (read: more resourceful) investors.

Many eminent art critics suggest that the arts have now conquered the whole world of the living. The allegedly idle dreams of the past century’s avant-garde have been fulfilled—though not necessarily in the form the avant-garde artists wished and hoped such a victory would take. In particular, and most frustrating: it looks today as though, once they are victorious, the arts may no longer need the artworks to manifest their existence.

Not so long ago, and most certainly in the avant-garde’s halcyon days, the arts struggled to prove their right to exist by documenting their usefulness to the world and its inhabitants; they needed solid and durable, tangible and possibly irremovable and indestructible, eternal proof of the valuable services they render. Now, however, not only do they manage well without leaving solid traces of their presence, but they also seem to avoid leaving traces so deep as to prevent their speedy and expedient effacement. Artists today appear to specialize mostly in assembling and promptly dismantling their creations; at least they treat the activities of assembling and disassembling as equally valid, worthy, and indispensable variants of artistic creativity. One great American artist, Robert Rauschenberg, put on sale sheets of paper on which drawings had once been made by another great American artist, Willem de Kooning, but from which they had been thoroughly erased; Rauschenberg’s own creative contribution, for which the collectors were expected to pay, was the bleak, illegible traces of his rubbing-off action. In this way Rauschenberg promoted destruction to the rank of artistic creation; it was the act of annihilating the traces left on the world, not imprinting them, that his gesture was aimed to represent as the valuable service that the arts offer their contemporaries. In sending such a message, Rauschenberg was by no means alone among the most prominent and influential contemporary artists. Obliteration of traces, covering up of tracks, was and continues to be placed on the level heretofore occupied solely by their embossing or engraving (for eternity, it was hoped)—perhaps even on a higher, superior level, where the most urgently needed tools of life are experimented with and the gravest challenges of the human existential condition are located, confronted, and dealt with.

Everything said thus far about the recent transformation of fine arts applies in full to the arts’ most common, universally practiced genre: the art of life. In fact, the fateful departures that occurred in fine arts seem to have resulted from the artists’ efforts to catch up with changes in the art of life, at least in its most ostentatiously displayed varieties. As in so many other fields, so in this case art replicates life; in most cases, changes in fine arts lag behind changes in the mode of life, though the artistic creators do their best to anticipate these changes and sometimes succeed in inspiring or facilitating a change and smoothing its entry and settlement into daily life practices. Before the artists discovered it, “creative destruction” was already widely practiced and entrenched in mundane life as one of its most common, indeed routinely applied, expedients. Rauschenberg’s gesture could therefore be interpreted as an attempt to update the meaning of “representative painting.” Whoever wishes to lay bare, put on display, and render intelligible human experiences (in both their Erfahrungen and Erlebnisse forms), whoever wants her or his oeuvre to faithfully represent those experiences, ought to follow Rauschenberg’s example in unmasking, making salient and available to scrutiny the intimate connections between creation and destruction.

To practice the art of life, to make one’s life a work of art, amounts in our liquid-modern world to being in a state of permanent transformation, to perpetually redefining oneself through becoming someone other than one has been thus far; and “becoming someone else” amounts to ceasing to be what one has been, to breaking and shaking off one’s old form, as a snake does its skin or a shellfish its carapace—rejecting and hoping to wipe out, one by one, the used-up, worn-out, too-tight, or just not as satisfying personae, as they are revealed to be in comparison with new and improved opportunities and offers. To put a new self on public display and to admire it in front of a mirror and in the eyes of others, one needs to remove the old self from one’s own and the others’ sight and possibly also from one’s own and their memory. When “self-defining” and “self-asserting,” we practice creative destruction. Daily.

To many people, particularly to the young ones, who leave behind only a few and mostly shallow traces, apparently easy to obliterate, this new edition of the art of life may well appear attractive and likeable. Admittedly, this is not without good reason. This new kind of art offers a long string of joys—apparently infinitely long. It promises, in addition, that those who seek this joyful, satisfying life will never suffer an ultimate, definitive, irrevocable defeat, that after every setback there will be a chance to recover, that they will be allowed to cut their losses and start again, “begin from the (new) beginning”—and thus win back or be fully recompensed for what has been lost through being “born again” (that is, through joining another—and, one hopes, more user-friendly and lucky—“only game in town”), so that the destructive bits in the successive acts of creative destruction can be easily forgotten and their bitter aftertaste quashed by the sweetness of new vistas and their yet untested promises.

Pressures are most difficult to resist, fight back, and repel when they do not resort to blatant coercion and do not threaten violence. A command—“You must do it (or you mustn’t do it), or else . . .”—prompts resentment and breeds rebellion. In comparison, a suggestion—“You want it, you can get it, so go for it”—panders to the amour de soi constantly hungry for compliments, nourishes self-esteem, and encourages one to try—according to one’s own will and for one’s own pleasure.

In our society of consumers, the urge to replicate the style of life currently recommended by the latest market offers and praised by the markets’ hired and voluntary spokespersons (and by implication, the compulsion to perpetually overhaul one’s identity and public persona) has ceased to be associated with external (and thus offensive and annoying) coercion; the urge tends to be perceived, on the contrary, as another manifestation and proof of personal freedom. Only if one tries to opt out and retreat from the chase after elusive, forever unfinished identity—or if one is blackballed and chased away from the chase (a truly horrifying scenario) or refused admission a priori—will one learn just how powerful are the forces that manage the racetrack, guard the entries, and keep the runners running—and only then will one find out how severe is the punishment meted out to the hapless and insubordinate. That this is the case is known all too well to those who, for lack of a bank account and credit cards, can’t afford the price of entry to the stadium. For many others still, all of this may be intuited from the dark premonitions that haunt them at night after a busy shopping day—or from the warning that goes off when their bank account falls into the red and their unused credit reaches zero.

Road signs marking life’s trajectory appear and vanish nowadays with little or no warning; maps of the territory that the trajectory is likely to cross at some point are updated almost daily (albeit irregularly and without warning). Maps are printed and put on sale by many publishers and are available at any newsagent’s in profusion, but none of them is “authorized” by an office credibly claiming control over the future; whichever map you choose, you choose at your own risk and on your own responsibility. In short, the life of the identity seekers/constructors/reformers is anything but short of troubles; their particular art of life demands much money, unremitting effort, and, on many an occasion, nerves of steel. No wonder, then, that despite all the joys and blissful moments it promises and time and again delivers, quite a few people do not view this life as a kind of life that they themselves, given genuine liberty of choice, would wish to practice.

It is often said of such people that they are indifferent if not downright hostile to freedom, or that they have not yet grown up and matured enough to enjoy it. Which implies that their nonparticipation in the style of life dominant in the liquid-modern society of consumers tends to be explained by either ideologically aroused resentment of freedom or the inability to practice it. Such an explanation, however, is at best only partially true. The frailty of all and any identities (their solidity is never sufficiently trustworthy) burdens the identity seekers with the duty of attending to the job daily and intensely. What might have started as a conscious undertaking can turn, in the course of time, into a no-longer-reflected-upon routine, whereby the endlessly and ubiquitously repeated assertion that “you can make yourself into someone other than you are” is rephrased as “you must make yourself into someone other than you are.”

It is this “must” that for many people does not sound tantamount to freedom, and it is for that reason that they resent this “must” and rebel against it. As the pressure of the “must” remains steady and overpowering, whether or not you possess the resources that “doing what you must” would require, the “must” sounds more like slavery and oppression than any imaginable avatar of liberty. One reader complained, in a letter to a highly regarded and widely read British daily, that “the four key items that are a must-have” for a respectable man in Spring 2007, recommended in the paper’s “fashion” section (khaki trench, collarless shirt, V-neck sweater, and navy jacket), would cost a total of 1,499 pounds sterling. So—meat for some, poison for some (many? most?) others? If “to be free” means to be able to act on one’s wishes and pursue the chosen objectives, the liquid-modern, consumerist version of the art of life may promise freedom to all, but it delivers it sparingly and selectively.

“As the need for public services has increased, American voters have come to favor reducing the supply of care that government provides, and many favor turning to the beleaguered family as a main source of care,” notes Arlie Hochschild.9 They find themselves, however, falling out of the frying pan into the fire.

The same consumerist pressures that associate the idea of “care” with an inventory of consumer commodities like orange juice, milk, frozen pizza, and microwave ovens strip families of their social-ethical skills and resources, disarming them in their uphill struggle to cope with the new challenges—challenges aided and abetted by legislators who attempt to reduce state financial deficits by expanding the “care deficit” (cutting funds for single mothers, the disabled, the mentally ill, and the elderly).

A state is “social” when it promotes the principle of communally endorsed, collective insurance against individual misfortune and its consequences. It is primarily this principle—declared, set in operation, and trusted to be working—that recasts the otherwise abstract idea of “society” into the experience of a felt and lived community by replacing the “order of egoism” (to deploy John Dunn’s terms), bound to generate an atmosphere of mutual mistrust and suspicion, with the confidence- and solidarity-inspiring “order of equality.” It is the same principle that lifts members of society to the status of citizens—that is, makes them stakeholders in addition to being stockholders, beneficiaries but also actors—the wardens as much as wards of the “social benefits” system—individuals with an acute interest in the common good, which is understood as the network of shared institutions that can be trusted, and realistically expected, to guarantee the solidity and reliability of the state-issued “collective insurance policy.”

The application of such a principle may, and often does, protect men and women from the plague of poverty; most important, however, it can become a profuse source of solidarity, able to recycle “society” into a common good, shared, communally owned, and jointly cared for, thanks to the defense it provides against the twin horrors of misery and indignity—that is, the terrors of being excluded, of falling or being pushed overboard from a fast-accelerating vehicle of progress, of being condemned to “social redundancy,” denied the respect owed to humans, and consigned to the category of “human waste.”

A “social state” was to be, in its original intention, an arrangement to serve precisely such purposes. Lord Beveridge, to whom we owe the blueprint for the postwar British welfare state, believed that his vision of comprehensive, collectively endorsed insurance for everyone was the inevitable consequence, or rather the indispensable complement, of the Liberals’ idea of individual freedom, as well as a necessary condition of democracy. Franklin Delano Roosevelt’s declaration of war on fear was based on the same assumption. The assumption was reasonable: after all, freedom of choice can’t but come together with uncounted and uncountable risks of failure, and many people are bound to find such risks unbearable, fearing that they may exceed their personal ability to cope. For many people, freedom of choice will remain an elusive phantom and an idle dream unless the fear of defeat is mitigated by an insurance policy issued in the name of the community, a policy they can trust and rely on in case of personal failure or a freak blow of fate.

If freedom of choice is granted in theory but unattainable in practice, the pain of hopelessness will surely be topped with the ignominy of haplessness—as the daily test of one’s ability to cope with life’s challenges is the very workshop in which individuals’ self-confidence and also their sense of human dignity and self-esteem are cast or melted away. Besides, without the collective insurance there would hardly be much stimulus for political engagement—and certainly not for participation in a democratic ritual of elections, as indeed no salvation is likely to arrive from a political state that is not, and refuses to be, a social state. Without social rights for all, a large and in all probability growing number of people would find their political rights useless and unworthy of their attention. If political rights are necessary to set social rights in place, social rights are indispensable to keep political rights in operation. The two rights need each other for their survival; that survival can be only their joint achievement.

The social state is the ultimate modern embodiment of the idea of community: that is, it is an institutional incarnation of the idea of community in its modern form—an abstract, imagined totality woven of reciprocal dependence, commitment, and solidarity. Social rights—rights to respect and dignity—tie that imagined totality to the daily realities of its members and ground it in the solid soil of life experience; those rights certify, simultaneously, the veracity and realism of mutual trust and of trust in the shared institutional network that endorses and validates collective solidarity.

The sentiment of “belonging” translates as trust in the benefits of human solidarity and in the institutions that arise out of that solidarity and promise to serve it and ensure its reliability. Quite recently, all those truths were spelled out in the Swedish Social Democratic Program of 2004:

Everyone is fragile at some point in time. We need each other. We live our lives in the here and now, together with others, caught up in the midst of change. We will all be richer if all of us are allowed to participate and nobody is left out. We will all be stronger if there is security for everybody and not only for a few.

Just as the carrying power of a bridge is measured not by the average strength of its pillars but by the strength of its weakest pillar, and is built up from that strength, so the confidence and resourcefulness of a society are measured by the security, resourcefulness, and self-confidence of its weakest sections, and grow as these grow. Contrary to the assumption of the “third way” advocates, social justice and economic efficiency, loyalty to the social-state tradition and the ability to modernize swiftly (and, most significantly, with little or no damage to social cohesion and solidarity), need not be and are not at loggerheads. Rather, as the social-democratic practice of the Nordic countries amply demonstrates and confirms, “The pursuit of a more socially cohesive society is the necessary precondition for modernization by consent.”10

Contrary to the grossly premature obituaries of what was promoted and heralded as the third way, the Scandinavian pattern is nowadays anything but a relic of past and now-frustrated hopes, or a blueprint dismissed by popular consent as outdated. One can see just how topical and alive its underlying principles remain, and how strong their chances are of firing the human imagination and inspiring action, in the recent triumphs of the emergent or resurrected social states in Venezuela, Bolivia, Brazil, and Chile. Those states are gradually yet indefatigably changing the political landscape and the popular mood of the Latin part of the Western Hemisphere, and they bear all the marks of that “left hook” with which, as Walter Benjamin pointed out, all truly decisive blows in human history have tended to be delivered. However hard it may be to perceive this truth in the daily flow of consumerist routines, it is the truth nevertheless.

To avoid misunderstandings, let it be clear that the social state in the society of consumers is neither intended nor practiced as an alternative to the principle of consumer freedom—just as it was not meant, nor did it act, as an alternative to the work ethic in the society of producers. The countries with firmly established social-state principles and institutions in the society of consumers also happen to be the countries with impressively high levels of consumption, just as the countries with firmly established social-state principles and institutions in the societies of producers were also countries with thriving industry.

The purpose of the social state in the society of consumers is, just as it was in the society of producers, to defend society against the “collateral damage” that the guiding principle of life would cause if not monitored, controlled, and constrained. It is meant to protect society against the swelling of the ranks of the “collateral victims” of consumerism—the excluded, the outcasts, the underclass. Its task is to salvage human solidarity from erosion and to keep the sentiments of ethical responsibility from fading.