“Where have all the intellectuals gone?” asked Melvin Lasky, the editor of the now defunct Encounter, at a conference in 1967.1 His question merely echoed the plaintive query put forth by Harold Stearns in 1921: “Where are our intellectuals?”2 The problem, then, is hardly new; it seems to reemerge with every new generation.3 And yet, though the question stays the same, the answer does not.
Harold Stearns thought that America’s intellectuals had wandered off to Europe. But today’s intellectuals have not gone AWOL to Paris, London, and Berlin. If they had, they might not have found a critical mass of colleagues over there. Nor would they know whom to search out once they got past, say, Isaiah Berlin in Oxford (who died in 1997), Jürgen Habermas in Munich, Umberto Eco in Bologna, or the not so philosophical nouveaux philosophes in Paris. The point here is not the disappearance of Europe’s intellectuals but their parochialism. There is one critical exception: the French “deconstructionists” or “postmodernists” like Michel Foucault, Jacques Derrida, and Jacques Lacan, whose influence has probably been stronger in the United States than in France, let alone in the rest of Europe. Each national group addresses its national audience, and if its members publish abroad, they would rather do it in the New York Review of Books than in Commentaire, Merkur, or Granta. At the “low-brow” level, the same phenomenon obtains. Europeans would rather watch American (and, to some extent, British) movies and read American books than each other’s.
Certainly, Berlin is no longer the cultural capital of the world, and neither is London or Paris, pace Derrida and Lacan, who have scored their greatest triumphs in American lit-hum departments. Today’s culture capitals are more likely New York, Los Angeles, and Cambridge, Massachusetts, if one accepts a broad definition of culture that encompasses not only the literate arts but also cinema, television, fashion, architecture, and painting. It is Harvard and Hollywood, the great university presses and DellBantamWarner, MoMA and Microsoft, DKNY and Tommy Hilfiger, the Met and Michael Jackson, the New York Review of Books and Calvin and Hobbes that shape the terms of the global culture. And these producers of icons and ideas, up-market or down, are American. George Orwell and T. S. Eliot, Karl Jaspers and Ernst Jünger, Karl Kraus and Arthur Koestler, Jean-Paul Sartre and Raymond Aron, Benedetto Croce and Ignazio Silone, Fritz Lang and Federico Fellini, Europeans all, are the past, glorious as it was.
So let us talk about America first and foremost—for some obvious reasons. First, the United States has inherited the mantle of cultural dominance from a long line of predecessors ranging from Athens to Berlin. The tilt in the balance of cultural power goes back to the 1930s—to the forced exodus of talent and ambition from its previous locus that was Central Europe: Berlin, Prague, Vienna, Budapest, and beyond, to Vilna and Czernowitz. The rise of Hitler and Harvard, not to put too fine a point on it, constituted two sides of the same coin. This author sometimes muses that, without Hitler, his four most important teachers at Harvard—Stanley Hoffmann, Judith Shklar, Karl Deutsch, and Henry Kissinger—would have taught, respectively, at Vienna, Riga, Prague, and Berlin. Would Kissinger have ended up running the Wilhelmstrasse? Perhaps. Walther Rathenau, a Jew and a public intellectual par excellence, became foreign minister in 1922. He was murdered twenty weeks later.
Second, the influx came at the right time: when America’s rise to world power, both militarily and economically, acted like an insatiable sponge that continues to sop up talent to the present day. The anti-Nazis, the Jews, and the victims of communism were just the beginning. Even in the absence of persecution and revolution, they are being followed by the best and the brightest from all four corners of the globe, and with no end in sight.
Third, this is not an accident. America, like Rome and Berlin (c. 1871-1933) before it, is a culture that not only draws but also liberates the genius of the outsider, whether from abroad or from below. How much talent goes untapped in the banlieues of Paris or the Turkish ghettoes of Berlin? European societies seem peculiarly unwilling to harness the energies of the newcomer. The move from Orchard Street to Columbia, accomplished in one generation, is not the European way; the son of a Turkish greengrocer in Kreuzberg will hardly teach at Berlin’s Humboldt University thirty years after his father’s arrival from Anatolia. How much skill and ambition, the raw material of all creativity, lies fallow in the non-, indeed, anticompetitive culture of Europe while it flourishes in America?
Add to this, in America, the best universities in the world, the biggest libraries, a vast array of private and public research facilities. Mix in a system of tertiary education that encompasses half of all high school graduates, as compared to about one-third in the large countries of Europe. Blend with a culture that thrives on novelty and debate (the scurrilous as well as the earnest), whereas Europe seems to cherish stability and predictability. Count the innumerable forums of published discourse—from the New York Review of Books to the New England Journal of Medicine—which are diligently read abroad whereas Les temps modernes barely radiates beyond the Left Bank.
The point of this excursion need not be labored. To expatiate on the place of intellectuals at the turn of the millennium is to talk about their role in America. For good or bad, the center of gravity of Western culture has shifted across the Atlantic, and the process is speeding along with the help of a worldwide lingua franca that is English, more precisely, its American-accented version. Also, there is the unwritten “five-year rule” that says: whatever happens in the New World will establish at least a bridgehead in Europe five years later. Only sixty years ago, and certainly up to the end of World War II, the key cultural forces traveled east to west.
So, “where have all the [public] intellectuals gone,” if they are no longer ensconced on the Left Bank, in Bloomsbury, or in the cafes of Vienna and Prague? Melvin Lasky’s five-word reply is: “into the groves of Academe.”4 Thirty-five years later, that is still one of the best answers.
Surely, this is a paradoxical solution to the puzzle. One might think that the enormous postwar expansion of tertiary education in the United States (as elsewhere in the West) would have triggered Marx’s fabled leap from quantity to quality. Between 1920 and 1970, the U.S. population doubled from 106 to 203 million, but the number of college and university teachers multiplied tenfold, rising from 50,000 in 1920 to a bit less than 500,000 in 1970. A quarter-century later, there were 870,000 such teachers, an increase of 75 percent, which is still more than twice the rate of population growth.5 The student population exploded just as precipitously. In 1900 there were 232,000 students; in 1940, 1.4 million; in 1946, 2.4 million; in 1960, 3.2 million; in 1970, 7.5 million. At the end of the twentieth century, the number approached 10 million.6
The “old” public intellectuals—say, Lewis Mumford, Dwight Macdonald, Edmund Wilson, born around the turn of the past century—made their mark outside the academy. So did the next generation, the likes of Daniel Bell, Nathan Glazer, Irving Howe, Michael Harrington, Alfred Kazin, Bill Buckley, Irving Kristol, and Norman Podhoretz, who were born in the 1920s. But they were already part of the cohort that left behind Brooklyn and Greenwich Village and traveled to tenured positions at Columbia and Harvard (as did Bell and Glazer).
Liberation from economic uncertainty, one might think, should have unleashed intellectual creativity on a grand scale. Also, as the platoons bound for academia burgeoned into battalions, and thence into divisions, “more” should have begotten “better.” But this does not seem to be the case, though we should beware the oldest trap of them all when looking back at the recent past. “Things aren’t what they used to be,” is the universal complaint of the middle-aged who saw only giants walking the earth when they were young. Having grown in age and stature, they compare themselves with their contemporaries and discover only stunted growth. Karl Marx, writing in 1859, tried to put it more objectively: “Just as our opinion of an individual is not based on what he thinks of himself, so can we not judge . . . a period of transformation by its own consciousness.”7
Let us then stay with Marx for a while and look at the “material life” and the “modes of production” that characterize the vocation of the “New Class.” This term is often attributed to Daniel Bell. Bell, though, refuses to accept the honor: “It was initiated by David Bazelon and popularized by Irving Kristol” (in a letter to the author, April 21, 1997). (This is, of course, not the “New Class” of communist functionaries and power holders in the Soviet realm described by Milovan Djilas.) These are the “brain workers” who populate the universities, the think tanks, the consulting outfits, the planning staffs of governmental and private bureaucracies. They have grown into the millions as not only the higher education sector but also America’s role in the world expanded with a vengeance in the postwar period. Hungry for expertise and analysis, public and private bureaucracies recruited an ever-increasing army of information producers and managers who manipulate not tools and matter but words and symbols. What must they do to excel and advance? How do they acquire status, income, and power?
First, they must consider their “objective position.” They are sheltered from the market in many ways—by tenure or public employment. Ironically, that is both liberating and enslaving. Ensconced in vast bureaucracies, they cannot celebrate their “alienation” or glory in nonconformism. By contrast, ponder Christopher Lasch’s definition of the intellectual as “critic of society” whose role “is presumed to rest on a measure of detachment from the current scene.”8
The members of the “New Class” must obey “professional standards” and heed the rules and rituals of their institutions. They are recruited by committees representing the consciousness and consensus of the field; to acquire peer status, they must show credentials that indicate appropriate socialization. That will ensure efficiency but not necessarily originality.
Second, they must secure a high level of proficiency in their field. Ideally, the academically trained expert wants to capture a monopoly on information. He wants to sit on a pile of knowledge that only he controls—just like any businessman who would dearly love to corner a part of the market where he, and only he, can sell what others want. In that position he can reduce output and raise prices to extract maximal profit from his enterprise.
Third, to scale that exalted position, the expert will be drawn to ever more specialized knowledge. By definition, he can extract maximal rent from a product that only he can offer. That will surely cut down on the competition. A general practitioner, up against many of his kind, will not command the fees of a surgeon who excels in the excision of pachydermal kidney cancers. A political scientist who offers the full range of comparative government may land a modestly paid job in a junior college. But to make the tenure track in a great research university, he is better off with “The Political Economy of Health Regulation in the Food-Processing Industry of the Developing World,” especially since there are so few in the field who could poke holes in his expertise. Stanley Hoffmann satirizes this as follows: “What s/he has to do is ‘compare’ a given attribute (say public policy concerning the health regulation of noodles) in 77 countries, none of which s/he has ever visited, in order to ‘explain’ her/his dependent variable (noodle policy) through such hypotheses as principal-agent theory, bureaucratic politics, electoral cycle theory etc. All this through equations and regressions, and preferably rational choice.”9 Hence the tendency toward ever greater specialization that is only counterbalanced by the imperative of marketability. The product cannot be so specialized that it finds no takers.
Fifty years ago, “political science” broke down into five subdisciplines: American government, public law and administration, political theory, comparative government, international relations. By 1996, there were 104 such subfields, according to the APSA (American Political Science Association). APSA’s official program for the 1996 convention covered 120 pages, each listing about fifty events, paper givers, and discussants. Some of the paper topics may reveal how specialized things have become, for example, “Openly Gay and Maverick: The Activist Roles of Canadian MP Svend Robinson,” “The Diaristic Films of JFK: An Inaugural Event in Campaign Film and Elite Control,” and “The 6.7% Solution: An Analysis of Theories of Representation as It Applies to African-American Women Legislators.”10
With the specialization of the field comes the specialization of the vocabulary. Since time immemorial, any priesthood—shamans, physicians, or management consultants—has used special garb, vernacular, and ritual to command deference and to armor itself against the intrusive scrutiny of laypersons. “You shall bring Aaron and his sons forward . . . Put the sacral vestments on Aaron . . . then bring his sons forward, put tunics on them.... This, their anointing shall serve them for everlasting priesthood.”11 To speak of “dilutional hyponatremia,” and to do so in the white vestment of the physician, is more impressive than saying: “You should have drunk less water and eaten more salt.”
At least “dilutional hyponatremia” can be translated with a bit of effort. But how should the intelligent layperson interpret the title of a paper delivered at the 1996 APSA convention: “World Politics and the Internationalization of Ethnicity: The Challenge of Primordial and Structural Perspectives”? Thirty, forty years ago, an educated person could read much of what was contained in the American Economic Review or the American Political Science Review, the two premier journals of these two disciplines. Today he will be stumped and, worse, not too interested as he faces an endless array of mathematical models that try not so much to elucidate economic events as to find the best fit between algebraic functions and a set of data frequently chosen for their heuristic rather than explanatory value.
Similarly in political science, where only the twenty-five-hundred-year-old field of political philosophy seems reasonably immune to the “numbers crunchers” and “rational choicers.” As models matter more and more, and politics less and less, political science climbs from one meta-level to the next into an ever more rarefied atmosphere. Up there, the basic question of politics (Who gets what, when, where, and why?) is lost in the fog of factor analysis and multiple regressions. And so, the race for theory and terminology in the humanities and the social sciences tends to replace rather than explain literature, politics, economics, and so forth.
Given the exponential expansion of academia and hence the competitive quest for differentiation and specialization, more and more is asked about less and less in ever more arcane ways. Professionalization is the watchword, and this has led Stanley Hoffmann, Harvard’s doyen of international relations, to muse: “Today, I would not get tenure.”12 Nor would Daniel Bell get a Ph.D. for a series of magazine articles and academic papers that came together in The End of Ideology. Bell recalls that this was not unique in those days. “Robert Lynd got his Ph.D. for Middletown a number of years after he came to Columbia.”13 Surely, nobody would earn a Ph.D. today if the dissertation were written in the spirit of Lionel Trilling, who recalled his “determination that the work should find its audience not among scholars but among the general public.”14
But today’s academics—the descendants of Kazin, Wilson, Trilling, et al., or, in Europe, of Koestler, Camus, and Croce—do not write for the general public. They may still know “real” English (or German, French, Italian), but a nice turn of phrase, a powerful metaphor, a gripping dramaturgy will not serve them well with their “reference groups.” Instead, it will earn them the epithet “high-class journalist.” They have to write for refereed journals; they have to put out the tightly circumscribed monograph that fits into a niche just as narrow. And why? Because too many like them crowd the field, because advancement and income depend on the respect and goodwill of specialists just like themselves.
To mark the difference between the Then and Now, there is the wonderfully revealing story of the war of the Modern Language Association against Edmund Wilson, yesterday’s man of letters par excellence. After Wilson faulted the scholarly editions sponsored by the MLA as compilations of pedantry and pettifoggery, the MLA shot back with a booklet of replies. Its gist was that Wilson represented yesterday’s amateurism. And so Wilson’s attack “derives in part from the alarm of amateurs at seeing rigorous professional standards applied to a subject in which they have a vested interest. ... [A] similar animus . . . has been discredited in field after field from botany to folklore. In the long run professional standards always prevail.”15
That is the long and the short of it: professional standards will prevail. Since these standards imply—and enforce—ever higher specialization and differentiation (cf. APSA’s 104 subdisciplines), the forums grow more insular and the language more arcane. Inevitably, that does not favor the public intellectual. By definition, the public intellectual must speak a public language and address the public at large. When self-contained (or worse, self-referential) expert communities define the supply side of the market, there will be a dearth of those either polyglot or capable of transcendence. In a world of such archipelagoes, the public intellectual literally has no ground to stand on. Either he remains on his little island or he drowns.
What about the public, the demand side, so to speak? By definition, a public intellectual requires an intellectual public. What are we to make of the demise of Encounter, Preuves, and Der Monat, the waning of once powerful reviews with names such as Partisan, Edinburgh, Westminster, the nonbirth of a “Berlin Review of Books,” and the failure to establish Transatlantik, a German version of the New Yorker, which folded after a few years?16 All this suggests two possible explanations: Either the “intellectual public” has also contracted, or it, too, has “specialized.”
Again, there is the paradox of quantity already noted in the context of an exploding tertiary education sector. On the producer side, as was argued, the exponential expansion of the professoriat surely has not made for more public intellectuals. Similarly, the new mass-educated public, emerging in the late 1950s in the United States and in the late 1960s in Europe, apparently has not lifted, pari passu, the demand for the wares of the traditional intellectual. Bemoaning that fact, here is Melvin Lasky’s classic Kulturkritik in a new guise: “In our mass-literate environment, saturated with words and images, appetites are being constantly whetted, minds prepared, tastes cultivated.” But what does the consumer really buy? “The mediocre fare of the runaway bestseller, the easy-to-read digest, the high-priced serialization, the with-it art movement, the talked-about show.” And so, we may well have “reached a point where culture will be forced to exist without a coherent intellectual community.”17
True enough—as far as it goes. Just take a walk through the Frankfurt Book Fair, the largest in the world. Each year in October, it will display even more acres of books. But a quick sweep will also reveal an increasing proportion of self-help and coffee table tomes, pulp literature, and the fads of the day between covers—books that have a shorter shelf life (at home) than had the Westminster Review of yore. The complaint that high culture is going to hell is of course as old as Plato’s familiar invective against the ignorance and insolence of the young. Hence, beware of arguments that would descry secular descent where there is only generational recurrence. Nonetheless, here too a larger market has not bred more discriminating takers. Or actually, it has—in a different meaning of the term. Just as the producers of intellectual goods have differentiated, so have the consumers.
To make the point in all its baldness, look at the fate of the middle-brow magazines in the United States. Look, Life, and Saturday Evening Post, which used to sell millions, have gone to the Great Shredder in the Sky. They have been shouldered aside by countless specialty and subspecialty magazines—just as the once dominant national networks are being crowded out by special-audience channels that will soon number in the hundreds. The same phenomenon obtains at the top of the high-brow market.
Fifty years ago, there were only two magazines dealing with international affairs: Foreign Affairs for the general up-market audience, World Politics for the academics. Today, there are the National Interest, Foreign Policy, the Washington Quarterly, and International Studies Quarterly, just to mention the better-known ones. And on the academic side, there are International Security, International Organization, Security Studies, the SAIS Review, Survival, the Strategic This and the Military That. Hardly a university institute or think tank is without its own periodical, and where there was once only “The Quarterly of X,” there are now the “Southern,” “Western,” or even “Southwestern Quarterly of X.” In the 1970s, notes George Will, four hundred journals were founded just in modern languages and literature to accommodate the “publish or perish” pressures of modern academia.
At a minimum, think tanks and university centers will put out a newsletter by mail, fax, or Internet. Each will cater to a slightly different audience, differentiated by ideology, interest, taste, or region. The audience has “deconstructed,” to use an expression of the day. And so, the public intellectual has no “agora” in which to hold forth, as more and more separate audiences congregate in ever more—and smaller—public squares. In the age of the specialist, we would rather go to a nephrologist, or at least to an internist; the GP is a vanishing breed. And the public intellectual is the general practitioner of the mind.
Yet the problem of “deconstruction” goes deeper than the segmentation of the culture. Here is another paradox: Though the traditional public intellectual was a freelancer and Bohemian (in the sense of standing apart from the behavioral and intellectual conventions of his time), his vocation was predicated on a regulated culture. Plato took on the Sophists, Jesus the Pharisees, Melanchthon the papists, Galileo the geocentrists, Voltaire the foes of reason and of le bon sens,18 Burke the revolutionaries, Marx the bourgeoisie, Keynes the classical economists, and Milton Friedman the Keynesians.
To persuade in a public language in a public place, there has to be a paradigm asking to be cracked. For the outsider to bash in the gate, there has to be a locked portal in the first place, and something worth overturning in the realm beyond. In smiting the controllers—philistines and schoolmen, clerics, kings, and capitalists—the manifesto wielder and movement monger wants to dethrone the reigning authorities so as to become a controller himself. After the bourgeoisie is smashed, the protagonists of the novus ordo seclorum—the “vanguard” of the proletariat, Fauvists, Aquarians—want to set the rules for the greater good of all.
No capitalism, no Communist Manifesto; no David, no pointillism. And without a Culture (in the sense of a “canon” flanked by a set of binding standards and tastes), no Culture Wars. But after the long run of antitraditionalism, beginning with the quattrocento, we are stuck in the dragonless world of postmodernity—for the time being, at least. There are no barriers to be smashed with rousing manifestoes that would ring in the new dawn in arts and politics. Transient agitation has shouldered aside the revolution; anybody can join the fight because “anything goes,” as the postmodern creed has it.
Add to this the other mainstay of postmodernity: the ancient temptation of relativism that has reappeared in the guise of multiculturalism and deconstructionism. If my “text” is just as good as your “literature,” if your invocation of “reality” or your interpretation of history is but a mask that conceals your gender-, class-, or race-based hold on power (even from yourself), then there is no debate. For a debate, the alpha and omega of the intellectual life, presupposes common rules—“objective” criteria that help us to discern Truth, Beauty, and Justice, even as we fight each other.
If there are no barriers and no criteria, if everybody can wade in and anything goes, then the public intellectual has lost his forum and his foundation. If Dostoevsky were still among us, he would be flummoxed. What would enable postmoderns to debate the “eternal questions” that have tortured the intellectuals of all ages? As he put it in the Brothers Karamazov, they have always been “talking about the eternal questions . . . What do you believe, or don’t you believe at all? ... of the existence of God and immortality. And those who do not believe in God talk of socialism or anarchism, of the transformation of all humanity on a new pattern.”
If the public intellectual is declining, the pundit is on a roll. While the waning of the former remains, and will always remain, a matter of inconclusive debate, the ascendancy of the latter can be quantified. Fifty years ago, the New York Times had two columnists: Arthur Krock and James (“Scotty”) Reston. In 1994 it had eight: Anthony Lewis, Bob Herbert, Thomas Friedman, Frank Rich, William Safire, Maureen Dowd, Abe Rosenthal, and Russell Baker. That is a fourfold increase, and a similar pattern holds in the Washington Post as well as in most American papers from the Arizona Republic to the Wichita Eagle.
The columnist is not quite as old as the public intellectual. If we define Plato as the original public intellectual, then the first columnist, literally, was Simeon Stylites of Syria, who spent thirty years preaching from a column until his death in 459 A.D. So his craft is about a thousand years younger. But as the explosion of numbers indicates, it has flourished most in the past forty years.
Indeed, the “modes of production” in both fields—academia and journalism—have engendered a reversal of fortunes for their protagonists. As the “New Class” grew in response to surging demand for its expertise, the number and importance of public intellectuals dwindled in relative terms. On the other side, as American newspapers were being decimated, and town after town succumbed to the “one-paper” syndrome, fewer papers meant more columnists.
Notes Karl E. Meyer: “The surviving dominant newspapers in bigger cities found it both equitable and expedient to adopt a more ecumenical policy on opinion features. Conservative papers like the Chicago Tribune and the Los Angeles Times sought greater balance, as did less conservative survivors like the New York Times and Boston Globe. When the Washington Post absorbed its morning rival, the very conservative Times Herald in 1954, the new owners kept . . . right-wing columnists like George Sokolsky in the combined paper.”19
And thus forty-odd years later: Abe Rosenthal versus Anthony Lewis in the Times, George Will and Charles Krauthammer, on the one hand, and William Raspberry and E. J. Dionne, on the other, in the Post. This both-and phenomenon is more than just “equitable and expedient.” It fits in very nicely with the mood of the times and the requirements of the readers.
Open the op-ed page and behold a supermarket of the zeitgeist. There is no need to burn with indignation or to engage your mind in a battle of wits. “You need not commit, you can have it all,” is the medium’s message—much like the 1996 acceptance speech of Bill Clinton, the first postmodern president. He offered to conservatives more police protection, fiscal probity, and discipline in the schools; to the center more middle-class entitlements; and to the left more social spending and more war on pollution.
Just as this shopping basket of political goods allows the voter to pick and choose, “left and right together” on the op-ed page spares both readers and editors the necessity of commitment. If you don’t like Bill Safire’s contrecoeur conservatism, here is Tony Lewis’s bleeding-heart liberalism. And if you like neither, go to the Living, Home, or Arts section. If “anything goes,” then nothing matters. You can literally believe “six impossible things before breakfast,” as the White Queen told Alice.
Ideology has not ended, as the title of Daniel Bell’s 1960 book suggested; it has scrambled. And the bigger the omelet, the more cooks can, and must, stir the pan. It isn’t just that papers want to balance left and right. “Right” breaks down into cultural conservatism, religionism, and market liberalism. “Left” encompasses statism, environmentalism, lifestyle choice politics. But we also need black voices, women’s issues, different sexual preferences. There have to be isolationists of the left and the right, and interventionists from both camps. Let neo- and paleoliberals speak. And the elder statesman, but only if we can also find a voice from the “new generation.” So the twice-weekly regular is bracketed by the ad hoc opinionist of the day. But this is not all. Add a legion more to account for the pundits ensconced in the weeklies and monthlies—from Newsweek to the Nation, from Harper’s to George. And let’s not forget a few score specialty magazines where opinion leaders on trucks, computers, and sidearms hold forth.
Yet the new catholicity of ideology, lifestyle, and consumerism is but one growth factor of the punditry industry. As noted earlier, the public intellectual requires an intellectual public. That is an audience that can suffer an argument of some length and complexity—five thousand or ten thousand words. Such willing victims have not multiplied along with the number of pundits and magazines. Even classic stemwinders like the New Yorker have cut down on length and increased the number of short takes. Eight hundred words, the attention span demanded by a column, seems to be the coin of the intellectual realm.
George Will once said about column writing: “The amazing thing is that something this much fun isn’t illegal.”20 Actually, he works quite hard at his stuff (without snorting coke between paragraphs), and so does William Safire when he pens his disquisitions “On Language.” But “fun” is not a bad word to describe the mind-set of writers and readers. The author does not have to sweat footnotes and chisel a sustained argument. The reader does not have to run a three-thousand-meter course or scale Mount Rushmore. He can hop on the elevator for a short ride. After all, as Walter Lippmann put it, a column is produced by a “puzzled man making notes . . . drawing sketches in the sand, which the sea will wash away.”
It is fun and futility, and not too much toil and trouble—and yet there are morsels of meaning in between. Perhaps this is the spirit of our age, the age of journalism. Caught off balance between the pap and sound bites of television and the enamel-breaking fare of academia, even the intelligent and educated are only too happy to gorge on the finger food laid out on the pundit’s buffet. This is also the age of grazing, and though journalism may be the fast food of the mind, make no mistake about it: it is filling and nourishing.
And yet. Just as the sparse prettiness of nouvelle cuisine has given way to lean but heartier stuff, the ebbing of the public intellectual discourse may well leave a void asking to be filled. For those who would grieve about the decline of the public intellectual, there is the consoling voice of Harold Rosenberg, himself an emblematic representative of the species. “Rosenberg did not share the worry that intellectuals might disappear; he believed that intellectuals assumed various guises and disguises and that they regularly showed up after being consigned to the historical dustbin.”21
Rosenberg predicted salvation in 1965. Has the phoenix risen again? Perhaps, and if so, in a different guise—as is its wont. The classic paradigm of the public intellectual in the past century, as represented by the Wilsons, Trillings, and Sontags, was (literary) criticism, but these protagonists brought two qualities into the arena. They had something to say, and they knew how to say it. Analysis, judgment, and prescription came with a distinctive sensibility; not only did they see things differently, they also saw different things. And they described them in a language that transcended the ordinary.
A tour d’horizon of the contemporary American scene reveals a changed landscape. First, the public intellectual has shifted from “criticism to cultural studies,”22 or from literature as thing-in-itself to literature as emanation-of-something-else. To sharpen the point, the center of gravity has moved toward the ground occupied by political and social theorists and their commercially much more successful imitators, the pop sociologists and psychologists. On the “left” there are Richard Rorty, Ronald Dworkin, Charles Taylor, Michael Walzer, Martha Nussbaum, Catharine MacKinnon, Albert Hirschman, Amitai Etzioni, Robert Putnam—academics all, but known to a larger audience outside the university. The discourse ranges from serious political philosophy to sheer ideological agitation. On the “right” there are (or were) Allan Bloom, E. D. Hirsch, Leszek Kolakowski, Milton Friedman, Samuel Huntington, William Bennett, Martin Feldstein, Francis Fukuyama, Thomas Sowell, plus the academics and think-tankers who write for Commentary, National Review, and occasionally the New Republic.
The locus has shifted, too. Some of the most interesting contributions to the public debate come not from the universities but from research institutions—the Brookings Institution, the American Enterprise Institute, or the Manhattan Institute. Indeed, as the universities have succumbed to relentless “scientifization,” these institutions, with their different ideological colorations, have offered a home and a salary to those who continue to deal with the “big issues” in a public language. There is something missing, though, when we compare them to the two previous generations. Let’s call it “sensibility” or the “aesthetic element”: the originality of style, perception, and language that even today distinguishes, say, the political reportage of a V. S. Naipaul from the best efforts of scholars and journalists.
Au fond, Harold Rosenberg had it right two generations ago: the phoenix always rises, in one way or another. But what if journalism, the newly dominant currency in the market of ideas, continues to rise? Then we might take heart from J. B. Priestley, the novelist, playwright, and public intellectual par excellence: “We are always led to infer that [the journalistic enterprise] is a new and reprehensible practice, the mark of a degenerate age. The truth is ... that all the best essays in the language have first seen the light in the periodical press.”23
True enough. Karl Marx was a relentless pundit, and so was Mark Twain. Marx also wrote The Eighteenth Brumaire, and Clemens published Huckleberry Finn—classics both. But these two set formidable standards for journalists who would want to transcend their craft. They expanded our understanding of the world: this is how it is. To the meaning, they added a message: this is how it should be. And finally they enclosed both in a “memorable form,” as Jacques Barzun put it when musing about the task of art. At its highest, the challenge is to bond the meaning, the message, and the medium—the last implying the ability to rise above the vernacular and sensibility of the day. The models are in place. Are their would-be successors in place too? Not yet. But then let’s await tomorrow’s prophets and profiteers of hindsight who might cheer the giants of this generation and ask once more: Where have all the intellectuals gone?
The author is indebted to the following people for their critical input: Daniel Bell, Stanley Hoffmann, Ronald Rogowski, Robert Silvers, George Will, and Fareed Zakaria.
“The Idea of an Intellectual Public,” opening address at the conference on “The University and the Body Politic,” University of Michigan, 1967. Reprinted in Melvin J. Lasky, On the Barricades, and Off (New Brunswick, N.J.: Transaction, 1989), 335.
Thus the heading of a chapter in his book America and the Young Intellectuals (New York: George Doran, 1921), 46–51.
See, twenty years after Lasky, Russell Jacoby, The Last Intellectuals: American Culture in the Age of Academe (New York: Basic Books, 1987); all subsequent citations are from the 1989 paperback edition published by Noonday Books/Farrar, Straus and Giroux, from which I have profited greatly.
Lasky, On the Barricades, and Off, 335.
National Center for Education Statistics, Digest of Education Statistics, 1995 (Washington, D.C.: U.S. Department of Education, 1995), 13. The figure for 1997 represents an estimate by the Center.
Anne Matthews, Bright College Years: Inside the American Campus Today (New York: Simon & Schuster, 1997), 127.
Preface to The Critique of Political Economy, in Karl Marx and Frederick Engels, Selected Works (New York: International Publishers, 1968), 183.
The New Radicalism in America, 1889–1963: The Intellectual as Social Type, p. ix, as cited in James Seaton, Cultural Conservatism, Political Liberalism: From Criticism to Cultural Studies (Ann Arbor: University of Michigan Press, 1996), 4.
In a correspondence to the author, April 28, 1997.
American Political Science Association: APSA (San Francisco, 1996), 165, 96, 135, 101.
Exodus 40:12–15.
His more science-minded colleagues in the field, of course, call him a “high-class journalist,” as he has published widely in the New York Review of Books, the New Republic, and the New York Times.
In a correspondence with the author, April 21, 1997.
“Some Notes for an Autobiographical Lecture,” in The Last Decade: Essays and Reviews, 1965–75 (New York: Harcourt Brace, 1979), 239. As quoted in Jacoby, Last Intellectuals, 18.
Gordon N. Ray, Professional Standards and American Editions: A Response to Edmund Wilson (New York: Modern Language Association, 1969), p. i. As recounted and cited in Jacoby, Last Intellectuals, 194–95; emphasis added.
Interestingly, there is a new British journal, Prospect, which deals with “Politics, Essays and Reviews.” It is a lively, controversial publication that covers a wide ideological spectrum. For a favorable review of the magazine’s first year in existence, see John O’Sullivan, “Prospect,” Times Literary Supplement, March 7, 1997.
Lasky, On the Barricades, and Off, 341.
Isaiah Berlin’s take on Voltaire is perhaps the best definition of the public intellectual. “As a philosophe he is part moralist, part tourist and feuilletoniste, and wholly a journalist, albeit of incomparable genius.” “The Sciences and the Humanities,” in Berlin, Against the Current (London: Hogarth, 1979), 92.
Karl E. Meyer, Pundits, Poets, and Wits: An Omnibus of American Newspaper Columns (New York: Oxford University Press, 1990), xxxix.
In a column for the Washington Post, December 18, 1983.
Jacoby, Last Intellectuals, 111, summing up Rosenberg’s “The Vanishing Intellectual.” First published in 1965, the essay was reprinted as “The Intellectual and His Future,” in Harold Rosenberg, Discovering the Present (Chicago: University of Chicago Press, 1973).
Thus the subtitle of James Seaton’s Cultural Conservatism, Political Liberalism.
As quoted in Karl E. Meyer, Pundits, Poets, and Wits, xii.