By the mid 1970s, theoretical linguistics, especially for Lakoff and Langacker, had lost its way, and a reboot was required.
—Vyvyan Evans (2016 [2014]:283)
The problem of the evolution of language arose at once in the mid-twentieth century when the first efforts were made to construct accounts of language as a biological object, internal to an individual, and capturing what we may call the Basic Property of human language: each language yields a digitally infinite array of hierarchically structured expressions with systematic interpretations at interfaces with two other internal systems, the sensorimotor system for externalization and the conceptual system for inference, interpretation, planning, organization of action, and other elements of what is informally called “thought.”
Chomskyan linguistics bifurcated after Aspects of the Theory of Syntax, and then branched widely, with offshoots on offshoots on offshoots, blossoming profusely. Chomsky, not so much.
George Lakoff turned his back completely on the entire Generative tradition. Generative Semantics, he said, made
fundamental formal mistakes. The most obvious one was taking from generative grammar the idea of phrase-structure trees and derivations. This led to endless patching: output conditions, a theory of exceptions, global rules, transderivational rules, syntactic amalgams. (1977:284)
Tossing out “phrase-structure trees and derivations,” Lakoff said of the innovations that launched Chomsky’s career, was the way forward, and he moved toward the approach that became known as Cognitive Linguistics.
Chomsky turned more and more toward a single defining goal, which had been central to his platform since the historico-philosophical setting he created for the Aspects model, Universal Grammar. This turn took him first toward the broadly cross-linguistic, abstractly genetic framework, Principles and Parameters, and then toward a stripped-down but richly suggestive enterprise, the Minimalist Program, itself somehow blurring into a latterly favored label, Biolinguistics, shedding methods and instruments as he moved on, and apparently losing all interest in the specificity and precision that were the calling cards of his revolution. As always, the cast of linguists developing his ideas changed and changed again, though with some lines of continuity and, overall, decreasing animosity.
The Wars left their imprint on both of these trajectories, Lakoff’s and Chomsky’s, Cognitive Linguistics and the Biolinguisticky Minimalist Program, and therefore on the linguistics of the twenty-first century. That is perhaps not so surprising, given the power and reach of these two men’s gifts to innovate and synthesize, and especially to influence and inspire others. What might be surprising, however, is that both turns, Chomsky’s as well as Lakoff’s, bear the unmistakable imprint of Generative Semantics.
To oversimplify (why stop now?), there are two rough patterns of the Generative Semantics legacy, with Chomsky deeply enmeshed in both. One is defined in terms of wholesale opposition to Chomsky and most things Chomskyan. This one aligns with the Cognitive Linguistics framework. The other is defined in terms of adherence to most things Chomskyan.1 In the first instance, Generative Semantics served as the thin edge of the wedge which brought context, variation, and the slippery, pragmatical pig of moment-to-moment language-making back into the field, its central legacy on this front being Linguistic Pragmatics, Cognitive Linguistics, and Construction Grammar, though it also fostered the (re-) emergence of sociolinguistics, functionalism, and other varieties of in vivo linguistics. For scholars in these fields, when they are given to historical reflection, Generative Semantics serves as something of a Thermopylae or an Alamo, the honorable massacre.
In the second instance—the Generative Semantics legacy to in vitro Chomskyan linguistics (in the rule-based, grammar-modelling sense of Chomskyan linguistics)—the contributions are more specific and technical. They are, consequently, also quite a bit easier to chart. GPSG, for instance, unequivocally adopted Ross’s auxiliary analysis, and has an explicit, logic-influenced level of semantic representation, as does LFG. But the interesting wrinkle here is that GB, the direct descendant (insofar as direct descendant makes sense in linguistics, or any science) of Chomsky’s (Revised) Extended Standard Theory, also makes very liberal use of the technical proposals of Generative Semantics, including a good many proposals that Extended Standard Theorists expended great energy attacking. For this group, Generative Semantics represents irrationality overcome, but once that bastion of error had been torn down, there was apparently no contradiction in pillaging and looting and taking their prizes home—lexical (de)composition, for instance, or logical form—to help build their own truths.
On the one hand, that is, we see the return of the oppressed, Generative Semantics positions that have ended up in the Chomskyan fold. Postal (2004 [1988]) calls this “the right of salvage.” On the other hand, the up-from-the-grave Lakovian hand, there is a prodigious reworking of linguistic assumptions in continuity with late Generative Semantics trajectories. Bruce Fraser calls this “the greening of linguistics” (Ross 2000).
Along with these broad legacies, our dramatis personae all have individual legacies to consider.
The depth of his understanding and the quickness of his mind were awesome. His ability to see connections between all the disparate things he knew was humbling. He could instantly see the deep ramifications of some new theoretical proposal, as if standing on a great hill of knowledge, surveying a vast landscape of thought. . . . He was [a] remarkably kind and modest human being. He was the least egotistical person I ever met. He pursued his intensely personal investigations and produced his gigantic oeuvre not out of any desire for personal aggrandizement, and certainly not with ill will toward anyone whose views he found wanting. He just needed to continually advance his own understanding of the things that fascinated him and to share what he understood with as many people as possible.
—Jerry Sadock (in Mufwene et al. 1999)
McCawley, in his post-bellum work, much more recognizably followed the early Generative Semantics paths than any of the other horsefolk, right up until he was unfortunately felled by a massive heart attack on the University of Chicago campus, in 1999, at the age of sixty-one. He was, as Bever called him, “the truest of the true GSers” to the end; his New York Times obituary paints him chiefly as someone who “helped instigate the first big ideological rift in modern linguistics” (Fox 1999). His entire career was spent in Chicago, midway between Cambridge and Berkeley; and linguistically as well as geographically, once Lakoff’s and Chomsky’s divergences had run to their dialectical extremes, he stood somewhere toward the midpoint between those two polarities.
He leaned toward Lakoff and Berkeley on questions of context, meaning, and the importance of general-purpose, non-encapsulated cognitive principles for understanding language. He also may have leaned further than any of the Generative Semanticists in his reverence for linguistic peculiarities. Lakoff, Ross, and Postal all reveled in facts—the harder they were to accommodate theoretically, the more beloved they seemed to be—promulgating such genres as squibs and anarchy notes and creature features. But McCawley gets the nod as the most dedicated curator, renowned for the vast assortment of weird and wonderful examples he had gathered, hither and yon (with hither being his television set, a disproportionate number of examples coming from Spanish telenovelas, and yon being such places as Shakespeare’s plays, Mike Royko’s Sun-Times newspaper columns, Chomsky’s papers and books, and a sign in a shop window at the corner of 23rd and Wabash). He printed up and distributed his yearly acquisitions at conferences, and linguists sparkle when they talk of the eagerness with which they awaited them. He called his collection The Linguistic Flea Circus.2
He didn’t just collect the data, however. He looked for opportunities to explain it wherever he could. He was “a consummate sweeper-up of data” that most “others prefer to step over or sweep under the carpet” (Rosta 2000:717). That fascination with linguistic intricacies, fueled by his boundless curiosity, took him deep into many languages; by his death he had “learned to speak (at least) Dutch, German, Yiddish, Swedish, French, Spanish, Portuguese, Russian, Hindi, Hungarian, Mandarin, and Japanese” (Lawler 2003: 614).
Toward Chomsky and Cambridge—chiefly Aspects-era Cambridge, Aspects-era Chomsky—he leaned on questions of generativity and many descriptive mechanisms, prominently including transformations and derivations (albeit in a more explicitly metaphorical way—McCawley 1998:23–24).
One would think, midway between these intellectual antitheses, that McCawley would be conflicted. He was not. He had no use at all for theoretical flag waving by the early 1970s. He cultivated a homegrown market-driven philosophy of science, where every proposal is available for a price, no matter from where or from whom it comes, and no matter what auxiliary beliefs or methods might be thought by others to be essential to its adoption.3 “When I teach a course on Relational Grammar or Montague Grammar,” he said, “I describe it as a tour, not a sight-seeing tour but a shopping tour” (Cheng and Sybesma 1998:1). He quite cheerfully adapted X-bar syntax and such notions as c-command in his work, for instance, which came out of Chomsky’s program, as well as Emonds’s distinction among structure-preserving, root, and local transformations. But he also maintained a full catalog of transformations, a close relationship between the deepest syntactic level and semantics, and a steadfast refusal to concede any usefulness to the notion of grammaticality. If he hadn’t grown so allergic to labels, Extended Generative Semantics might have worked for him. It captures the way in which he could make elements of the Extended Standard Theory, the ones he found productive, compatible with what he retained as fruitful from Generative Semantics. Another label for his work, reflecting its diversity, insatiability, and malleability, might be the one entitling a tribute volume to him, Polymorphous Linguistics (Mufwene, Francis, & Wheeler 2005).
But of course McCawley resolutely declined to use any name at all for his approach, shucking even his unsyntax label before the ink was dry on the Milwaukee Conference proceedings. “If you feel you need a name for [the work I do],” he put on his department webpage, “go and make one up” (McCawley c.1996, also 1998:xix). He did, however, offer a description for that work, sketching out his largely stabilized framework as
a revisionist version of transformational grammar . . . which exploits what I regard as the fruitful ideas of transformational grammar (constituency, multiple syntactic strata, the cyclic principle) and chucks out what I regard as counterproductive ideas (the metaphor of a “base” structure, the idea of categories and structures as remaining constant throughout derivations, the fetish for keeping syntax and semantics separate). (McCawley c.1996)
McCawley is viewed as too idiosyncratic to have a broad influence on the field, but even if he had had an airtight, comprehensive, and attractive framework, he was never the evangelist that Ross was for a while and that George Lakoff has never stopped being. McCawley’s distaste for labels followed closely from the “principled aversion to proselytizing for a reified theory” that he developed as the Wars raged (Rosta 2000:18). He refused to promote any position as if it threw all other positions into shadow. Everybody with a serious interest in syntax and semantics reads his work, or should; everybody recognizes his brilliance—those who knew him personally remain in awe of him; and everybody finds something challenging and rewarding they can take from his insights. But nobody does linguistics in his image.
“As an epistemological anarchist,” he said in a late interview, “I am delighted to have multiple frameworks being used.” Further: “the thought of everybody working in the same framework, using the same theory, . . . [is] as disgusting a prospect as everybody espousing the same religion or everybody speaking the same language” (Cheng and Sybesma 1998:1, 28). The attitude is strikingly (and no doubt, at least in some measure, reactively) different from Chomsky’s.
All the same, McCawley’s anarchic broad-mindedness did not prevent him from leaving an important legacy—not a monolithic package, to be adopted or rejected en bloc, but an emporium of insights, analyses, and otherwise unseen connections, coupled with an attitude of wholesome plurality, suffused with inspiring glee. That legacy is not just visible in the many students and colleagues he personally touched, but also in his wealth of publications.4 In the latter category, the two most significant (and, alas, the only two still offered by his publisher5) are The Syntactic Phenomena of English and Everything that Linguists Have Always Wanted to Know About Logic,* both having been refined and bolstered into definitive second editions before McCawley’s death (respectively, 1998 and 1993b).
The final word goes to John Lawler, one of his many close friends and the author of his Language obituary:
Jim, as he was universally known to his students, colleagues, and admirers—comprising together a large percentage of the world’s linguists—was one of the great figures of twentieth century linguistics, a recipient of practically every honor possible, past president of the LSA, and a genuine original. He was greatly loved, and he is greatly missed. (Lawler 2003:614)
It is, alas, a false promise to give someone the last word when death is in the room, and, chronicling as we are an episode whose peak activity happened about forty years ago now, it would be surprising if McCawley were the only one who has left us. He is not. Among the noteworthy actors in various parts of our story, we have also lost Zellig S. Harris (1909–1992), Dwight Bolinger (1907–1992), Robert L. Lees (1922–1996), Jerrold Katz (1932–2002), Rudolf de Rijk (1937–2003), Michael K. Brame (1944–2010), Ellen Prince (1944–2010), Ivan Sag (1949–2013), Jeffrey Gruber (1940–2014), Charles Fillmore (1929–2014), and Morris Halle (1923–2018).
* But Were Ashamed to Ask
Zellig Harris . . . was one of the half-dozen linguists, since the beginning of the serious study of language a little after 1800, whom anyone conversant with the field would label a genius. He was the first (in 1947) to adumbrate the notion that linguistics could accept the responsibility of synthesizing or “generating” the sentences of a given language (say, English), as in an algorithm or computer program, from some explicit set of rules—and in so doing he exercised a deep and abiding influence on his best-known student, Noam Chomsky; on his many other students; and on all future researchers who yearn to understand language, surely our most distinctively human attribute. [Watt] The work of his first phase as a general linguist is now relegated to the history of our subject, and is sometimes cruelly traduced. That of his second phase was eclipsed, after 1960, by Chomsky’s. That of his last phase has, for twenty years, been widely ignored. [Matthews]
—W. C. Watt (2005:3), P. H. Matthews (1999:116–17)
[Dwight] Bolinger was the genuine article—one of the most distinguished semanticists of the age, with an uncanny ear for the nuances of words. [He] never lost faith that the remedy for abuses of speech was more speech. He wrote that people had to reassert the public ownership of language; . . . [that it should] “take its place alongside of diet, traffic safety, and the cost of living as something that everybody thinks about and talks about.”
—Geoffrey Nunberg (1992)
Bob [Lees] was a close and dear friend from more than forty years ago, also a very highly valued colleague, even long before he became our first “student”—technically. . . . He’s left a wonderful legacy, from his personal as well as professional life, both rich and productive, and I hope as rewarding to him as they surely were to his family and friends, and others lucky enough to have contact with him.
—Noam Chomsky (1997:7)
Jerry [Katz]’s public face was fierce. In philosophy colloquia, he was trenchant and tenacious. He could always be counted on to ask an incisive question, a trait much appreciated by his colleagues if not always by the speaker. It was lots of fun to talk philosophy with him, though also challenging. . . . But in his personal relations, he was the sweetest of men, warm, generous, and affectionate.
—David Pitt (2002:193)
Rudolf de Rijk’s untimely death in June 2003 put a sudden end to the work of one of the most brilliant figures in the area of Basque linguistics. He left behind groundbreaking contributions to Basque phonology, morphology, syntax, philology, lexicography, and history . . . , and the unanimous admiration and respect of Bascologists of different ages and theoretical persuasions.
—Jon Ortiz de Urbina (2009:86)
I first met both professors Michael [Brame] and [his wife] Galina in Carmel many years ago, and was truly amazed when they told me that between them they were fluent in 40 languages [Charlton]. . . . Michael taught in North America, Europe, Asia, and Africa and had been a professor at the University of Washington for more than three decades. He is the author of several technical books and the [founding] editor of Linguistic Analysis, a linguistic periodical with international circulation. . . . Mike was a brilliant linguist, great colleague, and respected teacher as well as a beloved husband and father [Kim].
—Derran Charlton (2010), Soowon Kim (2010)
A pioneer in linguistic pragmatics, Ellen [Prince] worked on her own and with many colleagues and students on various aspects of the subject. . . . Ellen was an inspirational and caring teacher, imparting high intellectual standards while at the same time providing solid support and mentoring to her many students.
—Gillian Sankoff & Tony Kroch (2010)
Ivan Sag was a force of nature. His work in many subareas of linguistics was spread over more than four decades and was respected around the world by the thousands of linguists he taught or befriended or influenced. . . . Forever evolving intellectually, in the last decade of his life Sag found that the degree of detail necessary in an empirically adequate formal treatment of virtually any area of grammar, for example so-called “wh-movement” phenomena, required incorporating into an explicit grammar something like the traditional notion of grammatical construction, along with the mechanism of multiple inheritance. This led him to begin drawing closer to work on the Berkeley variety of construction grammar as developed by Charles Fillmore, Paul Kay, and others.
—Geoffrey Pullum (2013)
Jeff [Gruber] was an important figure in the field of linguistics. . . . [His] thesis includes extensive discussions of goals, locations, sources, themes and agents, as well as many insightful observations on the syntax and semantics of prepositions. Jeff introduced and explored the notion of theme for the first time, which is now standard in the linguistics literature on thematic relations.
—Chris Collins (2014)
Charles J. Fillmore, one of the world’s greatest linguists—ever—was the discoverer of frame semantics, who did the essential research on the nature of framing in thought and language . . . whose importance stretches well beyond linguistics to social and political thought—and all of intellectual life. The world has lost a scholar of the greatest significance.
—George Lakoff (2014a)
Morris and I were very close for almost seventy years, working together, sharing much else. . . . His contributions to modern linguistic science are incalculable, not least right here at MIT, where even apart from his groundbreaking work, he was primarily responsible for creating what quickly became, and has remained, the center of a research enterprise that has flourished all over the world, far beyond anything in the millennia of inquiry into language. [He was] a wise and compassionate person, more than anyone I’ve known, whose kindness, warmth, and care touched many lives.
—Noam Chomsky, quoted in Dizikes (2018)
Haj Ross is a Treasure.
Haj Ross is, if anything, more of an independiste than was even McCawley. He aligned with G. Lakoff on some questions (fuzziness and cognition), with McCawley on others, especially maintaining transformations (he calls himself “a dyed-in-the-wool (i.e., a non-recovering) transformationalist” [2011b:243]). But he may have moved further from their common generative starting point on some questions than either of them, venturing into literature and poetry.6 Returning to our territorial theme, Ross’s post-bellum trajectory was very different geographically and intellectually. While McCawley was reigning as the guru of Chicago and Lakoff was restlessly searching for an Anti-Chomskyan Shangri-La (physically until Berkeley, theoretically until Cognitive Linguistics), Ross was back in the castle keep; MIT, Building 20; not exactly a prisoner, but something like a perpetual defendant. He maintained a remarkable citational prominence in Chomskyan work, on the basis of his thesis, but the veneration one would expect to attend that prominence was nonexistent. Take van Riemsdijk and Williams’s textbook, Introduction to the Theory of Grammar. Ross’s “Constraints on Variables in Syntax” was not only one of the defining texts of the grammatical theory van Riemsdijk and Williams champion in their book, it was (and remains) hugely influential in linguistics generally; every syntactic model is forced to come to terms with the phenomena and principles Ross explored in his thesis, or it simply won’t be taken seriously. Yet (as John Goldsmith shows in a penetrating review) the image of Ross that emerges from their book is of a lumpen functionary who had only the vaguest of clues about what he had stumbled onto. Most curiously, there is a strong suggestion in the book—a text which, by its very nature, is meant to define future generations of linguists—that Ross was only concerned with a few petty details of English, when in fact “Ross’s thesis was the first (and, at the time, mind-blowing) massively crosslinguistic study of an abstract grammatical property, and his conclusions were stated at the level of theory, not that of [a] language-particular property” (Goldsmith 1989:151; also Davison 1993, Dry 1993, and Ingria 1993).
On a more personal level, many appallingly negative and misinformed notions floated around linguistics about him for years; in many Interpretivists’ retrospections, and in some Generative Semanticists’, Ross was regarded, unkindly, as an intellectual tragedy, the fair-haired boy who did promising work under the watchful eye of Chomsky but then fell into bad company and went astray, the rising star who rose too fast and burned out. He certainly went through a period of intellectual depression, trying to survive in what he calls the “Black Hole,” the linguistics department at MIT. He achieved full professorship in the early 1970s—how could he not, as the one linguist whose citations rivaled Chomsky’s?—but his research failed to get a toehold among the graduate students there, the only market for ideas that really counts in science. More disquietingly, he also encountered some harassment. One of his teaching assistants, for instance, recalls that at least three times a semester,
the department head would call me up at home and ask me if Haj was showing up for classes on time, if he taught the curriculum, bla, bla, bla . . . And of course, he got no graduate courses, and they’d “forget” to invite him to faculty meetings, and on and on. The jokes about him abounded.
The harassment was reluctant, surely—Ross is well liked on a personal level by everyone, though only Kenneth Hale is singled out as someone who regularly went to bat for him—but good intentions probably didn’t make the situation any less unpleasant or debilitating.
His interests had moved so far from MIT’s center of gravity that he looked like he had just spun himself off into the outer reaches of research. How far from the center of gravity? Chomsky defines linguistics in a way that leaves recalcitrant data on the extreme periphery, and Ross is a shambolic data-monger, “a lovable bear who has found a cache of honey in a hollow part of the language tree and is continually astonished at the wonder of it” (Bolinger 1991 [1974]:29), exhibiting the characteristic dataphilia of Generative Semantics. Chomsky defines linguistics in a way that leaves the aesthetic elements of language on some other planet, beyond the reach of rational inquiry, and Ross became convinced “that trying to do linguistics which has had all its aesthetics siphoned off is ultimately pointless” (Ross 2000:181). Chomsky defines linguistics in a way that draws on literary criticism as a negative example, the style of analysis and thought that linguists should avoid like a case of the clap, and Ross embraces poetics.
His academic style is also about as far from Chomsky on a curve of self-representation as it is possible to get, asymptotically approaching the ultimate Not-Chomsky. The description is his:
Many times, my invisibly structured (not to say chaotic) “methods” . . . leave many of my fellow conversants dismayed, angry, confused, thinking that I am a flake, that nothing is going on except a waste of their time. (Ross c.2000:1)
Whatever else one thinks of when reading such a description, it does not bring to mind Chomsky’s penchant for itemizing every argument to the logical equivalent of the final decimal point or Chomsky’s utterly crushing certainty. But Ross epigrammatizes the paper where he offers this self-characterization with a passage from Melville: “There are some enterprises in which a careful disorderliness is the true method” (Melville 1992 [1851]:300). The style is one with the inquiry.
In the mid-1980s, Ross left MIT in unpleasant circumstances, holding a variety of positions before settling in at the University of North Texas, plying his unique, meandering, frequently brilliant linguistico-poetic trade. The frenzied-output days are long gone, but he continues to work on syntax alongside, and often inside, his poetics, occasionally publishing technical work with a depth that one would not predict from a self-description of invisibly structured chaos. But there is a certain inevitability to his progression through deep transformational semantics into pragmantax into poetry, a continuum between formal linguistics and the aesthetic use of language, and Ross has traveled it:
The broad questions that I started with—the hope for a purely formal grammar, sharp grammaticality judgements, strong universals—these all crumbled, and I found myself trying to imagine something squishier, rubberier, something more like a poem than like a set of axioms. What I started with was fine but it had to give way pretty soon to an apparently aimless kind of ambling, sashaying towards poeticity. (Ross 2011a)
Paul Postal blazed trails in the wilderness of syntax and thereby gave the field a promise and a vitality that I could not resist.
—David Perlmutter (1968:2)
Postal has no nostalgia for Generative Semantics. The other principals in the movement have all written and talked about the good times, about the virtues, the impact, and the overall salubriousness of Generative Semantics. Not Postal. It was a technical proposal within a framework he now utterly rejects, end of story.7
He, too, has diverged from their common origin point, going in the opposite direction, entrenching his work even deeper into generative abstraction. If we take three data points, Chomsky, Postal, and G. Lakoff, this situation provides a textbook case of how differing value commitments can govern (and perhaps bind?) intellectual developments.
The defining mandate of generative grammar (definition courtesy of Noam Chomsky) is to build a quasi-mathematical “device that generates all of the grammatical sequences of [a language] and none of the ungrammatical ones,” a device whose job it is to describe a natural language construed as “a set (finite or infinite) of sentences” (1957a:13). By the early 1960s, that device had become an assemblage of “rules that specify the well-formed strings of minimal syntactically functioning units” as realized in an autonomous system modeling human cognition (Chomsky 1965 [1964]:1), and it is with these two values, fidelity to syntactic autonomy and fidelity to human cognition, that we can plot our three data points, with a side order of grammaticality (“well-formed strings of minimal syntactically functioning units”) seen as the product of some device. All three of our data points (Chomsky, Postal, Lakoff) embraced all three values in 1965: autonomy, cognition, grammaticality.
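To make the “device” metaphor concrete before we plot those three data points, here is a minimal sketch (my illustration, not Chomsky’s formalism; the toy “language” is invented for the purpose): a single recursive pattern yields a digitally infinite set of sentences, and a membership test admits every string in that set and rejects everything else.

```python
# A toy "device that generates all of the grammatical sequences and none of
# the ungrammatical ones" -- my illustration, not Chomsky's formalism.
# The invented "language" is the infinite set {"the cat sleeps",
# "the cat thinks the cat sleeps", ...}, built by one recursive pattern.

def sentences(max_depth):
    """Enumerate the language, up to a chosen depth of embedding."""
    for depth in range(max_depth + 1):
        yield "the cat " + "thinks the cat " * depth + "sleeps"

def grammatical(candidate, max_depth=20):
    """Membership test: the device admits exactly the strings it generates."""
    return candidate in sentences(max_depth)

print(list(sentences(1)))
# ['the cat sleeps', 'the cat thinks the cat sleeps']
print(grammatical("the cat thinks the cat sleeps"))  # True
print(grammatical("cat the sleeps"))                 # False
```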
Lakoff, we know (with Robin, Ross, and McCawley, to different degrees), pledged his fidelity to cognition, for which he took his lead from empirical psychology and research into general-purpose mental dispositions (analogic thought, prototypes, fuzzy categorization), leading him to discard both autonomy and the notion of a specialized language device (with the grammaticality it would define). Chomsky’s fidelity goes overwhelmingly to autonomy, which in turn governs his idea of cognition, which sets aside general-purpose mental dispositions in deference to a specialized, encapsulated computational device (grammaticality is a story for another chapter, but it was criterial for a long time).
Postal, an absolute master of quasi-mathematical grammaticality engines, embraced autonomy even more tightly than Chomsky, but that has led him to discard cognition altogether, general or special. The proper objects of study for linguistics in Postal’s view are natural languages—period, full stop, end of discussion.
What then are these objects, natural languages, if not the products of cognition? Sets of sentences, whose ontological status, for Postal, is precisely that of the objects with which mathematics and formal logic concern themselves. “The objects treated in these latter fields are not physical objects,” Postal says. “And so it is with those treated in linguistics proper.” Natural languages (that is, in the linguist’s mania for initials, NLs),
just like numbers, propositions, etc. are abstract objects, hence things not located in space and time, indeed not located anywhere. They are also things which cannot be created or destroyed, which cannot cause or be caused. Rather, NLs are collections of other abstract objects normally called sentences, each of which is a set. (Postal 2009:105)
There is, you will not be surprised to learn, this one unmistakable point of complete convergence with the trajectories of the other horsefolk: Postal’s ontological position “contrasts utterly with Chomsky[’s]” (Postal 2009:106).
Postal is all in on the language-as-abstract-object view—the view that linguistics is not an empirical discipline, like psychology, but a formal discipline, like geometry—but it comes most directly from our old friend, Jerrold Katz (1981). Linguistic Platonism, as Postal calls the position (2009b:105–106n6), gives us a lovely symmetry for the alliance of Katz and Postal. They first teamed up to pour the foundations of what became the Aspects theory (i.e., Katz & Postal 1964), which Chomsky merged with a compelling rationalism and a famous methodological abstraction, the competence/performance distinction. With their Platonist alliance, Postal and Katz leave Chomsky’s rationalism in the dust.
Linguistics is now “a branch of mathematics, as independent of psychology and biology as number theory,” for Postal and Katz, a position they argue “is the logical next step in the process Chomsky began when he (1965:3–18) first distinguished between [competence and performance]” (Katz 1998:135). In short—shades of Abstract Syntax!—Katz and Postal are merely following out the program of Aspects to an inevitable conclusion, which circles around to bite Chomsky. Just as Generative Semantics followed the semantic program of Aspects to the point of rejecting its cornerstone, Deep Structure, Platonist linguistics follows the rationalist program of Aspects to the point of rejecting its cornerstone, Universal Grammar.
Competence concerns grammatical objects called sentences, which, unlike the utterances of performance, have no concrete dimensions. As Postal puts it:
[Utterances] exist in time and space, have causes (e.g., vocal movements), can cause things (e.g., ear strain, etc.). Tokens have physical properties, are composed of ink on paper, sounds in the air, electrical impulses, require energy to produce, etc. Sentences have none of these properties. Where is the French sentence Ça signifie quoi?—is it in France, the French Consulate in New York, President Sarkozy’s brain? When did it begin, when will it end? What is it made of physically? What is its mass, its atomic structure? Is it subject to gravity? (Postal 2009:107)
If French sentences don’t have mass, aren’t subject to gravity, don’t inhabit Sarkozy’s brain, what can we say, Postal wants to know, about the machinery responsible for that infinitude of sentences, French? Nothing coherent if that machinery is Chomsky’s “brain-state” definition of competence, since that account entails an infinite “collection of brain-based expressions . . . [each of which taking] time and energy to construct or, in his terms, generate . . ., store, process, or whatever” (Postal 2009:109). The arguments are intriguing, but the Katz-Postal, languages-as-abstract-objects Linguistic Platonism has not caught fire.8
No doubt there were other reasons for Postal to adopt Linguistic Platonism, but one clear recommendation is that it gives him another stick with which to beat Chomsky, Postal’s most cherished hobby. His Skeptical Linguistic Essays, for instance, is a series of exposés of “Junk Linguistics,” which includes over 450 citations to works by Chomsky, none invoked favorably, and scores more to works by Chomsky’s supporters, who don’t receive much better treatment. One gets the impression that no one, not even his most devout acolytes, reads more Chomsky than Postal does, political writings as well as linguistic writings—which must give him some pleasure, but which has also clearly cost him much tooth enamel. One can almost hear Postal’s teeth grinding behind any passage in which he names Chomsky.
Postal’s own characterization of one of his most prolonged Chomsky-thumpings is, accurately, “a vitriolic and shocking accusation” (Postal 2009:105). That piece is a feverish close reading of what Postal calls “the most irresponsible passage written by a professional linguist in the entire history of linguistics” (Postal 2004:296; his italics). So, it’s bad. Here is a sampling of the adjectives Postal uses as he explicates that claim, line by line, word by word: incoherent, perverse, truly deceptive, hypocritical, largely empty, offensive, facile, trivially incoherent, contradictory, inept, grotesquely false, truly outrageous, scandalously unscrupulous, ludicrous, and pointless. But adjectives and their intensifiers cannot carry the full weight of Chomsky’s intellectual and moral transgressions. We also get: a travesty, empty name-calling, sheer desperation, toxic irrationality, sheer absurdity, a crude and arguably deliberate distortion of the truth, an extraordinary failure on the author’s part to take even his own assertions seriously, an ever steeper descent into absurdity, without the slightest plausibility, and revealing levels of shoddiness orders of magnitude beyond even the ominous levels already documented. Here’s the summing-up paragraph:
This concludes my analysis of [this passage] and [the] justification for my belief that it is the most irresponsible passage in the history of linguistics. Find worse if you can. My own suspicion though, is that not only will it not be possible to come close to [it] in this regard but also most linguists would, blessedly, be incapable of inventing anything worse, even if they set out with that goal in mind (Postal 2004:205).
Postal is not timid on the topic of Chomsky.
To be fair, a few other miscreants and purveyors of error also populate the pages of Skeptical Essays. Even a younger Postal made mistakes that come in for criticism. Mostly, though, these others are either Chomskyan functionaries or honest knuckleheads (the younger Postal, for instance, was “wrong without being disgraceful”—2004:232). Chomsky is a force of irredeemable malevolence—often, ridiculously so. Postal can be brutally funny. One attestation (concerning a piece included in Skeptical Essays):
Paul Postal is the only academic who nearly killed me. He nearly did, really.
. . . It’s a good thing I read [his article] at the university library late in the evening, when I was the only person [there]: I was laughing SO hard that the noise level would have gotten me expelled had there been people within earshot. . . . An ambulance might have been required had the article been but a few pages longer. (“Etienne” 2012)
Not everyone is appreciative. Here is another response to Postal’s anti-Chomsky writings, from a blog entry by Norbert Hornstein entitled “Going Postal”:
Postal is clearly a very angry man. His discussions are crude and, in my view, deeply offensive. I believe that he should apologize for the way that he has conducted himself. He has proven to me that Chomsky is actually a nicer man than I already believed he was. (Hornstein 2013)
Late-career Postal is not all bite and bile, however. His positive contributions are deep and lasting, if somewhat more tangled in the work of others than is common in linguistics. Even Hornstein prefaces his remarks on Postal with “I wish that I had made even a tenth of the contributions to syntax that he has made” (Hornstein 2013), and one review of his last monograph calls it “a very impressive work by one of the giants of the field” (Reeve 2012).
Postal has had relatively few students and many diverse collaborators (including books with at least eight partners, articles with a dozen more, multiple publications with the same partners, several of them heavyweights in their own right)—both relatively unusual patterns for someone deserving, as Postal is, of the designation, giant. The chief reason for his lack of students, despite an acknowledged gift for teaching and directing research, is that he spent the bulk of his career at IBM’s Thomas J. Watson Research Center.9 The chief reason for his wealth of collaborations is, it would seem, a good eye for talent, along with a fair dose of humility and a more genial side to his character than his polemics might reveal.
While putting it this way might suggest a shortage of originality to some, a major element of Postal’s genius has been in recognizing the power of someone else’s ideas and in joining them to articulate, strengthen, and extend those ideas, starting with Chomsky and continuing through all of his other theory allegiances. But he is not a sidekick. His solo production is staggering (nine books and over seventy articles at press time, and counting).
He is at his best as a depth linguist, drilling far down into the specifics of particular phenomena. His solo books include dedicated studies of cross-over phenomena (1970), raising (1974), passives (1986), French causatives (1989), extraction phenomena (1998), and grammatical objects (2011), and his articles, many of them forty or fifty pages long, are exempla of linguistic analysis and argumentation in the Generative idiom.
Postal, still active, has settled into the framework he calls Metagraph Grammar, a much more lineal descendant of Relational Grammar than most current frameworks are of anything from thirty-five years ago, albeit under a formal-science construal. The work is presented—as Postal has always presented his work—as a clearly formulated challenge to other linguists: here is a bunch of facts, mostly unnoticed, underappreciated, or misunderstood heretofore; here is what my machinery can make of these facts; here too, by the way, are my assumptions, some more strongly motivated than others, all motivations signaled; join me, prove me wrong, or do me better.
The pioneer in linguistic research on language and gender, [Robin Lakoff] provided a starting point for me, and for a generation of scholars, by blazing a trail that has since branched into many diverging paths of inquiry. . . .[She is] a scholar who pursues theoretical research without losing sight of its practical implications.
—Deborah Tannen (1990: 11)
Robin Lakoff has had an active, distinguished, brilliant career, in formal linguistics and predominantly in sociolinguistics, and as a public intellectual. But her major contributions, and major might understate them a bit, are to the growth of pragmatics and the virtual creation of feminist linguistics. In the 1970s, as second-wave feminism was cresting, she started poking around to see if English had deeper signatures of gender than just the surface-skimming lexical critique feminists had generated, something more subversively oppressive than the patriarchal vocabulary bias for words like chairman and manhole cover, or the diametrically different cultural valences of word pairs like bachelor and spinster or gigolo and whore, something shaping systemic patterns in syntax, and prosody, and conversational practices. There was anthropological work on languages like Arawak, Japanese, and Koasati, going back decades, that revealed distinct gender effects—morphology exclusively used by female or male speakers, different honorifics and particles, even allegedly mutually incomprehensible genderlects. But English? Absurd! “Everyone told me not to do it,” she recalls. “Nevertheless, I persisted.”
She wrote a long, speculative paper (1973c) for the new journal, Language in Society, then an abridged popularization (1974) for the new feminist magazine, Ms., which attracted enough attention to earn a contract for a mass-market trade book (1975), coming out with the same title as the original article, Language and Woman’s Place. She was not immediately hoisted on the shoulders of language scholars and given a triumphant ride through the hallways of academia. The book was greeted with complaints about her impressionistic data, her judgments, and her methods—all of which come directly from the paradigm she was in the process of extracting herself from at the time. The methodology of Transformational Grammar is invented data, subjected to one’s own judgment, and Generative Semantics had both expanded the reach of the invention and relaxed those already lax methods. Lakoff’s reviewers hollered for attested examples, and they hollered for statistics: real data.
But Lakoff’s idea, and her motives, were almost uniformly cheered by those same reviewers. Her overall project was embraced, and many of her observations, however produced, were recognized as important insights. As a proof of concept, Language and Woman’s Place was a huge success. It was also—with a dramatic cover image of a woman’s face, half shrouded in darkness, looking hesitant and fearful, mouth taped shut—a cultural and scholarly hit. It was widely read, widely cited, and remained in print for nearly twenty-five years before engendering a second edition (2004).
The second edition is especially revealing, no aspect of it more so than the cover, a reproduction of the first cover, but this time the woman is ripping off the tape. By the 2000s, things had changed, in society and in linguistics—with Lakoff among the scholars and activists getting credit for the former, and awarded virtual copyright for the latter, feminist sociolinguistics. Oxford University Press now had a Studies in Language and Gender series, which brought out the second edition in a way that treated it like holy writ. The full title of the second edition is Language and Woman’s Place: Text and Commentaries, Revised and Expanded Edition. The original text was in fact neither revised nor expanded. It was left undisturbed, with the label, THE ORIGINAL TEXT, while rigging and regalia flowered around it. Two introductions were added, one by Lakoff, who also provided fourteen pages of annotations for the text. Twenty-five other scholars, virtually all of whom helped build the field of gendered language studies on the foundation of Language and Woman’s Place, provided the commentaries.
The roots of Language and Woman’s Place in Generative Semantics were not missed at the time, and surely contributed to the attention it received, among onlookers as well as insiders. “Robin Lakoff is one of the major contributors to a new linguistic theory, generative semantics,” one review begins, “which represents the most nearly adequate grammar for English which linguists have been able to devise.” The author follows this by complimenting Generative Semantics on its chief virtue, rejecting the artificial separation between syntax and semantics, which widened the scope of linguistics to the point where it could include “such considerations as language used by and about women” (Scott 1976:74).
Some of the complaints, like Showalter’s characterization of the book as “embarrassingly self-indulgent” (1975:450), recall the times and the ethos more than the issues, and Lakoff connects the revolutionary dots between the times and the research, in her introduction to the second edition. “Generative semantics and the women’s movement,” she says, “both arose as protests against the status quo.” After spelling out some of the Chomskyan status quo, she finishes the picture:
Both generative semantics and the linguistic arm of the women’s movement started on a small scale, looking at smaller and more concrete aspects of language . . . generative semantics and the women’s movement had similarly revolutionary origins and a similar need to question and subvert established belief. For me at any rate they came together in another way as well. My interest in the intersection of language and gender arose on two fronts: my political involvement in the women’s movement and my academic engagement in the transformational dispute. (R. Lakoff 2004:18)
Ray Jackendoff is a monumental scholar in linguistics who, more than any scholar alive today, has shown how language can serve as a window into human nature.
—Steven Pinker (2015)
Two characteristic sorts of intensity defined Ray Jackendoff’s involvement in the Linguistics Wars. One sort, for lack of a better word, we can call political; the other, for the same reason, scientific. He had an intense partisanship for the emerging Interpretivist framework; he had equally intense commitments to meaning, to cognitive science, and to the facts. As the positions and the battlegrounds shifted around him, as his own interests grew and branched, the political commitments receded. The theoretical and methodological commitments did not. If anything, they increased as the mists of partisanship lifted and he could see more clearly. Those commitments had much in common with those of several prominent Generative Semanticists, especially his old nemesis, George Lakoff. It is no coincidence that they are both now at least as prominent in cognitive science as they are in linguistics.
Jackendoff is a named professor (like McCawley, like Lakoff, like, of course, Chomsky). He is the Seth Merrin Professor of Philosophy at Tufts University, as well as co-director of the Center for Cognitive Studies (with Daniel Dennett), and he is decorated in both fields, having won the Jean Nicod Prize in Cognitive Philosophy (2003), and the highest honor in cognitive science, the David E. Rumelhart Prize (2014). There is also a special issue of Cognitive Science dedicated to his work (41.1), and another one of Behavioral and Brain Sciences (26.6), and a special issue of The Linguistic Review (24.2–4) on “The Role of Linguistics in Cognitive Science” uses his work to set the agenda; there is a tribute volume, Structures in the Mind (Toivonen, Csúri, & Van Der Zee 2015); a handful of honorary doctorates; another handful of assorted awards; yet another of prestigious fellowships; presidencies of the Society for Philosophy and Psychology and of the Linguistic Society of America; even enrollment in his high school’s Hall of Fame. He doesn’t have the widespread fame of Lakoff, but among the very accomplished principals of the Linguistics Wars not named Chomsky (who is so astronomically celebrated that it isn’t fair to include him), Jackendoff is the most decorated, for good reason.
He has been busy. Perhaps you were wondering, back in Chapter 7, where Jackendoff had gone when Brame and Dougherty and other frontline Interpretivists were putting the boots to George Lakoff and crowing over the corpse of (undead) Generative Semantics. Especially given the contretemps he had undergone with Lakoff, one might have expected Jackendoff to be first in line. But he had long tired of the conflict; in fact, had tired of syntax. Once the lid came off the box of meaning with Katz and Fodor, its tentacles went in many directions—deeper into the lexicon, throughout syntax, into social contexts, with the emergence of linguistic pragmatics, and into an abstract formal system, most notably with Montague Semantics. All the while, Chomsky and his Interpretivists fought to keep meaning closely tethered to syntax, Jackendoff as hard as any of them; perhaps too hard. He wearied of what was really, from the Interpretivist’s vantage, the syntax wars. He turned to the lexicon, and the theory of consciousness, and he turned to music.
Jackendoff is a classical clarinetist who had a parallel career for twenty years as the principal clarinet of the Civic Symphony Orchestra of Boston, with appearances with many other orchestras, including the world-famous Boston Pops, and who remains an active performer. He has a catalogue of compositions and a couple of CDs, and two of those honorary doctorates are in music. In the mid-1970s, he turned his scholarly interests toward music as well, from a cognitive and a generative perspective, ultimately resulting in a book with composer Fred Lerdahl, A Generative Theory of Tonal Music (Lerdahl & Jackendoff 1983). It was heralded at the time as “a virtuoso piece of thinking and reasoning about music,” and is viewed retrospectively as “a landmark in music cognition”; in fact, as dating the moment when “music cognition came of age.”10
Jackendoff’s turn to consciousness was the more surprising move. Many linguists are talented musically. A few, like Jackendoff, prodigiously so. The great comparativist, Joseph Greenberg, played Steinway Hall at fourteen. McCawley’s apartment was dominated by two pianos and he played the harpsichord whenever he found one. Ivan Sag founded and managed a band, the Dead Tongues (not as punk as that sounds; blues-rock), of rotating size and instrumentation, which sometimes swelled to include a full brass section. The personnel, with a few rare exceptions, were linguists. Geoffrey Pullum even shared the stage with the Rolling Stones and had some UK chart success with the soul group he co-founded, Geno Washington and the Ram Jam Band, before linguistics claimed him. And the interrelations between music and language don’t require a PhD in linguistics to sketch out. We speak in rhythms.
But consciousness? That’s a country for philosophers and mystics. His turn to the lexicon was the least dramatic—he was still in linguistics—but symptomatic all the same of his search for meaning outside syntax.
Maybe, with all these other concerns, you thought he was a goner. But the cat came back.
While he was abroad, he certainly heard the drumbeats from home, and got the occasional Christmas card. He knew alphabetic generative models were proliferating. He knew that Chomsky was toying with switch boxes, and then not. He saw the rise of formal semantics and the distinctly non-Generative influence that Metaphors We Live By was having, in apparent consilience with Frame Semantics, Construction Grammar, Cognitive Grammar, and the rise of connectionism. He can take up the story now:
After many years toiling in the terra incognita of lexical semantics, with detours into musical cognition and the theory of consciousness, I returned during the 1990s to syntax, where I had begun my life as a linguist. From the perspective gained in the interim, it struck me that some traditional basic assumptions about the overall roles of syntax and the lexicon in the grammar were mistaken. In 1965 these assumptions looked altogether reasonable. Without them it is unlikely that the field could have progressed with the exuberance it did. However, as such things often do, these assumptions first hardened into dogma and then disappeared into the background, there to be maintained through the many subsequent incarnations of transformational generative syntax: the Extended Standard Theory, Principles and Parameters Theory (more or less alias Government-Binding Theory), and the Minimalist Program. (Jackendoff 2002:xi)
He came back, that is, with a mission, a mission to restore and refurbish Generativism, to shape a more flexible and broader-minded framework, one that takes its lessons not just from the interaction of its own mechanisms in relation to largely canned data and legacy assumptions, but from non-generativist approaches as well, welcoming, on its own terms, “research in language processing, language acquisition, language use, spatial cognition, social cognition, evolutionary psychology, and neuroscience” (Jackendoff 2002:xi). The 2002 summa for his approach—the arguments, the evidence, the case, what Dennett calls “Jackendoff’s masterpiece” (2003:673)—is his Foundations of Language: Brain, Meaning, Grammar, Evolution. There’s not much missing from that title in terms of twenty-first-century themes in linguistics and cognitive science.
Jackendoff has quietly—quietly in contrast to G. Lakoff, the band leader; quietly in contrast to Chomsky, whose every murmur or harrumph circles the linguistic world twice before tea time—developed a Grand Unification framework that in a crucial way satisfies the blueprint laid out in Aspects and does so with a generative semantics.
You read that right: Jackendoff, veteran of shouting matches over Generative Semantics, ends up with a generative semantics. The big difference between his generative semantics and the upper-case movement is that he doesn’t just move generativity around; doesn’t, in particular, displace it from syntax. He keeps a generative syntax too. And—not done yet—he folds in a generative phonology. His framework, called the Parallel Architecture (Figure 9.1), is a triadic arrangement of generative domains: syntax, semantics, phonology.
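For readers who like their architectures spelled out in code, here is a schematic sketch (my own toy rendering, not Jackendoff’s formalism; all the names are invented for illustration): each of the three components licenses its own structure, none is derived from the others, and interface links license the correspondences among them.

```python
# A schematic sketch of the Parallel Architecture's central idea (my toy
# rendering, not Jackendoff's formalism): phonology, syntax, and semantics
# are generated in parallel, and a well-formed sentence is a triple of
# structures licensed jointly by interface links. All names are invented.
from dataclasses import dataclass

@dataclass
class Analysis:
    phonology: list   # e.g., a string of words or prosodic units
    syntax: tuple     # e.g., a labeled constituent tree
    semantics: str    # e.g., a conceptual-structure formula

def well_formed(a, phon_ok, syn_ok, sem_ok, interfaces):
    """Each component licenses its own structure; interface constraints
    license the correspondences among structures. No structure is derived
    from another -- they are generated independently and linked."""
    return (phon_ok(a.phonology)
            and syn_ok(a.syntax)
            and sem_ok(a.semantics)
            and all(link(a) for link in interfaces))

# Trivial demonstration with permissive licensers and one purely
# illustrative interface link (here just a length check on the word tier).
example = Analysis(
    phonology=["Floyd", "broke", "the", "glass"],
    syntax=("S", ("NP", "Floyd"), ("VP", "broke", ("NP", "the", "glass"))),
    semantics="BREAK(FLOYD, GLASS)")
print(well_formed(example,
                  phon_ok=lambda p: len(p) > 0,
                  syn_ok=lambda s: isinstance(s, tuple),
                  sem_ok=lambda m: "(" in m,
                  interfaces=[lambda a: len(a.phonology) == 4]))  # True
```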
It’s the phonology part that really distinguishes Jackendoff’s framework. The Aspects model is tripartite, as befits a truly mediational grammar, with phonology getting equal status to syntax and semantics, sort of. “The syntactic component of a grammar must specify for each sentence,” we all remember Chomsky saying, “a deep structure that determines its semantic interpretation and a surface structure that determines its phonetic interpretation” (1965 [1964]:15): semantics, syntax, phonology. But we also remember that Chomsky followed this architectural overview fairly quickly with “my concern in this book is primarily with deep structure” (1965 [1964]:16), a syntactic conduit for semantics with no implications for phonology. His and Halle’s monumental The Sound Pattern of English was in manuscript circulation at the time and was published a few years after Aspects (Chomsky & Halle 1968 [1964]), giving phonology the heft, almost, of syntax, and the heft that the brewing Linguistics Wars was bringing to semantics. So, it briefly looked like all three legs of the Aspects grammatical tripod were equally sturdy, at least in principle. But Chomsky never again gave much time to phonology, an attenuation in his linguistics he often attributes—very reasonably; something had to give—to his growing political activism after Aspects. Nor did the many alphabet grammars that arose over the subsequent decades pay more than lip service to phonology. Generative Semantics, for its part, had nothing to say about phonology, and Cognitive Linguistics rose from its ashes with a similar gaping oversight.
A quick backward glance, then, suggests that phonology was a casualty of two wars—the Linguistics Wars, which made everything about syntax and semantics (and pragmatics), and the vastly more consequential Vietnam War, which drew Chomsky’s attention away from phonology. But no. Look a little longer. Go out the opening, through the discarded popcorn boxes and candy-apple sticks, over to the next tent: if linguistics at the close of the twentieth century was a three-ring circus (and there are worse analogies), syntax and semantics gamboled in the big rings of the main tent. Phonology was in the little tent out back. But it was a very active tent, especially after John Goldsmith proposed an important restructuring in the 1970s. Phonology went through a major surge of theoretical development under which representations lost their flat structures and their linear assembly models, to take on the shape of syntactic and semantic (Logical Form) trees.11
The great thing about trees is that they show simultaneity. They have layers, or, as Goldsmith calls them, tiers. At the same time, we know at one tier that Floyd has travelling companions like broke, the, and glass. At another, we know it is a noun, and its companions are a verb, a determiner, and another noun; at another, that it is the head of a Noun Phrase that has a Verb Phrase companion; at another, that it is the first element of a Sentence. This is logical simultaneity, representing structures of linguistic information to which language users need access. Phonology needs that kind of information, too (that “f” is a consonant at one tier; the beginning of a syllable at another, and so on), but autosegmental trees can represent temporal simultaneity. At the very same time when we are speaking, with partial overlaps, our articulators (tongue, larynx, velum, jaw, . . .) are going through motor programs every bit as complex as our hands, arms, feet, legs, and so on, go through when we are turning a somersault or catching a bus or bidding on an inside straight. Goldsmith’s compelling image is of a musical score, setting out how the various instruments need to play different parts, but play them together. (Jackendoff’s musical interests must have made him a sucker for that image.) Autosegmental phonology was a stroke of genius.
There was a problem, though. Autosegmental phonology is not particularly tractable for any theory which puts syntax in the driver’s seat, like every theory Chomsky has ever offered or endorsed, and most of the theories in the Generative tradition. In Aspects, he says “this relation [between sound and meaning] is mediated by the syntactic component of the grammar, which constitutes its sole ‘creative’ part” (1965 [1964]:145), a position from which he has never budged a nanometer (see, e.g., Chomsky 2016a:25). A model in which syntax is the sole generator means the other representations—those of sound and meaning—must be derived from, or otherwise sponsored by, the syntactic representations. Autosegmental phonology doesn’t work that way; not even all the various tiers in a tree can be derived from other tiers. Crucially, for instance, intonation cannot be derived from segments.
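For the concretely minded, here is a toy sketch, in ordinary programming terms rather than Goldsmith’s notation, of what a tiered representation amounts to; the three-syllable word and its two-tone melody are invented for illustration:

```python
# A toy sketch (not Goldsmith's formalism) of autosegmental tiers: parallel,
# independently organized sequences linked by association lines, like instruments
# playing different parts of the same score. A two-tone melody is associated with
# a three-syllable word, so one tone spreads over two syllables; neither tier can
# simply be read off the other.

syllable_tier = ["syl1", "syl2", "syl3"]   # an invented three-syllable word
tone_tier = ["H", "L"]                     # an invented high-low tonal melody

# Association lines: (index into tone_tier, index into syllable_tier).
associations = [(0, 0), (0, 1), (1, 2)]    # H links to the first two syllables

for t, s in associations:
    print(f"tone {tone_tier[t]} is linked to syllable {syllable_tier[s]}")
```

The only point of the sketch is that the tiers are coordinated without either being derived from the other, which is exactly the property that gives a syntax-driven grammar trouble.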
In short, a richer, tiered phonology must be generative in the same way that the Lakoffs, Postal, Ross, McCawley, and company argued semantics must be generative. Jackendoff, for one, couldn’t have been happier with that position.
Jackendoff’s architecture also finally breaks away fully from what Archibald Hill called the “upward-looking technique” of the Bloomfieldians: the analytic division of language from phonology at the bottom, up to semantics at the top, through various self-governing “levels” (Hill 1991:34). The revolutionary Chomskyans broke away from the injunction against “mixing levels” in the early 1960s, but there remained a stiff bureaucracy among the domains formerly known as levels, and much of the intervening model building that we have looked at involves different ways of setting up a reporting structure among the components—lexicon to syntax, syntax to semantics, semantics to syntax, lexicon to semantics. . . . “Drawing flow charts” of these various bureaucracies, Jerry Sadock noted, “proposing alternate hierarchizations of grammar,” occupied Generative model-builders for decades, various arguments favoring one arrangement, disfavoring others, collectively showing that no single arrangement “is capable of capturing all of the interactions of components.” The only sensible conclusion to draw from all of these conflicting exercises? “A system of parallel, highly autonomous components operating independently” (Sadock 1991:10).12
These parallel autonomous components allow for a compelling solution to “lexical insertion,” one of the issues that most defined the Linguistics Wars. Dennett explains the solution nicely:
In the syntactocentric picture, a word is a simple, inert sort of thing, a sound plus a meaning sitting in its pigeonhole in the lexicon waiting to be attached to a twig on a syntactic tree. In Jackendoff’s alternative vision, words are active: “little interface rules” . . . with lots of attachment prospects, links, constraints, affinities, and so on, carrying many of their combinatorial powers with them. . . . The epicycles of syntactocentric theories largely evaporate, as the division of labor between syntax, semantics, and phonology gets re-allotted. (Dennett 2003:673)
The little interface rules link structures that have mutual affinities. So, the word cat is triple-generated as a phonological structure, with certain phonemic, syllabic, and metrical properties (a pretty boring set, to be honest, since it is monosyllabic, but don’t blame me; it’s a favorite example of Jackendoff’s), as a syntactic structure, with classificational and agreement properties (boring again; it’s a singular count noun, end of story), and as a semantic structure, with conceptual properties (it’s a certain sort of thing, a token of the type: cat). The interface rules put these structures together. They don’t “derive” intonation from syntax, or vice versa. They just say they belong together. Nor do they “derive the meaning [cat] from the sound /kæt/ or vice versa.” They just bring the two into linkage “when each appears in its appropriate structure” (Jackendoff 1999:397).
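A toy rendering in code, rather than in Jackendoff’s own notation, may make the idea concrete; the particular feature labels are invented for illustration:

```python
# A toy sketch (not Jackendoff's formalism) of a word in a parallel architecture:
# three independently generated structures, linked rather than derived from one
# another. The feature labels are invented for illustration.
from dataclasses import dataclass

@dataclass
class LexicalItem:
    phonology: dict   # phonemic, syllabic, and metrical properties
    syntax: dict      # category and agreement properties
    semantics: dict   # conceptual properties

cat = LexicalItem(
    phonology={"segments": "/kæt/", "syllables": 1},
    syntax={"category": "N", "number": "singular", "count": True},
    semantics={"concept": "CAT", "kind": "physical object"},
)

# The "little interface rule" does not derive one structure from another; it
# simply licenses their linkage when each appears in its appropriate structure.
def licensed(item: LexicalItem) -> bool:
    return bool(item.phonology and item.syntax and item.semantics)

print(licensed(cat))  # True
```

Nothing here is derived from anything else; the linkage is the whole story.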
So Jackendoff returned to Generative Grammar from the wilds of lexical semantics, music, and consciousness, to forge an architecture in order to . . . what? Duke it out with all the other models? Batter George Lakoff about the ears? Unseat the Great and Terrible Chomsky? Find a way, as in his EST work, to honor the Great and Terrible Chomsky’s promissory notes? Yes. And no. Jackendoff didn’t come back in the same fighting mood that possessed him when hostilities broke out in the post-Aspects years, but he does claim to provide a framework which accomplishes more things than most other frameworks manage, and most things better than every other framework does. But he’s not at war. When other frameworks do something better or broader than the Parallel Architecture, he simply adopts their insights; when they do something compatible, he celebrates the convergence, even when it is with Lakoff.13 We’re all in this together, says he. I don’t know everything, says he. Life is short, says he. If you’re unhappy with the gaps in Foundations, says he, “write more chapters” (2002:xiii).
At the same time, though, he does not let questionable claims lie. He clinically delineates the failures and presumptions of other frameworks, in contrast to the corresponding successes of his architecture. In one 2007 article, for instance, he takes on (among others) Cedric Boeckx, Joan Bybee, Terrence Deacon, Tecumseh Fitch, Adele Goldberg, Kenneth Hale, Marc Hauser, Jay Keyser, Philip Lieberman, Alec Marantz, Massimo Piattelli-Palmarini, Michael Tomasello, and Wendy Wilkins, finding agreement where available but disagreeing sharply and strategically as well. Some of these theorists advocate for Chomsky’s program, which he finds wanting in many respects. And, if he doesn’t actually take on Chomsky directly, he does take on Chomsky’s charisma:
I really think that a major obstacle to unification is this insistence on viewing Chomsky’s linguistics as the only linguistics worth discussing, which one would think we could get over by now. (Jackendoff 2007b:348n1)
Jackendoff doesn’t come to unseat Chomsky, therefore, but to show him the error of his ways, many of them connected to the Minimalist Program (in at least one case, for a lack of constraints!—2007b:356). But he also comes, not with the epaulettes of Chomsky’s lieutenant he wore in the 1960s and 1970s, but with the air of a grand old Generative Grammar general, to fulfill a few of the new promises. The stripped-down Minimalist Program, for instance, includes a wholly unspecified Sensorimotor Interface, for which the Parallel Architecture provides considerably more detail, including the division into an auditory interface (for hearing) and a motor interface (for speaking). He also provides a fleshed-out syntax that shares the reductionist impulses of the Minimalist Program (“Simpler Syntax”—Culicover & Jackendoff 2005), and, equally, one that is compatible with Construction Grammar (Goldberg & Jackendoff 2005; Jackendoff & Audring 2020), along with a much richer semantic theory (“Conceptual Semantics”—Jackendoff 1990).
The meticulous concern for details is still there, working in service of the originary Generative vision: a profoundly mentalist substructure, combinatoric descriptive mechanisms, and Universal Grammar. These are “the genuine insights of generative grammar” (Jackendoff 2002:xii), which, if one takes them as seriously as Jackendoff does, require attending to issues of parsing, memory, attention, neural insult, evolution, developmental psychology, philosophy of mind, and learning theory, as a rough list. Unification is his mantra. The Parallel Architecture, Jackendoff believes, offers the best “hope of restoring some degree of much-needed unity to the field of linguistics” (Jackendoff 2002:xii), not just internally but with the cognitive sciences more generally.
Oh, and there’s no Deep Structure; nor even a D-Structure.
While the scars [of the Linguistics Wars] have not healed for Lakoff—“It was a nasty period, and it has remained nasty,” he says—he nevertheless emerged as a major force within the discipline. Deciding to get “as far away from Chomsky as possible,” he went . . . to the University of California at Berkeley, where he is still a professor of cognitive science and linguistics. There he helped make the West Coast the epicenter of cognitive linguistics, which extends far beyond linguistics’ traditional focus on overt and observable linguistic structures into the broader realm of cognition.
George Lakoff is a very ambitious man. Maybe not personally or economically ambitious, though it would be difficult not to be both in some degree, twenty-first century North American hominid that he is; and he has done well for himself on both scores. But he is certainly unapologetically ambitious for the ideas he cherishes. And most of those ambitions have been realized. The CliffsNotes version of Lakoff’s career is that his reputation in linguistics suffered for a while as a result of the scapegoating campaign we witnessed, as a result of his own eclectic and frenzied style of theorizing, and as a result of his thrashing so publicly against the restrictiveness tide ineluctably raised by Chomsky’s gravity, abetted by the Peters-Ritchie proof. But he is back on top in linguistics, and not just there. His intellectual reach now rivals Chomsky’s in some fields, surpasses it in others. Almost all of this reach traces straightforwardly to Metaphors We Live By, and through that book back into Generative Semantics; it also owes more than a little to Lakoff’s relentless self-promotion.
“For 2,500 years,” he told a New York Times reporter doing a feature article on his political influence in the mid 2000s, “nobody challenged Aristotle [about metaphors], even though he was wrong.” Variations of this theme—that the entirety of scholarship has suffered from a fundamental error, but Lakoff is here to set things right—are familiar in his writings and his talks, as are his blithe claims about the empirical underpinnings of his own work (snidely captured in Cosma Shalizi’s slur that “George Lakoff uses ‘as cognitive science shows’ to mean ‘as I claimed in my earlier books,’ ” Shalizi 2002). It may genuinely be that Lakoff is so ebullient about his own insights that he doesn’t realize claims like the one he made to the Times reporter put him above not just the most towering intellect in Western thought, Aristotle, but also above everyone else who ever considered metaphor in the intervening two and a half millennia. Lakoff may still not even realize he is wrong when he makes such claims about his ‘discovery’ of the role of metaphor and analogic thought in the history of ideas, or realize how outrageous it is to claim he surpasses the entire history of Western thought on metaphor. Not everyone in his audience is equally blind to these points, however. “Humility,” wrote the reporter, Matt Bai, immediately after quoting Lakoff’s nobody-but-me-challenged-the-sadly-mistaken-Aristotle remark, “is not his most obvious virtue” (Bai 2005).
Lakoff even has an origin story, a Eureka Moment, when his California classroom welled over with “conceptual metaphor” talk:
In 1978, I was teaching an undergraduate seminar at Berkeley and I came upon evidence of conceptual metaphor. . . . [A student] came in late, very upset, sat down. After a minute she said “I’m sorry, but I’m not going to be able to function in the class today. I’ve had a metaphor problem with my boyfriend. Maybe you can help.” . . . We all said “yes.” After all, it was Berkeley in the 1970s. And it was a true call for help. She said: “On the way over here, my boyfriend said that our relationship had hit a dead-end street.” She said: “I don’t know what this means.” So someone said: “Look, if it’s hit a dead-end street, you can’t keep going the way you’ve been going.” Someone else said: “Yeah. You might have to turn back.” And then we realized that there was a whole set of expressions in English for conceptualizing love as a journey. Expressions like The marriage is on the rocks, or The relationship is off the track, or Look how far we’ve come, or We may have to turn back, We are at a crossroads in the relationship, We are going in different directions, We may have to bail out. . . .
[We were] reasoning about love using the inferential structure of the concept of a journey. (Lakoff 1997:39–40)
From one angle, it’s actually a bit charming that a scholar as thoroughly enmeshed in the study of language and mind, from philosophical, psychological, and linguistic perspectives, could be shocked to discover that people reason in analogical frames. But this apparently was a revelation to Lakoff, however solipsistic a revelation; and, I would venture, yet another indication that he should have read his Aristotle a little more closely, along with one or two other scholars over the intervening millennia, before his ‘discovery.’
To be fair, one needs quickly to note that Lakoff’s self-importance does not seem ill-placed alongside the importance others now assign him and his ideas. He is a major figure in cognitive science. He is undoubtedly the best-known linguist in current political circles, reaching celebrity status in the mid 2000s; Bai (2005) calls him “the father of framing” (an epithet Lakoff is happy to propagate, 2014b), remarking that his slim volume, Don’t Think of an Elephant! (Lakoff 2004), “became as ubiquitous among Democrats in the Capitol as Mao’s Little Red Book once was in the Forbidden City.” He has a notable presence in philosophy, even a voice in the theory of mathematics. The bulk of his reputation in these fields leverages the incredible reach of so-called “Conceptual Metaphor Theory.” Steven Pinker’s label for him is The Messiah of Metaphor (Pinker 2007:245, and yes, the phrase is meant to cut as well as to characterize).
If not quite a messiah, if not quite of metaphor, Lakoff has certainly become the patron saint of something called “metaphor” across the academic and popular landscape. In English studies especially, and in the cognitive humanities more generally, citational obeisance to him is routine, and explicit mention declares Lakoff the great prophet of metaphor, with something of a sidekick role for Mark Johnson.
The year 2004 was a particular watershed for him. That’s when he published Don’t Think of an Elephant!, a bestseller loosely applying his “metaphor” ideas to the left/right divide in American politics. It is chock-full of advice to Democrats on how to reclaim the rhetorical ground that Republicans had dominated during the George W. Bush years with their “strict father” analogic framing, in contrast to the “nurturing parent” framing of the left. Democrats loved it, bringing to Lakoff “sold-out speaking gigs; invitations to dinner with Democratic strategists, Hollywood liberals and former president Bill Clinton; the label ‘guru’; and, at one memorable gathering of Senate Democrats, hugs” (Shesol 2014). More books followed, video tapes, speaking tours. He founded a consultancy think tank. The language of political framing was all the rage, with some people—among them, the bashful Lakoff—crediting Lakoff with helping Barack Obama win the presidency in 2008. It also brought attacks from the right, of course, none more irksome to him, I’m sure, nor more revealing about how shallowly the attackers were willing to look in their search for insults, than an editorial in the New York Sun that identified him as a “disciple of the notoriously anti-American Massachusetts Institute of Technology professor Noam Chomsky” (Ferguson 2006); a theme that propagated (Nunberg 2006).
Now the august Richard and Rhoda Goldman Professor of Cognitive Science and Linguistics at Berkeley, Lakoff remains as good-natured and open-hearted as ever. The New Republic describes him as “an affable and generous man; his doughy cheeks and close-cropped beard exude a warm-and-fuzzy . . . liberalism” (Scheiber 2005). “In public meetings,” a reporter from The Guardian noted, “he greets every question with: ‘That is an extremely good question’ ” (Williams 2014). But along with the affability, he retains feistiness, particularly on the topic of George Lakoff.
Steven Pinker brought Lakoff’s public pugnacity back to the fore with a highly dismissive review of Lakoff’s Whose Freedom? The Battle Over America’s Most Important Idea (G. Lakoff 2006a). The review is right on a number of scores, including Lakoff’s utter disregard for the scholarly tradition, his tendency to reduce opposing positions to a row of tin cans along a fence top, so as to make his pot shots easier, and a tone that can be infuriatingly presumptive; or, in Pinker’s words, “his relentless self-congratulation, his unconcealed condescension, and his shameless caricaturing of beliefs.” Pinker’s criticism is certainly not without foundation—Whose Freedom? is pretty shallow—but the review overall is a two-bit mugging, offering outrageous misrepresentations of Lakoff and his ideas as if they are devastating critiques.
Pinker makes the early allowance, “There is much to admire in Lakoff’s work in linguistics.” But one does not have to wait long for the blows to start raining down. The concession is a feint setting up the next clause’s walloping right cross: “Whose Freedom?, and more generally his thinking about politics, is a train wreck.” After that combo, the punches come in flurries. He jabs Lakoff’s “faith in the power of euphemism,” cuffs his understanding of cognitive science and of neuroscience, clouts his “tips in the political arena. . . [which merit only] howls of ridicule,” slugs his “considerable ignorance.” In one of Pinker’s more brutal distortions, he proclaims that “in Lakoff’s view . . . Philosophy . . . is not an extended debate about knowledge and ethics, it is a succession of metaphors: Descartes’s philosophy is based on the metaphor ‘knowing is seeing,’ Locke’s on ‘the mind is a container,’ Kant’s on ‘morality is a strict father.’ . . . Mathematics, science, and philosophy are beauty contests between rival frames rather than attempts to characterize the nature of reality” (Pinker 2006). These are flimsy, fallacious dichotomies—er, Steven, is it not possible to have a debate among positions that realize broad analogic frames, or to use aesthetic criteria, like elegance and scope, in building and weighing characterizations of reality?—and it is unsettling to see how easily a linguist, and a cognitive scientist, and an authority on style (Pinker is all three), can fall back on the old “mere rhetoric” charge, the accusation that if you are talking about the way language frames understanding you can’t possibly be talking about concepts and thought.
Pinker knows better, but here he is happy to divide things as either debate or metaphor, beauty pageant or serious discussion. The assault, on its own, is of little moment. It is a shallow review accusing a shallow book of shallowness. But it kicks up a bit of dust, providing a moderately recent opportunity to see that the Lakoff of today is the Lakoff of the Linguistics Wars. He puts up his dukes and wades in with his own misrepresentations and innuendo. He does a reasonable job of refuting Pinker’s more egregious cheap shots. But he can’t resist bringing in Pinker’s damning affiliation with Chomsky and “the old theory,” slinging anti-feminist aspersions at him in the bargain, and, Pinker being well known for his incorporation of evolutionary psychology, offering up this explanation for the hatchet job: “[Pinker] is threatened and is being nasty and underhanded—trying to survive by gaining competitive advantage any way he can.” Maybe this is not quite the Lakoff of the Linguistics Wars. There is a touch more restraint on the page. But one can almost hear a “Nyaah! Nyaah!” behind the keyboard.14
Frankly, though, it is Lakoff who has the competitive advantage between the two combatants here, especially back home in linguistics, where Pinker is highly respected as an excellent psycholinguist, a careful theorist, and Chomsky’s most successful expositor, but where Lakoff is a bigwig, bigger now than at any point in his career. He is so prominent in Cognitive Linguistics that he can claim naming rights. The label is a slight mutation from one of the mid-1970s projects Lakoff championed, the one he was too ill to present at the Milwaukee Conference. You might recall that Lawler, in a diagnosis he probably wishes he could have back, pronounced it “dead” and “stillborn” (Lawler 1980:54). He couldn’t be blamed at the time, since Cognitive Linguistics was very sketchy, since there was no reason to believe this most recent Lakovian labelling would last very long, and since the cognitive of that label bore little resemblance to the dominant way that word was understood in linguistics at the time; that is, in Chomsky-defined terms, a modular arrangement of complex autonomous devices, highly structured, computational, universal and almost wholly pre-wired, with only slight variations possible, as constrained by a few genetically specified parameters.
A few things happened on the way to the twenty-first century, however, markedly changing the fortunes of Lakoff’s program and accomplishing the prodigious reworking of linguistic assumptions that Lawler was so skeptical about. Those things can be tallied easily by way of publications, starting with what Lawler called the “other thing . . . [Lakoff had] gone on to” from Cognitive Linguistics (Lawler 1980:53). From Lawler’s vantage, and everyone else’s at the time, Metaphors We Live By signaled yet another new program for Lakoff, Cognitive Linguistics being no more than the most recent whistle stop. The book barely mentions cognition, or, for that matter, grammar, let alone Cognitive Linguistics, and despite Lakoff’s fondness for self-citation, he cites neither of the papers from that whistle stop (Lakoff & Thompson 1975a, 1975b; fair enough, I guess, as neither of them mentions metaphor). The flag Metaphors We Live By flew was not cognitive, but experientialist, the buzzword du jour for Lakoff’s evolving framework (chosen in part for its anti-rationalist, therefore anti-Chomskyan, flavor). Perhaps most notably for linguists, that framework seemed to be leaving linguistics behind in favor of philosophy.
But Metaphors We Live By raised Lakoff’s stock considerably. It gave only counterindications about a Cognitive Linguistics, but there was a broad appetite for his next book, also whetted somewhat by its attention-grabbing title, Women, Fire, and Dangerous Things (G. Lakoff 1987). That book was explicitly cognitive, cognitive in a way both very different from Chomsky and more fully articulated than in Lakoff’s earlier work. Women, Fire gave the colligation Cognitive Linguistics sticking power and provided a kind of antonym to Generative Grammar; or rather, fulfilling the arc of Lakoff’s career from the mid-1970s forward, an antidote to Generative Grammar. Most importantly, Cognitive Linguistics leveraged the phenomenal success of Metaphors We Live By into a much broader approach to language, one which prominently incorporated the “conceptual metaphor” theory but also much more.
Women, Fire, and Dangerous Things concludes with an autobiographical flourish. The final section, entitled “Generative Semantics Updated,” runs through the methods and instruments he retains in his research program from the movement (“conceptual metaphor” theory, for instance, prototypes, radial categories), itemizes the bits he has discarded (transformations, generativity, other trappings of Chomskyana), adds in the guiding principles forged in the debate (a reliance on general cognition, primary allegiances to meaning and communicative function, incorporation of pragmatics): “and the result is cognitive linguistics” (G. Lakoff 1987:582–85). That result looks very different from the floor-to-ceiling trees and extravagant derivations that initiated the movement, or the fairly modest probing of Deep Structure that triggered the debates that triggered the war. If some Generative Semanticist—say, George Lakoff—in the 1963 of “Toward Generative Semantics,” or in the 1967 of “Is Deep Structure Necessary?” or in the 1971 of “On Generative Semantics,” was presented with a copy of Women, Fire, and Dangerous Things, one imagines that linguist would be hard pressed indeed to connect it naturally to any of those works, or perhaps even to linguistics at all, even using the scorecard Lakoff assembled. It would look like it came not from another time, but from another planet.
Retroactively, however, the course Lakoff charts at the dénouement of Women, Fire is a coherent and sensible map of one linguist’s route from the Generative 1960s to the Cognitivist now. In fact, Lakoff’s program may well be closer to the one his first book followed, the Abstract-Syntax-bordering-on-Generative-Semantics, Irregularity in Syntax (1970 [1965]), than Chomsky’s current program is to his first book, Syntactic Structures. There is no clear-cut way to make such comparisons, but again we are talking of something akin to different planets.
Many people see a powerful part of the emotional impetus behind the Generative Semantics movement, felt especially strongly by Lakoff, as the impulse to out-Chomsky Chomsky—to go deeper than Chomsky, to go further than Chomsky, to be more cognitive than Chomsky, to explain more about language and thought and being human than Chomsky. If so, Lakoff certainly put his money where his mouth was. He worked repeatedly, steadily, to revolutionize linguistics, with, eventually, remarkable success. There has been no complete overturning of the previous paradigm, but his work was instrumental first in contributing to the pluralization of that paradigm (there is no longer one generative model), and latterly in the formation of a compelling complementary paradigm, Cognitive Linguistics, with a resulting division of intellectual and institutional resources—certainly not the near-wholesale domination of resources we saw with Chomsky’s rise, but pretty damn impressive.
But linguistics is only the tip of the Lakovian revolutionary iceberg. He has campaigned to revolutionize philosophy (“to radically change Western philosophy,” G. Lakoff 1997:47; “to rethink philosophy from the beginning,” Lakoff & Johnson 1999:15), and psychology (“to change not only our concept of the mind, but also our understanding of the world,” G. Lakoff 1987:9), and English studies (where his work saves us from “two millennia . . . [of the erroneous view] that metaphor plays no role in the serious matters of life; [that] it can at best serve as entertainment or perhaps play a role in irrational persuasion,” Lakoff & Turner 1989:215), and politics (“We need a new understanding of American Politics,” Lakoff 2002:143). In mathematics, he is going for the creation of an entirely new field, with another rethinking from the beginning of Western thought. In typical Lakovian everyone-else-has-had-it-wrong-forever-but-we-have-the-key-to-unlock-all-mysteries framing, he and Rafael Núñez begin their Where Mathematics Comes From with
As specialists within a field that studies the nature and structure of ideas, we realized that despite the remarkable advances in cognitive science and a long tradition in philosophy and history, there was still no discipline of mathematical idea analysis from a cognitive perspective—no cognitive science of mathematics.
With this book, we hope to launch such a discipline.
A discipline of this sort is needed for a simple reason: Mathematics is deep, fundamental, and essential to the human experience. As such, it is crying out to be understood.
It has not been. (G. Lakoff & Núñez 2000:xi)
Oh, heck. Let’s get it all over with in one fell swoop: “We need a new, updated Enlightenment,” he has proclaimed, “a new understanding of what it means to be a human being; of what morality is and where it comes from; of economics, religion, politics, and nature itself, . . . science, philosophy, and mathematics” (G. Lakoff 2008:13–14). Most of these core areas are famously in Chomsky’s wheelhouse, including (lest we forget Cartesian Linguistics) an updating of the Enlightenment.
Lakoff’s approach in every one of these areas is antithetical to Chomsky’s approach, as is the prescription for his New Enlightenment. What we need, Lakoff says, is an approach to “freedom, equality, fairness, progress, even happiness” based on an understanding of the interpenetrating importance of “frames, prototypes, metaphors, narratives, images and emotions” (G. Lakoff 2008:14–15). The prescription is most desperately needed by liberals. “They don’t understand their own moral system or the other guy’s,” he says:
they don’t know what’s at stake, they don’t know about framing, they don’t know about metaphors, they don’t understand the extent to which emotion is rational, they don’t understand how vital emotion is, they try to hide their emotion. They do everything wrong because they’re miseducated. And they’re proud of that miseducation. (qtd in Williams 2014)
Lakoff is very, very far from the dissident that Chomsky is. He tries to work from the inside. Chomsky is a congenital outsider. But Lakoff also strives, like Chomsky, to make a political difference. Like Chomsky too, since his “spectacular ascent” in the 2000s (Bai 2005), Lakoff has become a much-coveted public speaker.
In some fields Lakoff’s revolutionary ambitions have been remarkably successful, in others less so, but he has unquestionably dominated stretches of the intellectual landscape in the twenty-first century, in ways not unlike Chomsky’s impact in the 1960s. I don’t think anyone would say—even taking a comprehensive reading of his Generative-Semantics era allegiance to “the idea of not being like Chomsky”—that Lakoff’s career was planned out in strategic opposition to Chomsky’s. But such systematic, field-by-field, axiom-by-axiom, commitment-by-commitment, yin-by-yang opposition is extraordinarily rare. The intellectual trajectories of Lakoff and Chomsky could not be more antithetically symmetrical if they were plotted by Dickens. Of course, there is one very notable asymmetry. In every instance, Chomsky got there first.
Examples of Chomsky’s late conversion to some of the cardinal tenets of Generative Semantics abound.
—Geoffrey K. Pullum (1996:139)
For Generative Semanticists, Chomsky’s most egregious ethical breach was not his hostility to their program. It was what came next, his largely furtive adoption of many of Generative Semantics’ proposals once the hostilities were over. Postal explains this in nautical terms: the right of salvage.
With the wreck of Good Ship Generative Semantics, great amounts of ideas, data, mechanisms, and perspectives were cast to the seas. Some of it was lost, probably for good; probably, in fact, for the best. But much else made its way into the holds of other theories; most notoriously, into the hold of Chomsky’s commissioned frigate, seen most recently flying the colors of Biolinguistics and bearing the name Good Ship Minimalism. This fate is one of the two tragedies that ex-Generative-Semanticists recurrently cite as having befallen their model, that their work has been stolen.
The other purported tragedy, that vast quantities of their data are completely and systematically ignored, is confused and beside the point. It is certainly true that some material is gone, but there is a great deal more which has had a profound impact on the way linguists look at language, as we will see. What Generative Semanticists seem to mean by this complaint, actually, is that Chomsky and his kith systematically ignore the data they so lovingly turned up. But could it be otherwise? Chomsky has never had more than a peripheral interest in pragmatics, figurative language, or functional explanations, and he has been unrelentingly hostile to empiricist research strategies in linguistics, such as corpus analysis. It is virtually inconceivable that he would pen a paper on syntactic and semantic horrors you can find in your medicine chest (Sadock 1974b), or on the logic of politeness (R. Lakoff 1973b), or spend much time charting out the syntactic implications of verbs for bitching (McCawley 1973b). So, there’s that: utter disinterest. But there’s this, too: the problems and mysteries question. Syntactic and semantic horrors are by their nature data that not only resist theoretical incorporation, but confound theoretical incorporation. That’s the point of Creature Features. They give theories the heebie-jeebies.
Much data of this cast was never accommodated by the people who brought it to light, either, nor even pursued. So why should anyone else feel responsible to it? And, truthfully, by the time these papers started to come out, the Generative Semanticists were no longer really talking to Chomsky anyway, nor to folks working in his framework. They were talking to each other and to the field at large. The lost-data tragedy is mostly irrelevant, then: (1) much of it is not really lost at all, though it is “lost on” Chomskyan linguists, just as much Chomskyan data is “lost on” people with other theoretical dispositions; and (2) much of it is not really “data” at all, just heaps of oddities that no one can structure into a theoretical argument. Data is only really data when it has structure, and structure is provided by frameworks.
Back, then, to the pilfered-ideas tragedy. How does it fare? Unbelievably better. Let’s start with Newmeyer’s list of Generative Semantic contributions to formal linguistics, a list that has often been sneered at by ex-Generative-Semanticists as dismissively brief. It is, in fact, quite brief, brief enough to quote in full, but it is not dismissive:
While Generative Semantics now appears to few, if any, linguists to be a viable model of grammar, there are innumerable ways in which it has left its mark on its successors. Most importantly, its view that sentences must at one level have a representation in a formalism isomorphic to that of symbolic logic is now widely accepted by interpretivists, and in particular by Chomsky. It was Generative Semanticists who first undertook an intensive investigation of syntactic phenomena which defied formalization by means of transformational rules as they were then understood, and led to the plethora of mechanisms such as indexing devices, traces, and filters, which are now part of the interpretivists’ theoretical store. Even the idea of lexical decomposition, for which Generative Semanticists have been much scorned, has turned up in the semantic theories of several interpretivists . . . Furthermore, many proposals originally mooted by Generative Semanticists, such as the nonexistence of extrinsic rule ordering, post-cyclic lexical insertion, and treating anaphoric pronouns as bound variables, have since turned up in the interpretivist literature, virtually always without acknowledgement. (1980a:173; 1986a:138)
There was certainly more associated with Generative Semantics that also showed up downstream in Chomsky’s program than Newmeyer lists—even the return of Generalized Transformations was proposed in a late article (G. Lakoff 1974:343), and they have come back to Chomsky (Williams 2015)—but part of that is a function of just how much data was churned up in the movement and how many corresponding mechanisms and tentative suggestions were bruited to deal with that data. Newmeyer gives a realistic impact-assessment statement of how Generative Semantics affected Chomsky’s program: in innumerable ways, virtually always without acknowledgment.
Chomsky’s attitude to intellectual property is cavalier at best—his own as well as others’—and it is an attitude that rubs off very quickly on his students; indeed, transitively, even on students of his students. Their own work, and each other’s work, is all that matters. No one else gets too much attention, let alone discussion and acknowledgment. The most obvious, earliest, and longest-lasting example of this quiet appropriation is Chomsky’s adoption of logical form (LF)—a representation in terms of the categories and operators of predicate calculus, which G. Lakoff and McCawley argued for in the early 1970s, and which has occupied a central place in all of Chomsky’s models since the latter 1970s:
[This adoption] not only served to answer the charges by Generative Semanticists that he had been insufficiently specific about what semantic representations were and how they were to be arrived at . . . but it also demonstrated that the domain of data which Generative Semanticists had argued should be considered grammatical would not be entirely ignored in the evolving Interpretive theory. . . . by the late 1970s that conception began to bear an unmistakable likeness to the semantic representations that McCawley had earlier argued for. (Huck & Goldsmith 1995:45)
With characteristic modesty, Lakoff stakes sole claim both to the representation itself and to the entire theory housing it, and insinuates plagiarism—“Government and binding, following my early theory of generative semantics, assumes that semantics is to be represented in terms of logical form”—and then dismisses all of it as inadequate for the genuinely important facts about language, “the phenomena covered by the contemporary theory of metaphor [that is, ‘conceptual metaphor theory’]” (G. Lakoff 1993:240).
Chomsky has a rather different view. As far as he appears to be concerned, Logical Form comes directly from Robert May, who, not coincidentally, completed a thesis under Chomsky exploring semantic representation in logical notation (later revising it substantially for publication—May 1977; 1985). May cites G. Lakoff only once, very briefly, and only negatively, in order to deny that there is any connection between their respective suggestions (1985:158n4). McCawley gets nothing from May (no more, we might notice by the way, than McCawley gets from Lakoff). Not even a comma goes his way, despite the essential part McCawley played in developing a representational level based on logic, despite even the criterial place in May’s work for the rule he dubs simply QR, ostensibly for Quantifier Rule, but many readers assumed stood for Quantifier Raising (a more mnemonic name, since the rule moves quantifiers up trees—see, e.g., DeCarrico 1983, Stepanov & Stateva 2009). With very minor wrinkles, QR is the rule that McCawley proposed much earlier (1976b [1972]: 294), albeit moving constituents in the other “direction,” Quantifier Lowering.15
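For readers who want to see what such a rule does, here is a schematic example of my own, in neither May’s nor McCawley’s notation, showing the quantifiers of a simple sentence lifted into a logic-like representation (one of the sentence’s two possible scopings):

```latex
\[
\text{Every student read a book}
\;\Longrightarrow\;
[\,\text{every student}_x\ [\,\text{a book}_y\ [\,x\ \text{read}\ y\,]\,]\,]
\;\Longrightarrow\;
\forall x\,\bigl(\mathrm{student}(x)\rightarrow\exists y\,(\mathrm{book}(y)\wedge\mathrm{read}(x,y))\bigr)
\]
```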
Next on the list of notorious borrowings is lexical decomposition, which also started to show up in Interpretivist work in the mid-to-late-1970s and remains key in the Minimalist Program. Then comes Mark Baker’s Chomsky-endorsed Uniformity of Theta Assignment Hypothesis (UTAH), which resembles the Universal Base hypothesis in key ways, followed by a host of small developments, like the global properties of the trace convention and the main-verb analysis of auxiliaries (which, McCawley later pointed out, once his reading list and those of other narrowly groomed MIT graduates had expanded, was fundamentally Jespersen’s analysis in any event—McCawley 1984a:xv). In the early 1990s, Chomsky jettisoned grammaticality, with no more than a dismissive footnote saying that it was never important anyway, just a bit of a sop to informal exposition, presumably for those who couldn’t follow the corresponding formal exposition. Deep Structure, too, was dropped. These were notions that Generative Semanticists argued strenuously against. Chomsky simply leaves them on the curb, with minimal commentary.
The defining issue is not whether Chomsky is “allowed” to abandon favored notions that Generative Semanticists had denounced, or to incorporate Generative Semantics innovations into his program—McCawley, for instance, incorporated X-bar syntax in his work without anyone complaining of theft, and nobody seemed to mind when Gazdar and Pullum adopted Ross’s main-verb analysis of auxiliaries—or even whether Chomsky should acknowledge the precedents of such abandonments and incorporations.
The defining issue is that Chomsky denounced Generative Semantics so contemptuously for so many of the tendencies and mechanisms he now embraces, a denunciation—curiouser and curiouser—he still maintains. Take UTAH. Chomsky notes (rather unusually) that Baker’s proposal is similar to one “explored within Generative Semantics”; namely, “that deep structures represent semantic structure quite broadly, perhaps cross-linguistically.” The earlier proposal, however, the Universal Base Hypothesis, “proved unfeasible”; indeed, worse than that, “more or less vacuous,” because of various problems with Generative Semantics having to do with its vast descriptive latitude. However, with the tremendous restrictiveness built into Government and Binding theory, the same proposal “becomes meaningful, in fact extremely strong” (1988b:66–67). Presumably lexical decomposition and Predicate-raising—the former of which had “little empirical content,” the latter of which was “quite unnecessary,” at the Texas Goals conference (Chomsky 1972b [1969]:142–43)—became similarly welcome once his model mutated coincidentally in ways that accommodated them.
Not all ex-Generative Semanticists were outraged at such incorporations. In his review of Baker’s book, which not only clones the Universal Base, but also adopts a version of Predicate-raising, and, very interestingly, compromises the once-sacred cow of Interpretive Semantics, Lexicalism, Jerry Sadock remarks,
As one who lived through the anti-Generative Semantics pogroms of the 70s, and experienced the bloody vengeance that was visited upon those who would pollute the modules, I find this laissez-faire attitude toward the Lexicalist Hypothesis both refreshing and astonishing. Baker’s work is actually more “generative semantic” than the Generative Semantics of the late 60s and early 70s. (Sadock 1990:130)
Sadock seems amused, not threatened, by these developments, grateful to have the ideas back in circulation, to have them cogently advanced and to see them integrated into other programs, and, whatever the lack of acknowledgment, to see them comparatively free of dogma.
The most striking Generative-Semantic homology of Chomsky’s current program, by far, is its architecture. He outlines the model, termed the Minimalist Program, this way: “there [are] representations of meaning, representations of form, and relations between the two” (1979 [1976]:150). Oh, wait a minute. Sorry. That’s how he outlines Generative Semantics in his account of how it was “virtually empty.” My bad. But it is the reduction to these three elemental components (sound, meaning, computational linkage, period) that makes his program minimalist. I think I’ve got the right quotation now:
Computation maps LA to <PHON, SEM> piece-by-piece. [Where LA is Lexical Array (words), PHON is a phonological representation, SEM a semantic representation.] (Chomsky 2004a [2001]:107)
Shades of Lakoff and Ross! There is no Deep Structure in this architecture, no single point of lexical insertion. Shades of Postal! This looks like what he called “the best grammatical theory a priori possible” (Postal 1972a [1969]:136), a mediational account of grammar with two and only two endpoints, sound (PHON) and meaning (SEM), and a homogeneous rule system that links the two (computationally maps a Lexical Array). Between form and meaning, there is “a single operation for building the hierarchical structure required for human language syntax” (Berwick & Chomsky 2016:10)—an operation whose genealogy takes it back to the one that had a starring role in Generative Semantics, the transformation, and which likewise built lexical as well as phrasal structure into hierarchical units. This operation works on these primitive semantic notions, “call them ‘atomic concepts’ . . . word-like objects but not words” (Chomsky 2016:41); like, er, cause, become, not, and, just guessing here, alive?
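To see how spare that machinery is, here is a minimal sketch of my own, not Chomsky’s formalism, of a single set-forming operation assembling word-like atoms into a hierarchy; the atoms echo the decompositional chestnut just alluded to:

```python
# A minimal sketch (not Chomsky's formalism) of a single structure-building
# operation: combine two syntactic objects into the unordered set {X, Y}, and
# apply it recursively to a lexical array to get hierarchical structure.

def merge(x, y):
    """Form one hierarchical unit out of two syntactic objects."""
    return frozenset([x, y])

# A hypothetical lexical array of word-like atoms (cause-become-not-alive).
structure = merge("NOT", "ALIVE")
structure = merge("BECOME", structure)
structure = merge("CAUSE", structure)

print(structure)
# e.g. frozenset({'CAUSE', frozenset({'BECOME', frozenset({'NOT', 'ALIVE'})})})
# (element order in the printout may vary; the hierarchy does not)
```

The nesting, not the ordering, is what carries the structure.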
PHON and SEM have their respective destinations in the Minimalist Program, SM and C-I, and, as in Generative Semantics, these destinations implicate general (or, in any case, not strictly linguistic) cognition: “SM is the link to organs of externalization (typically articulatory-auditory) and C-I is the link to the systems of thought and action” (Chomsky 2015a:ix). Figure 9.2, a depiction of this model, should look familiar.
D(eep)-Structure finally follows phlogiston and ether and the dodo into oblivion; S(urface)-Structure too. The old dream of a homogeneous theory with only two representations—one of form and the other of meaning—is revived.
Chomsky does not, despite its great resemblance to the blueprint Postal provides (Figure 3.3, page 103), call this architecture Homogeneous I. He calls it “the most basic property of human language” (Chomsky 2016:4), frequently throwing it into the steaming alphabet soup bowl of linguistics with the designation BP. Nor does he use the label “The Best Theory,” but only because he goes one better than Postal’s superlative: his favorite adjective for the Minimalist Program is perfect. We can now formulate such an architecture, he says, because theory has advanced to the point where we can ask, “How closely does language. . . approximate a ‘perfect solution’ to the boundary conditions set by SM and C-I?” (Chomsky 2015a:xvi). Indeed, he goes Postal two or three better (how to count?), and ups the superlative stakes further yet to imply that this architecture describes an “organ of extreme perfection” (Berwick & Chomsky 2016:i). Along with a simple picture, of course, comes a methodology embracing simplicity. We recall that Chomsky once said, “It is often a step forward . . . when linguistic theory becomes more complex,” in his response to Postal’s “Best Theory” (Chomsky 1972b [1969]:126). But complexity had run its course for Chomsky by the 2000s, and we are back, once again, to “naturally we seek the simplest possible account of the Basic Property, the theory with the fewest arbitrary stipulations,” an approach that (like Postal) he regards as “standard scientific method” (Chomsky 2016a:16; see Postal 1972a [1969]:135).
The primacy of meaning in recent versions of Chomsky’s framework also looks hauntingly familiar. Remember how Lakoff based his claim that “semantics may be generative” on “the intuition that we know what we want to say and find a way of saying it” (1976 [1967])? Chomsky now sings in sentimental harmony with his former adversary:
language . . . is fundamentally a system of meaning. Aristotle’s classic dictum that language is sound with meaning should be reversed. Language is meaning with sound. (Berwick & Chomsky 2016:101)
Chomsky actually pushes well beyond Generative Semantics in the meaning-first story of language, often suggesting that cognitively as well as evolutionarily, communication is a derivative function of language (e.g., 2000:30, 2005:3). The “real job” of language is in private representations and reasonings.
The simple polar design of the Minimalist Program, however, obscures much about its realization in any given grammar, again pushing us more toward Homogeneous II than Homogeneous I. The wiring and the plumbing and the ductwork for Chomsky’s basic-property architecture when it is realized in any given language include descendants of mechanisms Chomsky has championed over the decades. The regime of filters, conditions, constraints, principles and their parameters—the instruments drafted to ensure the right linguistic structures “surface,” while the bad ones don’t—has been largely replaced in the Minimalist Program by “least-effort” and “last-resort” principles of “natural economy” that optimize computation. Some derivations are more costly than others, so they are avoided. But optimize computation is just another way of saying preserve the derivation, ensuring the correct output. As Newmeyer points out, some of these principles implicate “what have been called ‘transderivational’ principles: the derivation of one sentence involves globally scanning other possible derivations from a given set” (Newmeyer 1998:311). To take this similarity from the odd to the truly bizarre, Chomsky’s labels for these principles also have the distinct cast of Generative Semantics’ naming conventions. One of them is Procrastinate; another is Greed.
By this point, one knew not to expect acknowledgment of the remarkable convergence his program had with the one he and his kith denounced for a decade, but the almost total lack of argumentation for the changes is striking. A torrent of articles and books were taken up with denunciations and counterarguments—the cost Generative Semanticists incurred for their proposals—but the adoptions apparently come for free, maybe through a principle of natural economy. The lack of justifications has not gone unnoticed by linguists with a sense of history. Pullum, for instance, noted that the Minimalist Program meant
abandoning the cherished level of deep structure (known as “d-structure” in the last two decades). This is exactly what Postal 1972a [1969] suggested was “the best theory.” But the names of linguists like Postal, Ross and McCawley, who in the late 1960s tried to argue for the elimination of deep structure, are completely absent from Chomsky’s bibliography. There is no belated nod in the direction of the literature he resolutely resisted for 25 years but whose central thesis he now adopts. Nor is there any real effort to supply intelligible reasons and arguments for his abrupt conversion to the tenets of generative semantics. . . . The implied epistemology is one of miraculous revelation. (Pullum 1996:138)16
And a pair of veteran Interpretivists slyly credit the “eerily prescient” practitioners of Generative Semantics for somehow producing work “in many ways anticipating” developments in the Minimalist Program (Culicover & Jackendoff 2005:98, 96). While Chomsky has nothing to say on this prescience, one of his expositors and apologists notes that the problem with the original proposals from Postal et al. is that “in the 1970s [they were] premature” (Boeckx 2006:72n7); he does not mention whether the attack on those earlier proposals might not also have been premature.
The Minimalist Program is not identical with vintage Generative Semantics. There are many features distinguishing the two of them (and, as numerous observers have noticed, many places where the Minimalist Program is lacking in specifics that would allow people to appraise it with respect to other theories). But these moves within the Chomskyan fold do strengthen the impression that the Wars were not primarily about theories or methods.
Chomsky’s denounce-then-adopt policy was a thorn in the side of many ex-Generative-Semanticists for years, and remains a point of amusement for exogenous observers. George Lakoff complained about it bitterly to me, and Postal features it in his most famous polemic, the satirical guide to “Advances in Linguistic Rhetoric.” Postal gives poker-faced advice in the piece to would-be Chomskyans on how to coöpt ideas successfully. “Suppose some proponent, like McCawley,” Postal says,
of the unquestionably wrong and stupid Basic Semantics (BS) movement has, accidentally, hit on one or two ideas you need to use, say hypothetically, the notion that surface quantifiers are connected to logic-like representations by transformational movement operations sensitive to syntactic constraints, or something like that.
When adopting this idea, assuming that you wish to do so, it would be an obvious rhetorical error to cite any proponents of BS. Not only would this waste a lot of serious linguists’ time if they were persuaded to actually read such misguided stuff, it might mislead less sophisticated thinkers than you into thinking something about BS was right.
So the correct procedure is to proclaim and get others to proclaim, over a long period, many times, that BS is totally wrong, misguided, unscientific, etc. Then, quietly, simply use whatever BS ideas you want without warning and without any tiring citational or attributional material. A well-known principle of scholarly law known as Right of Salvage guarantees that you cannot be held accountable for this. (2004 [1988]:292; Postal’s italics)17
We are dealing with people, in real, murky, often conflictual, squishy situations, where there is rarely anything like black and white to guide us. We are dealing with negotiated, improvised, always-being-born language.
—Haj Ross (2000:179)
One of the ways some detractors look at Chomsky’s impact on linguistics is as the last gasp of Bloomfieldianism (for instance, in the best articulation of this position, Moore & Carling 1982:19–47, which has not dated despite the many changes of specifics in Chomsky’s program). Bloomfield—particularly as vitrified in followers like Bloch and Trager—kept meaning at bay. Chomsky is a little more optimistic about getting at meaning than Bloomfield was, and has been responsible for a good deal of semantic headway, but never directly, always through what he has set in motion with others. He retains Bloomfield’s cautiousness and he retains the general Bloomfieldian dogma that meaning will never get into the driver’s seat, even in the Minimalist SEM ⇒ PHON architecture. He was arguing the Bloomfieldian party line on how to practice safe linguistics as early as 1955—that meaning can’t be allowed to contaminate analyses of form—and he has only occasionally wavered from that position in the more than half century since. If form will help him get at meaning, he’s happy for the opportunity, but the reverse is unthinkable. He will never use meaning to get at form.
Bloomfield banished the mind, making mentalism an umbrella term for aspects of language that couldn’t be approached scientifically. Chomsky broke the taboo about discussing the mind, but he has similar firewalls up for preventing exposure to the mental life of speakers—performance, and its partial descendant, E-Language. Just as Bloomfield’s anti-mentalism was one way of keeping meaning away from form, by consigning it to psychology and sociology, so Chomsky’s competence and I-Language are ways to keep meaning and other contaminants, like context and belief, away from form, by consigning them to “memory limitations, distractions, shifts of attention and interest” as well as to “the physical and social conditions of language use” (1965 [1964]:3; 1977:3; 1986:31–32)—that is, to psychology and sociology.
In this dying-Bloomfieldian-wheeze interpretation, the Chomskyan hegemony that arose in the 1960s was just a new face on the old unease over meaning and mind. Chomsky belongs, this interpretation goes, not to the true vanguard in linguistics, but to the progressive elements of the old guard, with scholars like Charles Hockett. Where Hockett adopted an evolutionary, steady-accrual-of-knowledge rhetorical stance, however, Chomsky adopted a revolutionary, follow-me-while-I-burn-it-all-down rhetorical stance.
Bruce Fraser’s allusion to Charles Reich’s The Greening of America surely did not have a one-to-one allegorical intent—Noam Chomsky as The Man—but the more general suggestion is unavoidable, that a transformational hegemony was being overcome. Reich’s book ends on a crescendo extolling the promise of the counterculture:
The extraordinary thing about this new consciousness is that it has emerged out of the wasteland of the Corporate State, like flowers pushing up through the concrete pavement. Whatever it touches it beautifies and renews, and every barrier falls before it. . . . We have all been induced to give up our dreams of adventure and romance in favor of the escalator of success, but [the new consciousness] says that the escalator is a sham and the dream is real. . . . For one who thought the world was irretrievably encased in metal and plastic and sterile stone, it seems a veritable greening of America. (Reich 1970:341–42)
Here is the higher point of Fraser’s allusion: that a new consciousness was taking hold of linguistics in the late 1970s—new perspectives, new methods, new fields of data, new flowers pushing through the Chomskyan pavement.
Generative Semantics certainly does not get sole credit for the greening. As Ross puts it (verbing is a rough synonym of greening for him): Generative Semantics “was only one part of a general process of the verbing of linguistics, which all began in the 1960s or shortly thereafter.” Among the other contributors to the verbing, he includes ethnomethodology, discourse studies, sign language studies, “the rise of variation theory . . . a variety of different flavorings of functional grammars . . . different kinds of phonologies, . . . the rise of second-language acquisition as a field . . . language and law, language and therapy, language and arbitration, the huge expansion of aphasia studies, the invention of the term neurolinguistics, and so on” (Ross 2000:178)—that is, a welter of interests, approaches, and extensions, some coming in response to Chomsky’s framework, some attempting to expand that framework, all of them benefiting, like Generative Semantics, from the energy and resources Chomsky’s framework attracted to linguistics.
But “generative semantics was right in there in the delivery room for the birth,” Ross adds:
Paul Postal was the syntactician who started it off, by letting syntax and semantics interbe [not a typo]. George Lakoff, of course, gave the birth an immense impetus in his boundary-crossing forays into philosophy and cognitive science. Jim McCawley (and Lakoff, too, to a somewhat lesser extent) did a vast amount of detailed analyses and carried on with the serious working out of an interfield [also not a typo] between linguistic and philosophical semantics. (Ross 2000:179)
Some linguistic sub-disciplines and para-disciplines that once sought out Chomsky’s work for investigation, application, and validation have since decided they can get along well without him (psycholinguistics, sociolinguistics, computational linguistics, second language acquisition). Linguistics is more vibrant, pluralistic, and daring than it has ever been, and Generative Semantics gets much of the credit for this greening. Certainly, it deserves some retrospective general credit as the thin edge of the wedge that brought into linguistics a good crop of phenomena that Chomsky was content to ignore, along with a range of methods and goals he discounted or despised.
Just the act of breaking the ice was crucial. Take corpus linguistics. It’s not so much that it wouldn’t be flourishing without Generative Semantics. The computational developments over the last two decades make its growth virtually inevitable, and proposing alternate histories is a sketchy pastime at best. But it seems safe to suppose that if Chomsky’s program had grown into the inevitable hegemony rearing in the 1960s, and if Chomsky’s data isolationism had not been shaken by the dataphilia of Generative Semantics, his scorn for corpus studies would have had a much tighter grip on the field than it now has; that corpus language studies might be a somewhat marginal pursuit, driven more by computer scientists than by linguists; and that usage-based theorizing might be a fringe activity.
So, Generative Semantics broke the ice. There’s that. But it also deserves more specific credit for the kinds of perspectives and the ranges of data it opened up.
In the first instance, there is pragmatics. While it may not seem on the surface reasonable to give credit for the birth of pragmatics to a movement whose most prominent spokesperson once made the notoriously arrogant pronouncement that he had reduced the whole domain to garden variety semantics (G. Lakoff 1972b:655), Lakoff’s cheery arrogance about pragmatics probably did more to break down the barriers keeping context and use out of linguistics than any other factor. He was wrong, of course, but it opened up the domain, previously a gated community of philosophers, to linguists.
Linguistic Pragmatics is now a thriving subdiscipline, one which dates largely from Ross’s and Robin Lakoff’s early performative work; which includes Sadock’s extensive explorations of speech act theory and George Lakoff’s investigations of presuppositions and of Grice’s conversational research and Robin Lakoff’s charting of politeness and deixis and gender and appropriateness and context generally; and among whose chief landmarks are Gazdar’s formal text (1979), which began life with a transderivational analysis, Levinson’s informal text, which specifically attributes the pragmatic infusion of linguistics to Generative Semanticists (1983:36), and Green’s Pragmatics and Natural Language Understanding (1983; 2nd edition, 1996). All of these books bristle with references to Ross, Sadock, the Lakoffs, Davison, Green, McCawley, Horn, and assorted other Generative Semanticists. They take Generative Semantics as their chief methodological and theoretical starting point (along with acknowledging ordinary language philosophy as the source of foundational insights).
In the second instance, there is a thick and interpenetrating flora of developments which may not have begun with Generative Semantics, but most of which received support and recruits from its ranks, and all of which benefited hugely by the break-up of the growing Chomskyan hegemony that began with the disagreements over the Katz-Postal Principle and Deep Structure. Again the 1979 Milwaukee Conference is a useful barometer. The conference featured various refugees from the Aspects tradition, from both warring camps and from the sidelines, but also functionalists, European linguists, and even representatives from pre-Chomskyan programs who likely would not have been welcomed just a few years earlier, should they have ventured out from their bunkers.
The most obvious payoff of this greening was the emergence of a new paradigm, distinctly counter to the Chomskyan program: Cognitive Linguistics.
Cognitive Linguistics [is] a rejuvenated and invigorated form of generative semantics
—Anna Siewierska (2013:486)
There was a recurrent mixing of the designators, Cognitive Grammar and Cognitive Linguistics, in the 1980s and early 1990s, partly fueled by Lakoff’s use of them as synonyms in Women, Fire, and other publications, but they disengaged into a loose subset-superset relation. The former is a specific model of a linguistic system, a theory, closely associated with Ronald Langacker (1987, 1990, 1991, 2008). The latter is a broad approach to language with a range of themes, methods, goals, and exemplifying texts (including Langacker’s); that is, a research program.
As we saw with Lawler’s reaction at the Milwaukee conference, evidence in the late 1970s suggesting a healthy future, or any future at all, for something called Cognitive Grammar or Cognitive Linguistics was neither plentiful nor promising. But the remarkable work of Langacker to carve out a place for the one, and the incubator of Berkeley to foster the other, changed everything. Nor did the phenomenal success of Metaphors We Live By hurt. Langacker’s Grammar was important in the early days of the program for giving intellectual and methodological credibility to a general-purpose, integrated cognitive approach to language, distinct from Chomskyan special-purpose modularity, at a time when grammatical models were still the gold standard in linguistics. The Berkeley loam nourished a seedbed of figurative, conceptual, usage-based, symbolically governed, semantically encyclopedic, syntactically profuse arguments, analyses, and proposals that flourished into Cognitive Linguistics as the field was greening.
Langacker’s Cognitive Grammar offers a rich, coherent, challenging, and carefully articulated model rigorous enough to earn the honorific surname grammar, but one that used its forename, cognitive, in a very different way from generative grammar’s traditional use of cognitive. Langacker—briefly but formatively a Generative Semanticist—was as invested in the overall program as he was in his own particular model, and a number of its other supporters and fellow travelers figure prominently among his inspirational sources. Bolinger, Chafe, and Fillmore all get singled out (1987:5), and G. Lakoff is particularly praised for opening up several lines of research defining his model. But the important connections are more theoretical and methodological than sociological, and he also warmly acknowledges the congenial research of Jackendoff and Bresnan, along with a range of other scholars working on the interrelations of meaning, syntax, words, and morphology, and yet others investigating cognitive phenomena like categorization, imagery, and processing.
Meanwhile, Berkeley became home to one of the most fruitful scholarly collocations for twenty-first century linguistics, Fillmore and Lakoff, and—no coincidence—emerged as the first, and so far only, genuine rival in agenda-setting linguistics to MIT. Collocation, because their combined influence is far more a matter of simultaneous residency than of active collaboration (they never published together). Until Fillmore’s death in 2014, they reciprocally propelled each other, and jointly shaped the field, from the beginnings of Generative Semantics and Case Grammar, but more dramatically with the development of Cognitive Linguistics. Lakoff is known in the media as the “Father of Framing,” because of the way he has integrated ideas about framing into his views of cognition and also his social and political commentary, and especially for the way he has popularized it. But in linguistics, that paternity is rightly recognized as Fillmore’s. Take it from Lakoff. While he is happy to take ownership in popular and political contexts (2014b), in a linguistic context he is clear that Fillmore “was the discoverer of frame semantics, [and] did the essential research on the nature of framing in thought and language” (G. Lakoff 2014a). Conversely, Fillmore is often seen as the founder of Construction Grammar, on the basis of a series of foundational papers in the late 1980s which established its first formalisms and articulated its theoretical underpinnings, because of its obvious indebtedness to Frame Semantics, and because his Case Grammar is seen as the closest thing to a precursor. But Lakoff’s (1974) “Syntactic Amalgams” and (1977) “Linguistic Gestalts” make perhaps the earliest recognizable proposals in the direction of Construction Grammar, and the approach took its basic shape through discussions in which he was a heavy contributor, along with Paul Kay.
Both Fillmore and Lakoff were heavily influenced by prototype theory (Kay is important here as well), both were encouraged by the prospects of connectionism for linguistics, both were convinced of the interpenetration of concerns often labeled pragmatic with concerns often labeled semantic, and both were certain that the generative tradition had outlived its usefulness—a kind of linguistic solid rocket booster that got cognitive science off the ground, if not quite into orbit, and fell back to earth of its own dead weight. They each habitually acknowledge and cite the other’s work, sharing almost completely a core of common interests, presumptions, and inclinations.
While most of the early development of Cognitive Linguistics seems to go through those two, the Berkeley circle had plenty of other heavyweights. Kay, for instance, is a cognitive anthropologist whose influential cross-linguistic work on color terms with Brent Berlin inspired both Fillmore and Lakoff. He became an active interlocutor with them, and a co-creator of Construction Grammar. A watershed event in 1975—the Berkeley equivalent of the 1956 Cambridge Symposium on Information Theory that brought Chomsky and Miller together, except it went on much longer and the coffee was better—brought the three of them together among an especially rich party of thinkers from a range of disciplines. “We had no official courses and no official teachers,” Lakoff remembers:
What we did was set up a schedule of lectures from 10 in the morning to 10 at night in various rooms around the university, for 6 days a week, with the rule that anyone could lecture with three days’ notice. Anyone who came and wanted to talk could simply talk. You just had to give us three days’ notice to put you on the schedule. (G. Lakoff 1997:37)
Fillmore, Kay, and Lakoff all had sessions. Prototype theorist Eleanor Rosch conducted an especially influential session. Larry Horn, Lauri Karttunen, David Lewis, Leonard Talmy, and Terry Winograd all participated; Ivan Sag helped organize it and ran the logistics. While many of the ideas were in the air before the workshop, the central elements of Cognitive Linguistics really coalesced in the years that followed.
Other scholars through the early years, established and junior, on the Berkeley faculty, or at neighboring institutions, or who took up research residencies for a spell, in linguistics or in other cognitive sciences, collectively formed a climate in which a new cognitive approach could flourish—Wallace Chafe, Bernard Comrie, Barbara Dancygier, Mark Johnson, Robin Lakoff, Rafael Nuñez, as well as various itinerant linguists, philosophers, psychologists, computer scientists, mathematicians, musicians, and court jesters. UC San Diego, with its thriving linguistics and cognitive science communities, with scholars like Langacker and Gilles Fauconnier, was a kind of SoCal sister city; faculty and students and visitors at one institution or the other moved between them, shopping their ideas. And, as with any movement, the young were crucial for developing and propagating it. Many Berkeley students grew quickly into influential linguists and cognitive scientists, notably including Claudia Brugman, Adele Goldberg, Knud Lambrecht, Srini Narayanan, Eve Sweetser, and Henry Thompson.
But at the epicenter of the epicenter were Lakoff and Fillmore, Fillmore and Lakoff, kindling, encouraging, advancing, and hawking the new linguistic framework—really, a confederacy of frameworks, in part reactively, against the Generative tradition, in part proactively, shaping a fresh new tradition—known as Cognitive Linguistics. Among its more distinctive sociological traits, inherited from Lakoff and Fillmore and others who had learned the hard lessons of the warring 1960s and 1970s, has been a healthy pluralism. There are, for instance, now a dozen or more dialects of Construction Grammar, the main syntactic branch of Cognitive Linguistics (though not confined just to Cognitive Linguistics), but they all happily borrow from each other and cheer each other along and see themselves as engaged in a collective enterprise, quite differently from the dialects of Generative Grammar originating on the other American coast.
Berkeley was the hothouse, but green shoots of general cognition sprouted across linguistics in the 1980s, with many scattered practitioners, and a growing number of allies in neighboring fields. Talmy Givón turned away from Generative Grammar earlier than most, in disgust at the “labyrinthine prison” Generative formalisms had built around the field of linguistics, “out of which no graceful natural exit is possible, short of plowing under the entire edifice and starting afresh” (Givón 1979:2). Chief among principles ruling that prison, which needed to be repealed for the renewal of linguistics, was the exclusion of general cognition: “It seems unwise to rule out the general cognitive and perceptual structure of the human organism from having strong bearing on the structure of language” (Givón 1979:4). And we’ve seen that Jackendoff’s approach, too—in which semantic structure is identical with conceptual structure and which, like Fillmore’s semantics, adapts the AI notion of framing (Jackendoff 1983, 1990)—is highly consonant with the Cognitive Linguistic greening.
Mark Turner in English studies, and Raymond W. Gibbs Jr. in psychology, and Naomi Quinn in anthropology, among a growing throng, were taken with “Conceptual Metaphor” and plumbed it in particularly rich ways, pushing their disciplines to a new understanding of analogic and correlational cognition. Turner showed the pervasiveness and refinement of common analogic frames in literature (1987; Lakoff & Turner 1989); Gibbs (1994) charted figurative modes of thought; Quinn offered important cultural correctives (1991). By the early 1990s the movement had spread well beyond California and the United States, with researchers and research communities all over the globe, and widespread institutional validation. The tipping point came in 1989, and—as with Government and Binding and GLOW ten years earlier—it came in Europe, with the formation of the International Cognitive Linguistics Association. The founding meeting took place during a University of Duisburg symposium, promptly rechristened after that meeting as the First International Cognitive Linguistics Conference; the second was held in conjunction with the LSA summer institute at Santa Cruz, California, marking how quickly the program was ready for prime time. The ICLA sponsored a journal, along with the annual conferences; more associations followed, more conferences, more journals. Cognitive Linguistics now has a wide presence in many departments, in most general linguistics journals and generalist conferences, to go with the more specialized conferences and journals.
What distinguishes the program is still, three decades on, often defined against Chomsky, usually Aspects-era Chomsky (e.g., Geeraerts and Cuyckens 2010b:11–17; Evans 2014). Its two most fundamental guiding principles, the Generalization Commitment, and the Cognitive Commitment, have Chomsky in their crosshairs. They were named and penned by G. Lakoff and appear in a manifesto gracing the first issue of the first volume of the journal Cognitive Linguistics.
The Generalization Commitment . . . is a commitment to characterizing the general principles governing all aspects of human language.
The Cognitive Commitment . . . is a commitment to make one’s account of human language accord with what is generally known about the mind and the brain, from other disciplines as well as our own. (G. Lakoff 1990:40)18
The Generalization Commitment is a clear repudiation of Chomskyan reductionism. The key phrase is “all aspects of human language,” which includes phonology, morphology, and syntax, of course, but also pointedly semantics, pragmatics, discourse, even aesthetic aspects, the latter three being areas of especially low interest to Chomsky (Lakoff 1989:55–56; 1990:40). The Cognitive Commitment is a clear repudiation of Chomskyan autonomy, which puts knowledge of language in a special category, encapsulated away from other aspects of cognition. Against these two commitments, Lakoff lines up a row of unwholesome commitments, prominently including “The Chomskyan Commitment . . . to try to understand syntax as a branch of recursive function theory, that is, as the study of the algorithmic manipulation of abstract symbols without regard to their interpretation, outside of real time, and not taking general cognitive mechanisms into account” (G. Lakoff 1989:56). It’s okay to hold such principles as working hypotheses, he says, but the Generalization and Cognitive Commitments take precedence; if a conflict arises between recursive function theory and an empirical generalization or a fact about general cognition, it is recursive function theory that must go. “That’s what it means,” he says, “to be committed to linguistics as empirical science, not [as] philosophical speculation” (G. Lakoff 1989:56).
The motivation behind those principles, indexed by the nose-thumbing that often accompanies their presentation, nicely illustrates the sociorhetorical forces that move scientific inquiry. The principles now orient a broad community of scholars: Cognitive Linguistics is distinguished in the ecology of language studies by its allegiance to a wide range of factors conditioning how people speak and to a wide range of data correlated with those factors (that is, the Generalization Commitment); and by seeking explanations or corroborations in what is known from psychological research about how people conceptualize and reason and perceive (that is, the Cognitive Commitment). The most commanding theme from these general cognition studies for Cognitive Linguistics is categorization. Generative linguistics has its own system of categorization, organized in binary terms. Words are [+ Verb] or [− Verb], [+ Noun] or [− Noun], and so on. Categorization is all or none. But human cognition seems to be looser than that, more associative in its category assignments, blurrier at the boundaries between categories; squishier.
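For readers who find a scrap of code clearer than a feature matrix, here is a minimal sketch of the contrast in Python. It is my illustration, not any linguist’s formalism; the features and the similarity measure are stand-ins chosen only to make the all-or-none versus graded distinction visible.

```python
# A minimal illustrative sketch (mine, not any linguist's formalism) of the
# contrast drawn above: all-or-none categorization by binary features versus
# graded, prototype-style categorization by similarity to a central exemplar.

def binary_member(features: dict[str, bool], category: dict[str, bool]) -> bool:
    """All-or-none: every feature the category specifies must match exactly."""
    return all(features.get(f) == v for f, v in category.items())

def graded_member(features: set[str], prototype: set[str]) -> float:
    """Squishy: membership is the share of prototype features exhibited."""
    return len(features & prototype) / len(prototype)

# Binary view: a word either is or is not [+ Noun].
NOUN = {"+N": True, "+V": False}
print(binary_member({"+N": True, "+V": False}, NOUN))  # True: in the category
print(binary_member({"+N": True, "+V": True}, NOUN))   # False: no in-between

# Graded view: some pieces of furniture are more furniture-y than others.
FURNITURE_PROTOTYPE = {"has legs", "rigid", "movable", "sits on the floor",
                       "supports people or things", "lives indoors"}
chair = FURNITURE_PROTOTYPE                      # matches the prototype exactly
rug = {"movable", "sits on the floor", "lives indoors"}
print(graded_member(chair, FURNITURE_PROTOTYPE))  # 1.0: a central member
print(graded_member(rug, FURNITURE_PROTOTYPE))    # 0.5: a marginal member
```

The second function is the one that matters for what follows: membership comes in degrees, and the degrees radiate out from a center.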
There was a difference between the two leading proponents of nondiscrete categorization in linguistics, Ross and George Lakoff. Ross’s work was completely internal to linguistics. Even in a long paper addressed to psychologists—“Three Batons for Cognitive Psychology” (Ross 1974b)—he appealed to nothing beyond at-home distributional arguments. He saw his cross-disciplinary job as handing over results (the “batons” of the title) to psychologists for them to make sense of in terms of their own field. Not Lakoff. Lakoff was always wandering down hallways, knocking on doors, going to exogenous conferences, talking to philosophers, psychologists, anthropologists, bottlewashers and baristas, integrating whatever he found.
Women, Fire, and Dangerous Things features three long case studies—deep dives into data—all exemplifying some aspect of prototypicality in cognition. One, on the concept of anger, explores his “conceptual metaphor” research in terms of embodiment, organizing a superficially diverse heap of expressions in terms of heat and pressure (boiling mad, seething, steamed, bursting a vein, blowing a gasket, hothead, simmer down, keep your cool, contain yourself . . .). There is a prototypical conceptualization of anger as a hot fluid under containment, which our talk about anger reflects. Another study, on there-constructions, organized eleven types of there-expressions (called deictics, a word related to index finger, because they “point” at discourse elements) around a central usage. Here are a few:
Both studies were important for different strands of the program, and therefore for the overall impetus toward Cognitive Linguistics, the conceptual study of anger engaging Frame Semantics, the syntactic study of there-expressions helping lay the groundwork for Construction Grammar. But the third one was the most definitive of the three, quickly becoming a locus classicus for Cognitive Linguistics. It was based on “The Story of Over,” a 1981 master’s thesis by Claudia Brugman written under Lakoff (Fillmore and Kay were also on the committee). It was in circulation within a few years, both in North America (via the Indiana University Linguistics Club) and Europe (via the Linguistic Agency of the University of Trier, coinciding with a three-day lecture series by Lakoff), and formally published the year after Women, Fire (Brugman 1988 [1981]). It is widely cited in the developing early literature of Cognitive Linguistics, and invoked in most overviews and introductions to the framework, and featured in its textbooks. As a Cognitive Linguistics marketing tool, it rivals Transformational Grammar’s Passive or Affix-hopping.
The story of over touches all the bases. It is a paradigm, indeed a prototype, for Cognitive Linguistics. It implicates polysemy (the “many senses” of a word). It relies fundamentally on processes related to rhetorical figuration. It features prototype effects. It involves Idealized Cognitive Models (ICMs), one of Lakoff’s most celebrated and cited constructs. Of course, “conceptual metaphor” theory is in the mix. The arguments are even based in part on the work of one of Langacker’s students, Susan Lindner (1981, 1982), and Brugman maintains Langacker’s basic machinery (though Langacker had not quite seen the terminological light at this point; the shingle outside his office read Space Grammar at the time, the label he used before borrowing Lakoff and Thompson’s). And it puts all these notions into effective play in the analysis of a lowly preposition, over.
The story includes such data as:
These sentences no doubt all look dreadfully routine, rivaling John is easy to please in the yawn-inducement department. Over is a simple spatial-relation word, not a high-density noun or verb; not as rich semantically, one would assume, as even the bachelor of Katz and Fodor or the kill of G. Lakoff, let alone the piquant lexemes of Quang Phuc Dong. (You want polysemy? A word with near-infinite referential activity? Try fuck.)
But the lowliness of over is the point, its mundanity, its utter disappearance into the backwash of language. Brugman showed that, when you press on that lowly preposition, a lot of semantics comes out. I could spell the meanings out for you—no doubt you could spell out a few yourself, with a bit of time and thought, now that they are arrayed in front of you—but they lend themselves even better to visual depiction, as in Figure 9.3. The details are abstracted away. You can’t see what colors Riley’s lovely painting uses or the jaunt in Addy-Rae’s step as she tramps over the hill, or whether Addyson’s rock is granite or limestone, on the beach or in the forest, large or small. The images are idealized representations of the relative positions and motions of the entities (TRs and LMs); they are ICMs.
We find ourselves beset, yet again, by the alphabetic fetish. TR (= trajector) and LM (= landmark) are Langacker’s terms for (respectively) the thing “within a relational profile” and the thing situating that relation: the TR in 5, Riley’s painting, is situated above the LM, the mantle.19 (I told you it was easier to show a picture.) An ICM (= Idealized Cognitive Model), meanwhile, is a skeletal representation of categorical knowledge governed by prototypicality and networked relations.
Birds all have wings, beaks, and feathers, and they lay eggs; most of them are smallish, fly, and nest in trees. Recalling our Rosch, people categorize birds on the basis of these properties and tendencies, and we regard certain bird representatives as more birdy than others based on their specific clusters. There is a kind of centrality for such clusters and representatives; and therefore a kind of marginality for other clusters and representatives; and therefore a kind of network among instances radiating out from an idealized center. But wings and beaks and eggs are categories as well, with their own property clusters and representatives, their own centralities and marginalities. And these categories are relevant to homologies among categories. Birds are homologous to bats, which have wings and fly, as well as to lizards, which nest and lay eggs. So, there are networks within networks within networks in our knowledge representations. ICMs chart knowledge in these terms. They also reflect knowledge in these terms, so the best way to understand ICMs is to look at their exempla, which takes us back to Figure 9.3.
Over, we can see in these pictures, confirming my intuitions and I’m guessing yours, has an above quotient to its central meaning. Sentence 5 capitalizes most clearly on that quotient, though 5–14 all exercise aboveness in some degree: Riley’s painting is above the mantle, but Addy-Rae is also above the hill she is flying over. We’re not done, of course: over also has an across quotient, illustrated by 7 and 8, themselves differentiated by whether or not there is contact between the relevant entities. Now, across implies motion, and motions have completions, end-points: sentences 9 and 10 focus on completion. When Sydney’s trajectory ends, she is on the other side of the wall. When we get to the end of crossing over the hill, we find Sydney. In sentence 6, Riley’s sculpture is above some unnamed stable entity (the ground, the floor) and in some contact with it when its motional trajectory starts; when the movement has ended, a different part of the sculpture will be in contact with that entity. Sentences 11 and 12 show there is a coverage quotient as well. The picnic blanket is above the ground, in contact with the ground, but also covering a noteworthy portion of that ground, and Linden covered a lot of the place he ran all over; the difference is that the trajector is effectively a stationary plane in the first case, a mobile point in the second case, while the landmark is a plane in both cases.
Sentences 6, 13, and 14 are curious anomalies: there is nothing apparently to be above, nothing to move across, no point of contact or coverage. But they resolve elegantly with the notion of reflexive trajectory (Langacker 1982b [1979]:268; Lindner 1981:187), in which subparts of an entity are in relation to other subparts (so, the trajector is the landmark, and vice versa). In 6, one side of Addy-Rae’s sculpture is above the other side when the trajectory has ended; one side of Addyson in 13 is above her other side, and one side of the rock in 14 is above the other side, at the end of their trajectories. And, in a sense, one part travels across another part in each reflexive case as well.
With 15–19, we have a different kettle of fish, however. There are no spatial relations, no spaces at all; nothing is above or across or in contact with anything else, just talking and singing and mild whimsy. What exactly are Brugman and Lakoff (. . . I apologize ahead of time; it was too hard to resist . . .) trying to put over on us with examples like these? The question is rhetorical. So is the answer: these uses of over are analogically spatial. They manifest that analogical framing phenomenon Lakoff and Johnson call “conceptual metaphor.” Again, pictures help (Figure 9.4). A song, like most durational events (plays, movies, vacations, sex, sleep, illnesses, relationships, life, games, mowing the lawn, writing and reading books, . . .), lends itself naturally, relentlessly even, to a linear spatial analogy. We routinely talk about time as if it had length (long days, short nights, stretching out our moments). So, Kevin’s song in Sentence 15 is very much like Sydney living on the other side of the hill (Sentence 10); when you get to the end of the time it takes to go across the durational space, it is over. In another standard-issue analogic framing, abstract objects are habitually reified, framed as if they are visible and palpable. So, Kevin’s song in 16 is framed as a thing that might be seen; sight, for its part, is often framed in a folk theory as if it goes out from the eyes, like Superman’s heat vision. When something is overlooked, it is as if the line of sight goes across the space above something, missing it altogether.20 Conversations can cover a topic (as in 17), but since conversations are durational, rather than spatial, the activity is more like Linden covering a lot of ground as he runs about (12) than like Layla’s picnic blanket obscuring part of the ground (11). Similarly, while lunch is certainly a durational event (like the song of 15), it can also be reified (like the song of 16). Marianne and Jeff, our compound trajector of 18, talk over lunch as if they were walking over a hill (or, for that matter, a dale).
As for my sorry joke about too many example sentences: most of the stuff that one quantifies has some conventional amount that serves almost literally as an LM (indeed, such quantities are often called colloquially benchmarks). Six feet, for instance, is a benchmark for tallness in men (one often hears, “he is over six feet”). Well, even in a book like this one, which can be guilty of excess detail, one might assume some general benchmark for a reasonable number of example sentences, a limit—five, say, or ten—which fifteen exceeds (in Sentence 19). So, fifteen sentences goes over some vague but expected threshold much like Sydney going over a wall in 9 (Brugman calls this “boundary-traversal,” 1988 [1981]:24).21
Rather than complaining about excess, however, you might want to thank me for restraint. The actual number of individual senses for over that Brugman charts is unclear, depending on how one groups those senses. But it is way over (threshold usage) what I have sketched out here; conservatively, around double.22 Even more impressive is the way she links them together, while simultaneously teasing apart a subtle range of meanings, through a few key factors. While she is working from the template of Lindner’s (1981) superb doctoral work on the prepositions out and up, Brugman is mining very new territory here, unearthing remarkable results, and this bears repeating for such a germinal piece of work: it is a master’s thesis, by someone in her early twenties.
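Brugman’s network invites the same kind of sketch. What follows is my own paraphrase of the radial idea, not her inventory of senses or her notation: each sense is a node, each link records the factor that extends one sense into another, and the analogical senses hang off the spatial ones.

```python
# A sketch of the radial idea only; the sense labels and glosses paraphrase the
# discussion above and are not Brugman's actual inventory or notation.

senses = {
    "ABOVE":          "TR located above LM (the painting over the mantle)",
    "ABOVE-ACROSS":   "TR passes above and across LM (flying over the hill)",
    "ACROSS-CONTACT": "TR crosses LM with contact (tramping over the hill)",
    "END-POINT":      "focus on completion; TR ends up on the far side (over the wall)",
    "COVERING":       "TR covers a portion of LM (the blanket over the ground)",
    "REFLEXIVE":      "subparts of TR relate to each other (the rock rolls over)",
    "TEMPORAL":       "a durational event framed as a path (the song is over)",
    "EXCESS":         "a quantity passes a threshold LM (over six feet)",
}

# Each link records the factor that extends one sense into the next,
# radiating out from the central ABOVE sense.
links = [
    ("ABOVE", "ABOVE-ACROSS", "add a path of motion"),
    ("ABOVE-ACROSS", "ACROSS-CONTACT", "add contact between TR and LM"),
    ("ABOVE-ACROSS", "END-POINT", "profile the end of the path"),
    ("ABOVE", "COVERING", "stretch the TR from a point to a plane"),
    ("ACROSS-CONTACT", "REFLEXIVE", "let TR and LM be parts of one entity"),
    ("END-POINT", "TEMPORAL", "map duration onto the spatial path"),
    ("END-POINT", "EXCESS", "map quantity onto a vertical path"),
]

def extensions(sense: str) -> list[tuple[str, str]]:
    """Senses reachable in one step from a given sense, with the linking factor."""
    return [(dst, factor) for src, dst, factor in links if src == sense]

for target, factor in extensions("ABOVE-ACROSS"):
    print(f"ABOVE-ACROSS -> {target}: {factor} -- {senses[target]}")
```

Walking the links outward from ABOVE gives a crude, text-only cousin of Figure 9.3: a center, extensions, and extensions of extensions.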
When syntax is both autonomous and in the grammatical driver’s seat, words are just passengers. Several trends in the Linguistics Wars, on both sides, had increased the grammatical dynamism of words into a sort-of Aunt Beulah role, backseat driving. But the Aspects lexicon was a vast bus, with tens of thousands of seats, each one a pigeonhole for a word, in Dennett’s imagery; Phrase Structure trees were all that truly counted in the base component, words existing mostly to find their way to the right twig (2003:673). Cognitive Linguistics emptied the bus, wiped the slate clean, started over. What would a cognitive account of language look like if Chomsky’s autonomy thesis had never been raised, they wanted to know, if linguistics did not exist only to serve syntax?
Cognitive Linguistics focused on the concepts historically associated with words and meaning: polysemy, figurative dimensions, and categorization. But there were no pigeonholes, no trees, no syntactocentricity, no autonomy; in a sense, no words and no syntax at all, a rather continuous movement from meaning to expression:
There is no meaningful distinction between grammar and lexicon. Lexicon, morphology, and syntax form a continuum of symbolic structures, which differ along various parameters but can be divided into separate components only arbitrarily. (Langacker 1987:3)
It is symbols all the way down, with their various combinatoric dispositions. Two theories affiliated with Cognitive Linguistics chart these dispositions from different ends: Frame Semantics and Construction Grammar. Frame Semantics charts their possibilities. Construction Grammar charts their recurrent realizations.
Frame Semantics centers the meanings of words within a highly interconnected network of situational understandings—frames. Reciprocally, the frames are understood by the conceptual relationships among their central words. While frames are rich in details and interconnections—“We can think of a frame as a network of nodes and relations,” Marvin Minsky says (1974:1)—they are still highly abstract and idealized. Fillmorean frames are stripped down, prototypical representations of slices of knowledge, as they implicate lexical semantics, in a vast web of such representations. The commercial transaction frame, for instance, includes the mutually defining concepts of seller and buyer; you can’t have one without the other. Nor can there be a sale without the transfer of goods or the performing of services. Money must change hands, and so on. Understanding a sentence like 20 is not a matter of accessing individual denotations and designations but of fitting it into a conceptual system that connects nodes such as the commercial transaction frame, the gift frame, the pet ownership frame, and so on, each implicating the participation of various thematic roles, like agent (the buyer of the cat, the giver of the gift), theme (the entity that is bought and is given), recipient (the receiver of the gift), and so on; these are the descendants of Fillmore’s deep cases.
The sentence doesn’t give us very many details, but because it activates the commercial transaction frame, we know that questions like “How much was it?” and “Where did she buy it?” have answers. Because the gift frame is hooked in, questions like “Did she give it to him yet?” and “What was the occasion?” even “Did she put a bow on it?” have answers. Because pet ownership is a node in the network of 20’s semantics, we might ask, or wonder, if the cat has a name, if it has had its shots, is house-trained, and so on.
All this stuff clearly has to be part of what we know and how we act in the world, including how we act with language. In the Katz-n-Fodorian tradition, a word like buy is labeled for how it can co-occur with nouns like Tess and Urs and cat, but semantically it is little more than an address for some other part of the mind where you can go and do your thinking about such matters. In Frame Semantic terms, 20 is embedded in that thinking. Katz and Fodor assume a kind of semantic autonomy; in particular, “information about setting” is banished (1964b [1963]:484), but Fillmore (1982:115) saw the meaning of a word as a function “of a small abstract ‘scene’ or ‘situation’”: a setting. The move from Aspects-style semantics (and from Montagovian model-theoretic semantics for that matter) to Frame Semantics is the move from a dictionary + rules to an encyclopedia; or, in Fillmore’s terms (1985a), from T-semantics to U-semantics, from a preoccupation with Truth conditions, something that most people don’t think about, unless those people are semanticists, to a focus on an essential goal of most, even semanticists, in their daily traffic in meaning, Understanding.23
Other sentences that activate the commercial transaction frame might hook into the barter frame in the network, when the price or the use of money is negotiable, or the fraud frame if one of the agents is deceptive. If this account of meaning doesn’t look familiar, try thinking of it as a really big tree with lots of substructure strings like x cause to own (x, z) and exchange (w, z), and x intend to cause to own (y, z), or whatever. Generative Semantics was turning the Aspects dictionary into something more like an encyclopedia, and Frame Semantics pulled it off. Those gigantic trees, if you add transderivational constraints, are networks.
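Since sentence 20 itself is not reproduced here, take it, on the reading discussed above, as Tess buying a cat that ends up with Urs. A minimal sketch, again mine rather than Fillmore’s formalism or the later FrameNet schema, shows how little machinery the basic idea needs: frames are bundles of roles linked to other frames, a sentence fills some of those roles, and the unfilled roles are exactly the questions the frame licenses.

```python
# A minimal sketch of the frame idea; the frame names, roles, and links are
# illustrative assumptions, not Fillmore's formalism or the FrameNet schema.

from dataclasses import dataclass, field

@dataclass
class Frame:
    name: str
    roles: list[str]
    related: list[str] = field(default_factory=list)  # links into the network

FRAMES = {
    "commercial_transaction": Frame("commercial_transaction",
                                    roles=["buyer", "seller", "goods", "money"],
                                    related=["gift", "barter", "fraud"]),
    "gift": Frame("gift", roles=["giver", "recipient", "theme"],
                  related=["commercial_transaction"]),
    "pet_ownership": Frame("pet_ownership", roles=["owner", "pet"]),
}

# Sentence 20, on the reading discussed above: Tess buys a cat, Urs gets it.
sentence_20 = {
    "commercial_transaction": {"buyer": "Tess", "goods": "the cat"},
    "gift": {"giver": "Tess", "recipient": "Urs", "theme": "the cat"},
    "pet_ownership": {"owner": "Urs", "pet": "the cat"},
}

# Unfilled roles are exactly the questions the frame licenses:
# "Where did she buy it?", "How much was it?", and so on.
for frame_name, fillers in sentence_20.items():
    open_roles = [r for r in FRAMES[frame_name].roles if r not in fillers]
    print(f"{frame_name}: open questions about {open_roles}")
```

The related links are where the barter and fraud frames of the next paragraph would hook in.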
The “twin sister” of Frame Semantics is Construction Grammar (Dirven, Pütz, & Radden 2008). The two arose largely at the same time, in roughly the same place, with many of the same proponents, in consilience with the same principles, in reaction to the same history.
Construction Grammar (CxG) ignores the whole point of transformations (moving things around subterraneously to express surface relations), attending rather to all the bits and pieces of language, arranged in more or less stable patterns, however they got that way. Construction Grammar is literally antithetical to the entire Chomskyan program, which has, since the early 1960s, reduced, restricted, and eventually eliminated the notion of constructions (like, say, actives and passives and questions) from its core mechanisms. As befits its name, CxG glories in constructions, of every shape and size; often, the weirder, the gloriouser (that pattern, by the way, is a CxG favorite, colloquially called “The X-er, the Y-er Construction,” because one comparative correlates with another—here, X = weirder and Y = gloriouser). Construction Grammar also aspires “to account for the entirety of each language” (Kay & Fillmore 1999:1), in stark contrast to Chomsky’s categories of aversion (performance, the periphery, E-Language). It is also doubly or triply cursed in Chomsky’s eyes because most Construction Grammars (there are lots of them) are usage-based, which means they get their evidence exclusively from corpora, something that Chomsky has always considered the height of uselessness in linguistics. When Lees was pretty much the Transformational Grammar Press Officer in the early 1960s, for instance, here is what he told a linguist who had just received a corpus-studies grant:
That is a complete waste of your time and the government’s money. You are a native speaker of English; in ten minutes you can produce more illustrations of any point in English grammar than you will find in many millions of words of random text. (Francis 1979:110)
If anything, the antipathy has grown. Chomsky has more recently quipped that “corpus linguistics doesn’t mean anything.” His argument? This analogy:
Suppose physics and chemistry decide that instead of relying on experiments, what they’re going to do is take videotapes of things happening in the world and they’ll collect huge videotapes of everything that’s happening and from that maybe they’ll come up with some generalizations or insights. (Andor 2004:97)
Construction Grammar also does not wait around for a semantic interpretation to be assigned through some special component. Constructions are by definition symbolic, which means they are form-meaning pairs.
The most famous, much traveled, argument for Construction Grammar comes from another one of Lakoff’s students, Adele Goldberg, on the Ditransitive Construction. In the Aspects view, maintained pretty much throughout the Wars, certain verbs are intransitive (subject only), others are transitive (subject and object), and a few are ditransitive (one subject, two objects). The ditransitives all have to do with transferring control or ownership over objects. Sentence 20 exhibits such a transfer. In its full trajectory, ownership of the cat transfers from Tess to Urs. Aspects tells us that is one of the functions of the verb, buy. Most of the time, buy is a simple transitive (“Tess bought a cat”), but once in a while, as in 20, buy shows up as ditransitive, so the Aspects lexicon gives it an extra line in the dictionary (a strict subcategorization feature) to signal its extra duty.
But Construction Grammar tells us the on-again, off-again ditransitivity of buy is just an illusion. It’s not the verb that calls the shots. It’s the construction: “the ditransitive syntactic pattern is more felicitously associated directly with the construction as a whole,” Goldberg says, “than with the lexicosemantic structure of the verbs” (1992b:69). Her argument is remarkably sophisticated, but it comes down to (or, at least, for our purposes, we can pretend it comes down to) the way we can describe such activities as the ones in 21–26.
The verbs in 21–23 look like the verbs in 24–26. But they are doing something very different. For Aspects, oh well, sometimes shoot is transitive, sometimes it is ditransitive, and I guess its meaning is different in both cases; oh well, whip is sometimes transitive . . . But Goldberg points out that the Ditransitive Construction makes the verb do its bidding, not that the verb is hanging around in some chat room with the tagline, “looking for a subject and two objects.” If you doubt her, take a look at 27–29.
None of the key words here are even verbs, strictly speaking, and the precise meaning may be a little hazy without more context, but (1) the construction insists that they have to be verbs if they are going to have a job; and (2) no one has any trouble understanding that whatever elbowing, saucering, and goldberging might be, Tavares had the puck before he engaged in that activity and Marner had it when he was finished. It doesn’t matter if goldberging is completely opaque or if Goldberg is known to the interlocutors for her particularly impressive behind-the-back drop-pass: Tavares had the puck; Tavares did something; then Marner had the puck. The construction tells us that. While they are not seamlessly integrated, you can see easily here how compatible Construction Grammar is with its twin sister. Understanding the ‘verbs’ to saucer and to goldberg here involves fitting them to a transfer frame in a network that connects them to a team-sport frame.
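The same sketching exercise works for Goldberg’s point, with the same caveat that this is my illustration rather than her formalism: the ditransitive pattern is itself a form-meaning pair that contributes the transfer reading, so even a nonce verb slotted into it comes out meaning that somebody caused somebody else to receive something.

```python
# A minimal sketch (my illustration, not Goldberg's formalism): the ditransitive
# pattern is itself a form-meaning pair, and it supplies the transfer reading
# no matter what verb fills the slot, including nonce verbs.

from dataclasses import dataclass

@dataclass
class DitransitiveConstruction:
    form: str = "[Subj V Obj1 Obj2]"
    meaning: str = "agent causes recipient to receive theme"

    def construe(self, subj: str, verb: str, obj1: str, obj2: str) -> str:
        # The construction, not the verb's lexical entry, contributes the
        # transfer semantics; the verb just names the manner of transfer.
        return f"{subj} causes {obj1} to receive {obj2} (by {verb}-ing)"

ditransitive = DitransitiveConstruction()
print(ditransitive.construe("Tess", "buy", "Urs", "a cat"))
print(ditransitive.construe("Tavares", "saucer", "Marner", "the puck"))
print(ditransitive.construe("Tavares", "goldberg", "Marner", "the puck"))
# Even with an opaque verb, we know Tavares had the puck and Marner got it.
```

The point of the toy is the division of labor: the verb slot is open, the construction’s meaning is not.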
I said earlier, paraphrasing Langacker’s lexico-syntactic continuum, “It is symbols all the way down.” Here’s what Goldberg says: “It’s constructions all the way down” (2006:18). Potato, pahtahtah. Meaning and form, form and meaning. They come as a pair in Cognitive Linguistics. That’s what Generative Semantics was after all along, but now all the elaborate connective mechanisms inherited from the Chomskyan program are gone.
Where we end up, here in the twenty-first century, is with two basic polarities associated with the word cognitive. It came into linguistics under the influence of Chomsky, who famously said in the heyday of Transformational Grammar that linguistics was a “branch of cognitive psychology” (1968:1), by which he meant something very much like what Miller, Galanter, and Pribram meant in 1960: a fundamentally computational mental system—modular, procedural, rule-governed, and, in principle, amenable to precise modeling. But Cognitive Psychology changed under Chomsky’s feet. Predominantly it now means a fundamentally organic and embodied mental system, decidedly nonmodular, in which perception, memory, principles of categorization, and general dispositions of knowledge formation, such as analogy, correlation, and meronymy, are all deeply enmeshed. Allegiance to this organic cognition pole defines the confederacy of approaches under the label Cognitive Linguistics. Allegiance to the computational cognition pole continues to define Chomsky’s approach.
While he has now terminologically dissociated himself from transformations and grammaticality and D(eep)-Structure and almost everything else that brought him such rapid fame across the academic landscape in the 1960s, while he is bucking the cognitive and computer science trends that he helped to kindle, while much of the field is working on problems different from his, with methods different from his, probing data different from his, Chomsky remains “the 800-pound gorilla in linguistics” (Hughes 2006 [2001]:83).