3

Generative Semantics 1: The Model

Imitation alone is not sufficient

—Quintilian (Institutio Oratoria 10.2)

Disciples are usually eager to improve on the master, and . . . the leader of a movement sometimes discovers he cannot or does not wish to go quite as fast to the Promised Land as those around him.

—Gerald Holton (1988:35)

Trouble in Paradise

The Chomskyan universe was unfolding as it should in the middle of that optimistic and captious decade, the 1960s. The Bloomfieldians had been driven to the margins. There was a cadre of feisty, clever, dedicated linguists and philosophers working on generative grammar. Young linguists everywhere were clamoring for their wisdom. The MIT graduate program was up and running, generating an impressive, in-demand, soon-to-be-influential string of philosophical doctors. MIT was a stronghold of truth and purity in language studies, and branch plants were starting to dot the hinterlands—California, Illinois, Indiana, Ohio, Texas. Chomsky was the uniformly acknowledged intellectual leader, Aspects the new scripture.

The defining concern of the work codified, extended, and enriched in that scripture was to get beneath the literal surface of language and explore its subterranean logico-semantic regularities, to find its Deep Structures, to get at meaning. “In general,” Chomsky told the troops, “as syntactic description becomes deeper, what appear to be semantic questions fall increasingly within its scope; and it is not entirely obvious whether or where one can draw a natural bound between grammar and ‘logical grammar’ ” (1964d [1963]:51). The talismanic Deep Structure is the basic motivating idea for Transformational Grammar, realizing a “very fruitful and important insight . . . as old as syntactic theory itself” (1965 [1964]:221n33). It’s old, it’s new; it’s the reason we get out of bed in the morning and do linguistics.

Everybody was happy. But no one was content (well, maybe Katz). Postal expressed his unwillingness to stay put by digging deeper and deeper under the surface, working in a direction which rapidly came to be known as Abstract Syntax, and several other linguists joined this project—most notably Robin and George Lakoff, John Robert (Haj) Ross, and James (Jim) D. McCawley. Together they pushed syntax deeper and deeper yet, until—to the extent that semantics had substance at the time—their Deep Structures became virtually indistinguishable from semantic representation. It would not be an exaggeration to say that their Deep Structures were the closest things to explicit semantic representations in mid-1960s Cambridge. They took Chomsky at his word and developed a grammar in which Logical Form had a central place. At one point, early in 1967, the program mutated into Generative Semantics. Deep Structure was the first casualty. Who needs Deep Structure when you have a Semantic Representation and a Surface Structure?

Meanwhile, there was this new student at MIT, Ray Jackendoff, moving in a different direction altogether, one that compromised the Katz-Postal Principle, loosened Deep Structure’s stranglehold on meaning, and shifted attention to the hitherto “much less interesting level of Surface Structure” (Chomsky 1966d:588).

Chomsky? He stunned everyone. Having coaxed the semantic genie out of the bottle, he spent the next several years trying to stuff it back in again, in partial alliance with Katz, Jackendoff, and a few others, mostly his students, against the increasingly agitated resistance of the Generative Semanticists. Arguments collided, or sailed past each other, and the field divided, then fractured, then reformed itself largely around two polarities: Chomsky at one pole, Lakoff at the other.

But these are just the bones of the story. There is sinew and gristle, hair and hide, wet stuff and dry, yet to tell. We can start, as in all fleshy matters, with the progenitors.

Home-Brewed Dissension

As of 1965, and even later, we find in the bowels of Building 20 [the home of the MIT linguistics department] a group of dedicated co-conspirators, united by missionary zeal and shared purpose. A year or two later, the garment is unraveling, and by the end of the decade the mood is total warfare. The field was always closed off against the outside: no serpent was introduced from outside of Eden to seduce or corrupt. Any dissension had to be home-brewed.

—Robin Lakoff (1989:943–44)

Paul Postal, studying under Floyd Lounsbury at Yale in the late 1950s, met Chomsky on one of his speaking visits. Postal converted almost overnight:

I was very impressed, first with the power of his thought, but also it seemed that this stuff was from a different planet, in the sense that it was based on an entirely different way of thinking from anything I had come into contact with before.

He hitched his corrosive polemical style to the new movement, and his classmate, Jay Keyser, remembers the note-passing beginnings of his campaign against the Bloomfieldians:

I remember sitting there [in class at Yale], listening to Bernard Bloch lecture on morphemic postulates. Bloch would be saying something, and Paul would write me a note with a counterexample. I thought to myself “Jesus. Postal’s right. This is just not working.”

Eagerly escaping the Bloomfieldian confines of New Haven, Postal finished his dissertation in Cambridge as part of the Research Laboratory of Electronics (RLE) and joined the MIT staff, where he helped shape the first transformational generation, and, of course, hammered out some of the properties of Deep Structure with Katz. He also began working on a program of increasing abstraction and semantic perspicacity that excited a number of linguists; most notably, George Lakoff.

George Lakoff first encountered generative grammar as an MIT Humanities undergraduate in 1961, taking classes from Chomsky and Halle. He reports finding it all pretty dry and uninspiring. But when he went off to do graduate work in English at Indiana, he began to read the material on his own, found it a good deal more compelling, and embarked on some unorthodox work trying to transform the principles of Propp’s Morphology of the Folktale into a Generative story grammar. Returning to Cambridge in the summer of 1963 to marry Robin Tolmach, he met Ross and McCawley, and found a job on Yngve’s machine-translation project in the RLE. Katz and Postal were down the hall, working on Integrated Theory, and he spoke with them frequently. Through this regular participation in the MIT community, he became more directly interested in language and returned to Indiana to work on a doctorate in linguistics, under transformational early adopter Fred Householder. The following summer he attended the Linguistic Institute, at which Postal was teaching, renewing their friendship. So, when Householder left for a sabbatical during Lakoff’s dissertation year, he naturally headed back to Cambridge, where Postal effectively directed his dissertation, and John Robert Ross became his very close associate.

Haj Ross, son of a nuclear physicist, grandson of a Nobel Peace Prize laureate, did his undergraduate work at Yale, where Postal was chafing at the Bloomfieldian bit, and Bloch was the reigning theorist. He studied under both, but was, as he puts it, a “typical fatuous undergraduate and [did] zero work.” Ross was “interested in language always,” but that didn’t have any effect on his studying:

I was trying to grow up a little bit through the helpful tutelage of playing poker about forty hours a week and football and being on the radio station and in a fraternity and drinking a lot and going to class occasionally and never going to the library or doing any studying. (Huck & Goldsmith 1995:121).

Understandably, when he went off to MIT to talk to Halle about enrolling in its increasingly important linguistics program, Halle was singularly unimpressed, suggesting he go somewhere and “prove himself.” He did. In fact, he went off to Chomsky’s old stomping grounds and completed a master’s thesis under Zellig Harris at the University of Pennsylvania. When he returned to MIT he secured enrollment in its brink-of-the-Aspects-theory linguistics program. One of the shining stars in a stellar class, he went on to produce a mammothly influential dissertation. He also began collaborating closely with Lakoff, particularly on Postal’s abstract genre of analyses, and gained the friendship of James D. McCawley.

Jim McCawley, in the estimation of his teacher, colleague, friend, and opponent, Noam Chomsky, was “one of the smartest people to have gone through the field.” Lees places him among “the sanest and most astute linguists of our time.” Ross and Lakoff go on at great length about his intelligence, sensitivity, humor, warmth, inventiveness, pedagogical gifts, musicianship, culinary talents . . . He was “the generative grammarians’ Shiva, the many-handed one of transformational theory” (Zwicky et al. 1970:vii), an important, diverse, challenging linguist. He grew up precocious in the extreme, skipping several grades, gaining early admission to the University of Chicago, moving quickly into graduate school, and earning an MS in mathematics at twenty. He took up a Fulbright Fellowship in mathematics and logic in Germany, where he lost interest in mathematics and started taking language courses. When he returned to the University of Chicago, under faculty expectations that he would be taking a doctorate, he poked around the library for anything that would support his new love of languages. “I came upon Syntactic Structures and it really turned me on,” he remembered.

Not long after that I saw the announcement for the new linguistics graduate program that they were starting at MIT. I applied, got accepted, went there, became a linguist, and have been enjoying life much more ever since then. . . .

Languages are weird and wonderful things. As long as you are perceptive enough there is plenty to keep you happy and busy. (Cheng & Sybesma 1998:26)

By the time he entered MIT, McCawley spoke several languages fluently, played several instruments just as fluently, and was supporting himself by translating Russian math books (Lawler 2003:614). He also had a thick Glasgow accent, overgrown eyebrows, a quick and lively smile, and a receding stutter (at the time, present only in English; eventually, gone altogether).

He entered the Halle-Chomsky linguistics program in 1962, distinguishing himself rapidly for both the clarity of his thought and the deftness of his wit. He was more intrigued by phonology than syntax and produced a brilliant dissertation on Japanese tone phenomena (1968b [1965]), earning the second doctorate awarded by the new department. But he soon found himself back at the University of Chicago, now teaching linguistics, and having to offer courses in syntax. He educated himself in the area by spending a great deal of time on the phone with two friends in Cambridge who were directing a syntactic study group at Harvard, Ross and Lakoff.

Each of these four men has been credited with engendering the theory (though Ross only, in his words, “as George’s sidekick”).1 Susumu Kuno, a distinguished Harvard linguist who knew all the principals and worked closely with several of them, says, “I have no doubt that George was the original proponent and founder of the idea.” Arnold Zwicky says that the theory was the joint issue of Lakoff and Ross. Newmeyer says that it was born out of work by Lakoff and Ross, under Postal’s influence. Ross, for his part, says “it’s basically Postal’s idea. He was basically the architect. [George and I] were sort of lieutenants and tried to flesh out the theory.” But Postal says that “McCawley first started talking about Generative Semantics,” and that McCawley’s arguments first got him interested in the movement. McCawley says that Lakoff and Ross “turned me . . . into a generative semanticist.” Lakoff, notably the least humble of the four, says that the idea was his, but that Postal talked him out of it for about three years, until McCawley convinced him it was correct while he was working with Ross on some of Postal’s ideas.2

However muddled this genesis is, the moral is clear: most big ideas don’t have fathers or mothers so much as they have communities. Generative Semantics coalesced in mid-1960s Cambridge—partially in concert with the Aspects theory; partially in reaction to it. Chomsky, who is not particularly interested in Generative Semantics’ parentage, says simply, “It was in the air,” and there is much to recommend this account.

As transformational analyses of syntax grew more probing and more comprehensive, from Harris on, they increasingly involved semantics. At some indistinct point—or, rather, at several different indistinct points, for several people—this program began to seem exactly wrong. It began to appear that syntax should not be borrowing from semantics so much as exploring its own semantic roots; that syntax should not be feeding semantics so much as semantics should be feeding syntax—that semantics should be generative rather than interpretive. At some indistinct point, there was Generative Semantics.

But “in the air” is far too vague. There were clear leaders. As Chomsky, quite suddenly, began to show less inclination for this type of deep syntactic work, and as Generative Semantics began to take shape against the backdrop of Aspects, it was obvious who these leaders were: George Lakoff, Haj Ross, Jim McCawley, Paul Postal. Others had influence in various ways, especially Robin Lakoff, Jeffrey Gruber, Charles Fillmore, and several MIT instructors—Edward Klima, Paul Kiparsky, and the alpha silverback, Noam Chomsky.

“I sort of believed [Generative Semantics] myself back in the early sixties,” Chomsky has said, “and in fact more or less proposed it.” The popular arguments in Cartesian Linguistics (1966a) and Language and Mind (1968) certainly support that claim. In the former, for instance, he says that “the Deep Structure underlying the actual utterance . . . is purely mental [and] conveys the semantic content of the sentence. [It is] a simple reflection of the forms of thought” (1966a:35)—as clear an articulation of Generative Semantics as ever there was. But there are large differences of scale between the Four Horsemen of the Apocalypse—as Postal, Lakoff, Ross, and McCawley became known—and everyone else.

Chomsky, in particular, certainly never adopted Generative Semantics as an active program, and everywhere that he comes close to endorsing something that looks like Generative Semantics, there is a characteristic rider attached. Just as he had a footnoted qualification about the Katz-Postal Principle in Aspects, after he equates Deep Structure and thought in Cartesian Linguistics, he says, in a footnote, that this connection is a “further and open question” (1966a:100n8). The Horsemen expressed no such reservations.

Robin Lakoff, who similarly expressed no such reservations, whose published dissertation was celebrated by one reviewer as “post-Chomskyan,” who engaged in a few skirmishes, who was prolific in this period (and afterward), who was one of the movement’s most influential teachers, and who led the movement into pragmatics, was neither the proselytizer nor the large-scale theorizer that the others were. Bucholtz and Hall call her the “lone Horsewoman,” not without cause; certainly, as they argue, she had a central enough role that she doesn’t fit very easily or even fairly into the “everyone else” category. She has never been credited with “inventing Generative Semantics,” as were George, Paul, Jim, and Haj, in various ways and combinations. But she was among the Abstract Syntax avant-garde and was much more directly involved in the development, growth, and propagation of Generative Semantics than Gruber or Fillmore. Yet if we try to assimilate Bucholtz and Hall’s phrase fully to the Generative Semantics origin story, spelling it out as “Horsewoman of the Apocalypse,” its application is somewhat less compelling. Robin Lakoff is not well known for breathing fire. In her own words, she is “basically a wuss” who never particularly liked conferencing or the cut-and-thrust of public argumentation that characterized so much of the Wars. She had little confidence, she says,

as I was listening to papers and discussion, whether thoughts coming into my mind were interesting to anyone else, so I could never raise my hand to ask them. Instead I would scribble something and show it to George, and if he thought it worthwhile he might ask it. But I guess I was sort of invisible.3

Gruber is perhaps the most interesting case of the other linguists who have been connected to the origins of Generative Semantics. His dissertation under Klima (Gruber 1965, 1976), written in an Aspects framework that assumed transformations preserved meaning, argued for “a system which comes close to what might be called a derivational semantic theory, as opposed to an interpretive one” (Gruber 1965:1). His system posited a semantically rich pre-lexical level, “deeper than the level of ‘Deep Structure’ in syntax” (Gruber 1965:2), which prominently featured semantic roles such as Agent, Source, Goal, and Theme. But he relocated to Africa largely for religious reasons just as the disagreements were starting to surface, and left linguistics altogether for two years to teach English in a Botswana village school (Collins 2014).

Fillmore, on the other hand, was very much engaged in generative linguistics through the period (and for decades beyond), publishing regularly, attending conferences, always on the cutting edge. He developed a syntactic model with key similarities to Gruber’s approach and highly compatible with Generative Semantics, including on the totemic matter of the existence of Deep Structure:

It is likely that the syntactic Deep Structure of the type that has been made familiar from the work of Chomsky and his students is going to go the way of the phoneme [which was rejected in an argument by Halle that became an instant classic for Generativists]. It is an artificial intermediate level between the empirically discoverable “semantic Deep Structure” and the observationally accessible surface structure, a level the properties of which have more to do with the methodological commitments of grammarians than with the nature of human languages. (Fillmore 1968:88)

His approach to “semantic Deep Structure” took shape in a new model he proposed, Case Grammar, largely via a set of theoretical primitives with remarkable (but apparently independent) similarities to Gruber’s semantic roles (Fillmore 1968:117n14). These relations—“deep cases” to Fillmore—are now ubiquitous in linguistics, wearing such labels as “thematic roles,” “theta-roles,” and “participant roles,” a ubiquity that is due mostly to the influence of Case Grammar, the passionate commitment of Fillmore’s scholarly activity, and the power of his arguments (along, no doubt, with the relative inactivity of Gruber). But Fillmore was not an MIT insider and—not coincidentally—had a far more ecumenical approach to linguistics than could be found among the Cambridge sectarians of the time.4

The kernel of Generative Semantics was a dissolution of the syntax-semantics boundary at the deepest level of grammar—the axiom that the true Deep Structure was the semantic representation, not a syntactic input to the semantic component. This dissolution, in real terms, began with Postal, though George Lakoff was the first to propose it actively.

Lakoff’s Proposal

The approach taken by Katz, Fodor, and Postal has been to view a semantic theory as being necessarily interpretive, rather than generative. The problem, as they see it, is to take given sentences of a language and find a device to tell what they mean. A generative approach to the problem might be to find a device that could generate meanings and could map those meanings onto syntactic structures.

—George Lakoff (1976 [1963]:44)

The first step toward Generative Semantics was a paper by George Lakoff, very early in his graduate career, which, after some preliminary hole-poking in Katz-n-Fodorian Interpretive Semantics, says, “There are several motivations for proposing a generative semantic theory” (1976a [1963]:50).5 Well, okay, this is not a step. This is a hop, skip, and hyper-jump. In three lunging moves, Lakoff wanted to replace Katz and Fodor’s just-published semantic program, to bypass Katz and Postal’s still-in-the-proofs integration of that program with transformational syntax, and to preempt Chomsky’s just-a-glint-in-his-eye Deep Structure. Yowza!

He took the paper, “Toward Generative Semantics,” to Chomsky, who was, Lakoff recalls, “completely opposed” to his ideas, and sent him off to Thomas Bever and Peter Rosenbaum for some semantic tutelage. Chomsky remembers nothing about the paper except that “everybody was writing papers like that” in 1963—a remark that is, at best, difficult to corroborate; at worst, a baseless dismissal. Bever and Rosenbaum were at the MITRE Corporation, an Air Force research lab in nearby Bedford, Massachusetts, where linguistics students went in the summer to spawn transformations.6 Bever and Rosenbaum didn’t like it either, and Lakoff remembers a huge, three-hour argument. No one budged, though Ross, another MITRE summer employee, sat in as an onlooker and thereafter began his close, collaborative friendship with Lakoff. Lakoff ran off some copies and gave one to Ross, another to McCawley, another to Postal, and sent a few more off to people he thought might be interested. It was not a hit. No one held much hope for his proposals, and no one adopted them. Lakoff does not give up easily, but he respected Postal immensely and took his advice. He abandoned the notion of Generative Semantics (or perhaps more accurately, suppressed it), and went back to work in the interpretive framework of the emerging Aspects theory.

The most important word in Lakoff’s label wasn’t generative, but semantics, in pointed opposition to syntax. The work of Katz, Fodor, and Postal, for all its inroads into meaning, and under Chomsky’s heavy influence, clearly left syntax in the driver’s seat. Not so Lakoff. He wanted to put meaning behind the wheel.

But, aside from proposing the label, Generative Semantics, and raising some of the issues that engulfed that label several years later, Lakoff’s paper is, as Ross puts it, “only good for historical gourmets.” Nor, even though a similar model a few years later would sweep through the field like a brush fire, is it really very hard to see why Lakoff’s proposal fell so flat. The paper is very inventive in places, which everyone could surely see, but it is also naïve in the extreme in other places, and is prosecuted in a tone, an arrogant certainty, that Chomskyans were used to seeing in one another’s polemics against the Bloomfieldians, but not directed at their own internal mechanisms. The first half of the paper, remember, attacks Katz and Fodor and Postal’s recent innovations. In particular, the Katz-Postal Principle requires that sentences with the same meaning have the same Deep Structure, but Lakoff casually adduces counterexamples like 1–3, where the a and b sentences all mean essentially the same thing, but the Deep Structures are quite different.

[1–3: pairs of paraphrases, a and b, whose Aspects Deep Structures differ]

These present problems for the Aspects theory, all right, but hardly incapacitating ones; there are fairly clear ways around them, none of which Lakoff explores. He just says Katz-n-Fodorian semantics run into trouble here, and blithely proposes tossing out the whole approach. Then comes the second section of the paper, the one in which he offers his replacement, and we find an even larger measure of certainty, about matters which are obscure in the extreme. Lakoff talks of semantic laws, for instance, and rules for embedding thoughts into other thoughts, and even formalizes a few thoughts. Take this rule as an example, the rule which introduces subjects, predicates, and objects—presumably (though Lakoff does not specify) replacing Chomsky’s iconic S → NP + VP phrase structure rule.

[4: Lakoff’s rule, roughly T → Predicate + Subject + (Object / 0)]

The T stands for ‘Thought,’ so the rule says that every thought must have a semantic predicate and a semantic subject, but need not have a semantic object (hence the zero). Oh, and what does a specific thought look like? As it turns out, a great deal like the bundles of features hanging down from the bottoms of Deep Structures; 5 is the thought rendered into English as “Spinach became popular.”

[5: the feature bundle representing the thought rendered into English as “Spinach became popular”]

The content of 5 is not especially important here (if you need a sample, -DS is “do something,” negated; the popularization just happens, without any agent causing it). It’s just that we see Lakoff here—brainy, cocky, and taking the cognitive claims of generative grammar far more literally than anyone else at the time—talking sanguinely about thoughts, plopping representations of them down on the page, at a time when Katz and Fodor are just starting to explore what meanings might look like in Chomskyan terms, Katz and Postal just starting to sketch out how they might fit into a Transformational Grammar, Chomsky just putting the final recursive touches on Deep Structure. The ink was barely dry on these groundbreaking works, by the architects of the theory, and Lakoff, just a year from his (Humanities!) BA, was happily, confidently, throwing them in the dustbin.

The final section of Lakoff’s highly speculative paper is “Some Loose Ends,” implying that he had completed the hard work at the grammar loom and only a few stray threads need be woven back into the rug of Chomskyan—or, with these advances, perhaps Lakovian—theory. The section reveals much about Lakoff, particularly the tendency to view his own work in the most grandiose and comprehensive terms, to look constantly at the big picture. But no one else was similarly moved by his case. Chomsky apparently saw little in it, Bever and Rosenbaum ditto, Ross and McCawley quickly forgot about it, and Postal suggested Lakoff curb his rogue enthusiasm and try to help revise the incipient Aspects model, rather than drop it altogether. He did just that, shortly setting to work on his dissertation, which not only revises the Aspects model, but stretches it about as far as it could go without breaking.

Abstract Syntax

As we proceed ever further with our investigations, we come of necessity to primitive words which can no longer be defined and to principles so clear that it is no longer possible to find others more clear for their definition.

—Blaise Pascal (1952 [1670]:431)

George Lakoff’s dissertation was “an exploration into Postal’s conception of grammar” (G. Lakoff 1970a [1965]:xii), and the published title—Irregularity in Syntax—reads like a diagnosis of the main defect Postal saw in transformational modeling. There was something wrong with the innards of Transformational Grammar, constipation maybe, which was hampering progress and clarity. The diagnosis led Postal to embark on a line of research soon known as Abstract Syntax.

The fact that Abstract Syntax had its own label tells us it was seen to be a distinct track, at least in some measure, but it was not a new theory of grammar, as Generative Semantics and Fillmore’s Case Grammar would bill themselves. It was a program within the Aspects framework to continue the deep, deeper, deepest trajectory that moved syntactic description further into semantic territory, so that it was no longer obvious if there was any separation at all (Chomsky 1964d [1963]:51). No one felt Postal’s project to be at odds with all the other feverish activity surrounding Aspects. Or maybe we should say “almost no one” felt it to be at odds, because, . . . well, Chomsky. The thrust of Postal’s work was to reduce complexity in the base down to an axiomatic minimum of primitive categories, which entailed some moves Chomsky may have found unappetizing—Newmeyer reports “the first public signs of division” in some of Postal’s arguments at a 1965 colloquium that adjectives were verbs at some deep level (Newmeyer 1980a:93; 1986a:82).

But once again we run into difficulties of demarcation; in particular, it is not at all obvious when this work became uncongenial to Chomsky. Certainly the divergence was one of degree, organized around a few technical proposals, rather than one of kind. There was no sharp change in either the argumentation or the direction of Postal’s research; if anything, it was just a moment ahead of the established Chomskyan arc. The trend in generative work from the outset was toward increasingly greater abstraction, a trend that gathered considerable momentum in the early 1960s with the introduction of trigger morphemes and Δ-nodes. Certainly no one remembers any kind of flare-up around Postal’s deep-verb colloquium.

Postal was attacking a growing problem in the early theory. Post–Syntactic Structures work featured an unchecked mushrooming of categories. Much early transformational research simply projected the wide variety of surface categories onto the underlying representations, and even work that winnowed off some of those categories still retained an alarming number of them. Lees’s exemplary Grammar, for instance, had dozens of underlying categories (e.g., 1968 [1960]:22–23), though it dealt with only one small corner of English syntax and used the power of transformations to reduce the surface categories. Schachter complained that the trend “was staggering to contemplate; it seems likely in fact, that each word would ultimately occupy a subcategory of its own” (1962:137). This consequence was particularly glaring in a theory that marketed itself on the virtues of simplicity and generality.

Postal took up the mission to pare back the number of deep categories. He argued, in classes, papers, colloquia, and at the 1964 Linguistic Institute, that adjectives are really deep verbs (G. Lakoff 1970a [1965]:115ff); that noun phrases and prepositional phrases derive from the same underlying category (Ross 1986 [1967]:124n9); and that pronouns weren’t “real,” that they were a figment of superficial grammatical processes (Postal 1966a). This was textbook Transformational Generative argumentation, by one of its sharpest practitioners, but as Postal pursued this course, and as Ross and the Lakoffs joined him so thoroughly and enthusiastically that it became something of a program, Chomsky apparently lost his taste for it.

Ross’s most effective work along these reductionist lines was to argue that there is no special category of auxiliary verbs; a verb is a verb is a verb.7 Auxiliaries were just verbs, as traditional grammar usually held them to be, categorically nothing special, and so they didn’t need their own Deep Structure category, as they had in Syntactic Structures. Again, we have standard-issue Generative argumentation relying on the kind of feature notation we saw with Katz and Fodor’s work, but don’t forget that Affix-hopping was one of the star attractions of Chomskyan linguistics, and Affix-hopping leaned on the special category of auxiliary verbs. Ross’s proposal, then, may have caused some friction with Chomsky. Ross was certainly aware of whose proposals he was overthrowing to make room for his no-special-auxiliary-category proposal. Of course, many of Chomsky’s proposals in fashioning the Aspects framework, such as base-level recursion and the elimination of Generalized Transformations, overthrew his own earlier analyses. But revising your own previous work is not always the same as someone else revising it.

Friction or not, Ross’s case is persuasive. He is one of the most sensitive analysts in the transformational tradition and was, at the time, one of the most dedicated to its tenets. So, while some of his conclusions are at odds with Syntactic Structures, the arguments are models of Chomskyan reasoning. Ross’s case depends on two of Chomsky’s most important early themes, naturalness and simplicity, and on the descriptive power of transformations. Ross points out that a wide range of transformations must somehow refer to the complex in 6.

[6: Tense (have) (be)]

(Don’t worry that this doesn’t look exactly like the rule 1d, on page 23, which introduces some of these elements. This just describes a configuration that can arise from that rule; namely, that the tense affix 1d puts into the verb phrase can be followed by either have or be, and multiple transformations need to reference that configuration.)

The negative (Tnot) transformation, for instance, needs this complex in the Aspects framework, to account for data like this.

[7a had shown / 7b had not shown; 8a was showing / 8b was not showing; 9a showed / 9b did not show]

What this data tells us is that not has to line up after the first auxiliary verb, if there is one (have in 7b, be in 8b), but if there is no auxiliary verb, one needs to be introduced (do, in 9b). Now, harking back to our old friend, Affix-hopping, remember how this would go: the Deep Structure for 7a includes the sequence past have -en show, hopping occurs to produce the sequence have past show -en, which becomes had shown at the surface. With 8a it is past be -ing show transforming to be past show -ing, surfacing as was showing, and with 9a it is a simple case of past show becoming show past becoming showed. Beauty.
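To make the hopping mechanical, here is a minimal sketch in Python (the flat token lists, the AFFIXES set, and the function name are our own toy stand-ins, not anything from the period’s notation): each affix simply trades places with the element to its right.

```python
# A toy rendition of Affix-hopping over flat morpheme sequences.
# Each affix (the tense or a participial morpheme) hops rightward
# over the element that immediately follows it.

AFFIXES = {"past", "-en", "-ing"}

def affix_hop(sequence):
    """Swap each affix with the element to its right."""
    result = list(sequence)
    i = 0
    while i < len(result) - 1:
        if result[i] in AFFIXES:
            result[i], result[i + 1] = result[i + 1], result[i]
            i += 2  # step past the hopped pair
        else:
            i += 1
    return result

print(affix_hop(["past", "have", "-en", "show"]))  # have past show -en  ("had shown")
print(affix_hop(["past", "be", "-ing", "show"]))   # be past show -ing   ("was showing")
print(affix_hop(["past", "show"]))                 # show past           ("showed")
```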

But introducing not seems to gum that pattern up a bit. The transformation Tnot gives us the respective underlying sequences past have not -en show (for 7b), past be not -ing show (for 8b), and past not show (for 9b). The complex Ross is complaining about (6) is exactly what seems to be needed here for the rule to operate: the landing site for not is after the tense (9b) and have if there is one (as in 7b) or be if there is one (as in 8b). There’s even a cool example of a Gyro Gearloose kind of thrill here, since 9b has this specially imported auxiliary verb that has to be explained somehow. The established account was that Tnot “stranded” the tense if there was no base-generated have or be, separating it from the verb it would otherwise hop over. So another rule, another Syntactic Structures stalwart known as Do-support, comes to the rescue: in 9b we would then get the sequence past do not show, with Affix-hopping then giving us the sequence do past not show, which in turn surfaces as did not show.
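The negative pattern drops into the same sketch (reusing affix_hop and AFFIXES from above; again, our simplification rather than the original rule statements): Tnot picks its landing site, and Do-support rescues a stranded tense.

```python
# Toy Tnot and Do-support, feeding the affix_hop sketch above.

def t_not(sequence):
    """Insert 'not' after the tense plus the first 'have' or 'be', if any."""
    result = list(sequence)
    site = 2 if len(result) > 1 and result[1] in {"have", "be"} else 1
    result.insert(site, "not")
    return result

def do_support(sequence):
    """Supply 'do' to carry a tense stranded next to 'not'."""
    result = list(sequence)
    if result[0] in AFFIXES and result[1] == "not":
        result.insert(1, "do")  # past not show -> past do not show
    return result

for deep in (["past", "have", "-en", "show"],   # underlies 7a/7b
             ["past", "be", "-ing", "show"],    # underlies 8a/8b
             ["past", "show"]):                 # underlies 9a/9b
    print(affix_hop(do_support(t_not(deep))))
# have past not show -en  ("had not shown")
# be past not show -ing   ("was not showing")
# do past not show        ("did not show")
```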

But it isn’t just Tnot that seems to require 6. Take a look at these examples:

[10–12: the corresponding questions, with the tense and any have or be moved in front of Pixie; 12a and 12b show Do-support’s did]

You got it: Question transformations (TQ and TWh) need to reference the complex in 6 as well. The tense and have or be, if they are present, get moved around by the question transformations, putting in front of Pixie one of the following: (i) the tense on its own, or (ii) the tense and have (if present), or (iii) the tense and be (if present). And, again, we witness a “stranding” situation in 12a and 12b, where the tense is moved in front of Pixie, with no auxiliary verb to hop over, and Do-support shows up to wear the tense (did). It is exactly these kinds of coördinated rule activities that made Transformational Grammar so exhilarating to so many people at the time. The vast, contingent, variable mass of human linguistic activity seems to fall neatly into a powerful descriptive calculus. The transformational relations among the relevant structures are a slick account of the knowledge English speakers have about negatives and questions.

Back now to the worm Ross found in the Aspects apple, that this elegant structure of rules has to refer to a kludgey thingamajig like 6. In terms of the Aspects model, or almost any other one you might want to name, 6 is not even a natural constituent, like a Noun Phrase or a Verb Phrase. As Ross puts it, a rule which calls on something like 6 has no more credibility in relation to Aspects than a rule which calls on 13.8

[13: Noun (toast) (and)]

Toast? And? Ross’s fantasy complex exhibits the creeping jocularity that was showing up in some Generative argumentation—and, speaking of friction with Chomsky, might not someone who perceived themselves to be the target of such jocularity think rather of another term for it, such as ridicule; might not they see themselves as the butt of the joke; minimally, might not they see their whole serious enterprise as being infiltrated by juvenilia?

What Ross proposes actually capitalizes on one of Chomsky’s major advances in Aspects, which had expanded and refined Katz and Fodor’s adumbrations about syntactico-semantic features. Ross suggests that, instead of 6, all the rules in question refer to any element bearing the features [+V, +Aux]. He then assigns those features to have and be, whereas a verb like show is [+V, -Aux], and the majority of other words in the universe, like Pixie and toast and and, are [-V, -Aux].9 This solution is certainly more natural on the surface than Chomsky’s analysis, in the sense of replacing a seemingly arbitrary grouping with a defined class, and it manifestly simplifies both the base and the transformational component.
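The payoff is easy to see in the same toy Python (the feature dictionary is our own stand-in for the feature bundles): once have and be are just verbs marked [+Aux], Tnot finds its landing site by scanning features rather than naming words.

```python
# Ross's move, toy version: the rule mentions the feature bundle
# [+V, +Aux] instead of the word list {have, be}.

FEATURES = {
    "have":  {"V": True,  "Aux": True},
    "be":    {"V": True,  "Aux": True},
    "show":  {"V": True,  "Aux": False},
    "Pixie": {"V": False, "Aux": False},
    "toast": {"V": False, "Aux": False},
    "and":   {"V": False, "Aux": False},
}

def is_aux(word):
    """True for elements bearing [+V, +Aux]."""
    feats = FEATURES.get(word, {"V": False, "Aux": False})
    return feats["V"] and feats["Aux"]

def t_not_general(sequence):
    """Insert 'not' after the tense plus the first [+V, +Aux] element, if any."""
    result = list(sequence)
    site = 2 if len(result) > 1 and is_aux(result[1]) else 1
    result.insert(site, "not")
    return result

print(t_not_general(["past", "have", "-en", "show"]))  # past have not -en show
print(t_not_general(["past", "show"]))                 # past not show
```

The same feature scan serves the question transformations, which is exactly the generalization Ross was after: one natural class, several rules.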

It is, of course, more abstract than Chomsky’s analysis, since it depends on lexical features, and there is some irreverent nose-tweaking in the noun-toast-and comparison to a venerable Syntactic-Structures-sponsored, Aspects-endorsed complex. But Ross’s proposal absolutely falls within the purview of mid-1960s Transformational Grammar. Indeed, it is less abstract than much of Aspects.

The argumentation for Ross’s suggestion appeals to naturalness and simplicity in both the base and the transformational component, but a very welcome by-product of work in Abstract Syntax was to make Deep Structure (and thereby the entire grammar) more transparently semantic. So, his argument included subroutines like the following. Consider 14a and 14b:

[14a: need as a main verb; 14b: a paraphrase of 14a with need as an auxiliary]

In the Aspects model, 14a and 14b have distinct Deep Structures (in 14a, need is a main verb; in 14b it is an auxiliary verb). If the category distinction is erased, as in Ross’s proposal, then the two (semantically equivalent) sentences have the same Deep Structure.

As this subroutine shows, the Katz-Postal semantic neutrality hypothesis governed virtually all of the work in Abstract Syntax. Abstract Syntax was the child of Paul Postal, and Postal was part of the duo that not only formulated the hypothesis, but paired it with this heuristic:

Given a sentence for which a syntactic derivation is needed, look for simple paraphrases of the sentence which are not paraphrases by virtue of synonymous expressions; on finding them, construct grammatical rules that relate the original sentence and its paraphrases in such a way that each of these sentences has the same sequence of underlying [trees]. (Katz & Postal 1964:157)

Katz and Postal’s heuristic is a blueprint for Abstract Syntax and, throwing our metaphors to the wind, a roadmap to Generative Semantics.

Nobody played out this heuristic more thoroughly, or more astutely, than the Lakoffs. Robin’s Harvard dissertation was published as Abstract Syntax and Latin Complementation (1968 [1967]), and George’s Postal-guided Irregularity in Syntax is an Abstract Syntax treasure trove. George’s most renowned proposal falls under the general label lexical decomposition and starts by following the heuristic to sentences like those in 15, which he notes are effectively paraphrases of each other.

[15a–d: four paraphrases, running from kill to cause to become not alive]

Lakoff adduced a number of strong arguments that all four derive from the same Deep Structure, 15d; or, more properly, that all derive from 15e, which is populated not with the concrete verbs of English but with deep, possibly universal abstract verbs (coded in small-capital letters):10

[15e: 15d restated with the deep abstract verbs CAUSE, BECOME, NOT, ALIVE]

Most of the arguments for these abstract analyses were syntactic, in the expansive Aspects what-appear-to-be-semantic-questions-fall-increasingly-within-its-scope sense of syntactic—that the object of kill and the subject of die, for instance, have exactly the same relevant feature assignments (e.g., they both must be [+alive]; so bogies is okay, rocks is not)—but once again the most persuasive component of the case is that an analysis which treats kill as composed of more elementary meanings is “more semantically transparent” (McCawley in G. Lakoff 1970a [1965]:i) than treating kill as a lexical atom.
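The shape of such an analysis is easy to render concretely. Here is a minimal sketch, with nested Python tuples of our own devising standing in for the pre-lexical trees, of kill decomposed into the abstract predicates CAUSE, BECOME, NOT, and ALIVE:

```python
# Toy lexical decomposition: 'kill' as nested abstract predicates.
# The (PREDICATE, argument, ...) tuples are our stand-in for the
# abstract-verb trees of the period.

def kill(agent, patient):
    # x kill y = CAUSE(x, BECOME(NOT(ALIVE(y))))
    return ("CAUSE", agent, ("BECOME", ("NOT", ("ALIVE", patient))))

def die(patient):
    # y die = BECOME(NOT(ALIVE(y)))
    return ("BECOME", ("NOT", ("ALIVE", patient)))

# The kill/die relation falls out structurally: what the killer
# causes is exactly what happens to the one who dies.
assert kill("Floyd", "the bogies")[2] == die("the bogies")
print(kill("Floyd", "the bogies"))
```

And the [+alive] restriction now needs stating only once, on the innermost predicate, rather than separately on the object of kill and the subject of die.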

Chomsky had predicted with trepidation in the 1950s that “if we should take meaning seriously as a criterion, our analyses would be wildly complex” (1958:453), and it doesn’t take very much imagination to see that’s exactly where these sorts of analyses were heading. Think, for instance, of how one would represent the component meanings of lexical items like, just sticking with Lakoff’s fatality theme, slaughter, garotte, and assassinate; the last would have to be, conservatively, something on the order of cause x to become not alive for political reasons in a covert manner, where x is a reasonably important human. Or, here’s a real example from the period, 16b, a proposed Deep Structure string for 16a:

[16a, with its proposed Deep Structure string, 16b]

The most famous of these wildly complex Deep Structures, featured frequently in Lakoff and Ross’s talks, and which even made the New York Times (Shenker 1972), is their analysis of 17. Aspects would assign a Deep Structure to this sentence much like that of Tree-1, but Ross and Lakoff assigned it an underlying structure like that of Tree-2 (both trees, however, are somewhat elided).11

[17: Floyd broke the glass.]

[Tree-1: the Aspects Deep Structure for 17]

[Tree-2: the abstract underlying structure Lakoff and Ross assigned to 17]

Keep in mind that Tree-2—although it looks like, and has been taken to be, a reductio ad absurdum vitiating the theory that produced it—is the serious proposal of two very good, very dedicated linguists. Lakoff and Ross knew it looked goofy. Merriment was always part of their schtick, and the Floyd tree was the butt of many jokes. Ross had a ceiling-to-floor mobile of it hanging in his office, made from coat-hangers and Christmas-tree balls. But Lakoff and Ross were loyally tracing out the Aspects trajectory, with the Katz-Postal Principle lighting the way, to its natural consequences.

The Katz-Postal Principle and its start-with-the-paraphrases heuristic required taking meaning as a guiding criterion for grammar construction. The more one did so, the deeper and more abstract the analyses became. Notice three things before we move on, though. First, that representations at least as detailed as 16b and Tree-2, since they represent meanings, are inevitable in any grammar which wants to get the semantics right. Deep Structure string 16b, in particular, foreshadows a major twenty-first-century mainstay, Frame Semantics, and if we tried to render it into a tree it would be far more gangly than Tree-2. Second, since the meaning of kill, or some significant part of that meaning, is “cause to become not alive,” that fact has to be treated somewhere, even in a wholly Interpretive Semantics. And, third, notions like causation and negation and becoming (the technical term for this last one is inchoation) are fundamental parts of the human meaning-making repertoire.

And here’s the thing: despite the enthusiasm for meaning that suffused the Aspects program, that book really has very little semantics in it at all. One philosopher remarked about the Aspects framework that “semantic interpretation [is] provided, or rather not provided, by a separate (and highly ad hoc) ‘semantic component’ ” (Lycan 1984:6). That component remained very sketchy as Abstract Syntax moved ahead, but everyone realized that whatever an articulated semantic representation in the Aspects model might end up looking like, it would be distended in the direction of the Floyd tree. Meaning is a complicated beast.

For some observers, Lakoff and Ross were not going far enough or fast enough with their elaborate trees. Anna Wierzbicka, who had been studying linguistics in Warsaw with Andrzej Bogusławski, a serious semanticist if ever there was one, visited MIT just as the Abstract Syntax arboretum was expanding. She found it all a bit tame and recalls urging the brink-of-Generative-Semanticists to follow through on the implications of their work and get really abstract (Newmeyer 2014:257).

The important point for the moment, however, is much more straightforward than whatever the appropriate semantic representation of “Floyd broke the glass” might be: many Chomskyan linguists found the arguments surrounding the abstract Floyd tree (Tree-2) very persuasive in the post-Aspects context, and it became a touchstone for Lakoff and Ross’s evolving perspective.

Logic

Consider for a moment what grammar is. It is the most elementary part of logic. It is the beginning of the analysis of the thinking process. The principles and rules of grammar are the means by which the forms of language are made to correspond with the universal forms of thought. . . . The structure of every sentence is a lesson in logic.

—John Stuart Mill (1867:15)

The elaborate Floyd tree shows just how much traction Postal’s reductionist campaign had got: lots of things were now deep verbs (adjectives, prepositions, conjunctions, tenses, quantifiers, some nouns); phrase types dissolved (no adjective phrases or prepositional phrases); articles arose transformationally. Devoutly following Chomsky’s repeated injunctions toward simplicity, Abstract Syntax arrived at a small core of deep categories: NPs, Vs, and Ss. Every other category was introduced in the course of the derivation.12

This core was especially attractive because, as McCawley and George Lakoff began to argue, it aligned very closely with the atomic categories of symbolic logic: arguments (≈NP), predicates (≈V), and propositions (≈S). Further: the reduction of the Deep Structure inventory meant a corresponding reduction of the phrase structure rules, which now fell into an equally close alignment with the formation rules of logic. Further: the formalisms of symbolic logic and Transformational Grammar also fell nicely together.

None of this is surprising on the long view, of course. “Since the times of the Greeks,” Ernst Cassirer reminded linguists in the first issue of Word, “there was always a sort of solidarity, of open or hidden alliance between grammar and logic” (Cassirer 1945:103). More immediately, Transformational Grammar itself was developed specifically by men with considerable knowledge in, and equal affection for, twentieth-century formal logic—Harris and Chomsky. But on the short view—that is, in the 1960s, Cambridge-based, transformational hothouse—it came to some as a revelation.

Take a simple statement in symbolic logic.

[18: man(x) & platypus(y) & love(x, y)]

We have a two-place predicate (love) that (therefore) takes two arguments, x and y, and two one-place predicates (man and platypus) that take one argument each, respectively x and y; they are assembled into one proposition by way of a pair of Boolean conjunctions (i.e., two instances of &). It says that x, the man, and y, the platypus, are related through the predicate, love, such that x holds that state with respect to y. That is, the whole assemblage means, pretty much (ignoring tense and other minor complications), what sentence 19 means.

[19: The man loves the platypus.]

These two entities, 18 and 19, are similar in crucial respects. The word love in 19, for instance, just like the predicate love in 18, links up two “arguments”; it is a transitive verb that requires a subject and an object. But they also look pretty different in other ways (What’s with the variables? And why do we need those &s?). Not to worry. Lakoff, in true Abstract-Syntax fashion, argues that the differences are only skin deep. So, we know that sentences are represented in phrase structural terms as labeled trees, like Tree-3.

[Tree-3: the phrase structure tree for 19]

But there was an alternate and fully equivalent formalism for representing constituent structure, namely, labeled bracketing, like 20:13

[20: [S [NP the man] [VP [V loves] [NP the platypus]]]]

Put this way—18 and 20 are both bracketed strings, and 20 is equivalent to Tree-3—you can probably see the dénouement as clearly as Lakoff and McCawley did: just reverse engineer the logical bracketing formalism of 18 into the tree formalism and you get a representation like Tree-4. The formalisms of symbolic logic and Transformational Grammar, in other words, are fully equivalent. That’s Step One. Step Two, in the Transformationally heady, magic-bullet, post-Aspects days, was a minor one: build an operation to get us from Tree-4 to Tree-3.

[Tree-4: the bracketing of 18, rendered as a tree]
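Just how minor Step One is can be shown in a few lines of Python (assuming a simplified bracketing dialect of our own, '[LABEL child ...]'): a parser that turns a labeled bracketing like 20 into a nested-tuple tree like Tree-3, and would do the same for the logical bracketing behind Tree-4.

```python
# A minimal parser from labeled bracketing (as in 20) to a tree (as in Tree-3).

def parse(tokens):
    """Parse one constituent; return (tree, remaining tokens)."""
    assert tokens[0] == "[", "a constituent must open with '['"
    label, rest = tokens[1], tokens[2:]
    children = []
    while rest[0] != "]":
        if rest[0] == "[":
            child, rest = parse(rest)        # an embedded constituent
        else:
            child, rest = rest[0], rest[1:]  # a bare word
        children.append(child)
    return (label, *children), rest[1:]      # drop the closing ']'

def bracketing_to_tree(text):
    tokens = text.replace("[", " [ ").replace("]", " ] ").split()
    tree, leftover = parse(tokens)
    assert not leftover, "trailing material after the outermost constituent"
    return tree

print(bracketing_to_tree("[S [NP the man] [VP [V loves] [NP the platypus]]]"))
# ('S', ('NP', 'the', 'man'), ('VP', ('V', 'loves'), ('NP', 'the', 'platypus')))
```

Brackets in, trees out, nothing lost either way; the equivalence of the two formalisms is, at bottom, just that observation.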

This realization—that Deep Structure coincided with symbolic logic, the traditional mathematico-philosophical language of meaning—was a mind-blowing confirmation that Abstract Syntax was on exactly the right track. It was, in fact, the conversion experience that effected Generative Semantics.

Logic is ultimately an empirical product, the distillate of word and sentence observations by Aristotle and his contemporaries, so it should be rather unremarkable that it can be brought to function prominently in linguistic theories. But the developments it went through over the millennia gave it an aura of purity, and of psychological or even metaphysical priority—not something that comes from language, but something that determines or regulates language.

The Deep Structure convergence with logic, too, seemed to be just what Dr. Chomsky had ordered. Very early in his career he suggested that his approach might “constitute a bridge between vernacular language in all its variety and complexity and the restricted language of the logician” (1957b:291). From his MA thesis, which bears the unmistakable imprint of Rudolf Carnap’s Logical Syntax of Language (1937 [1934]; Newmeyer 1988a; Tomalin 2006:159–60); through his massive Logical Structure of Linguistic Theory, through Aspects up to Cartesian Linguistics, in which Deep Structure and Logical Form are synonymous—through his entire career to that point—Chomsky courted symbolic logic.14 The Abstract Syntacticians thought it was time to end the courtship. They were ready to elope.

There was one more factor—beyond the increase in semantic clarity, the amenability to tree formalisms, and the natural fulfillment of Chomsky’s trajectory—which contributed enormously to the appeal of symbolic logic for Generative Semantics: its relation to thought.

Exactly what logic says about the way humans acquire, manage, and perpetuate knowledge is a far from simple or uncontroversial matter, with different logicians and philosophers giving different answers. But “logic, under every view, involves frequent references to the laws and workings of the mind” (Bain 1879.1:1), and, in the strongest views, logic directly reflects the governing principles of mind. McCawley explicitly took this position, aligning himself with one of its strongest expressions, George Boole’s foundational 1854 logic monograph, An Investigation of the Laws of Thought (McCawley 1976b [1968]:136).

Logic was therefore bringing the Abstract Syntacticians much closer to the mentalist goals that they had imbibed with their early transformational milk.

The Universal Base

The latent content of all languages is the same.

—Edward Sapir (1921:218)

Along with vitamin M, Mentalism, the early Transformational milk included another essential nutrient, especially when Chomsky hitched his program to the goals and mechanisms of pre-Bloomfieldian traditional grammar, vitamin U, Universality. In Aspects Chomsky associates this nutrient with one specific module of his grammar, the base component:

To say that the formal properties of the base will provide the framework for the characterization of universal categories is to assume that much of the structure of the base is common to all languages. This is a way of stating a traditional view, whose origins can . . . be traced back at least to the [Port-Royal] Grammaire générale et raisonnée. (1965 [1964]:117)

More strongly yet, in Cartesian Linguistics, which outlines a traditional, philosophical grammar he associates most strongly with the seventeenth-century Port-Royal Grammaire, he traces to the Port-Royalists the notion of a Deep Structure that “is common to all languages, so it is claimed, being a simple reflection of the forms of thought” (1966a:35), unambiguously bringing Deep Structure and meaning into the mentalist universal-base suggestion, all but proposing Generative Semantics. Well, okay, not so unambiguously—there is that telltale Chomskyan hedge, “so it is claimed”—but the early Transformationalists overlooked the hedge too, and Chomsky’s suggestion caught fire. It rapidly evolved into the uppercase Universal Base Hypothesis (nicknamed UBH)—an explicit proposal about Universal Grammar which claimed that at some deep, deep level all languages had the same set of generative mechanisms—and became identified exclusively with Abstract Syntax. The Lakoffs both found it beguiling. George endorsed it in his thesis (1970a [1965]:108–109), as one possibility for getting at the universal dimensions of language, and by 1967, at a watershed conference in La Jolla, California, his enthusiasm had gone way up (Bach 1968 [1967]:114n12).

Robin endorsed it in similarly hopeful terms in her thesis. The thesis had a mainstream Chomskyan title, “Studies in the Transformational Grammar of Latin: The Complement System,” but by the time it was published it was waving the new banner, as Abstract Syntax and Latin Complementation. In it, she spells out the UBH implications for the Aspects model clearly and points the way toward a common-denominator approach for finding base rules which can underlie both Latin and English. Betraying some of the schismatic spirit that had begun to infuse Abstract Syntax, she identifies this position with “some of the more radical transformationalists” (1968:168) and opposes it to “more conservative transformational linguists (such as Chomsky)” (1968:215n5). Bach was also influential in the development of the hypothesis (1968 [1967]), and Fillmore leveraged it in his influential Case Grammar (1968:1–2).

But the two figures most closely associated with the Universal Base Hypothesis are McCawley and Ross. McCawley is important because, although he was not one of the chief marketers of the proposal—may never, indeed, have conjoined the words universal and base in print—he wrote an important paper in which many found strong support for the Universal Base, and because arguments he made for a deep verb-subject-object (VSO) order were very harmonious with a base component that produced universal structures compatible with symbolic logic.15 Ross is important because the explicit claim “that the base is biologically innate” appears to have been his (Lancelot et al. 1976 [1968]:258); because of his recurrent use of the hypothesis as an appeal for Generative Semantics (for instance, in a paper directed at cognitive psychologists—1974b:97); and because he gave the hypothesis its most succinct, best-known formulation:

THE UNIVERSAL BASE HYPOTHESIS

The Deep Structures of all languages are identical, up to the ordering of constituents immediately dominated by the same node. (Ross 1970b [1968]:260)

Ross’s definition makes it inescapably clear what is being said: that the dizzying variety of linguistic expression in all known languages, in all unknown languages, in all possible human languages, derived from a common set of base rules, with the trivial exception of within-constituent ordering differences (acknowledging, for instance, that adjectives precede nouns in English noun phrases, and follow them in French noun phrases). In fact, Ross and Lakoff were confident enough in the hypothesis to begin working on such a set of rules, and their confidence was infectious.

Once again, we can see that the themes and methods of the claim had the seal of approval of Aspects, a book preoccupied with the universal: universal properties that determine the form of language (35); a universal vocabulary from which grammatical descriptions are constructed (65, 66, 117, etc.); linguistic universals (29, 36, 209, etc.); universal conditions (120, 121, 209, etc.); universal semantic (144) and lexical rules (112); universal categories and functions (116, 220); and universal phonetics (31). Aspects incessantly trumpets the overarching, governing theoretical commitment to a Universal Grammar (5–7, 28, 65, 118), an inheritance from traditional grammar (5–6), which had been unfortunately opposed by “Modern [aka Bloomfieldian] linguistics” (6).

Not the least of the attractions of the Universal Base Hypothesis, then, was its inverse relation to the irredeemable Bad Guys of American linguistics, whom Chomsky repeatedly accused, in classes and public lectures and books, of holding that “languages could differ from each other without limit and in unpredictable ways” (Joos 1957:96). The Chomskyans regarded this notion as repugnant in the extreme, the epitome of woolly-mindedness and unscientific confusion, and they regularly shook Joos’s remarks aloft at conferences, in books and in articles, like the head of a vanquished foe (Thomas 2002:355ff).

But the Universal Base Hypothesis, in a typical scientific irony, actually got much of its drive from the genius that ruled Joos’s comment—attending to a wide (or at least, wider) variety of languages. The overwhelming majority of Transformational-Generative research in its first decade was on English, and Aspects reflects this emphasis. The Abstract Syntacticians were certainly not in the Bloomfieldians’ league in terms of experience with non-European languages, but they began thinking more deliberately about taking Generative principles beyond English. Postal’s thesis was on Mohawk, McCawley’s was on Japanese, Ross’s was widely cross-linguistic, and most of them studied under G. H. Matthews at MIT, who taught portions of his influential generative grammar of Hidatsa (1965); Robin Lakoff’s was on Latin, not that far afield, but a language with its own peculiar demands because there were no native speakers to query or intuitions to probe. Perhaps most importantly, though the fuller influence of his work was still several years off, Joseph Greenberg had just published the second edition of his typology of linguistic universals (1966), which surveyed the morphological and syntactic patterns of a great many, quite diverse languages.16

Now, since the base component sketched in Aspects depended on English, attempts to universalize it inevitably led to assorted hiccups. The Aspects base included adjectives, for instance, but Postal’s work on Mohawk had shown him that not all languages have a separate category of adjectives, distinct from verbs, which led rather directly to his adjectives-are-deep-verbs arguments. The Aspects base included a verb phrase of the form V + NP, but Japanese, Latin, and Hidatsa can’t easily accommodate such a VP; other languages seem to have no VP at all. The Aspects base included auxiliary verbs; not all languages do. The Aspects base included prepositions; not all languages do. The Aspects base included articles; not all languages do. The Aspects base ignores causatives and inchoatives (cause and become, respectively, in the conventions of Abstract Syntax), which are often covert in English; in many languages, they play very overt roles. The Aspects base adopted the basic order of English; Greenberg demonstrated very convincingly that there were several distinct basic orders.

The rub, then: while following out many of Chomsky’s general comments, Abstract Syntax was forced to reject or modify lots of his specific analyses. Where there is a rub, there is friction.

Filters and Constraints

We would like to put as many constraints as possible on the form of a possible transformational rule.

—George Lakoff (1970a [1965]:21)

The Abstract Syntax move toward a Universal Grammar underneath the literal skin of languages had another reflex: the shift in emphasis from phenomena-specific and language-particular rules to general grammatical principles.17 Early Transformational Grammar was rule-obsessed. The motto seemed to be “Find a phenomenon, write a rule; if you have the time, write two,” and the search for phenomena was heavily biased toward English, fueled by the Chomskyan propensity for working from personal intuitions about grammaticality.

Once again, Lees’s Grammar of English Nominalizations is the best illustration. A brief monograph, focusing on a small neighborhood in the rambling cosmopolis of English, it posits over a hundred transformations (while noting frequently along the way the need for many other rules that Lees doesn’t have the occasion or the mandate to work out), most of which are highly specific. Some rules refer, for instance, to particular verb or adjective classes, some to individual words or morphemes, some to certain stress patterns or juncture types, and a good many of them have special conditions of application attached to them. Take this transformation (21), which handles a specific deletion (Lees 1968 [1960]:103–104):

[Transformation 21 (Lees’s deletion rule) not reproduced]

[where that introduces a Factive Nominal, for and to introduce Infinitival Nominals]

What this rule is up to isn’t particularly important at the moment (or, really, at all), but notice how incredibly mucky it is with respect to general grammatical principles—how gummy it is to move from a rule that depends on the presence of specific English words to a principle holding of all languages—and the defining trajectory of Abstract Syntax was toward general (better, universal) grammatical principles.

A principle-oriented approach to grammar was clearly not going to achieve a very convincing Universal Grammar with transformations as detailed as 21. In fact, one of the axioms of “Postal’s conception of grammar,” Lakoff relates, was that “transformations may not mention individual lexical items” (G. Lakoff 1970a [1965]:xii) like that and for. We can see this axiom at work, for instance, in Ross’s smoothing out of the auxiliary system, where have and be are discarded for [+V, +Aux]. Ross indicts Chomsky’s auxiliary treatment for the use of individual lexical items like have and be, and the central virtue he cites in favor of his own analysis is that it permits generalizing several transformations so that they refer to a natural class of items rather than to a seemingly arbitrary list of words. A transformation that refers to a class of items with the features [+V, +Aux] has a much better chance of getting a job in a Universal Grammar than a transformation that refers to a class of items like the set {have, be} or {toast, and}.

The first truly important work on general transformational principles was—can you guess?—uh-huh, Chomsky’s. Describing languages, Chomsky had said of the Bloomfieldian descriptive mandate, is only part of the linguistic enterprise, and the least interesting part; one can describe languages with myriad specific rules like 21. The tough, rewarding, truly authentic linguistic work was in explaining Language, and explaining Language, in Chomsky’s terms, involved keeping the transformations from running amok. Transformations were necessary to capture crucial generalizations about language; your theory would be too weak without them. But with them it became too powerful. Transformations could produce all sorts of ungrammatical gobbledygook alongside their productions of grammatical structures and strings. So long as one saw linguistic rules as tools to describe what people do when they speak, that was not a problem.

But if you see linguistic rules as part of the explanation for what people do when they speak, mechanisms with some kind of psychological and genetic reality, mechanisms with evolutionary roots, mechanisms that baby humans have to acquire in some way, the fact that they can make gobbledygook as easily as (in fact easier than) they make good pieces of language becomes a rather urgent problem. So, transformations need to be reined in.

One way to do this is just by fiat, such as Postal’s stipulation that transformations couldn’t refer to individual words, or (with Katz) that transformations couldn’t change meaning. These conditions came from formal considerations. Certain stipulations—call them axioms—just make the system work better. But another way to curb the power of transformations was by empirical investigation: look for activities that transformations can do, by virtue of their definition, but which they don’t appear to do, according to the data, and develop a corresponding principle, on the assumption that something systemic must be ‘blocking’ that activity.

Movement is one source of transformational gobbledygook. Transformations can easily move any constituent anywhere in a sentence, which means all languages should have completely unconstrained word order. But they don’t. So, if transformations are part of natural human grammars (an article of faith among Aspects-era Chomskyans), there must be something blocking some kinds of movements. Chomsky started looking for the sorts of potential movements that don’t seem to be exploited by natural languages; or, putting it in his terms, started looking for constraints that must be operating to block those non-occurring movements.

The constraint Chomsky proposed (later named the A-over-A Principle by Ross) began as a loose suggestion about the tenaciousness of certain locations, which don’t seem to want to hand over their constituents to transformations. The conclusion was that this local tenacity would need to be wired into Universal Grammar, prohibiting movement out of those locations.

Unfortunately, A-over-A was a bust. When Chomsky discovered that it made the wrong predictions in certain cases, he—quickly, quietly, and reluctantly—dropped it, adding a prophetic and hopeful “there is certainly much more to be said about this matter” (1964c [1963]:74n16a; 1964d [1963]:46n10). There was indeed. The one who said it was Ross, in his epochal, Chomsky-supervised, dissertation.

With a cautiousness and modesty uncommon to the mid-1960s MIT community, with a cross-linguistic sensitivity equally uncommon to Transformational Grammar of the period, with an eye for abstract universals unprecedented in syntactic work, Ross plays a remarkable sequence of variations on the theme of Chomsky’s A-over-A Principle, coming up with his theory of syntactic islands.

The notion of islands is quite arcane, but the metaphorical long and short of islands is that certain grammatical constituents are surrounded by water, and transformations have no boats. So, here we go: 22a is fine (from 22b), but *23a isn’t, because transformations can’t move constituents off a “complex noun phrase” island; 24a is fine (from 24b), but *25a isn’t, because transformations can’t get constituents off a “coördinate structure” island; and *26a is bad because transformations can’t move constituents off a “sentential subject” island (there is no grammatical equivalent to 26b, no corresponding Surface Structure, but notice that 26b and 26c are effectively the same type of Deep Structure string, so the Phrase Structure Rules need to generate 26c in order to get the corresponding grammatical sentences).

[Examples 22–26 (island sentences) not reproduced]

Tricky stuff, I know.

Ross suggested his syntactic-island constraints as an integral part of the innate mechanism guiding language acquisition, and his work is volcano-and-palm-trees above similar efforts at the time. The constraints would be hardwired into Universal Grammar, so that there would be no possibility of a grammar in which transformations could produce monstrosities like 23a, 25a, or 26a.
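
For readers who prefer a sketch in code to a nautical metaphor, the island idea reduces to a single configurational check. Here is a minimal sketch in Python, assuming my own toy encoding (a constituent is identified by the path of node labels from the clause root down to it); the labels are shorthand, not Ross’s formalism:

```python
# A toy rendering of Ross-style island constraints (my own encoding, not
# Ross's formalism): movement is blocked whenever the path from the clause
# root down to the targeted constituent crosses a node marked as an island.

ISLANDS = {"ComplexNP", "CoordinateStructure", "SententialSubject"}

def can_move(path_to_constituent):
    """True if a transformation may extract the constituent at the end of
    the path; islands have no boats, so any island label along the way
    blocks the movement."""
    return not any(label in ISLANDS for label in path_to_constituent)

# Extraction out of an ordinary complement clause: fine.
print(can_move(["S", "VP", "S", "VP", "NP"]))                     # True
# Extraction out of a relative clause inside a complex NP: blocked.
print(can_move(["S", "VP", "NP", "ComplexNP", "S", "VP", "NP"]))  # False
```

The point of the sketch is only this: the constraint is stated once, over configurations, rather than written piecemeal into each movement rule.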

Meanwhile, other Abstract Syntacticians were exploring similar restrictions. In particular, Postal worked out his Crossover Principle (1971a [1968]), so named because it prohibited the transformational movement of a noun phrase in a way that “crosses over” another noun phrase which has the same real-world referent. He drafted the principle to explain a number of grammaticality facts, like the ones in 27 and 28. So, 27a is fine, while *27b and *27c are bad because Rob and himself cross over one another when the Passive rule moves them; and 28a is fine, while *28b and *28c are bad, because Rob and himself cross over one another by the rule Tough-movement.

[Examples 27 and 28 (crossover sentences) not reproduced]

Figure 3.1 shows the principle in action: In example A, there is no crossover violation (so it gets a check mark), because Rob and Sheana have different real-world referents. The crossover in example B, which corresponds to 27b and 27c, does violate the principle (getting an x), because Rob and himself have the same real-world referent.

The beauty of Postal’s principle is that it allows much more general formulations of the individual rules. Continuing to pick on Lees’s Grammar of English Nominalizations—but only because it is a quite extensive application of the Syntactic Structures model to one specific corner of English grammar—that approach could only account for facts like the ones in 27 and 28 by placing individual conditions on Passive and on Tough-movement (and, in fact, several other rules). But a single condition for all movement rules can be stated once, and none of the relevant transformations will misfire. It makes the overall system more elegant.


Figure 3.1 Illustrations of Postal’s Crossover principle.
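
The same kind of toy check captures the shape of Postal’s principle. In this minimal sketch, the list encoding and the referential indices are my own illustrative devices, not Postal’s notation:

```python
# A toy rendering of the Crossover Principle (my own encoding): a sentence
# is a list of (NP, referent) pairs in linear order, and a leftward
# movement is blocked if the moved NP crosses a coreferential NP.

def violates_crossover(nps, source, target):
    """Moving nps[source] leftward to insertion point `target` crosses
    every NP in nps[target:source]; the move violates the principle if
    any crossed NP shares the moved NP's referent."""
    _, referent = nps[source]
    return any(r == referent for _, r in nps[target:source])

# Passive fronts the object over the subject, as in Figure 3.1:
print(violates_crossover([("Sheana", 1), ("Rob", 2)], 1, 0))   # False: fine
print(violates_crossover([("Rob", 1), ("himself", 1)], 1, 0))  # True: blocked
```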

There is a related Aspects-era device for excluding bad Surface Structures, one that Chomsky called filtering and assigned to the transformational component. In addition to their regular job of making grammatical sentences, he says, transformations also had the job of weeding out any erroneous structures that the base component might produce: filtering off Deep Structures that couldn’t achieve grammaticality. The Aspects base component generated structures like 29a and 29b, for instance:

[Deep Structures 29a and 29b not reproduced]

The base component presents these structures to the transformational array, which has no trouble with 29a. The transformation TR (for Relative Clause) changes the second occurrence of the cat into the relative pronoun which, yielding a perfectly grammatical sentence containing the relative clause that chased the dog, 29c.

[Sentence 29c not reproduced]

Not so with 29b. TR calls for two identical noun phrases: it doesn’t find them in 29b, so it can’t apply and therefore prevents the offending structure from surfacing (Chomsky 1965 [1964]:137–39). It filters off deviant structures.

This process, the blocking of derivations from problematic Deep Structures by way of TR’s inability to fire, was part of a general filtering condition called Recoverability of Deletion (yet one more influential innovation coming from Katz and Postal). One of the things transformations do is erase certain Deep Structure constituents which either don’t belong or just aren’t wanted in corresponding Surface Structures (like the second occurrence of the cat in 29a). Chomsky advocated the Recoverability of Deletion condition to control these erasures (transformations can’t just go around rubbing out any old constituent) and to otherwise ensure good output from the grammar. Deleting the kangaroo from 29b, for instance, would be unrecoverable, because there would be no evidence of what was deleted, but deleting an instance of the cat from 29a is fine because it still leaves one instance behind, testifying to what was erased. Among the more interesting features of this proposal is that erasures “leave a residue” (Chomsky 1965 [1964]:146). In the derivation of 29c, the erasure effectively leaves behind that, but erasures often leave behind wholly abstract markers, with no phonological presence at all.
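
The blocking-and-residue logic is mechanical enough to sketch as well. In this minimal sketch the clause format is a toy of my own; the real condition, of course, is stated over Deep Structure trees:

```python
# A toy rendering of TR-as-filter (my own clause format): the relative
# transformation needs two identical noun phrases; where it cannot apply,
# the derivation blocks and the Deep Structure is filtered off. The
# deletion is recoverable because one copy of the shared NP survives,
# with 'that' as its residue.

def tr(matrix_np, embedded_subject, embedded_vp):
    """Relativize the embedded clause into the matrix NP, or return None
    (the derivation blocks) when the required identical NP is missing."""
    if embedded_subject != matrix_np:
        return None  # TR can't fire: the offending structure never surfaces
    return f"{matrix_np} that {embedded_vp}"  # residue of the deleted NP

print(tr("the cat", "the cat", "chased the dog"))       # like 29a -> 29c
print(tr("the cat", "the kangaroo", "chased the dog"))  # like 29b: None
```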

Filtering became a significant line of post-Aspects research, along with various theories of transformational residues, with contributions on all fronts, but rhetorically it was claimed by Interpretive Semantics.

The Performative Analysis

“I do take this woman to be my lawful wedded wife”—as uttered in the course of the marriage ceremony.

“I name this ship the Queen Elizabeth”—as uttered when smashing the bottle against the stern.

“I give and bequeath my watch to my brother”—as occurring in a will.

“I bet you sixpence it will rain tomorrow.”

—examples cited by J. L. Austin (1962:5)

If Abstract Syntax was largely an effort to increase the scope of the Aspects model, the largest push was to continue the semantic expansionism of Katz and Fodor, most dramatically in another development closely associated with Ross, one that radically exploded the notion of meaning in Transformational Grammar: the performative analysis.

For much of the twentieth century, meaning was more the focus of philosophy than of linguistics; it’s no coincidence that the first people to work on semantics in the Chomskyan framework were card-carrying philosophers, Katz and Fodor. So, as Transformationalists started to expand their repertoire into semantics, they found the ground occupied—by logicians on the one hand, whose work Lakoff and McCawley adapted, and by ordinary language philosophers on the other. There was a big difference in the sort of meaning these two groups cared about, though. Logicians were concerned, centripetally, with very narrow propositional meaning, with reducing the ambiguity, messiness, and contextuality of day-to-day linguistic traffic. Ordinary language philosophers moved in the other direction entirely, centrifugally, out onto the highways and into the hallways of daily linguistic engagements.

Ordinary language philosophy developed somewhat in parallel to Chomskyan linguistics; indeed, it was a parallel revolution, according to one of its founding voices. J. L. Austin not only unequivocally declared the movement a revolution, but hinted rather broadly that it is “the greatest and most salutary” revolution in the history of philosophy (1962 [1955]:3)—adding, however, that given the history of philosophy the claim is a modest one. The title of Austin’s 1955 book, How to Do Things with Words, identifies the driving theme of ordinary language philosophy—that people do things with language—a ridiculously obvious but remarkably long unnoticed notion in philosophy and linguistics (not so in rhetoric or literary studies). Austin’s starting point is that philosophy had generally only noticed one of the things that people do with words—assert propositions—and had consequently paid virtually all of its attention to truth conditions. This was largely the territory of the logicians, and it’s why Katz and Fodor were focused on notions like synonymy, paraphrase, entailment, antonymy, and anomaly—matters in the orbit of truth conditions. If it’s true that Tristan is a bachelor, then it’s true that he is unmarried, and it’s true that he’s male, and it’s true that he’s human, and so on.

But people also inquire, Austin says, and order, and warn, and threaten, and promise, and christen, and bequeath, and bet, and insult, and joke; we perform a copious range of actions with our talk, us language users, actions in which truth conditions are often either subservient or wholly irrelevant. Austin calls these activities speech acts and the sentences that perform them, performative sentences.

Despite Austin and Chomsky’s familiarity with each other, early Transformational Grammar and Speech Act Theory saw very little cross-talk, but Katz and Postal broke the ice in a footnote that entertained the idea that base-generated structures might come with performative verbs and other paraphernalia (1964:149n9); so “Wash your hands!” might derive from “I order you to wash your hands,” with the first bits erased transformationally. They reeled off some of the virtues this proposal had over the course they actually did take (that is, derivation from something more like “imp you wash your hands”), then added, “Although we do not adopt this description here, it certainly deserves further study.” Enter Ross, stage left.

Robin Lakoff’s dissertation suggests a few abstract performative verbs for Latin (1968:170–217), and McCawley also sketched out the approach in an important paper for the Texas Universals conference (1976b [1967–1968]:84–85). But the “performative proposal,” as it became known, clearly belongs to Ross, and to his paper “On Declarative Sentences” (1970b [1968]).18 The principal claim of the paper is that the Deep Structures of simple statements have a topmost clause of the form “I tell you,” as in Tree-5 (forgive me for abbreviating its full Abstract-Syntax glory), which was usually excised by a rule of performative deletion.19 By extension, the same mechanism would take care of questions, imperatives, promises, and the entire remaining panoply of speech acts—giving topmost clauses like “I ask you” and “I command you” to questions and imperatives, and clauses like “I bet you,” “I warn you,” “I christen it,” and so on.

[Tree-5 not reproduced]

Ross offers a battery of arguments—copia is Ross’s most prominent stylistic signature—but let’s take just one of them, based on reflexive pronouns. In English, reflexive pronouns agree in person and number (and sometimes gender) with their antecedents. Using the instruments of Transformational Grammar, the most efficient way to explain the grammaticality of 30a and 30b but the ungrammaticality of *30c is to assume underlying antecedents that get deleted (as in the truncated Deep Structure string, 30d):

[Examples 30a–d not reproduced]

With 30d as the Deep Structure, 30a is okay (since I and myself agree in person and number). Ditto for 30b (since you and yourself agree in person and number). But *30c is out (since there is no third-person plural antecedent, such as they, anywhere in 30d). Simple and clean. And without that underlying “I tell you” clause, the grammar has to come up with some other explanation for why 30a and 30b are grammatical sentences, but 30c is not.
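
The shape of the reflexive argument can be made concrete with one more toy check. The feature table and licensing function below are my own shorthand, not Ross’s machinery:

```python
# A toy rendering of the reflexive argument (my own shorthand): with the
# underlying performative clause "I tell you ...", first- and second-person
# singular reflexives always find an antecedent to agree with; a
# third-person plural reflexive finds none, which is why *30c is out.

REFLEXIVE_FEATURES = {
    "myself": (1, "sg"), "yourself": (2, "sg"), "themselves": (3, "pl"),
}
PERFORMATIVE_ANTECEDENTS = {(1, "sg"), (2, "sg")}  # the deleted "I" and "you"

def licensed(reflexive, overt_antecedents=frozenset()):
    """A reflexive is licensed if some antecedent, overt or supplied by
    the deleted 'I tell you' clause, matches it in person and number."""
    antecedents = PERFORMATIVE_ANTECEDENTS | set(overt_antecedents)
    return REFLEXIVE_FEATURES[reflexive] in antecedents

print(licensed("myself"))      # True: licensed by the deleted "I"
print(licensed("yourself"))    # True: licensed by the deleted "you"
print(licensed("themselves"))  # False: no third-person antecedent in 30d
```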

The case for an underlying “I tell you” clause has a certain force to it, especially with the stack of arguments Ross produces, and all of the data that Ross offers are compatible with the facts illustrated by 30a–d, syntactic facts—facts about the distribution and coöccurrence of words in sentences—in the best, most rigorous Aspects tradition. But it escaped no one’s attention that Ross’s proposal increased the semantic reach and perspicacity of the Deep Structure. Ross likely wouldn’t even have noticed the facts in 30a–d, of course, or those behind all his other arguments, if he hadn’t been taking the Katz-Postal heuristic into Austinian territories.

The performative proposal was yet one more downward push of Deep Structure, moving it ever deeper, toward abstraction, yes, but also toward meaning: the defining trend of Abstract Syntax. In McCawley’s diagnosis of the situation,

Since the introduction of the notion of “Deep Structure” by Chomsky, virtually every “Deep Structure” which has been postulated (excluding those which have been demonstrated simply to be wrong) has turned out not really to be [the] Deep Structure [appropriate for the given analysis] but to have underlying it a more abstract structure which could more appropriately be called the “Deep Structure” of the sentence in question. (McCawley 1976b [1967]:105)

Deep Structures, in sum, begot deeper structures, which begot deeper structures yet. “This,” McCawley added, “raises the question of whether there is indeed such a thing as ‘Deep Structure.’ ” Lakoff and Ross were in the front row. They had an answer to that question. They put up their hands.

The Opening Salvo

We believe semantics may be generative.

—George Lakoff and Haj Ross (1976 [1967]:159)

In the spring of 1967, Ross wrote a letter to Arnold Zwicky outlining the work he and Lakoff had been doing, in telephonic collaboration with McCawley, and the conclusions the three of them had all come to about the interpenetrations of syntax and semantics in Transformational Grammar. Zwicky recalls being “very impressed” by the letter, and Ross decided to circulate it more widely. By this point Ross’s collaboration with Lakoff had become a sort of Lennon and McCartney affair, where it didn’t really matter who wrote what; both names went on the letter, the important passages were mimeographed, and copies quickly made the rounds. By the following year it was circulating through the Indiana University Linguistics Club (the central clearing house at the time for working papers, theses, and cutting-edge speculations) under the title, “Is Deep Structure Necessary?”

No, the letter answers.

The Ross/Lakoff brief against Deep Structure was sketchy at best, but the letter was very compelling, for Zwicky and for most of its secondary readers. McCawley says the letter is what turned him “from a revisionist interpretive semanticist into a generative semanticist” (1976b:159). He was not alone. Where Lakoff’s “Toward Generative Semantics” sputtered, the joint letter sparked, kindling a brush fire not so much because of the immediate force of its arguments, but because of all the rich and intriguing work amassed behind it in the name of Abstract Syntax. The letter plugged directly into a feeling that had begun to pervade Generative linguistics, that Deep Structure was just a way station. Chomsky had said, after all, that the deeper syntax got the closer it came to meaning, and the Abstract Syntacticians were getting awfully deep. Look at how many fathoms down McCawley was in the spring of 1967, just weeks before Ross licked the stamp and posted his letter to Zwicky:

On any page of a large dictionary one finds words with incredibly specific selectional restrictions, involving an apparently unlimited range of semantic properties; for example, the verb diagonalize requires as its object a noun phrase denoting a matrix (in the mathematical sense), the adjective benign in the sense “noncancerous” requires a subject denoting a tumor, and the verb devein as used in cookery requires an object denoting a shrimp or a prawn. (1976a [1967]:67)

Selectional restrictions in the Aspects model were considered syntactic, but, clearly, calling [± Tumor] or [± Prawn] features parallel to [± N] or [± Plural] destroys any traditional notion of a semantics/syntax boundary. Lakoff and Ross, in fact, use McCawley’s argument as damning evidence against Deep Structure.
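
It is easy to see what troubled them by mocking up selectional restrictions as literal feature checks. In this minimal sketch the feature names and noun entries are my own; the verbs are McCawley’s examples:

```python
# A toy rendering of Aspects-style selectional restrictions (my own
# feature bookkeeping): if [+Prawn] is a syntactic feature on a par with
# [+Plural], then "syntax" must check it in the same mechanical way.

VERB_RESTRICTIONS = {        # feature the object NP must carry
    "diagonalize": "+Matrix",
    "devein": "+Prawn",
}
NOUN_FEATURES = {
    "matrix": {"+Matrix"},
    "shrimp": {"+Prawn"},
    "theorem": set(),
}

def selectionally_ok(verb, object_noun):
    """Check the verb's selectional restriction against the object noun's
    features, exactly as if [+Prawn] were as syntactic as [+Plural]."""
    return VERB_RESTRICTIONS[verb] in NOUN_FEATURES[object_noun]

print(selectionally_ok("devein", "shrimp"))   # True
print(selectionally_ok("devein", "theorem"))  # False: *devein the theorem
```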

Once McCawley was nudged from Abstract Syntax to Generative Semantics, he began to develop the strongest arguments against Deep Structure, including a rather notorious and ingenious one pivoting on the word respectively. But his most damaging argument was not a negative argument that Deep Structure had to go. It was a positive proposal that one could get along just fine without it.

McCawley built on some of the lexical decomposition ideas in Lakoff’s thesis, his argument that “kill, die and dead could be represented as having the same lexical reading and lexical base, but different lexical extensions” (1970 [1965]:100): they would all involve the primitive definition for dead (something like not alive), but die would additionally undergo the transformation, Inchoative (bringing in become), and kill would further undergo Causative (cause), capturing rather smoothly that dead means not alive, die means become not alive, and kill means cause to become not alive.

This was not all about meaning, or even primarily about meaning. The most attractive part of this suggestion was a simple increase in scope. These transformations were necessary for the grammar anyway, Lakoff argued, to account for the range of syntactic properties in words like hard (as in 31); he just increased their workload.

[Sentences 31 not reproduced]

McCawley proposed a new rule which “includes as special cases the inchoative and causative transformations of Lakoff” (1976b [1967]:159)—more scope increases—and collects atomic predicates into a subtree to provide for lexical insertion. The new rule, Predicate-raising, was about as simple as transformations come. It simply moves a predicate up the tree and adjoins it to another predicate, as in Trees 6, 7, and 8.20

[Trees 6–8 not reproduced]

Lexical insertion could take place on any of these trees, yielding any of the synonymous sentences in 32.

[Sentences 32 not reproduced]

Moreover, the dictionary entry for kill in the grammar no longer needs the detailed semantic markerese of the Aspects model; it could be expressed simply “as a transformation which replaces [a] subtree” (McCawley 1976b [1967]:158), like the conglomeration of abstract verbs in Tree-8. The entry for kill could, for instance, simply be the relevant transformation in Figure 3.2.
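
The derivational options can be sketched end to end. In this minimal sketch the flattened predicate chain, the toy lexicon, and the crude spell-outs are all assumptions of mine; McCawley’s rule operates on trees, not lists:

```python
# A toy rendering of Predicate-raising plus lexical insertion (my own
# flattened encoding): each application of Predicate-raising adjoins the
# next predicate up into the growing complex, and lexical insertion then
# replaces the complex with a single word. Different insertion points
# yield the synonymous surface options of 32.

ATOMS = ("CAUSE", "BECOME", "NOT", "ALIVE")  # kill = cause become not alive

LEXICON = {
    ("ALIVE",): "alive",
    ("NOT", "ALIVE"): "dead",
    ("BECOME", "NOT", "ALIVE"): "die",
    ("CAUSE", "BECOME", "NOT", "ALIVE"): "kill",
}
SPELL = {"CAUSE": "cause to", "BECOME": "become", "NOT": "not"}

def surface(raisings):
    """Apply Predicate-raising `raisings` times from the bottom, insert a
    lexical item for the resulting complex predicate, and spell out any
    predicates left unraised above it."""
    split = len(ATOMS) - 1 - raisings
    word = LEXICON[ATOMS[split:]]  # lexical insertion replaces the subtree
    return " ".join([SPELL[a] for a in ATOMS[:split]] + [word])

for n in range(4):
    print(surface(n))
# cause to become not alive / cause to become dead / cause to die / kill
```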

The implications of this proposal are sweeping, and many Transformationalists found it extremely persuasive. In particular, as promised in the title of the paper in which McCawley proposed it, “Lexical Insertion in a Transformational Grammar without Deep Structure,” it showed linguists how to make do without Aspects’ biggest drawing card. With this paper, McCawley presents a “model [that] unifies a theory of syntax with one of semantics and formally says the rules which govern the distribution of forms in sentences are those which govern the distributions of meanings in words” (Vroman 1976:38). Yowza!

Lakoff and Ross’s mimeographed assault on Deep Structure was important, but it was almost exclusively corrosive. Their arguments gnawed away at Deep Structure without a solid proposal for what to do once it had been chewed up and swallowed, once it was gone. One of the criteria they cite and then denounce for Deep Structure is its role as the locus of lexical insertion, the place where the words show up in a derivation, but their dismissal is incredibly curt. “We think we can show,” they say, that “lexical items are inserted at many points of a derivation” (1976 [1967]:160): ergo, that there is no one specific location, no Deep Structure, where all lexical items enter a derivation. This bold claim is followed by a one-sentence wave in the direction that such a proof might take, and they’re on to other matters. At best, this move looked like chucking out the lexical-insertion baby with the Deep-Structure bathwater. Their apparent recklessness and lack of positive substance are the main reasons that Lakoff and Ross’s letter had virtually no impact on Postal at all. Even if he agreed that Deep Structure was compromised by Ross and Lakoff’s arguments—a conclusion, as one of Deep Structure’s architects, he could not have been eager to embrace—there was nothing left for him to grab onto.


Figure 3.2 McCawley’s lexical insertion transformations (from McCawley 1976b [1968]:158; only the one for kill is provided there, the other two are extrapolated).

When Postal says that “the fomenter of [Generative Semantics] was McCawley; I’ve always considered it to be his,” it is primarily these lexical insertion arguments he has in mind.

Postal was off at the IBM Thomas J. Watson Research Center in Yorktown Heights by this point, closer to Cambridge than McCawley, actually, but further out of the academic loop. He had stayed in contact with Lakoff and Ross, but not closely enough to find their patchy claims very compelling. His response to their arguments, in fact, was much the same as it had been to Lakoff’s solo arguments four years earlier: the case was suggestive, promising even, but too perfunctory and vague to warrant abandoning the interpretive assumptions that had grounded his research from the outset. It was the more explicit, more closely reasoned, and more positive arguments of McCawley that got him to drop Deep Structure and work on the emerging Generative Semantics framework.

With the dissolution of Deep Structure, and the four leading figures all on their semantic steeds, there is still one more point to make before we can get to the Generative Semantics model directly, a sociological and rhetorical point: the campaign against Deep Structure was in many ways a campaign against Chomsky. For all the sense of a natural outgrowth from Aspects, all the reproduction of its themes and its arguments, all the warranting drawn directly from Chomsky’s work, all the compatibility with his universalist, logical, and mentalist programs, there was also a sense that they were leaving him behind.

Ross’s letter doesn’t even mention Chomsky, or Aspects for that matter, but look at what McCawley says when he questions the existence of Deep Structure:

As an alternative to Chomsky’s conception of linguistic structure, one could propose that in each language there is simply a single system of processes which convert the semantic representation of each sentence into its surface syntactic representation and that none of the intermediate stages in the conversion of semantic representation into surface syntactic representation is entitled to any special status such as that which Chomsky ascribes to “Deep Structure.” To decide whether Chomsky’s conception of language or a conception without a level of Deep Structure is correct, it is necessary to determine at least in rough outlines what semantic representations must consist of, and on the basis of that knowledge to answer the two following questions, which are crucial for the choice between these two conceptions of language. (1) Are semantic representations objects of a fundamentally different nature than syntactic representations or can syntactic and semantic representations more fruitfully be considered to be basically objects of the same type? (2) Does the relationship between semantic representation and surface syntactic representation involve processes which are of two fundamentally different types and are organized into two separate systems, corresponding to what Chomsky called “transformations” and “semantic interpretation rules,” or is there in fact no such division of the processes which link the meaning of an utterance with its superficial form? (McCawley 1976b [1967]:105–106)

This passage, about the clearest and most succinct expression of the central issues that dominated the Generative Semantics debates, isn’t directed against “current views” or “the Aspects model” or any collective notion of a framework or theory. It’s personal, and the person is Chomsky. McCawley mentions him four times—no Katz, no Fodor, no Postal, rather central members of the team which developed Deep Structure, the lexicon, and Semantic Interpretation Rules, and which wired in all the various technical adjustments that gave those instruments their power (base recursion, abstract symbols, selectional restrictions, and so on). This argument hails Chomsky and lays out explicitly, in the language of binary choices, that there are some new kids on the block.

The Model

In syntax meaning is everything.

—Otto Jespersen (1949 [1931]:4.291)

The new kids on the block advocated the grammatical model, simple in the extreme, given as Figure 3.3.

Leaving aside the Homogeneous I label for now, it isn’t hard to see where the model came from, or why it was so appealing. The defining allegiances in the historical flow of science—call them movements, paradigms, programs, schools, call them by any of the hatful of overlapping terms of the trade—are all to conglomerations, to knots of ideas, procedures, instruments, values, and desires. There are almost always a few leading notions, a few themes head and shoulders above the pack, but it is aggregation that makes the school, and when the aggregate begins to surge as one in a single direction, the pull is, for many, irresistible. Witness William James’s enthusiasm over the manifest destiny of the philosophical school of pragmatism in 1907:

The pragmatic movement, so-called—I do not like the name, but apparently it is too late to change it—seems to have rather suddenly precipitated itself out of the air. A number of tendencies that have always existed in philosophy have all at once become conscious of themselves collectively, and of their combined mission. (1981 [1907]:3)

Precisely this teleological sense pervaded the Abstract Syntacticians in the mid-1960s, the sense that they were witnessing a company of ideas marching ineluctably toward the position that meaning and form are directly related through the iterative interplay of a small group of transformations: simplicity and generality argued for a few atomic categories; these categories coincided almost exactly with the atomic categories of symbolic logic; symbolic logic reflected the laws of thought; thought was the universal base underlying language. Meaning, everyone had felt from the beginning of the program, was the pot of gold at the end of the transformational rainbow, and Generative Semantics, if it hadn’t quite arrived there, seemed to offer the most promising map for getting to that pot. Listen to the unbridled excitement of someone we haven’t heard from for a while, someone who was there at the beginning of the transformational program, Robert Lees:

In the most recent studies of syntactic structures using [transformational] methods, an interesting, though not very surprising, fact has gradually emerged: as underlying grammatical representations are made more and more abstract, and as they are required to relate more and more diverse expressions, the Deep Structures come to resemble more and more a direct picture of the meaning of those expressions! (Lees 1970a [1967]:136)

Elsewhere, he exclaims that “the deepest syntactic structure of expressions is itself a more or less direct picture of their semantic descriptions!” (1970b [1967]:185).


Figure 3.3 The Generative Semantics model, Homogeneous I. Adapted from Postal 1971 [1969]:134.

There are a number of noteworthy features to the directionality-of-abstractness appeal—as Postal awkwardly dubbed it (1971b [1969])—all of them illustrated by Lees’s passage. First, the word fact appears prominently in such appeals; that Transformational Analysis led to semantic clarity was an undoubted phenomenon, in need of explanation the way the fact that English passives have more morphology than English actives is in need of explanation. Second, there is usually a combined expression of naturalness to the finding (as in Lees’s “not very surprising”) and enthusiasm for it (as in his exclamatory endings). The naturalness follows from the conviction shared by most Transformationalists (1) that they were on the right track, and (2) that the point of doing linguistics was to mediate form and meaning. The enthusiasm follows from the immense promise of the result.

If the Aspects model was beautiful, Generative Semantics was gorgeous. The focus of language scholars, as long as there have been language scholars, has always been to provide a link between sound and meaning. In the Aspects model, that link was Deep Structure. To the Generative Semanticists, Deep Structure no longer looked like a link. It looked like a barrier. As Ross expressed it to me, still rapturous decades after the fact, once you break down the Deep Structure wall, “you have semantic representations, which are tree-like, and you have surface structures, which are trees, and you have a fairly homogeneous set of rules which converts one set of trees into another one.” The link between sound and meaning becomes the entire homogeneous grammar. Aspects’ semantic component was grafted onto the hip of the Syntactic Structures grammar, having only limited access to a derivation, and extracting a distinct semantic representation. Generative Semantics started with that representation (“the meaning”) and the entire machinery of the theory was dedicated to encoding it into a configuration of words and phrases, ultimately into an acoustic waveform. As a huge added simplicity bonus, the Semantic Interpretation Rules could be completely discarded, their role assumed by transformations.

Chomsky’s claims to Cartesian ancestry, too, as everyone could see, only fed the appeal of Generative Semantics. Robin Lakoff hinted broadly in a review article of the Grammaire générale et raisonnée that when placed cheek-by-jowl with Generative Semantics the Aspects model looked like a very pale shadow of the Port-Royal work. She dropped terms like Abstract Syntax about the Port-Royal program, and drew attention to its notions of language and the mind as fundamentally logical, and to its use of highly abstract Deep Structures, even to a somewhat overlapping discussion of inchoatives (1969b:347–50). But she needn’t have bothered. The case is even more persuasively offered in Chomsky’s own book, Cartesian Linguistics, with only a few scattered qualifiers.

Generative Semantics was inevitable. In the tone of utter, unassailable conviction that he had brought to the Bloomfieldian rout, Postal put it this way:

Because of its a priori logical and conceptual properties, this theory of grammar [Generative Semantics, or, as the paper terms it, Homogeneous I] . . . is the basic one which generative linguists should operate from as an investigatory framework, . . . [and which] should be abandoned, if at all, only under the strongest pressures of empirical disconfirmation. In short, I suggest that the Homogeneous I framework has a rather special logical position vis-à-vis its possible competitors within the generative framework, a position which makes the choice of this theory obligatory in the absence of direct empirical disconfirmation. (1972a [1969]:135)

But, as Homogeneous I suggests, the story is far from over. Generative Semantics leaked.

The gorgeous model has some variation ahead of it—a Homogeneous II—but that will have to wait a chapter or so. Even Figure 3.3 wasn’t entirely accurate in 1972, since, as we’ve seen, there were also a few other devices beyond transformations. Ross’s and Postal’s constraints don’t change the model, because they are external to it, part of the overarching general theory that defines the model. But there was something else, which Ross called a conditions box, an additional trunk full of odd bits of tubing and pieces of cheesecloth to route and filter off the grammatical effluvia the rest of the grammar couldn’t control. As it turned out, Pandora’s box would have been a better label.

Before we look at some of the material that escaped the box, though, we should check in with Chomsky and see what his reaction was to all of these developments. It looked at the time to be a flat rejection of Generative Semantics, and it still looks to many people in retrospect to have been a flat rejection. But this is Chomsky, don’t forget. Things were not so simple.