CHAPTER 14
The organization man’s Army
In the mid-1950s, under the direction of Defense Secretary Charles Wilson, the former CEO of the General Motors Corporation, the Army was fast becoming a collection of “organization men,” to use the term of William Whyte, author of The Organization Man, one of the biggest-selling nonfiction books of that decade. Underscoring this was the unhappy departure of three prominent individualists—Ridgway, Taylor, and Gavin—who left the Army to file public dissents in the form of books.
As Whyte wrote, the organization’s managers did not welcome such rebellious types. This is how he summarized the emerging culture of corporate America in the 1950s:
“The rough-and-tumble days are over.”
“Unorthodoxy is dangerous to The Organization.”
“Ideas come from the group, not from the individual.”
“Creative leadership is a staff function.” That is, when the organization needs new thinking, the leader “hires staff people to think up the ideas.”
These characteristics were remarkably similar to those of the Army in the 1950s. In such organizations, where competition was submerged and a great emphasis was placed on cooperation, Whyte perceived a new, unsettling social atmosphere. People tended to have a friendly mien, but it was "a rather automatic, and icy, bonhomie" that in fact masked a certain distance. Corporate rotation policies that moved executives every few years deepened this impersonal conformism, encouraging them to avoid developing "a personal identity that depends upon a particular place," noted William Henry, an academic who specialized in the psychological makeup of corporate executives.
In the military, similar rotation policies would mean, among other things, being less wedded to the regiment or a similar unit and being more of a person known for getting along in all situations. It would be an Army with little place for the Terry Allens of the world but with rich opportunity for ambitious micromanagers such as William Westmoreland.
Looking at Col. Westmoreland when he led an airborne regiment in Korea in 1952, one can glimpse the looming future of the Army. “To his paratroopers, the most impressive thing about their new commander was his care for details,” wrote one of his biographers. “He wanted to check every turn in a patrol route, often suggesting alterations. . . . One of his battalion commanders characterized Westmoreland’s command: ‘He makes you feel like he’s looking over your shoulder all the time.’” Westmoreland struck Maj. Frederick Kroesen, a future four-star general, as a fine commander, if somewhat misguided by his careerism. “I remember . . . someone saying he’d court-martial his wife if he thought it would get him another star,” Kroesen recalled.
The military’s trend toward a corporatist approach was reinforced by developments in strategic thinking. During the 1950s, Thomas Schelling, an influential economist turned nuclear strategist, began to argue that, in intellectual terms, fighting a war was little different from operating in a market. “There is more than a semantic connection between price war and real war,” he wrote, with more wit than wisdom. “There is at least a touch of similarity between, say, a threat to retaliate with nuclear weapons and a threat to retaliate by calling a strike.” Schelling, who decades later would win a Nobel Prize, perceived in this similarity the opportunity to rethink military strategy:
Today’s strategy is less concerned with how to conduct a war that has already begun than with using potential military force in the conduct of foreign affairs. “Deterrence” is a strategic concept, but not a purely military one. Certain military capabilities are necessary to deter aggression; but essentially deterrence is concerned with manipulating or working on or influencing a potential enemy’s preferences, intentions, and understandings. Deterrence depends not only on what one can do in a purely military sense but on how one can display what he can do.
Nor, he continued, was this applicable only to considering how to deal with the Soviet Union. He also proposed it as a way to think about smaller engagements:
Limited war is essentially a bargaining process in which violence and the threat of violence is used, in which one tries to coerce or to deter an enemy and cause him not to pursue all of the actions of which he is currently militarily capable.
That sentence essentially captures the strategy of “gradual escalation” that President Lyndon Johnson would employ half a decade later in the Vietnam War as he sought to use increasingly heavy bombing to try to bring the North Vietnamese to the negotiating table.
The logic seemed compelling. In 1957, Robert Osgood argued in his influential work Limited War: The Challenge to American Strategy that the United States, with all its vast wealth, would prevail in any war of attrition, even against the millions of people of China. Indeed, he assured his readers, that “would be precisely the kind of war in which our superior production and economic base would give us the greatest advantage. As one writer has observed, a war of attrition is the one war China could not win.” Both these theories would be put to the test a few years later—not against the Chinese but against Vietnam’s Communists, presumably a smaller and easier case. As Osgood would write much later, “If the early 1960s saw the height of enthusiasm for limited war, the late 1960s witnessed in Vietnam the greatest blow to that enthusiasm.” Ironically, spirit and enthusiasm—what George Marshall had called “determination”—were exactly the elements that Schelling’s rationalist approach lacked.
The post–Korean War Army’s search for a mission
The post–Korean War Army was a surprisingly troubled institution. By the mid-1950s, noted William DePuy, the tough little veteran of slaughter in Normandy who was on his way to becoming a general, “the Army was feeling sorry for itself.” Coming home from a frustrating war in Korea, the Army faced difficult and unexpected problems. On Capitol Hill, the Army’s leadership was under attack by Sen. Joseph McCarthy, a boorish Wisconsin Republican who accused it of harboring Communists.
More significantly for the Army, as one service historian put it, the 1950s was "the decade of doctrinal chaos." The service came to doubt its future. The first major issue was the question of the Army's role in an era of nuclear weapons, which were proving revolutionary for the other armed services. The Air Force was rapidly expanding, opening scores of new bases in the United States and overseas and, in 1955, fielding its first genuinely intercontinental bomber, the B-52. It also was moving smartly into space, launching reconnaissance satellites. The Navy introduced its first nuclear-powered submarine, the USS Nautilus, in 1955, and later in the decade developed the Polaris A-1, a nuclear-tipped ballistic missile launched from submarines. For the first time in the nation's history, land power was no longer seen as the paramount form of military force. Rather, as one Army historian put it, ground combat had begun to seem almost quaint. Just a decade after playing a central role in the biggest war in history, the U.S. Army was reduced from twenty divisions to fourteen.
The irony of the Army’s losing its way in the 1950s is that it occurred on the watch of our last general turned president, Dwight D. Eisenhower—“a man,” as historian Adrian Lewis put it, “whose very being was so deeply associated with the U.S. Army, whose character was shaped by the institution.” Even the president’s son, John Eisenhower, himself an Army officer, told his father about the Army’s malaise, saying that as an institution, its lack of a clear mission “has left them somewhat unsatisfied and even bewildered,” according to Ike’s military aide Col. Andrew Goodpaster. “Their role is rather hazy to many of them.”
The feeling of being adrift extended to the field. Reporting to Fort Dix in 1956, Maj. John Collins found the New Jersey base had a ghost-town feel to it, with decrepit barracks and antique plumbing. His first battalion commander committed suicide. The second was an alcoholic who sowed salt into the roots of the shade trees to kill them and thus eliminate leaf raking in the fall. When Norman Schwarzkopf moved from West Point to Fort Campbell, Kentucky, in 1957, he was surprised to find many of its officers and sergeants wallowing in alcohol. "The ones who were still in the junior ranks were too often the dregs—guys who were just marking time, who had no sense of duty or honor, and who saw the world through an alcoholic haze." One of Schwarzkopf's routine tasks was to collect his company commander from the base's "rod and gun club" around six every evening, after the commander, who had left the office early, passed out from drinking. "If you didn't show up for happy hour at the officers' club on Friday afternoon, you were regarded as a weak sister," Schwarzkopf recalled. "Drinks cost a quarter, and the object was to put away as many as possible before seven o'clock." The Army of that time, he wrote decades later, was "in many ways . . . ethically and morally bankrupt."
“When I came back to Washington” in June 1955 to be Army chief of staff, Gen. Maxwell Taylor said later that year in a talk at Fort Benning, “some people told me that the Army was ‘in the doghouse,’ that it was consistently in a minority position in the important decisions taken by the Joint Chiefs of Staff, that it was a forgotten service.” He insisted that he did not share that feeling. His lack of candor in that talk is somewhat forgivable, because he was trying to rally the Army troops at Benning. What is unforgivable is Taylor’s proclivity for not telling the truth, which would haunt the country a decade later. In fact, a few years later Taylor would bitterly describe the mid-1950s as the Army’s “period of Babylonian captivity.” As one of Taylor’s aides at the time, John Cushman, would remember, “The Army [was] . . . fighting for its existence.”
As Army chief of staff, Taylor led his beleaguered service's attempt to respond to the ascendancy of the Air Force and the Navy. In 1956, he unveiled a muddled response, the "Pentomic Army," in which each Army division was re-formed into five "battle groups" that would operate in a more dispersed, semi-independent fashion. The Army hoped the change would make soldiers more likely to survive on "the atomic battlefield." The characteristic weapon of the time would be the new "Davy Crockett" tripod-mounted recoilless rifle, which lobbed a fifty-one-pound nuclear warhead, with a yield of roughly ten to twenty tons, just over a mile. Soldiers, calculating that the portable weapon's range was shorter than the radius of its lethal radiation, joked that it was a new Darwinian form of intelligence test. Nevertheless, by 1957 half the instruction at the Army's Command and General Staff College was about how to operate during nuclear warfare. In 1959, Taylor lamented, the Army's share of the Pentagon budget was 23 percent, precisely half the Air Force's 46 percent.
At the same time, another answer to the question of the Army's future was emerging. If the Air Force and the Navy were focusing on atomic war, at the high end of the spectrum of conflict, the Army could show its flexibility and move to the lower end, into the area historically occupied by the Marine Corps: small wars. Taylor lit this particular spark not long after he became Army chief of staff, in 1955. In a letter to the Army's retired generals, he stated, "We must be able to deter or win any kind of war. It is particularly important to prevent or to put out the brush fire war before it can spread into a general conflagration." In 1957, eight Army officers wrote an article for Military Review titled "Readiness for the Little War—Optimum Integrated Strategy," which argued against the Eisenhower Administration's doctrine of "massive retaliation." "Small aggressions do not warrant big bombs," they stated. Meanwhile, the Command and General Staff College began putting a new emphasis on counterinsurgency. Also in 1957, Taylor established the new "Special Warfare School" at Fort Bragg, North Carolina. Taylor's Pentomic reorganization would be discarded soon after he stepped down as Army chief, in 1959, but his emphasis on being able to respond to "brush fire wars" lived on, especially when a new administration came to power, led by a president attracted to Taylor's notion of flexible response and deeply influenced by his views. It is not overstating the case to say that the Army's doomed voyage to Vietnam grew in part out of its search for a mission in the mid-1950s.
Also contributing to the Army’s melancholy were problems with its structure and culture. By the mid-1950s, only about seven years after the personnel-law changes of 1947 were instituted and only four years after combat rotation was introduced in the Korean War, a spate of articles appeared complaining about oversupervision, or what is now called micromanagement. These critiques identified different causes but agreed that the Army was becoming increasingly bureaucratized, to the point that, as military journalist George Fielding Eliot put it, the service was at risk of losing its soul. “From corporals to colonels,” he wrote,
the men whose main job it is to train fighting soldiers and forge them into fighting units find themselves instead mere cogs in the vast machinery of the “system”; martyrs to the American devotion to the idea that the American businessman is the most efficient individual in the world and therefore all American institutions should be “run on business lines.”
A major part of the problem, Eliot added, was the constant rotation of soldiers and officers: “The noncoms who receive [new soldiers] are rotated out before they’ve gotten acquainted; their officers are being constantly changed.” Also, this was the first time in its history that the Army had been manned by a draft during peacetime, so it was dealing uncomfortably with many soldiers and officers who wore the Army uniform for just two years.
Rotation and micromanagement proved to be mutually reinforcing flaws. The more soldiers and officers moved, the less familiar they were with one another and, therefore, the more leaders tended to oversupervise, because they could not be sure of who was competent and who was not. Rotation also tended to reward abusive leaders who drove for short-term results at the long-term expense of the troops who produced them. In this sense, a star performer was not necessarily a good leader. In some units, an officer would enjoy personal advancement but, as he moved on to his next position, would leave behind a demoralized and exhausted unit. "The leader is often rewarded as a top performer in spite of being responsible for serious organizational problems," Col. Steven Jones pointed out decades later, as the problem persisted.
Others sensed a larger and growing depersonalization in the Army. Capt. Roger Little lamented in 1955 that
like the mass society in which we live, military units have become more like crowds than neighborhoods or regiments. Membership is constantly changing, with persons moving in and out, up and down, and to widely different stations. . . . They don’t really “know” one another. The regiments are like crowds, anonymous collections of people, constantly changing before their members develop common standards, and sharing few if any memories of the battle or the bivouac.
By 1957 the Army was sufficiently concerned that it surveyed students at its Command and General Staff College, at Fort Leavenworth. A full 81 percent said they believed that commanders oversupervised junior officers. Among the causes they cited were an unrelenting demand for perfection, the use of excessively detailed orders, and an overall lack of confidence in younger officers. In a follow-up letter to the Army's personnel chief, Maj. Gen. Lionel McGarr, the commandant at Leavenworth, reported a consensus among students and faculty that the way the Army was managing its officers tended "to reward caution and conformity and to penalize progressive initiative."
The Army tried to address the problem but had little success. In September 1957, Army chief Taylor sent a letter to his senior generals expressing concern about junior officers' perception that they were being micromanaged. The following year, the Army's manual on military leadership, FM 22-100, warned, "Over-supervision stifles initiative and creates resentment." Senior Army commanders also received a letter summarizing the issue. But not much was done beyond this discussion. Most significantly, there was no indication that anyone saw the problem as a structural one, arising from the way the Army managed and promoted its officers.
One reason for inaction might have been that those who rose to the top in an era of micromanagement saw nothing worrisome about the close supervision of subordinates. It was, after all, what had helped them climb the ladder. “Why do so many generals pay so much attention to details?” wrote Maj. Gen. Aubrey Newman. “That they paid attention to important small matters is one reason they were made generals.” It was a maddening, but accurate, formulation: Micromanagement was becoming part of the Army’s culture—or, as George Fielding Eliot would have put it, its soul. As Gen. Newman’s comment indicates, general officers as a class were extremely resistant to outside criticism. After all, they could always say, they were the generation that had won World War II. After defeating the Nazis, everything else was deemed less of a challenge.
By 1961, there was growing evidence that the Army was losing hold of the concept of command. Lt. Col. David Ramsey Jr. took to the pages of Military Review to argue that, despite what many of his comrades seemed to believe, “command and management are not the same thing.”
From the outside, the Army looked terrific, in part because so much effort had been put into looking good. “It can be said without exaggeration that the Army . . . has never entered a war situation as well led as it is today,” Fortune magazine would report as the Vietnam War intensified. Part of its evidence was that “all but a fraction of the serving general officers and colonels have seen action or done staff duty in one or another of the great campaigns of World War II or Korea.” But there were signs of rot inside the service. Henry Gole, who had left the Army after the Korean War and returned in 1961, was shocked by the change he saw. “Officers were doing the tasks NCOs had done in 1953,” he recalled. “There was a lot of show . . . white rocks, short hair, shiny boots, the appearance of efficiency, over-centralization, fear of risk.”
In the early 1960s, Peter Dawkins was a celebrity within the Army, a captain better known than most generals. As a youth he had overcome polio. At West Point he became the first person ever to be, at the same time, captain of the Corps of Cadets, student body president, in the top 5 percent academically, and captain of the football team. It was almost anticlimactic that he also won the Heisman Trophy in 1958, as the nation's outstanding collegiate football player, and then became a Rhodes scholar. But by 1965 he had grown unimpressed with the Army's sense of leadership. "The ideal almost seems to be the man who has done so little—who has exerted such a paltry amount of initiative and imagination—that he never has done anything wrong," he charged in an article for Infantry magazine. "There was a time when an individual wasn't considered a very attractive candidate for promotion unless he had one or two scars on his record. . . . If [a man] is to pursue a bold and vigorous path rather than one of conformity and acquiescence, he will sometimes err."
By this time, both elements of the Marshall system had begun to crumble: Generals were not selected for the qualities Marshall described, and were not relieved at the rate his model expected. Thus misapplied, the Marshall template of generalship tended to promote organization men who were far less inclined to judge the performance of their peers. They were acting less like stewards of their profession, answerable to the public, and more like keepers of a closed guild, answerable mainly to each other. Becoming a general was now akin to winning tenure as a professor: a man was liable to be removed not for professional failure but only for embarrassing his institution with a moral lapse.
Without realizing it, by ceasing to police its own generals for competence, the Army had spurred the rise of a new practice: the relief of top generals by civilians, as occurred in Korea with MacArthur and would continue in Vietnam and subsequent wars. One of the few predictors of how well a war will go is the quality of discourse between civilian and military leaders. Unfortunately, in America’s next war, it was not the benign spirit of Marshall but the malign spirit of MacArthur that would hover over presidents’ discussions with their generals.
This was the Army that would go into Vietnam—and that already was advising the Vietnamese military to take what now appears to have been entirely the wrong direction.