CHAPTER THREE


The Consolidation of Professional Authority, 1850–1930

MOST studies of social mobility follow the movement of individuals or families through the socioeconomic order. They generally take for granted the relative positions of occupations and classes, as if the structure of society remained fixed and only the fate of individuals varied. For many purposes, this is a convenient fiction. But it obscures the movements that classes and occupational groups themselves have made through social hierarchies. These instances of collective social mobility reshape the structure of society and set new terms for the realization of personal ambition. Just as behind the apparently fixed contours of a landscape lie great historic shifts and upheavals in the earth, so behind the seeming permanence of a social order lie the past struggles of classes and other groups negotiating for advantage.

The rise of medicine, and of the professions more generally, represents one of the more striking instances of collective mobility in recent history. The historical success of a profession rests fundamentally on the growth of its particular source of wealth and status—its authority. Acknowledged skills and cultural authority are to the professional classes what land and capital are to the propertied. They are the means of securing income and power. For any group, the accumulation of authority requires the resolution of at least two distinct problems. One is the internal problem of consensus; the other is the external problem of legitimacy. These are necessary but not sufficient conditions for success. Consensus facilitates the articulation of common interests and the mobilization of group effort, while respect and deference, especially from the more powerful classes, open the way to resources and legally sanctioned privileges.

A profession, as I earlier suggested, differs from other occupations in part by its ability to set its own rules and standards. But it cannot do so unless its members agree, first, on criteria for belonging to the profession and, second, on what its rules and standards ought to be. Before convincing the public and the state of the legitimacy of their claims to self-regulation, physicians had to reach some agreement among themselves. Perhaps the foremost obstacle to the collective authority of the medical profession in mid-nineteenth-century America arose within its ranks. Mutual hostility among practitioners, intense competition, differences in economic interest, and sectarian antagonisms held the medical profession in check. Internally divided, it was incapable of mobilizing its members for collective action or of winning over public opinion.

While individual practitioners enjoyed autonomy, not to say isolation, they prospered—or more likely coped—according to their own wits: The profession of medicine did not endow its members automatically with public respect. In the early nineteenth century, as we have already seen, physicians had failed to establish clear boundaries marking off members of the profession from untrained and “irregular” practitioners. Internecine hatreds were rife. When Samuel Gross, later a famous surgeon, took up practice in the town of Easton, Pennsylvania, in the early 1830s, he found the local practitioners busy with enmity. “Every man seemed to live in and for himself. Hardly any two could be found willing to meet each other in consultation. Jealousy and ill-feeling were the order of the day.”1

The failure of doctors to establish any effective authority within the profession or in the society at large profoundly affected their relationships with patients. The doctor in America was more a courtier than an autocrat. Arpad Gerster, a scholarly and perceptive young Hungarian physician arriving in New York in the 1870s, was struck by the way American practitioners treated their patients:

As I soon found [he later wrote], physicians in America were more concerned with establishing a feeling of confidence and trust, hence of comfort in patients, than were our colleagues abroad. To a great extent, this was a natural consequence of the difference between the status of the physician in the United States and in Europe. Abroad, the medical degree per se invested the physician with a social standing and authority unknown in America, where, in 1874, the meager educational requirements made it easy to secure a diploma after “two sessions of so many weeks a year.” With some exceptions, the rank and file of the profession were—as far as general education went—little, if any, above the level of their clientèle. And the clientèle not only felt this, but knew it. Hence the medical man had to be more than modest; he had to be circumspect, even deferential, in facing ignorance, absurd pretensions, and ill manners—especially where they abounded most, among a certain class of the self-made, uncultured wealthy.2

One way of looking at the changes that took place between the 1870s and the early 1900s is that the social distance between doctor and patient increased, while the distance among colleagues diminished as the profession became more cohesive and uniform. The state, which had been indifferent to physicians’ claims since the Jacksonian era, finally embraced the profession’s definition of a legitimate practitioner. All these developments reflected a movement toward the strengthening of professional status and the consolidation of professional authority.

PHYSICIANS AND SOCIAL STRUCTURE IN MID-NINETEENTH-CENTURY AMERICA

Class

Before the twentieth century, the role of doctor did not confer a clear and unequivocal class position in American society. There was considerable inequality among those who practiced medicine, perhaps as much as in the communities where they lived. Instead of locating medicine at a particular point in a hierarchy of occupations, it might be more accurate to say that the inequalities among doctors paralleled the class structure. To the wealthier families, there corresponded an elite of the medical profession; to the poor, practitioners of lower status and less training. The social position of the majority of doctors was not low, but it was insecure and ambiguous. A physician’s standing depended as much on his family background and the status of his patients as on the nature of his occupation. Education, too, was a salient though probably secondary criterion of social distinction (secondary because higher education depended on family background). The men at the top of the profession had graduated from medical schools—the most prestigious having gone to Europe for part of their training—while practitioners in the lowest ranks were often no more than autodidacts. In the middle were the rank and file, the great majority of doctors, who had served apprenticeships and perhaps taken a course of lectures or a two-term medical degree, but who had little general education. The transformation the profession eventually underwent consisted not so much in raising the status of those at the top, as in raising the middle and eliminating the bottom altogether. And achieving some uniformity within the profession helped to make the practice of medicine itself, apart from one’s family origins or clientele, a sufficient condition for high social position.

From the Jacksonian period through the end of the nineteenth century, a medical career did not carry the prestige and guaranteed security it does today. In 1832 J. Marion Sims, who would later become one of America’s foremost surgeons, returned home to his family in South Carolina after graduating from college. His mother, who had recently died, had wanted him to become a clergyman; his father hoped he would become a lawyer. Sims wanted to be neither, and felt that if he had to take up a profession, medicine would make the fewest demands on his frail talents. “If I had known this,” his father exclaimed in an outburst that might amuse parents today, “I certainly should not have sent you to college. . . . it is a profession for which I have the utmost contempt. There is no science in it. There is no honor to be achieved in it; no reputation to be made.”3 A similar story comes from S. Weir Mitchell, a leading neurologist and genteel novelist, who, as a young man, first thought of entering chemical manufacturing. His father, a physician, suggested commerce, and Mitchell might well have entered an English cousin’s trading firm had not the cousin died in a shipwreck. “After a while my father more distinctly insisted on a choice, and I at last decided to be a doctor, much to his disgust.”4

Perhaps both Sims and Mitchell, savoring the irony, embellished their fathers’ reactions, but the incidents were not implausible. Many people thought of medicine as an inferior profession, or at least as a career with inferior prospects. In 1851 a committee of the recently formed American Medical Association (AMA) reported the results of a study it had made of the careers followed by 12,400 men who had graduated between 1800 and 1850 from eight leading colleges (Amherst, Brown, Dartmouth, Hamilton, Harvard, Princeton, Union, and Yale). While 26 percent became clergymen and a similar proportion were thought to have entered law, under 8 percent became physicians. Furthermore, the proportion going into medicine was lower among students who graduated with honors than among students in general. The committee thought these figures indicative of a general distaste for medicine among the “educated talent” of the country.5 As late as 1870 a medical journal remarked that when a young man of merit and ability chose to become a doctor “the feeling among the majority of his cultivated friends is that he has thrown himself away.”6

This may exaggerate the picture. Physicians were often influential figures in their communities. The elite of the medical profession may well have had more civic importance in the earlier periods of American history than it does today. Of the first one hundred members of the Medical Society of New Jersey, organized in 1766, seventeen eventually became members of Congress or the state legislature.7 Four medical practitioners—Benjamin Rush, Josiah Bartlett, Lyman Hall, and Matthew Thornton—signed the Declaration of Independence; twenty-six other doctors were members of the Continental Congress. Historically, the number of Congressmen trained as physicians was actually highest in the earliest years of the Republic. Between 1800 and the Civil War, at least seven and usually twelve to eighteen physicians served in Congress. In the early decades of the twentieth century, the number ranged between six and ten. In recent decades, there have been at most four or five, in spite of the singularly high income and status of the medical profession in our day.8

The explanation for this decline seems relatively clear. In the earlier years, professional roles were much less specialized, and professional training was not as long and arduous as it is now. It was common for professional men, whether in law, medicine or divinity, to take on many roles. The educated were few and physicians a relatively high proportion of them. Since medicine was much less remunerative, the incentive to become politically engaged was relatively greater. Today, the demands of professional careers no longer permit the same easy interchange of roles that characterized a less industrialized and less differentiated society. The educated are more numerous and more specialized. The status of physicians has risen, but their prominence has diminished. They are less evident in politics and public affairs, which cannot easily offer them the economic rewards and security of medical practice.

The social and political fortunes of the professional elite should, in any event, not be confused with the situation of the larger body of medical practitioners. The prominence of a few notables says no more about a profession than the wealth and prestige of a handful of celebrated painters and musicians says about the general condition of artists in a society. Yet the distance between the middle and the top of a profession is itself a fact of interest, and among nineteenth-century physicians that distance was so great that doctors cannot be said to have belonged to a single social class.

Medicine then rarely offered a path to wealth. Physicians who were rich typically had inherited fortunes or made money from commercial enterprises. Even at the close of a successful career, noted a writer in 1831, professional fees “hardly compare with the profits of one fortunate voyage, or the successful operations of a single day on the exchange.”9 J. Marion Sims, even after some years as a doctor, was ready to abandon the field should a good opportunity open up “because I knew that I could never make a fortune out of the practice of medicine.”10 Data from Rochester, New York, suggest the financial position of physicians in that community was actually declining in the mid-nineteenth century. In 1836, when two thirds of Rochester practitioners were property owners, their property was worth an average of $2,400, while the average value of all property per voter was $1,420. But in 1860 the proportion of property-owning physicians fell to one third, and their property averaged out at $1,500, the same as for all voters. Among 455 Rochester men who reported income of more than $1,000 in 1865, there were only eleven physicians and, among these, only four regular practitioners.11

Estimates of physicians’ incomes, while scattered and fragmentary, present a relatively consistent picture. The few physicians who counted their incomes in the thousands were clearly atypical. In 1850, in a much noted report on public health, Lemuel Shattuck wrote that the average Massachusetts practitioner had billings of about $800 and earned about $600 in income.12 By way of comparison, the budget for a working family of five, printed in the New York Daily Tribune in 1851, gave its annual expenditures as $538.44.13 This level of expenditure, however, was probably within the reach only of skilled workers; the average annual earnings of nonfarm employees for 1860 were an estimated $363.14 One economist suggests that incomes in the working class around 1860 ranged from $200 to $800, in the middle class from $800 to $5,000, and among the rich between $5,000 and $10,000.15 This would put most doctors at the lower end of the middle class. In 1861 the city physician of Chicago, then a city of 134,000, was paid an annual salary of $600.16 In 1871 a Detroit journal estimated that the average doctor earned $1,000 a year.17 (Prices that year, however, were 40 percent above 1860 levels, still showing the residual effects of Civil War inflation.) In 1888 a doctor ruefully observed, “Even with continued health and strength, a physician in this country can never possibly acquire by his toil the incomes readily made in other occupations now recognized as professions.”18 In 1901 a financial handbook for doctors put the earnings of an average city physician at $730 and those of a country doctor at $1,200.19 Another guide, in editions from 1890 to 1905, estimated the average physician’s income at between $1,000 and $1,500, and noted that every older physician knew it was impossible to get rich by practicing medicine, except in a money-making specialty.20 In 1904 the Journal of the American Medical Association observed that an average income for a doctor was about $750, though this may have been a self-serving underestimate.21 That same year, average earnings in all occupations, excluding farm labor, were $540; federal employees averaged just over $1,000; ministers, $759.22 A magazine article in 1903 commented that doctors often earned less than an “ordinary mechanic.”23 This no doubt understated their average income, but it reflected a widespread perception that doctors were not particularly well off. It seems unlikely they earned more than other professionals.

Status

Whatever a physician earned, even in the 1800s he was still a professional man, and this lent him a higher status than manual workers. Two dimensions of social ranking need to be kept separate: differences in wealth and income (objective access to scarce resources) and differences in honor, deference, and prestige (favorable or unfavorable social evaluations). The former corresponds roughly to the concept of class; the latter to that of status. Property and income should not necessarily be taken as accurate indicators of honor and prestige. The status of the medical profession, though insecure, was probably higher than its objective economic situation might suggest. This incongruity created a distinctive strain. On the one hand, physicians felt a need to maintain an image of a cultivated, respectable, learned profession; on the other, the reality was that many doctors had little education and often, when starting out in practice, could barely support themselves. In the face of financial pressures, the American physician was obliged to take on various kinds of work, like pharmacy and midwifery, that many of his European counterparts would have regarded as beneath their dignity. The village doctor was not above looking after a farmer’s livestock as well as his family. He pulled teeth, sat up all night with patients, and embalmed the dead—functions later spun off to dentists, nurses, and undertakers.

Like many people whose position in society is somewhat precarious, physicians were much concerned to maintain a front of propriety and respectability. There is perhaps no more acute testimony to the status anxieties of late nineteenth-century doctors than the popular manual for medical practice by D. W. Cathell, The Physician Himself, which ran through numerous editions beginning in 1881. Cathell gave elaborate attention to establishing a proper distance between doctors and their clients. Physicians could not allow people to get overly familiar with them. Conviviality, he warned, “has a levelling effect, and divests the physician of his proper prestige.” Appearing in public in shirtsleeves, unwashed and unkempt, was unwise because it would “show weakness, diminish your prestige, detract from your dignity, and lessen you in public esteem, by forcing on everybody the conclusion that you are, after all, but an ordinary person.”24

Manuals of personal advice generally come in two varieties: vague, uplifting, moralistic treatises filled with tedious pieties; and no-nonsense, amoral guidebooks to getting on in the world. Cathell’s manual fell into the second category, and consisted fundamentally of rules for what Erving Goffman has called “impression management.” In the interests of presenting an idealized image of the physician, it attached paramount importance to his manner and appearance. “If one is especially polished in manner and moderately well versed in medicine,” wrote Cathell, “his politeness will do him a great deal more good with the public than special acquaintance with histology, embryology, and other ultra-scientific acquirements.”25

Here as elsewhere, Cathell’s standard for judging the value of any aspect of a physician’s behavior or personality was the effect it would have on public opinion. This concern reflected the situation of the average doctor, who depended for his livelihood on public favor, rather than the judgment of his professional brethren or bureaucratic superiors. Because most physicians were independent general practitioners, doing basically the same kinds of work, they acquired business through a lay referral network, rather than from colleagues, as do specialists, or from organizational affiliations, as do physicians employed by institutions. Cathell’s physician was basically on his own, at the mercy of lay judgment and anxious to curry good will.

The result was a preoccupation with the image that physicians projected to clients rather than to colleagues. This frame of reference affected the psychology of medical work. All people, as Goffman says, are obliged not only to carry out their tasks and routines, but also to express their competence in doing so. Only in some cases, however, does the expression become more important than the activity itself. Some students so concentrate on appearing attentive, with their eyes open and pens poised, that they miss everything being said.26 This is one of the more familiar pathologies of everyday life, and it appears abundantly in Cathell’s manual. Cathell advises the physician to concern himself first with expressing his competence and only secondarily with actually being competent. “Errors of diagnosis and prognosis,” he writes in an altogether typical passage, “are ordinarily far more damaging to the physician than errors of treatment. Very few people can discover whether or not your diagnosis and treatment are correct . . . but if you say a patient will recover and he dies, or that he will die and he gets well. . . everybody will see that you are wrong. . . and they will naturally seek out some one with more experience or deeper thought.”27

For the same reason, a doctor had to be bold and prompt. Simulated spontaneity helps to dramatize social performance. “The public,” Cathell wrote, “love to see a physician appear to understand his business fully and to know things intuitively; therefore you must study and practice to be quick in diagnosis and ever ready in the treatment of the ordinary diseases and emergencies that will constitute nine-tenths of your practice.”28 This premium on quick and bold response suggests why doctors were drawn to active and “heroic” intervention, especially when medical knowledge was uncertain.

Cathell’s guide portrays physicians as facing a hostile, skeptical, and treacherous world. They had to take precautions against colleagues who might steal their patients and be on guard against “jealous midwives, ignorant doctor-women and busy neighbors,” who spread malicious rumors about physicians.29 Even patients were a threat as potential competitors. At one point, Cathell suggests various ways for physicians to conceal the contents of their prescriptions. “By employing the terms ac. phenicum for carbolic acid, secale cornutum for ergot, kalium for potassium, natrum for sodium, chinin for quinia, etc., you will debar the average patient from reading your prescriptions. . . . You can also further eclipse his wisdom by transposing the terms. . . .” There is some revealing advice about what to do with people who thought they could treat themselves:

Especially avoid giving self-sufficient people therapeutic points that they can thereafter resort to. . . . It is not your duty to cheat either yourself or other physicians out of legitimate practice by supplying this person and that one with a word-of-mouth pharmacopoeia for general use. If compelled to give a person remedies under a simple form, study to do so in such a way as not to increase his self-conceit and make him feel that he knows enough to practice self-medication and dispense with your services; use whatever strategy is necessary to prevent such persons from taking unfair advantage of your prescriptions.

The physician ought to do tests at his office, not at patients’ homes, lest they “begin to do tests for themselves—think they know more than they really do, and give you trouble.”30

Deliberate artifice is the weapon, not of a powerful profession, but of a weak one, which has no confidence in its own authority. Cathell’s guide reflects the exceptional insecurity of nineteenth-century doctors, their complete dependence on their clients, and their vulnerability to competition from laymen as well as colleagues. Uncertain of their authority, they were inclined to dissemble and cajole. “The American physician of those days,” Gerster recalled of the 1870s, “wielded less authority over his patients than did his European colleagues; he had to endure too much quizzing, and had to waste time in arguing patients into acquiescence.”31 In 1888 one physician, writing in a medical journal about the depressed state of the profession, recounted a futile attempt to explain the importance of color-blindness to the board of directors of a railroad:

They could not or would not understand or admit it. One otherwise pleasant old gentleman sank back in his arm-chair, and with almost a snarl of doubt and derision exclaimed, “Why Dr. Jeffries, I have been railroading more than forty years, now if any such thing as color-blindness existed, I must know all about it.”32

The inability to command deference was the root of the profession’s trouble. Cathell noted that physicians would very likely meet up with “many a presumptuous patient or his keen friend” who would question prescriptions and argue about treatment. “You will be often harassed and cross-examined by such self-constituted Solomons, and compelled to resort to various expedients to satisfy or foil them, and avoid collisions with their whims, insinuations and prejudices. In fact, from this cause, the good effects of mystery, hope, expectation and will-power are of late almost entirely lost to regular physicians; all special confidence is sapped. . . .”33 And here Cathell may have had a point. Diminished authority may have cost doctors therapeutic effectiveness as well as social status.

Powerlessness

The stresses and insecurities of nineteenth-century medicine were particularly acute for young physicians. Consider the contrast between a professional career in medicine today and in the 1800s. Now a medical career follows a virtually fixed course. In America, becoming a doctor means four years of liberal arts education, followed by four years of medical school, and an average of four years of supervised hospital training. Standardized national tests must be taken first to get into medical school, then to get through it, and finally to qualify as a certified specialist. The entire process, aptly called “contest mobility,” emphasizes academic competition and meritocratic achievement. It has a strong semblance of legitimacy. The students who fail generally believe it is their fault; those who do well interpret their success as the result of ability and hard work. The prolonged training imparts a strong sense of common identity as well as technical skills. The training is difficult, but the social and economic rewards are fairly certain.

The nineteenth century could hardly offer a more vivid contrast. A professional career had no fixed pattern. Whether or not a physician went to medical school and, if he did, for how long and with what general education, were all variable. Apprenticeships had no standard content. A medical education was neither as long nor as peer-oriented as it is today; organized professional socialization was minimal. Few training positions were available in hospitals, and those were not awarded competitively; social connections weighed heavily in selecting candidates. Most young physicians had to strike out on their own, gradually building up a practice. At that early point in a professional career when doctors now spend sleepless nights as overworked interns and residents, their counterparts in the nineteenth century were waiting for their first patients to show up. Often a first location might not work out because of an unfavorable reception or a surplus of local practitioners. Everything depended on the successful courting of patients. The process was difficult, and the social and economic rewards were uncertain.

For the ambitious, status competition in medicine revolved around two major contingencies: the acquisition of socially prominent patients and appointments to medical colleges, hospitals, and dispensaries. Often the two were related, as the socially prominent held positions as trustees of medical institutions and could open the necessary channels of influence. The elite of the profession, even in a fair-sized city, was ordinarily small enough so that its members knew each other. Admission to the group was not easy; the wrong ethnic background was often a categorical disqualification. Family ties could be crucial. The same surnames tend to appear in succeeding generations as the leading doctors of a city: Bigelows, Warrens, Minots, and Jacksons in Boston; or Peppers, Chapmans, and McClellans in Philadelphia.

The professional elite did not necessarily identify its interests with those of ordinary practitioners. On the contrary, its members were often scornful of ordinary practitioners’ abilities and character, and anxious to dissociate themselves from their less favored colleagues. The profession in New York City in the years after the Civil War was organized in a series of concentric circles. At the center was the small Medical and Surgical Society, whose thirty-four members held about half the consulting and attending positions at city hospitals and dispensaries. They were known, appropriately enough, as the “hospital men.” Next in exclusiveness was the New York Academy of Medicine, with 273 members; last, the county medical society, open to all regular practitioners, of whom there were about eight hundred. The elite played a role in the academy but none in the county society.34

Neither the top ranks of physicians nor the bottom had a strong interest in effective medical licensing. The less educated practitioners, who had never been to medical school, had never graduated, or held degrees of doubtful quality, feared the laws would be used to exclude them. The elite, on the other hand, stood to gain very little from their enactment. “These physicians,” John Shaw Billings pointed out, “whose positions are fairly assured, and who, as a rule, have all the practice they desire, are not usually active leaders in movements to secure medical legislation, although they passively assent to such efforts, or at least do not oppose them; and their names may sometimes be found appended to memorials urging such legislation. They are clear-headed, shrewd, ‘practical’ men, who know that their business interests are not specifically injured by quacks.”35

In England, according to W. J. Reader, the impulse for protection in the professions came, not from the highest ranks, but rather from the practitioners just beneath them. The elite was quite content with its gentlemanly, informal way of co-opting members to the royal colleges. It was the men at the edges of the elite who most wanted formal examinations and formal standards.36 This may also have been the case in America. Billings suggested that the competition of irregular and untrained practitioners was felt most strongly among “young men who have not yet acquired local fame,” who accordingly had the “more decided views about the importance of diplomas.”37 When, in 1846, after several false starts, a convention met in New York to plan a national medical association for the United States, it was composed, as its chief organizer, then only twenty-nine, later recalled, “of the younger, more active, and, perhaps, more ambitious members of the profession.” This initial session of what would become the leading organization of the profession—the American Medical Association—failed to attract many of the men who customarily took leading roles in professional affairs.38

If the AMA owed its impetus to the discontent felt by younger, less established doctors, it nonetheless had a very traditional program. It aimed primarily to raise and standardize the requirements for medical degrees. It also enacted a code of ethics that denied fraternal courtesy to “irregular” practitioners. Several immediate considerations prompted the founding of the association. The call for a convention emerged from discussions of educational reform in the New York State Medical Society, which concluded that local efforts would inevitably be frustrated. If the schools in New York raised their requirements, students would simply move elsewhere, and only the schools and their professors would suffer. Consequently, a national approach was necessary. Second, because of the repeal of licensing statutes, which had come in New York in 1844, only two years earlier, the orthodox profession could no longer look to the state for protection against what it viewed as the degradation of its standards. Instead, regular physicians would have to turn inward and rely on their own system of regulation. This was the impetus for the AMA’s adoption of a code of professional ethics, with its concern for excluding sectarian and untrained practitioners. Denied the state’s authority, the orthodox doctors were obliged to rely on their own.

Whatever the objectives of the AMA, it turned out to have little impact during its first half century. The “irregular” physicians accused it of attempting to monopolize medical practice and drive them from the field, and the AMA did have some success in keeping them out of the few medical positions in the federal government. But while monopoly was doubtless the intent of the AMA’s program, it was not the consequence. The “irregulars” thrived. The efforts of the AMA at voluntary reform of medical education failed miserably, as the schools would not comply. The AMA had scant resources at its disposal. Its membership was small, it had no permanent organization, its treasury was bare. Its authority was questioned even within the profession. The association met once a year, and then for all practical purposes disappeared. It had an amorphous system of representation, at first drawing delegates from hospitals and medical schools as well as medical societies; once elected, delegates became permanent members, so long as they paid the dues. “A purely voluntary organization,” one prominent doctor called it, “without any chartered privileges and with no authority to enforce its own edicts.”39

The association became so embroiled in political squabbles that the more scientifically minded members split off to form a separate learned society. “We want,” said Francis Delafield, the first president of the Association of American Physicians, at its first meeting in 1886, “an association in which there will be no medical politics and no medical ethics; an association in which no one will care who are the officers, and who are not. . . . We want an association composed of members, each one of whom is able to contribute something real to the common stock of knowledge and where he who reads such a contribution feels sure of a discriminating audience.”40 There was no mistaking which group Delafield intended to criticize.

The failure of physicians to generate strong collective organization reflected a deeper structural weakness in the profession. It is all too easy to assume, as some analysts do, that because doctors, or any other group, share some imputed common interest, say, in obtaining a monopoly, they will act coherently to support and defend that interest. Yet any number of factors—competing loyalties; internal conflicts; the inability of the members of a group or class to communicate with one another; the active hostility of the state, church, or other powerful institutions—can prevent the effective articulation of common interests. At a minimum, collective action requires some mechanism for inducing individuals to lay their private affairs aside and devote effort, time, and resources to the group. Paradoxically, the collective ends pursued by organizations usually are not sufficient inducement. Interest-group organizations tend to produce generalized benefits, like favorable public opinion or friendly legislation, that members of a group can enjoy regardless of whether they personally contribute to the organization’s activities. Such collective goods encourage individuals to take a “free ride” on others’ efforts. To counteract this tendency, organizations have to be able to provide some benefit or penalty, apart from the collective good, that will induce participation. Mancur Olson has called these sanctions “selective incentives.”41

Selective incentives for participation in professional organization were precisely what nineteenth-century medical societies were missing. Had licensing been placed in their hands, and had a license been essential to practice, they would have had a very powerful inducement. In depriving the medical societies of licensing powers, the states had deprived them of the power to organize and discipline their natural constituency. To be a member of a medical society helped certify the practitioner’s social status, but a diploma from a medical college could do the same. Professional organizations languished because they had no leverage over individual doctors.

The medical practitioners of the nineteenth century could get by pretty much on their own. They did not need access to hospital facilities, since very little medical care took place there. The natural inclination of physicians then was to solve their problems individually. They advertised themselves, either by their manner or in the press, or through what economists call “product differentiation” (that is, by offering a distinctive brand of medicine). The orientation of the profession, in short, was competitive rather than corporate. The forces pulling its members apart prevailed over the common interests that might have held them together.

MEDICINE’S CIVIL WAR AND RECONSTRUCTION

The Origins of Medical Sectarianism

Nothing weakened the medical profession more than the bitter feuds and divisions that plagued doctors through the late nineteenth century. Partly, the hatreds were sectarian; partly, they were personal. They were open and acrimonious, and as common in the high tiers of the profession as in the low. Philadelphia, the center of early American medicine, was a maelstrom of professional ill will. The animosity between John Morgan and William Shippen, Jr., the first two medical professors in the city, was especially notorious. It divided the country’s first medical school and then split the medical department of the Continental Army during the Revolutionary War, as the two physicians conspired against each other for control. During the yellow fever epidemic in Philadelphia in 1793, Benjamin Rush and his rivals took to the press to denounce each other’s treatment. “A Mahometan and a Jew,” Rush wrote, “might as well attempt to worship the Supreme Being in the same temple, and through the medium of the same ceremonies, as two physicians of opposite principles and practice, attempt to confer about the life of the same patient.”42

Medical colleges were a particularly rich source of fraternal hatred. Since an appointment had great value in increasing the size of a physician’s practice, there was inevitably resentment among those excluded. Often the faculty at one school could not abide the faculty of another. Even within the same school professors sometimes refused, as one noted, “to hold any communication with each other except such as their official position as teachers peremptorily demanded, in faculty meeting.”43 The history of medical schools in the nineteenth century is a tale of schisms, conspiracies, and coups, often destroying the institutions in the process. Daniel Drake, who wrote an essay on professional quarrels listing ten different causes, founded a medical college in Ohio, then helped remove some of his colleagues on the faculty, only to find himself ousted in the equivalent of a palace revolt. One of the more colorful imbroglios occurred in 1856 at the Eclectic Medical Institute of Cincinnati, where the professors and their allies among the students split into two factions over the financial management of the school and the introduction of new “concentrated” medications. One party seized control of the school building and locked out its opponents, who then massed outside the doors. “This was the declaration of war,” writes the school’s historian. “Knives, pistols, chisels, bludgeons, blunderbusses, etc. were freely displayed.” The battle was finally settled when one side brought in a six-pound cannon!44

Physicians were not exactly supposed to behave this way. Professional tradition insisted that doctors present a unified front to the public, no matter how divided they were in private. The AMA’s code of ethics, like others before it, prescribed a “peculiar reserve” toward the public in all professional disputes. The code enjoined consulting physicians to discuss cases entirely in secret and to present patients with a single opinion. If two practitioners disagreed, they were supposed either to let the decision rest with the regular attending doctor or to call in a third physician, so patients would never know there had ever been a difference of opinion in their case. Only in hopeless deadlocks were physicians to let patients make a choice.45 The etiquette of consultations called for the regular attendant to precede the consultant in entering the sick room and to follow him in leaving it, thereby allowing the outsider no time to impugn the regular doctor’s ability. Especially during a first consultation, the consultant was supposed to suggest as few changes in treatment as possible to avoid embarrassment. He was to say nothing that might jeopardize the impression of competence the regular doctor was trying to project.46

A concern for fostering professional solidarity guided the formulation of professional ethics. Under the code, physicians were to avoid any behavior that might smack of patient stealing, even in their daily activities. So, for example, they were supposed to avoid visiting any sick friends who were under the care of another doctor since that might arouse suspicions; and if they did make such a visit, they were to skirt any discussion of the treatment. Doctors were to give free care to one another and fill in for sick or traveling colleagues. When taking over a case, they were to justify, as far as honesty would permit, the previous doctor’s conduct of the case. A wealthy doctor, according to the code, was never to give free advice to the affluent, for this was to deplete the “common funds” available for support of the profession.

These rules had only mixed success. Bedside controversies among consulting physicians were apparently not unusual. It was a standing joke that no two doctors could ever agree. And the ethical code itself exacerbated divisions because it excluded sectarian physicians from professional courtesy and association.

Of all the divisions that rent the profession, sectarianism was the most virulent. According to the usual explanation, medical sects grew in the mid-nineteenth century because of the inadequacy of contemporary medicine, particularly the disastrous errors of “heroic therapy,” which emphasized bleeding, heavy doses of mercury, and other modes of treatment now believed to range from the ineffective to the lethal. No doubt the inadequacy of therapeutics was one reason why physicians disagreed. It fails to explain, however, why disagreements led to sectarian organization. The shortcomings of medical treatment in earlier epochs were grievous, but they did not always produce antagonistic sects. Practitioners who severely disagree do not necessarily read each other out of a profession. Every field of thought or work is subject to great differences of opinion, sometimes violent ones, but only under certain circumstances do those differences generate organized factions.

Sectarianism also has a rather special meaning. A sect, religious or professional, is a dissident group that sets itself apart from an established institution—a church or a profession; its members often see themselves as neglected and scorned apostles of truth. As Troeltsch and Weber point out in distinguishing a sect from a church, sects typically originate in charismatic leadership and have a fundamentally voluntary character. One joins a sect, but one may be born into a church (or graduated into a profession).47 Medical and religious sects resemble each other in their stipulation of certain definite ideas as requirements for membership, while churches and professions often accept members without closely inquiring into their beliefs. Religious sects, however, typically offer their members a complete way of life, while medical sects are more circumscribed in their concerns.

More than a qualified analogy links religious and medical sects; they often overlap. The Mormons favored Thomsonian medicine and the Millerites hydropathy. The Swedenborgians were inclined toward homeopathic medicine. And the Christian Scientists originated in concerns that were medical as well as religious. In America various religious sects still make active efforts to cure the sick, while the dominant churches are more or less reconciled to the claims of the medical profession and have abandoned healing as part of pastoral care.48

For both medicine and religion, the nineteenth century in America was a period of growing sectarianism. The society was not just pluralist, as many have described it, but “pluralizing”: It created new divisions as well as incorporating traditional ones. To attribute this tendency to the sheer tolerance and diversity of American life would be ingenuous. Religious denominations multiplied, especially among the less privileged, along the lines of class, sectional, and ethnic antagonisms, as well as differences of theology.49 Sectarianism intensified not only because American society was open, but also because it was closed. This was probably as true in medicine as in religion. The cliquishness of medical politics encouraged excluded practitioners to generate countermovements to improve their position. A sect, as one analyst points out, serves as a competing “reference-group” for its members, allowing them to seek status and prestige on more favorable terms than are available in the wider society.50 For the less educated medical practitioner, or for educated immigrant physicians denied access to hospital and medical school appointments, sectarian organization provided an avenue for asserting their claims against the dominant profession. Moreover, because of the competitive conditions of medical practice, doctors had every incentive to differentiate themselves, to make their services distinctive, and to appeal to changing currents of public sentiment.

In the second half of the nineteenth century, after the decline of Thomsonianism, the principal medical sects in America were the Eclectics and homeopaths. The Eclectics, who had absorbed most of the Thomsonian movement, were botanic doctors, though they professed, as their name suggests, to take the best from various schools. They were followers of a New Yorker named Wooster Beach, who like Samuel Thomson had mixed radical politics with herbal medicine (a combination not unknown today). Unlike the Thomsonians, the Eclectics neither denied the importance of scientific training nor hesitated to create their own schools, although they also did not hesitate to destroy them by fighting with one another. The Eclectics accepted and taught most conventional medical science, except that in the area of therapeutics they carried on a vigorous campaign against the excessive drugging and bleeding of the regular profession. All but one Eclectic college accepted women. In number, the Eclectics were the third largest group of practitioners, following the regulars and homeopaths; they probably also stood third in the social status of their adherents. The Eclectics were distinguished mainly by their adversarial stance against the regular profession, their claim to be reformers, their empiricism, and their indigenous American roots.

The homeopaths were an entirely different breed. They had a highly elaborated, abstruse philosophical doctrine, drew many of their number from German immigrant physicians, and had much of their appeal among the urban upper classes. The founder of homeopathy was Samuel Hahnemann (1755–1843), a German physician. Hahnemann and his followers saw disease fundamentally as a matter of spirit; what occurred inside the body did not follow physical laws. The homeopaths had three central doctrines. They maintained first that diseases could be cured by drugs which produced the same symptoms when given to a healthy person. This was the homeopathic “law of similars”—like cures like. Second, the effects of drugs could be heightened by administering them in minute doses. The more diluted the dose, the greater the “dynamic” effect. And third, nearly all diseases were the result of a suppressed itch, or “psora.” The rationale for homeopathic treatment was that a patient’s natural disease was somehow displaced after taking a homeopathic medicine by a weaker, but similar, artificial disease that the body could more easily overcome.51

The first homeopathic practitioner in America was a Dutch immigrant who settled in New York in 1825. Before 1840 homeopathy had only a few proponents in a few states, but it became better known in the next decade, and in 1850 a homeopathic college was founded in Cleveland. Before 1860 the majority of homeopathic practitioners had been recruited from the orthodox profession and still considered themselves regular physicians. Converts to homeopathy seem to have prospered; for many practitioners, it may have served as a route to public favor.52

Part of the appeal of homeopathy apparently lay in the kind of relationship it encouraged between doctor and patient. Homeopathic doctrine insisted that symptoms were the only perceptible aspect of disease and that they had to be learned from the uninterrupted report of the patient. Consequently, homeopathy stressed the need for sympathetic attention by the physician and individualized diagnosis and treatment of patients. (The parallel with certain schools of modern psychiatry will be obvious.) Moreover, because homeopathy called for reduced dosages, it provided an alternative to the pharmacological excesses of orthodox physicians. Homeopathic treatment was probably more pleasant than were the ministrations of the conventional doctor of the epoch. Additionally, Hahnemann’s notion that diseases could be cured by drugs producing similar symptoms led him and his followers to take an interest in experimental tests, or “provings,” of drugs on healthy subjects. Because homeopathy was simultaneously philosophical and experimental, it seemed to many people to be more rather than less scientific than orthodox medicine.53

The significance of this appeal should not be missed. No longer were the main challengers to medical orthodoxy claiming that all useful medical knowledge was simple. They, too, now accepted that medicine was legitimately complex. The two leading dissident sects both believed in scientific training; most of their curriculum was indistinguishable from that of orthodox schools. The three groups shared a wide common ground, even though they disagreed about therapeutics. Much of the public, in fact, may have been unaware of the doctrinal differences that divided them.

As homeopathy won increasing numbers of adherents in the 1850s, a countermovement against it took shape among regular physicians. The orthodox insisted that homeopaths had to be expelled from the profession not because their doctrines were wrong, but because they violated professional ethics in heaping abuse on their colleagues, basing their practice on an “exclusive” dogma, and actively proselytizing among both physicians and the lay public. A report to the Connecticut Medical Society in 1852 accused homeopathy of waging “a war of radicalism against the profession” and observed, “Very different would have been the profession’s attitude toward homeopathy if it had aimed, like other doctrines advanced by physicians, to gain a foothold among medical men alone or chiefly, instead of making its appeal to the popular favor and against the profession.”54 Whatever the justice of these and other charges, the homeopaths were forced to leave the company of the orthodox. They did not secede; the regulars threw them out. Although the AMA had not been formed primarily with homeopathy in mind, it quickly turned to the challenge. In 1855 the organization insisted that state and local societies desiring representation accept its code of ethics, including the bar from membership of doctors subscribing to an exclusive dogma, of which homeopathy was a chief example. A showdown came in the early 1870s. A committee of the AMA recommended that the Massachusetts Medical Society, which continued to harbor homeopaths among its members, lose representation until it purged itself of heretics. The Massachusetts society at first demurred, then faced a court battle over its legal right to expel dissenters, but finally removed them, to great public consternation. While the AMA did not cripple the advance of homeopathy, it did prevent regular physicians who adopted homeopathy from remaining in orthodox societies.

Between 1850 and 1880, the battle raged between the two camps, as the regulars sought to deny the homeopaths all official positions as well as all association with the profession. The avoidance of contact with homeopaths took on all the gravity of a pollution taboo. In 1878 a physician in Norwalk, Connecticut, Dr. Moses Pardee, was expelled from his local medical society under suspicion of having consulted with a homeopath—his wife, Dr. Emily Pardee. (The state society later annulled the decision for want of evidence.) A New York doctor was expelled for buying milk sugar at a homeopathic pharmacy. The Surgeon General of the United States was denounced for having taken part in the treatment of Secretary of State William Seward, the night he was stabbed and President Lincoln was shot, because Seward’s personal physician was a homeopath. (Seward survived, and the AMA graciously held back from censuring the Surgeon General for having helped save the life of the Secretary of State.) The orthodox doctors righteously refused to serve with homeopaths on hospital staffs and for thirty years were able to exclude them from municipal institutions in major cities like New York and Chicago. During the Civil War, the regulars dominated the military medical boards, and the homeopaths, in spite of congressional support, were unable to gain approval for military service.55

Despite these attacks, homeopathy enjoyed wide popularity in the two decades after the Civil War. It was especially strong in cities like Boston and New York and had the support of many prominent social figures. In 1870 Congress approved a charter for a homeopathic medical society in Washington, D.C., which, unlike the regular society, was willing to admit blacks. When Boston University formed a medical school in 1873, it asked homeopaths to form the faculty. Homeopathy fought itself to a position almost of parity with the regular profession in legal entitlements and public respectability, though not in numbers of practitioners.

While unorthodox practitioners multiplied, they were still greatly outnumbered by members of the regular profession. Between 1835 and 1860, according to Kett, sectarians represented roughly 10 percent of the total number of physicians. By 1871 they represented 13 percent (nearly 6,000 sectarians compared to 39,000 regulars) according to statistics gathered by J. M. Toner and published by the AMA. Toner, however, was unable to classify 4,800 doctors and seems to have undercounted the total size of the profession by ten to fourteen thousand. As of 1870, the sectarians operated fifteen out of seventy-five medical schools, and during the next few decades their share of the market seems to have stabilized at around one fifth. In 1880 the regulars conducted seventy-six medical schools, the homeopaths fourteen, the Eclectics eight. Ten years later, the respective totals were 106, 16, and 9. (During both decades, two schools belonged to a fourth group, the “physiomedicals,” direct descendants of the Thomsonians.) These figures suggest a fairly stable distribution of strength among the rival groups, with the irregulars at roughly 20 percent, or slightly lower.56

Conflict and Convergence

The sectarian challenge had an effect on the profession that cannot be measured in numbers. The mere existence of competing parties in medicine was a standing rebuke to the claims of orthodoxy to represent a science. As long as physicians were divided, any move by the regulars to bring back licensing or reform medical education seemed like a narrow maneuver on their part aimed at winning advantage over the dissenters. “[T]o-day our profession is regarded by the State,” one orthodox practitioner pointed out, “as only a numerically strong medical sect.”57 This judgment was not wholly undeserved. The sectarians, following Hahnemann, had dubbed the regular doctors “allopaths,” insisting that they, too, had an exclusive dogma, cure by opposites, the reverse of homeopathy. While leaders of the profession insisted that the designation was inaccurate, many regular practitioners apparently accepted it and believed it to be correct, thereby reducing their own position to nothing more than an orthodox sect.58

For most of this period, the attitude of the press, the courts, and state legislatures was predominantly agnostic. They neither believed nor disbelieved in the claims of the competing groups and tried, when possible, to avoid becoming embroiled in their partisan disputes. When the regulars attacked the divergent practitioners, the press often took the side of the persecuted and called for tolerance. This was not so much an indication that they approved of sectarian ideas as that they wanted a generous pluralism to prevail. In the aftermath of the expulsion of the Massachusetts homeopaths, The New York Times commented that while the medical society “meant to disgrace the heretical physicians . . . we have little doubt that in the minds of all intelligent persons they have only succeeded in bringing disgrace upon themselves.”59 Many saw diversity in medical practice as a counterpart to religious differences. When the orthodox sought control of medical practice, critics bridled: One could no more have boards of orthodox doctors passing on homeopaths than Protestant boards ruling on the acceptability of Catholic priests.

Slowly, the regular profession faced up to the unhappy fact that it had been fought to a draw. Public resistance to orthodox claims eventually brought concessions. An early sign of compromise came in Michigan, where the state legislature required the incorporation of homeopathy into the University of Michigan Medical School. The regulars were aghast but finally yielded. Once the school added a homeopathic division in 1875, professors of the two camps taught there together. Homeopathic students took the same basic courses as their classmates, except for materia medica (pharmacology) and the practice of medicine, which were given separately. The arrangement put orthodox faculty members in the awkward position of teaching future homeopaths in basic science courses, but the AMA declined to expel them on that account.60 Acting through the state legislature, the public was forcing the regulars and homeopaths to come to terms with each other.

The movement toward convergence and compromise now came from both sides. By the late 1870s and early 1880s, there was growing restlessness among many regular physicians who wanted to work out some modus vivendi with the sectarians. It soon became apparent that wherever sectarians practiced, even in relatively small numbers, the regular physicians were unable to obtain licensing legislation over their opposition. Moreover, specialists, especially in the larger cities, were increasingly unhappy with the professional restriction on consultations. Referrals from homeopathic and Eclectic general practitioners represented a potentially significant source of income for them. At the same time, many homeopathic practitioners began moving toward an accommodation with orthodox medicine. While the purists among them held to Hahnemann’s faith in extreme dilutions of drugs and believed in treating symptoms individually, the moderates, who were the dominant group, accepted concentrated medicines and thought in terms of treating diseases, as did allopathic physicians. The moderates also rejected as unscientific Hahnemann’s belief in a “vital force.” In 1880, as the internal disputes among homeopaths intensified, their national organization, the American Institute of Homeopathy, split as the purists left to form their own International Hahnemannian Association.61

An analogous division then opened up in the regular profession between doctors who wanted an accommodation with the homeopaths and others who wanted to maintain the AMA’s hard line against them. Disenchantment with the AMA’s position first surfaced in New York, where the state medical society voted in 1882 to abolish the clause in the code of ethics that forbade consultations with sectarians. The leaders of the revolt were among the most eminent New York physicians; many of them were specialists with important hospital connections. They argued that the AMA’s strategy against the sectarians had failed. Rather than stigmatizing them, the denial of fraternal courtesies had only made the profession seem narrow-minded and monopolistic. In any event, the code was continually being violated: Consultations between regulars and homeopaths were increasingly common and the code was only selectively enforced. Doctrinal differences, furthermore, were diminishing as homeopaths abandoned Hahnemann’s more extreme ideas. “We all know that the majority of sectarian physicians of the present day have a regular medical education, and avail themselves ‘of the accumulated experience of the profession, and of the aids actually furnished by anatomy, physiology, pathology and organic chemistry,’” wrote one physician, referring to the language of the AMA code.62

This was by no means a majority viewpoint among orthodox doctors. The New York society was quickly expelled from the AMA, and a newly formed New York Medical Association replaced it as the state’s representative in the national association. Even in New York most doctors seem to have opposed the state society’s decision to seek peace with homeopathy. In a survey taken at the time, support for the repeal of the code was greatest in the largest cities. The majority of doctors in New York City, Brooklyn, Albany, and Rochester favored the change, while small-town doctors generally were opposed.63

But the minority of the 1880s represented a future majority. The profession would become more urbanized, more specialized, more like the physicians who opposed the hard line against sectarians in the 1880s. The growth of specialization increased interdependence within the profession. In particular, it made specialists dependent upon homeopathic and Eclectic general practitioners for referrals. Conversely, the growth of hospitals made the sectarian general practitioners dependent upon the “hospital men” for access to the increasingly important facilities that the specialists controlled. Only in some areas could the sectarians create their own institutions. In the eighties, homeopaths gained entry to the municipal hospitals in Boston and Chicago that previously had been closed to them. The incentive to differentiate oneself was now counterbalanced by an incentive to conform and accommodate. At the same time, the development of medical science provided an increasingly firm basis for convergence. The sectarians shared most of the fundamentals of medical science with the regular profession; as scientific knowledge advanced into the area of therapeutics, their differences tended to diminish. The growth of science thus reinforced the effect of new institutional relations, laying the ground for a new professional consensus.

Licensing and Organization

Probably the most important sign of convergence between the competing groups was their common support, beginning in the 1870s and 1880s, for the restoration of medical licensing. Recognizing their inability to secure legislation on their own, many educated regular physicians accepted collaboration with sectarians to win licensing laws that would protect all of them against competition from untrained practitioners. Once united behind these measures, doctors were able to win favorable action, though the initial statutes enacted at their behest set rather minimal licensing requirements, usually no more than a medical diploma. This was acceptable to homeopaths and Eclectics because they had medical colleges under their own control. Moreover, any doctor who had been in practice for a given number of years, regardless of education, was typically allowed to continue.

It would be a mistake to take these new licensing laws as evidence that physicians were now a powerful interest group. The licensing movement of the late nineteenth century was by no means restricted to doctors or even to what are customarily regarded as professions. Plumbers, barbers, horseshoers, pharmacists, embalmers, and sundry other groups sought and were granted licensing protection. Historically, as Lawrence Friedman has pointed out, there have been two kinds of occupational licensing. Some statutes, like those requiring licenses of peddlers, have been clearly hostile in intent. They have set high, if not prohibitive, licensing fees and have been administered by local government officials. In the case of peddlers, the principal aim has been to limit competition with local tradesmen. Other licensing laws, sought by members of the regulated occupations, have set moderate fees and placed enforcement in the hands of the practitioners themselves. While hostile licensing had been common in America as far back as colonial times, “friendly” licensing developed on a large scale only in the late nineteenth century.64

The origins of this new pattern lay in the changed circumstances of the society, which put occupational licensing in a substantially new light. Increasingly, large corporations dominated the economic landscape, dwarfing independent professionals and small businessmen. Struggling to hold their own, these groups struck back under the banners of various movements. Support for antitrust legislation was one expression of their effort to survive; trade and professional organizations were another. Licensing, rather than being identified with power and privilege, as it had been in the 1830s, became part of the resistance of a threatened petite bourgeoisie.

The occupations that pursued their interests through licensing were distinguished less by their political power than by their distinctive structural position within the economy. Predominantly self-employed, most of their members worked out of small shops requiring little capital to establish. Their trades and professions were easy to enter and consequently beset by competition. Where an occupation included some members who were employers and some who were employees, as was the case among barbers, the differences in status were slight, mobility was common, and conflict was rare. Most important, none of the occupations faced any organized buyers or employers who stood to lose by the monopoly that licensing would create. They generally sold their goods and services to individuals rather than corporations. These features helped minimize coherent political opposition to licensing bills. The people whose interests would most immediately be sacrificed by licensing were relatively unorganized and unskilled competitors. And by stipulating that anyone in business at the time be able to qualify under the statutes, the laws disarmed much of the potential opposition within the trade.65

The initial medical practice acts, which required only diplomas and made exceptions for long-established practitioners, generally followed this pattern. Only gradually were the requirements stiffened and enforced. One major landmark was an 1877 law passed by Illinois, which empowered a state board of medical examiners to reject diplomas from disreputable schools. Under the law, all doctors had to register. Those with degrees from approved schools were licensed, while others had to be examined. Of 3,600 nongraduates practicing in Illinois in 1877, 1,400 were reported to have left the state within a year. Within a decade, three thousand practitioners were said to have been put out of business.66 In many states licensing developed incrementally. First, a minimal statute was enacted requiring only a diploma; then the principle was established that diplomas could be examined and candidates rejected if the school they had attended was judged inadequate; and finally, all candidates were required both to present an acceptable diploma and to pass an independent state examination. By 1901 twenty-five states and the District of Columbia were in this last category, while only two states were in the first. No jurisdiction was without a licensing statute of some sort.67

Missouri exemplifies the gradual extension of licensing control. The state passed an initial law in 1874, but it required a doctor only to register a degree from a legally chartered medical school with a county clerk. The statute had little effect. Since lax incorporation laws permitted anyone to start a school merely by applying for a charter, Missouri soon had more medical colleges than anyone could keep track of. Many were simply diploma mills. The state’s doctors were too disorganized to do anything about the situation. According to a survey by the state medical society, nearly five thousand people practiced medicine in Missouri in 1882; only about half of them were graduates of “reputable” schools. At its highest point during the previous thirty years, the medical society had a membership of only 140; this gravely weakened its claim to speak for the profession. Its resources were meager. Between 1850 and 1900, its treasury never had more than $500 in it at any one time. Most physicians had no interest in regulation, and those who did were divided, since physicians who ran substandard schools had no interest in seeing licensing requirements stiffened. The lack of unity within the profession kept it almost powerless. Finally, in 1894, the state board of health, which was nominally in charge of licensing, tried to insist that medical schools set as a prerequisite for admission a college degree or high school diploma, or an equivalent certificate. When the proprietary medical schools then began manufacturing certificates, the board announced that medical students would have to pass a state test to demonstrate their preliminary education. But the medical schools sought judicial relief, and the Supreme Court of Missouri ruled that in raising premedical requirements, the board of health had exceeded its authority. Not until 1901 was a definitive medical practice act approved empowering the board of health to act as a board of medical examiners. By then, physicians had finally united behind effective legislation, and they had the support of the Presbyterian and Methodist churches, which were alarmed at the growing popularity of Christian Science and Weltmerism, a local mind-cure cult.68

Missouri was particularly slow to regulate medicine and long remained a festering sore to the AMA, but its history illustrates a common irony. Even after the co-optation of the sectarians, the main resistance to strong licensing laws originated within the profession. “Practically the only opposition to effective medical legislation in the country,” remarked a vice president of the AMA in 1887, “comes from the profession itself,” and he was referring specifically to the numerous quick-degree proprietary colleges.69 The physicians who operated the schools had, in effect, acquired a strong interest in maintaining the profession’s weakness: They profited from turning out more graduates, even as the profession was flooded. The first licensing acts had increased the demand for diplomas, even bogus ones, thereby promoting the commercial medical colleges and diploma mills rather than putting them out of business. But as the requirements were tightened, barring the graduates of the lesser schools from practice in an increasing number of states, those schools faced extinction. Their owners opposed stricter licensing in self-defense.

On more ideological grounds, some liberals and populists also opposed medical licensure. Social Darwinists, following the English social theorist Herbert Spencer, thought all such regulation unwise. “Very many of the poorer classes are injured by druggists’ prescriptions and quack medicines,” Spencer willingly conceded. But there was nothing wrong in that; it was the penalty nature attached to ignorance. If the poor died of their own foolishness, the species would improve. The physicians, Spencer and others warned, meant to set themselves up as a clergy. The opposition to licensing also found a voice in William James, who appeared before the Massachusetts legislature in 1898 to argue that licensing would interfere with freedom of research in medicine. James had personally tried an assortment of healers and was pursuing his own research into psychic cures. At a time when Christian Science was a great subject of controversy, he defended the right of “mind curers” to test out their new modes of therapy. “I well know,” he wrote to a friend shortly thereafter, “how my colleagues at the Medical School. . . will view me and my efforts. But if Zola and Col. Picquart can face the whole French army [in the Dreyfus case], can’t I face their disapproval? Much more easily than that of my conscience!”70

Such protests had little effect. The courts as well as the legislatures had swung behind the medical profession. The crucial test of the legitimacy of medical licensing occurred in 1888, when the issue came before the U.S. Supreme Court in the case of Dent v. West Virginia. Frank Dent, an Eclectic physician in practice for six years, had been convicted and fined under an 1882 West Virginia statute requiring a physician to hold a degree from a reputable medical college, pass an examination, or prove that he had been in practice in the state for the previous ten years. The State Board of Health deemed unacceptable Dent’s degree from the American Medical Eclectic College of Cincinnati. Delivering the Supreme Court’s unanimous opinion upholding the law, Justice Stephen Field noted that every citizen had the right to follow any lawful calling, “subject only to such restrictions as are imposed upon all persons of like age, sex and condition.” But the state could protect society by imposing conditions for the exercise of that right, as long as they were imposed on everyone and were reasonably related to the occupation in question. “Few professions,” he continued, “require more careful preparation . . . than that of medicine.” It had to deal with “all those subtle and mysterious influences upon which health and life depend” and required knowledge not only “of vegetable and mineral substances, but of the human body in all its complicated parts, and their relation to each other, as well as their influence upon the mind.” Everyone might have occasion to consult a doctor, but “comparatively few can judge of the qualifications of learning and skill which he possesses.” Reliance had to be placed on the assurance given by a license. Reasonable considerations, therefore, might prompt a state to exclude people without licenses from practicing medicine.71 In a later case, Hawker v. New York (1898), the Court extended the grounds for denying a medical license, noting that in a doctor “character is as important a qualification as knowledge.”72 At the state level, courts also supported medical practice laws. After the Supreme Court decision, there never seems to have been any serious question about the matter.

As boards of medical examiners were established, two patterns predominated. The less frequent was to set up separate boards for regular physicians, homeopaths, and Eclectics, each having the right to license physicians of its own persuasion. The more common pattern was to give representation on the same board to sectarians as well as to regulars. Sometimes statutes assigned to the various groups the right to fill a designated share of the seats; though this amounted to giving private organizations control of agencies of the state, it was upheld by the courts. Despite their historic efforts to avoid contact with sectarians, the regular physicians now found that a single integrated board worked better than multiple separate boards in controlling entry into the profession. Accordingly, they set aside their scruples about consorting with heretics and made common cause with them.

This collaboration between regular physicians and sectarians clearly violated the AMA’s code of ethics, but none of the doctors who served on joint state licensing boards suffered excommunication. The code was simply ignored. By the turn of the century, prominent leaders in the AMA conceded the code was an anachronism and were anxious to put the issue of sectarianism behind them.73 So in 1903 the AMA adopted a revised code of ethics that said little about irregular practitioners. While noting that it was inconsistent with scientific principles for physicians to designate their practice as exclusive or sectarian, the new code elided any reference to the kind of medicine doctors actually practiced. Within a few years, orthodox societies were seeking out members among sectarian physicians. In New York State the two competing regular medical organizations reunited after having been at odds for two decades over their relationships with sectarians. Homeopaths and Eclectics were admitted to the merged organization. D. W. Cathell, who had been a fierce antagonist of the sectarians, wrote in a medical journal that the new code would “have a great and far-reaching effect on our material interests; it will everywhere promote and foster professional unity; and, far above all else, by putting an end to partisan agitations it will increase the good repute of every worthy medical man in America.”74

The myth persists today that homeopaths and herbal doctors were suppressed by the dominant allopathic profession. Yet the sequence of events suggests otherwise. Both the homeopaths and Eclectics won a share in the legal privileges of the profession. Only afterward did they lose their popularity. When homeopathic and Eclectic doctors were shunned and denounced by the regular profession, they thrived. But the more they gained in access to the privileges of regular physicians, the more their numbers declined. The turn of the century was both the point of acceptance and the moment of incipient disintegration. Enrollment at Eclectic schools peaked at one thousand in 1904; by 1913 it was down to 256. In 1900 there were twenty-two homeopathic schools, but ten years later there were only twelve, fewer than in any year since 1880; by 1918 only six remained and these all ceased to be homeopathic institutions within the next several years.75 Homeopathy had one foot in modern science, the other in pre-scientific mysticism; this became an increasingly untenable position. While regular medicine was producing important and demonstrable scientific advances, homeopathy generated no new discoveries. The contrast was not lost on many in the group. They edged further away from Hahnemann; the final dissolution came of itself. The Eclectics also succumbed to quiet co-optation; they were only too glad to be welcomed into the fold.

In part, the old sects gave way to new ones. The 1890s had seen the appearance of two new groups representing almost diametrically opposed positions. One was purely mechanistic in conception; the other, purely spiritual. The first, osteopathy, was founded by a rural Missouri doctor, Andrew Still, who maintained that the human body, when sick, had to be repaired by placing its parts back in their proper relationship. “Quit your pills and learn from Osteopathy the principle that governs you,” Still declared. “Learn that you are a machine, your heart an engine, your lungs a fanning machine and a sieve, your brain with its two lobes an electric battery.” In 1891 he began teaching his principles in the town of Kirksville, Missouri; the following year he obtained a state charter and created a school. Patients flocked there by the hundreds, the school flourished, and in 1897 osteopathy won legal protection from the Missouri legislature.76

Christian Science, the second of the new sects, was born in the East, near Boston, and like homeopathy picked up adherents in urban areas among the well-to-do classes. Mary Baker Eddy, an obscure New England mystagogue who founded the group, altogether denied the reality of matter and claimed that disease, like all else, was purely a function of mind and spirit. Christian Science was, in a sense, homeopathy taken to the final dilution, the point where the world dissolved into idea. But while Mrs. Eddy thought medicine and nourishment of no use (“We have no evidence of food sustaining life, except false evidence”), she did not, as her biographer Edward Dakin points out, deny the value of money. Like osteopathy, Christian Science was run in a very business-like fashion and earned its founder a substantial fortune.77

The fate of these later sects turned out to be quite different from that of the earlier ones. While homeopathy and Eclecticism were assimilated into the medical profession, osteopathy and Christian Science remained independent and survived on their own. So, too, did chiropractic, which, like osteopathy, originated as a commercial enterprise in the Midwest in the 1890s and was based on similarly mechanistic principles. Oddly enough, the homeopaths and Eclectics no longer exist as well-organized groups precisely because they were strong at a time when regular physicians needed their political support for licensing legislation. Their price for cooperation was acceptance into the profession. Osteopathy later also became professionalized and sought integration into the medical profession. But lacking any point of leverage with physicians, it failed to gain entry.

The AMA’s gesture of accommodation toward its old adversaries, the homeopaths and Eclectics, was part of a more general effort around the turn of the century to unify and strengthen the profession. As of 1900 the AMA had only eight thousand members. The total membership of all medical societies, local and state as well as national, was approximately 33,000; another 77,000 physicians belonged to no association whatsoever. As the writer who reported these figures observed, the profession was in “wretched condition” as a political force.78 In 1901 the AMA revised its constitution, creating a new legislative body, the House of Delegates, with representatives drawn primarily from state medical societies in proportion to their membership. Previously, a hodgepodge of county and regional organizations, as well as state societies, had been entitled to send delegates to AMA conventions in the ratio of one representative for every ten members. Some local societies had more delegates than did their state associations. By the late 1800s, the number of delegates had become unmanageable; virtually anyone showing up at the AMA’s annual meeting could take part in its business, as it was impossible to check credentials. Besides being unwieldy, this arrangement gave inordinate influence to physicians who happened to live near the site of an annual convention, and cost the association authority and continuity in its decision-making. Under the new constitution, the House of Delegates had a fixed membership of 150, to be periodically reapportioned as in the U.S. House of Representatives. The AMA would be a confederation of the state medical societies, which in turn would be confederations of the county organizations in the states. The county medical societies, as the reorganization committees explained, would be “the foundation of the whole superstructure.”79 Henceforth, no doctors could be members of any higher professional association unless they first joined their own county organization. Membership at the county level would then carry with it membership in the state society and the obligation to pay state dues. The organizational structure thus neatly forced all physicians who wanted to belong to their county medical society or to the national AMA to become dues-paying members of their state association.

A rapid transformation took place at the state level. Most of the state medical societies had been no more than nominal organizations, scarcely functioning, with only a small proportion of the profession as members. Since so many vital political decisions, like the enactment of licensure laws, were then being made by state governments, this had been a serious point of weakness. Now, between 1900 and 1905, in line with the requirements of the new AMA constitution, all but three of the state and territorial medical associations were reorganized on a uniform plan. They turned previously independent county societies into local chapters, gave their members representation in statewide decision-making bodies, and assessed them for membership dues. Many of the state organizations began publishing their own monthly journals and employing paid rather than volunteer staff. The results were immediate and positive. The Michigan State Medical Association, reorganized in 1902, within two years increased its membership from 452 to 1,772 and its income from $1,615 to $4,813. The Missouri association, reorganized in 1903, saw its membership rise in one year from 258 to 1,600 and its revenues grow from $774 to $3,200. Between 1902 and 1904, membership in the Ohio Association jumped from 992 to 2,640 and in Tennessee from 386 to 1,097.80 And so it went. In a remarkably short period, physicians began to achieve the unity and coherence that had so long eluded them. From a mere eight thousand members in 1900, the AMA shot up to seventy thousand in 1910, half the physicians in the country. By 1920 membership had reached 60 percent.81 From this period dates the power of what came to be called “organized medicine.”

The gathering movement in the medical profession was by no means unique at the time. The rise of labor unions, corporations, trade associations, and trusts in the late nineteenth and early twentieth centuries points to a broader current pulsing in the society. In part, physicians were responding to the same developments that facilitated organization in other fields. As railroads and automobiles, telegraphs and telephones promoted national markets and broke down local isolation, groups of all kinds found it at once easier and more necessary to organize nationwide. (The federal structure the AMA adopted in 1901 was, in fact, copied from other national associations.) The growth of national corporations, Richard Hofstadter has suggested, radically altered the distribution of power and status in America, overshadowing local elites and engendering among the professions an acute resentment at their lost influence.82 This may overstate the power that professionals had earlier. Yet the physicians of the Progressive era do seem to show a sharpened anger and militance, some of it directed at corporate hegemony. “The members of the profession,” wrote an Ohio doctor to the AMA’s Journal in 1902, “are constantly humiliated and insulted by wealthy corporations, state, county and city officials.” Doctors, he complained, had no power compared to organized business or labor. Physicians like himself were forced to work on contract for big corporations for pitifully low fees; a local steel manufacturer had just refused to pay more than about 60 percent of his bill for emergency services. “As it is, if I do not accept the fees the company offers, the work will go to another physician and the company knows it can get plenty of doctors to do their work for whatever they are willing to pay. What the medical profession needs is a leader, to take it out of the valley of poverty and humiliation, a Mitchell, as the miners have, or a Morgan, as the trusts have.”83

Yet the replacement of a competitive orientation with a corporate consciousness required more than common interests. It required a transfer of power to the group, and this was what began to happen in medicine around 1900 with changes in its social structure. Physicians came increasingly to rely on each other’s good will for their access to patients and facilities. I have already alluded to the instrumental role of the rise of hospitals and specialization in creating greater interdependence among doctors. Physicians also depended more on their colleagues for defense against malpractice suits, which were increasing in frequency. The courts, in working out the rules of liability for medical practice in the late nineteenth century, had set as the standard of care that of the local community where a physician practiced. This limited possible expert testimony against physicians to their immediate colleagues. By adopting the “locality rule,” the courts prepared the way for granting considerable power to the local medical society, for it became almost impossible for patients to get testimony against a physician who was a member. Medical societies began to make malpractice defense a direct service. Shortly after the turn of the century, doctors in New York, Chicago, and Cleveland organized common defense funds. The Massachusetts Medical Society began handling malpractice suits in 1908. During the next ten years, it supported accused physicians in all but three of the ninety-four cases it received. Only twelve of these ninety-one cases went to trial, all save one resulting in a victory for the doctor. For its first twenty years, the defense fund of the medical society of the state of Washington won every case it fought. Because of their ability to protect their members, medical societies were able to get low insurance rates, while doctors who did not belong could scarcely get any insurance protection.84 This provided the sort of “selective incentive” that medical societies needed to help them attract members. Professional ostracism carried increasingly serious consequences: denial of hospital privileges, loss of referrals, loss of malpractice insurance, and, in extreme cases, loss of a license to practice. The local medical fraternity became the arbiter of a doctor’s position and fortune, and he could no longer choose to ignore it. By making the county societies the gatekeepers to membership in any higher professional group, the AMA had recognized and strengthened the position of the local fraternity, as well as bolstering its own organizational underpinnings.

Yet the AMA still had to address the problem that originally motivated its formation, control of medical education. The key source of physicians’ economic distress in 1900 remained the continuing oversupply of doctors, now made much worse by the increased productivity of physicians as a result of what I referred to in the previous chapter as the squeezing of lost time from the professional working day. The enactment of licensing laws had not cut down the production of doctors: It had only changed its character, promoting the expansion of medical colleges. Toward the end of the nineteenth century, the proliferation of medical schools had accelerated. Between 1850 and 1870, the number had grown from 52 to 75; ten years later, it jumped to 100, in another decade to 133, and by 1900 to 160. This was reflected in a great increase in students, who more than doubled from 11,826 in 1880 to 25,171 twenty years later. While the population of the United States grew 138 percent between 1870 and 1910, the number of physicians increased 153 percent.85 The immediate beneficiaries of this expansion in supply were the proprietors and professors of the schools, who garnered income and prestige from their positions. But by producing more doctors, the medical colleges exacerbated the competitive relations among physicians. The weakness of the profession was feeding on itself; ultimately, help had to come from outside. The profession could not get off the treadmill it was on until other institutions intervened. That process had already begun in the universities, where educational reformers had a concurrent agenda of their own.

MEDICAL EDUCATION AND THE RESTORATION OF OCCUPATIONAL CONTROL

Reform from Above

Reform of medical education began around 1870 as part of the coming-of-age of American universities. The two developments are historically inseparable. They had their inception at the same institutions and were led by some of the same people, notably Presidents Charles Eliot of Harvard and Daniel Coit Gilman of Johns Hopkins. Before the Civil War, American colleges were intellectual backwaters whose poorly paid professors had little claim to original thought or research. A variety of forces combined in the wake of the war to infuse some colleges with new life and larger ambitions. Money, leadership, and ideas appeared almost simultaneously. The economy was now producing enough of a surplus to generate the capital necessary to underwrite the development of universities, and there were wealthy men—not many at first, but a few—sufficiently concerned about education to leave them with large endowments. When the Baltimore merchant Johns Hopkins died in 1873, he left $7 million to build a hospital and university—at that time, the largest single endowment in the country’s history. Meanwhile, at some established institutions an older generation of college educators passed from power. Since taking command in the 1820s and 1830s, these men had viewed education as a matter of moral and mental discipline, best inculcated by a prescribed, classical curriculum in which modern languages and modern science had little place. This traditional orientation, while not wholly abandoned, gradually lost ground among their successors, as the conviction grew that higher education ought to have practical value in fitting students for the “real” world. The colleges had long been ridiculed precisely for their irrelevance to contemporary life and work; now their trustees and presidents began to converse in the language of utility. Higher education would satisfy the needs of an expanding economy. For some, this meant greater emphasis on teaching useful skills; for others, a new departure in encouraging research and the development of scientific knowledge. The universities would become worthy of respect; professors would be relieved of petty disciplinary responsibilities, paid better, and given freer rein in their work. For a model, reformers looked to Germany, which had developed a tradition of secular learning and strong universities, and sought to make their own institutions in every way the equal of those in Europe.86

In the eyes of reform-minded American educators like Eliot and Gilman, medicine epitomized both the backward state of higher education and the degraded state of the professions in America. “The ignorance and general incompetency of the average graduate of American Medical Schools, at the time when he receives the degree which turns him loose upon the community, is something horrible to contemplate,” Eliot wrote. “The whole system of medical education in this country needs thorough reformation.”87 The deficiencies had remained the same for decades. Students came to professional schools with minimal preparation; even at the best universities, young men without high school diplomas could easily find admission to study medicine. Students followed medical courses in any order they pleased; the brief two-year program had no regular sequence. In Germany the laboratory sciences of physiology, chemistry, histology, pathological anatomy, and, somewhat later, bacteriology, were revolutionizing medicine, but American medical schools had no laboratories to speak of, let alone a tradition of original research. Didactic lectures remained the principal form of instruction. Students were supposed to learn the art of medicine through apprenticeships, but the medical faculty had no control over their preceptors, who might be completely inadequate. Educational standards were none too strict. To graduate from Harvard Medical School, students needed only to pass a majority of their examinations, no matter if they failed the rest.

When Eliot, who had been trained as a chemist, became president of Harvard in 1869, reorganization of the professional schools was a leading item on his agenda; breaking precedent, he personally presided at meetings of the medical faculty. Before 1869 Harvard Medical School had only a faint connection with the university. As in any proprietary medical college, the faculty collected fees directly from the students, paid the school’s expenses, and divided what was left among themselves. They elected a dean and conducted their own affairs. A few professors favored upgrading the curriculum and standards of admission, but a conservative majority, led by the venerable Henry Bigelow, opposed any change. Bigelow thought higher requirements might keep out a natural genius in the art of healing, and considered training in the related biological sciences useful but not essential. Medical discoveries, he believed, were never made in laboratories. How was it, Bigelow asked at one meeting, that the medical faculty had for eighty years been “managing its own affairs and doing it well,” and now, abruptly, when all was prosperous, great changes were being proposed? After a dead silence, Eliot quietly replied, “I can answer Dr. Bigelow’s question very easily; there is a new President.”88 By the fall of 1871, Eliot could report that the medical faculty had “resolved to venture upon a complete revolution of the system of medical education.” The school’s finances were placed under the control of the Harvard Corporation, the system of dividing up fees was eliminated, and the professors were given salaries. The academic year was extended from four months to nine; the length of training needed to graduate rose from two years to three. In physiology, chemistry, and pathological anatomy, laboratory work supplemented or replaced didactic lectures. Students henceforth would have to pass all their courses to graduate.89

The argument long invoked against higher standards in medical education was that they would drive students away and schools into bankruptcy. The reforms at Harvard initially did cause a sharp drop in enrollment, but the faculty held firm through a few difficult years. From a low of 170 students in 1872, attendance climbed steadily, reaching 263 by 1879; this was still beneath the pre-reform level of 330 ten years earlier, but because tuition had been increased, the school was momentarily in the black. Moreover, the quality of the students improved. The proportion with bachelor’s degrees jumped from 21 percent in the fall of 1869 to 48 percent in 1880. Writing that same year, Eliot remarked that a decade earlier medical students had been “noticeably inferior” in bearing and manner to other students at the university, but now were their equals.90

If competition had once held medical schools in check, preventing any one institution from risking reform, it now began to have the reverse effect. Rivals could not afford to fall behind. In the mid-1870s, fearing a decline in reputation, the trustees of the University of Pennsylvania decided, against the wishes of a conservative dean of the medical faculty, to follow Harvard’s lead and lengthen medical training from two to three years. Previously, in 1847, Penn had tried to extend its terms from four to six months but fell back after losing students to nearby Jefferson Medical College. This time, enrollment fell 22 percent, but as at Harvard, the changes stuck.91 Over the next decade, other leading institutions moved in the same direction. When the more advanced schools formed a national association in 1890, the new group set a minimum standard for member institutions of three years of training, six months a year, with required laboratory work in histology, chemistry, and pathology.* In the nineties, this organization—now the Association of American Medical Colleges (AAMC)—represented a little more than a third of the nation’s medical schools, but these were firmly in the ascendancy. As licensing boards began to impose more stringent requirements, the two-year medical degree faded into obscurity. By 1893 more than 96 percent of the schools required three or more years of work.92

The most radical departure from the old regime took place at Johns Hopkins University, which opened its medical school in 1893 with a four-year program and the unprecedented requirement that all entering students come with college degrees. From the outset, Johns Hopkins embodied a conception of medical education as a field of graduate study, rooted in basic science and hospital medicine, that was eventually to govern all institutions in the country. Scientific research and clinical instruction now moved to center stage. The faculty, rather than being recruited from local practitioners, as had always been the pattern in America, were accomplished men of research, wooed from outside Baltimore. Students were also drawn from a distance and carefully chosen; they spent their first two years studying basic laboratory sciences and their last two on the wards, personally responsible for a few patients under the watchful eyes of the faculty. A hospital was built in connection with the school, and the two were conducted as a joint enterprise. Advanced residencies in specialized fields were created. (It was at Hopkins that the term “residency” was first used to describe advanced specialty training following an internship.) Here were the glimmerings of the great university-dominated medical centers of the next century.93

The significance of Johns Hopkins Medical School lay in the new relationships it established. It joined science and research ever more firmly to clinical hospital practice. While apprentices had learned the craft of medicine in their preceptor’s office and the patient’s home, now doctors in training would see medical practice almost entirely on the wards of teaching hospitals. Hopkins also stood for a new synthesis of medicine and the larger culture—a union vividly represented by the two major figures at the school, William Welch and William Osler. Welch, who had done important work in pathology as a young man, and Osler, the great clinician, were both dedicated to research, but they were also broadly educated and had a lively interest in the history and traditions of their profession. Though Hopkins accentuated science, it did not stand for a narrowly technical vision of medicine; this was the secret of its special éclat. It radiated cultural as well as scientific assurance, especially in the person of Osler, whose learning and urbanity made him the profession’s favorite doctor. Welch became its authoritative voice in public affairs. The influence of Johns Hopkins extended far beyond Baltimore. It sent its graduates to medical institutions all over the country and abroad, where, as professors and scientists, they took a major part in shaping the character of medical education and research in the twentieth century.94

Consolidating the System

Sharp contrasts characterized medicine by 1900. The changes in progress at Harvard, Johns Hopkins, and other universities were counterpointed by the continuing growth of commercial medical colleges. In 1850 no alternative model of medical education had existed; fifty years later, the alternative had begun to take shape but had yet to prevail. Despite the new licensing laws, the ports of entry into medicine were still wide open, and the unwelcome passed through in great numbers. At proprietary schools and some of the weaker medical departments of universities, the ranks of the profession were being recruited from workingmen and the lower-middle classes, to the dismay of professional leaders, who thought such riff-raff jeopardized efforts to raise the doctor’s status in society. From the viewpoint of established physicians, the commercial schools were undesirable on at least two counts: for the added competition they were creating and for the low image of the physician that their graduates fostered. Medicine would never be a respected profession—so its most vocal spokesmen declared—until it sloughed off its coarse and common elements.95

Among those who entered medicine in increasing numbers were women. In the second half of the nineteenth century, seventeen medical colleges for women were founded in the United States. A long struggle for admission to the elite medical schools finally brought victory in 1890. Strapped for funds, Johns Hopkins agreed to accept women into its medical school in return for a half million dollars in endowment money contributed by wealthy women. In effect, American women were forced to buy their way into elite medical education. Many of those who had fought to establish separate women’s medical colleges now thought them no longer necessary. The women’s schools began to close or merge as women gained entry to schools that trained men. By 1893–94, women represented 10 percent or more of the students at nineteen coeducational medical schools. Between 1880 and 1900, the percentage of doctors who were women increased nationally from 2.8 to 5.6 percent. In some cities the proportion of women was considerably higher: 18.2 percent of doctors in Boston, 19.3 percent in Minneapolis, 13.8 percent in San Francisco. With more than 7,000 women physicians at the turn of the century, the United States was far ahead of England, which had just 258, and France, which had only 95. The increasing numbers of women in American medicine, however, brought in their train a growing reaction from men in the field.96

After its own reorganization, the American Medical Association made reform of medical schools a top priority. Since there was no chance of intervention by the federal government, any national action would have to be undertaken by the association itself, via the state licensing boards, which its members controlled. In 1904 the AMA established a Council on Medical Education, composed of five medical professors from major universities, with a permanent secretary, a regular budget, and a mandate to elevate and standardize the requirements for medical education. As one of its first acts, the council formulated a minimum standard for physicians calling for four years of high school, an equal period of medical training, and passage of a licensing test; its “ideal” standard stipulated five years of medical school (including one year of basic sciences, later pushed into the “premedical” curriculum in college) and a sixth of hospital internship. In an effort to identify and pressure weaker institutions, the AMA council began grading medical schools according to the record of their graduates on state licensing examinations; later it extended the evaluation to include curriculum, facilities, faculty, and requirements for admission. In 1906 it inspected the 160 schools then in existence and fully approved of only 82, which it rated Class A. Class B consisted of 46 imperfect, but redeemable, institutions, while 32, beyond salvage, fell into Class C. The results of the survey were disclosed at an AMA meeting, but were never published for fear of the ill will they would create. Professional ethics forbade physicians from taking up cudgels against each other in public; it would have been unseemly for the AMA to have violated its own code. Instead, the AMA council invited an outside group, the Carnegie Foundation for the Advancement of Teaching, to conduct a similar investigation.97 The foundation agreed, and chose for the task a young educator, Abraham Flexner, who had taken a bachelor’s degree at Johns Hopkins and whose brother Simon was a protégé of William Welch and president of the Rockefeller Institute for Medical Research.

Well before Flexner’s report was published in 1910, the number of medical schools had begun to decline, dropping from a high point of 162 in 1906 to 131 four years later, a loss of almost one fifth. The turnabout came as the steadily rising requirements set by state licensing boards and other authorities gradually altered the economics of medical education for students and schools alike. The new requirements extending the length of medical training imposed increasingly large opportunity costs on prospective physicians. The academic year, time almost wholly lost for earnings, went from four to eight or nine months, and the total period of training from two years, possibly without high school, to four, then five, and eventually more than eight years beyond high school. Under the emerging system, young doctors could scarcely hope to be making a living on their own before age thirty. Higher tuition fees added to the change. The combined rise in indirect and direct costs produced a long-term decline in the number of medical students, especially evident at the many schools of the second and third rank that later became extinct.98 They could ill afford losses of enrollment. Medical schools were then facing greatly increased expenses under the new requirements for modern laboratories, libraries, and clinical facilities. No institution could defray all these costs out of its tuition charges, and since the commercial schools had no other source of income, they went under. These changing economic realities, rather than the Flexner report, were what killed so many medical schools in the years after 1906.

The proprietary medical colleges faced a dilemma. If they ignored the new standards for medical education, their diplomas would cease to be recognized by state licensing boards and students would lose any incentive to enroll. If, on the other hand, they tried to comply with the standards, they would be rewarded with fewer students and higher costs because of the more stringent preliminary requirements, longer period of training, and more expensive facilities and equipment. Only a few courses of action were available to them. One option was to seek a merger with the medical school of a private or state university, which could draw on income from endowments or state assistance. Many second-rank schools did exactly that. Another option was, quite simply, fraud—to pretend to comply with the new standards without following through and incurring the expense. Many did that too. The commercial schools that resisted merger or bankruptcy were almost inevitably forced into misrepresentation.

This was the setting for the Flexner report. Accompanied by the secretary of the AMA Council on Medical Education, Flexner visited each of the nation’s medical schools. As a representative of the Carnegie Foundation, thought to be on a scouting mission for the philanthropist, he no doubt had doors opened to him that otherwise would have been closed. To desperate deans and professors, the name Carnegie must have called up dancing visions of endowment plums. If so, the daydreams must have quickly vanished on publication of Flexner’s famous Bulletin Number Four. Though a layman, he was much more severe in his judgment of particular institutions than the AMA had been in any of its annual guides to American medical schools. The association was constrained by possible suspicion of its motives; Flexner felt no such compunctions. Repeatedly, with a deft use of detail and biting humor, he showed that the claims made by the weaker, mostly proprietary schools in their catalogues were patently false. Touted laboratories were nowhere to be found, or consisted of a few vagrant test tubes squirreled away in a cigar box; corpses reeked because of the failure to use disinfectant in the dissecting rooms. Libraries had no books; alleged faculty members were busily occupied in private practice. Purported requirements for admission were waived for anyone who would pay the fees. None of this was really new. But while the problems were ancient, they now had a different meaning. In the nineteenth century, medical schools had not needed to pretend to have all the facilities that were being demanded in 1910. (Even Harvard, after all, had no physiology laboratory before 1870.) Now many of the schools claimed to be what they clearly were not; and in doing so, they implicitly acknowledged the legitimacy of the standards that Flexner was demanding of them and made themselves more vulnerable to public exposure and embarrassment.

As Flexner saw it, a great discrepancy had opened up between medical science and medical education. While science had progressed, education had lagged behind. “Society reaps at this moment but a small fraction of the advantage which current knowledge has the power to confer.” America had some of the world’s best medical schools, but also many of the worst. Flexner’s recommendations were straightforward. The first-class schools had to be strengthened on the model of Johns Hopkins, and a few from the middle ranks had to be raised to that high standard; the remainder, the great majority of schools, ought to be extinguished. America was oversupplied with badly trained practitioners; it could do with fewer but better doctors.99 This was also the view of professional leaders, but it would be mistaken to dismiss Flexner as an agent of the AMA. He was a man of strong intellectual commitments, which guided him in a long career of educational reform. The closing of medical schools greatly enhanced the market position of private physicians, but Flexner himself had an aristocratic disdain for things commercial. And precisely because of this high-minded, unmercenary spirit, his report more successfully legitimated the profession’s interest in limiting the number of medical schools and the supply of physicians than anything the AMA might have put out on its own.

So much credit—and blame—has been awarded to Flexner for the demise of small medical colleges in the first decades of the century that it may be somewhat difficult to put his report in perspective. The schools were condemned primarily by the changes in licensing rather than by Bulletin Number Four. At most, Flexner hastened the schools to their graves and deprived them of mourners. He himself recognized the primacy of economic considerations. Nearly half of the medical schools, he reported, had an annual income below $10,000; their existence was precarious. They were unable to comply, as he wrote, “even in a perfunctory manner with statutory, not to say scientific, requirements and show a profit.”100 The schools were at the end of their tether; at that point, it was relatively easy to strangle them.

The process of consolidation in medical education moved apace in the decade after 1910. By 1915 the number of schools had fallen from 131 to 95, and the number of graduates from 5,440 to 3,536. Mergers were common among Class A and B schools; Class C schools were often disbanded for want of students. In five years, the schools requiring at least one year of college work grew from thirty-five to eighty-three, or from 27 percent of the total in 1910 to 80 percent in 1915. Licensing boards demanding college work increased from eight to eighteen. In 1912 a number of boards formed a voluntary association, the Federation of State Medical Boards, which accepted the AMA’s rating of medical schools as authoritative. The AMA Council effectively became a national accrediting agency for medical schools, as an increasing number of states adopted its judgments of unacceptable institutions. In the fall of 1914, a year of college work as a prerequisite for admission became essential for a Class A rating from the AMA; two years of college were required in 1918. By 1922 thirty-eight states were requiring two years of college in preliminary work, the number of medical schools had fallen to 81, and graduates to 2,529.101 Even though no legislative body ever set up either the Federation of State Medical Boards or the AMA Council on Medical Education, their decisions came to have the force of law. This was an extraordinary achievement for the organized profession. Only a few decades earlier, many people had believed that the decentralized character of American government precluded any effective regulation of medical education. If one state raised its requirements, students would simply gravitate to schools elsewhere. Short of federal intervention, control seemed impossible. But the medical profession had carried its effort to every state, and its success was a measure of how far it had come since the mid-1800s.

The consolidation never went as far as Flexner or the AMA wanted it to go. Bulletin Number Four recommended that the number of medical schools be reduced to thirty-one; actually, more than seventy survived. Flexner would have left about twenty states without any medical schools, but this proved politically unacceptable. Legislatures stepped in to maintain at least one institution in their state. Had the United States been as centralized in its educational system as European countries, there might well have been fewer survivors.

Whatever its influence on public opinion, the Flexner report crystallized a view that proved immensely important in guiding the major foundations’ investments in medical care over the next two crucial decades. In a sense, the report was the manifesto of a program that by 1936 guided $91 million from Rockefeller’s General Education Board (plus millions more from other foundations) to a select group of medical schools. Seven institutions received over two thirds of the funds from the General Education Board. Though the board represented itself as a purely neutral force responding to the dictates of science and the wishes of the medical schools, its staff actively sought to impose a model of medical education more closely wedded to research than to medical practice. These policies determined not so much which institutions would survive as which would dominate, how they would be run, and what ideals would prevail.102

State legislatures wanted medical schools to supply local needs for physicians, but they generally could not be persuaded to invest in research or in building national institutions. Their purposes were limited—and quite understandably so: Research in medicine is typically a “public good,” and an individual state, like a particular corporation, is unlikely to recover enough of the gains to society at large to justify the costs to itself. Hence state legislatures and private corporations will almost always rationally underinvest in basic scientific research. The situation of philanthropists, on the other hand, was entirely different. Their interest lay in legitimating their wealth and power by publicly demonstrating their good works. Medical research and education advertised their moral responsibility in ways congruent with the cultural standards of an age that increasingly revered science. As they did business on a national scale, so they did philanthropy.103

The assimilation of medical education into the universities drew academic medicine away from private practice. During the nineteenth century, the medical schools had been organizations of the dominant practitioners in a community. In the twentieth century, academic and private physicians began to diverge and represent distinctive interests and values. A pivotal step in the differentiation of the two groups was the creation of the first full-time academic positions in clinical medicine. Beginning in the 1870s, the laboratory sciences at the leading medical schools had been placed on a full-time basis, but clinical instruction had continued in the hands of physicians who also maintained private practices. This arrangement had one notable advantage for the medical schools: It held down costs. At the University of Pennsylvania in 1891, while professors in the laboratory sciences were receiving $3,000 a year, the senior clinical professors were paid only $2,000. Under the old system of dividing up student fees among the faculty, they would have taken in three or four times as much. But their incomes from private practice had risen because, as specialists, they were able to command higher fees for consultations. Clinical professorships had now become desirable almost entirely for their indirect value in augmenting private consulting practices, rather than for their direct income.104 However, the time and attention these professors diverted to their private patients disturbed those who wanted to improve clinical teaching and research. Why, Simon Flexner and others asked, should academic positions in clinical medicine require less commitment than positions in the laboratory sciences? In 1907 Dean Welch of Johns Hopkins gave his support to full-time clinical professorships; Osler, now at Oxford, dissented, warning that teacher and student might become wholly absorbed in research and neglect “those wider interests to which a great hospital must minister.” It would be “a very good thing for science, but a very bad thing for the profession.”105 But prodded by the General Education Board, some medical schools made clinical teaching full-time. Chicago, Yale, Vanderbilt, and Washington University in St. Louis restructured their clinical departments to meet the board’s condition for grants. However, the board’s insistence on full-time appointments aroused resentment, and the policy was dropped in 1925.106

As American medical education became increasingly dominated by scientists and researchers, doctors came to be trained according to the values and standards of academic specialists. Many have argued that this was a mistake. They would have preferred to see only a few schools like Johns Hopkins training scientists and specialists, while the rest, with more modest programs, turned out general practitioners to take care of the everyday ills that make up the greater part of medical work. But this was not the course that American medical education followed; the same curriculum and requirements were established for all students. The emphasis on the basic sciences initially ran counter to the inclinations of many in the profession. Bigelow’s reaction to Eliot’s reforms at Harvard in 1870 was typical of a widespread aversion to basic science among physicians. Even after 1900 the traditionalists did not give up without a fight. At schools like the University of Pennsylvania and Washington University, there were intense and occasionally bitter struggles for control between the old-line practitioners and the insurgent party of research scientists.107 The foundation-sponsored victory of the Johns Hopkins model prevented American medicine from remaining as practical in its orientation as it might otherwise have been. On the other hand, Flexner would have preferred medical education to have more of the flexibility of graduate education in the arts and sciences; he felt that the uniformity of medical education stifled creative work. In the years after his report was published, he became increasingly disenchanted with the rigidity of the educational standards that had become identified with his name.108

The Aftermath of Reform

The new system greatly increased the homogeneity and cohesiveness of the profession. The extended period of training helped to instill common values and beliefs among doctors, and the uniformity of the medical curriculum discouraged sectarian divisions. Under the old system of apprenticeships with solo practitioners, doctors acquired more idiosyncratic views of medicine and formed personal attachments with their preceptors rather than their peers. Hospital internships generated a stronger sense of shared identity among contemporaries. In 1904, when the AMA first investigated internships, it estimated that about 50 percent of physicians went on to hospital training; by 1912, 75 to 80 percent of graduates were estimated to be taking internships. The AMA published its first listing of internships in 1914, and by 1923, for the first time, there were enough openings to accommodate all graduates.109

The profession grew more uniform in its social composition. The high costs of medical education and more stringent requirements limited the entry of students from the lower and working classes. And deliberate policies of discrimination against Jews, women, and blacks promoted still greater social homogeneity. The opening of medicine to immigrants and women, which the competitive system of medical education allowed in the 1890s, was now reversed.

The influx of women into the medical profession had already begun to ebb before publication of the Flexner report. By 1909 only three women’s medical colleges still existed; the total number of women medical students, including those at coeducational schools, had dropped to 921 from 1,419 fifteen years earlier. The growing number of women doctors in the late nineteenth century may have been partly a product of Victorian concerns about the propriety of male physicians examining women’s bodies. Conversely, the fall in their number may have stemmed partly from the waning of the Victorian sensibility. In his 1910 report, Flexner thought the falling numbers reflected either a declining demand for women doctors or a waning interest among women in becoming physicians. Others, however, have since pointed to the active hostility of men in the profession. As places in medical school became more scarce, schools that previously had liberal policies toward women increasingly excluded them. Administrators justified outright discrimination against qualified women candidates on the grounds that they would not continue to practice after marriage. For the next half century after 1910, except for wartime, the schools maintained quotas limiting women to about 5 percent of medical student admissions.110

Before the Flexner report, there had been seven medical schools for blacks in the United States; only Howard and Meharry survived. Blacks also faced outright exclusion from internships and from hospital privileges at all but a few institutions. The scarcity of opportunities for training and practice had a material impact. In 1930 only one of every 3,000 black Americans was a doctor, and in the Deep South, the situation was even worse—in Mississippi, blacks had one doctor for every 14,634 persons.111

In the controversy over the reform of medical education, one objection frequently raised against eliminating the proprietary medical colleges was that they provided poor communities with doctors and poor children with an opportunity to enter medicine. Flexner denied in his report that the “poor boy” had any right to enter medicine “unless it is best for society that he should,” and he made no allowance for the inability of low-income communities to pay for the services of highly trained physicians. From a medical school in Chattanooga, Tennessee, one doctor responded, “True, our entrance requirements are not the same as those of the University of Pennsylvania or Harvard; nor do we pretend to turn out the same sort of finished product. Yet we prepare worthy, ambitious men who have striven hard with small opportunities and risen above their surroundings to become family doctors to the farmers of the south, and to the smaller towns of the mining districts.” The graduates of the larger schools, he added, could never be expected to settle in these communities. “Would you say that such people should be denied physicians? Can the wealthy who are in a minority say to the poor majority, you shall not have a doctor?”112

But that was, implicitly, what they did say.

Flexner insisted in his report that a kind of “spontaneous dispersion” would spread the graduates of the top medical schools to the four winds.113 On this matter, he proved quite wrong. Doctors gravitated strongly to the wealthier areas of the country. A 1920 study by the biostatistician Raymond Pearl showed that the distribution of physicians by region in the United States was closely correlated with per capita income. Doctors, Pearl concluded, behaved the way all “sensible people” might be expected to. “They do business where business is good and avoid places where it is bad.”114

The declining output of medical schools aggravated shortages of physicians in poor and rural areas, but regional inequalities in the availability of physicians had actually been increasing since the Civil War. Between 1870 and 1910, the poorer states lost physicians relative to population, while the wealthier states gained them. For example, in 1870 for every doctor in South Carolina there were 894 persons, compared with 712 persons per doctor in Massachusetts; by 1910, the number of people per doctor had risen to 1,170 in South Carolina and fallen in Massachusetts to 497. The disparities between cities and rural areas were also growing.115

These widening inequalities reflected the changing economic realities of medical practice I discussed in the previous chapter. Where local transportation improved, the market for medical services expanded. The development of hard roads and public transportation and the spread of telephone systems were far more rapid in the wealthier, more urban states. On the basis of such strictly ecological considerations, these areas could support a higher population of doctors. As railways and autos became common in rural areas, the village physician who formerly enjoyed a quiet local monopoly was exposed to the competition of doctors and hospitals in nearby towns and cities. The shift in distribution that began in the late nineteenth century was a response to these underlying changes in the market.

The increasing cost of medical education ensured that many small towns and rural areas would lose the services of any physician. In the twenties, articles began to appear in the popular press about the “vanishing country doctor.” A study by AMA President William Allen Pusey showed that more than a third of 910 small towns that had physicians in 1914 had been abandoned by doctors by 1925. “As you increase the cost of the license to practice medicine you increase the price at which medical service must be sold and you correspondingly decrease the number of people who can afford to buy this medical service,” wrote Pusey. He expressed particular concern about data he had collected showing that irregular practitioners were settling in the counties abandoned by physicians.116

In the twenties, even Flexner became convinced that the distribution of physicians was a more serious problem than he had originally anticipated. Through the General Education Board he supervised a study that showed a growing gap in medical service between town and city. In 1906 small towns (population 1,000 to 2,500) had 590 people per doctor, while large cities (population over 100,000) had 492. By 1923 the small towns had 910 people per doctor, the large cities 536: The small towns’ deficit had grown from about 20 to 70 percent.117 The study insisted, however, that there was still an overall surplus of doctors since many physicians were underemployed.
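To make the arithmetic behind these percentages explicit (a minimal check, which assumes the “deficit” is simply the excess of the small-town ratio of people per doctor over the large-city ratio):

$$\frac{590}{492} - 1 \approx 0.20 \quad (1906), \qquad \frac{910}{536} - 1 \approx 0.70 \quad (1923).$$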

After the turn of the century, the supply of physicians did not keep pace with the population as a whole. According to Census data, in 1900 there were 173 physicians per 100,000 population, but only 164 in 1910. (Somewhat different AMA statistics give lower figures, 157 and 146 respectively.) By 1920 the ratio of doctors to population was down to 137 per 100,000 and ten years later to 125 per 100,000, where it bottomed out for the next two decades.118

Though physicians had succeeded in controlling their own numbers, they could not prevent rival practitioners from winning legal protection and staying in business. Despite vehement medical opposition, osteopaths and chiropractors were able to obtain licensing laws in nearly every state. Even where the chiropractors were unsuccessful in gaining statutory approval, they practiced openly, sometimes in greater numbers than in states where they were licensed. At the end of the twenties, there were an estimated 36,000 sectarians in practice,119 compared to about 150,000 physicians—or about the same ratio as homeopaths and Eclectics bore to regular physicians fifty years earlier. However, the sectarian practitioners of the twentieth century were in a vastly different situation from their forerunners. Though they won licensing privileges, the new sectarians were usually unable to gain access to hospitals or the right to prescribe drugs. Unlike the homeopaths in the mid-nineteenth century, they did not represent a serious challenge to the profession. According to a survey of nine thousand families carried out over the years 1928 to 1931, all the non-M.D. practitioners combined—osteopaths, chiropractors, Christian Scientists and other faith healers, midwives, and chiropodists—took care of only 5.1 percent of all attended cases of illness.120 Physicians finally had medical practice pretty much to themselves.

THE RETREAT OF PRIVATE JUDGMENT

Authority over Medication

Medical practitioners, of whatever kind, were not the only source of treatment available on the market in the nineteenth century. The patent medicine makers, whose advertisements were ubiquitous in the popular press, also offered therapy and advice. Since nineteenth-century practitioners often prepared their own medicines, the patent medicine companies were their direct competitors. The companies, furthermore, not only sold drugs, but also distributed guides to health and invited the puzzled and the sick to write them for advice about their medical problems. From the standpoint of financial resources, they were a more formidable alternative to regular physicians than were the medical sects. The money they spent on advertising assured their propaganda wide distribution and induced many newspapers to defend them.

The nostrum makers were the nemesis of the physicians. They mimicked, distorted, derided, and undercut the authority of the profession. While they often claimed to be doctors themselves, or to operate health institutes or medical colleges, or to have the endorsement of eminent physicians, they also frequently insinuated that the profession was jealously conspiring to suppress their discoveries. The contrasts they drew were vivid. Doctors wanted to cut people up or give prolonged treatment, while their “sure cure” would instantly provide relief. Physicians charged high fees; their remedies were cheap. When new scientific ideas appeared, the patent medicine makers were quick to exploit them. In the late 1880s, an ingenious Texan, William Radam, promoted a Microbe Killer that played upon public misunderstanding of the recent discoveries of Pasteur and Koch. Consisting almost entirely of water—except for traces of red wine and hydrochloric and sulphuric acids—Microbe Killer was supposed to cure all diseases by destroying germs inside the body. By 1890 Radam had seventeen factories producing the Killer. Doctors, he explained, tried to deceive the public by doing elaborate and useless diagnoses: “Diagnosing disease is simply blindfolding the public.” Reflecting on Radam’s success, the historian James Harvey Young notes the irony that the age in which physicians could for the first time accurately explain much disease “was the very age in which patent medicines reached their apogee.”121

The patent medicine makers played upon the changing forms of discontent with physicians. Advertising for such popular remedies as Lydia Pinkham’s Vegetable Compound—introduced in 1876 for “FEMALE WEAKNESSES,” “All Weaknesses of the generative organs of either Sex,” and “all diseases of the Kidneys”—frequently appealed to fears of medical treatment, especially surgery. In 1879, as Sarah Stage reports in her history of the Pinkham company, the firm began inviting readers to “Write Mrs. Pinkham” about their medical complaints (a practice that continued even after Lydia Pinkham died in 1883). One woman suffering from a prolapsed uterus wrote, “Dr. tells me I can have the trouble removed but thought I would write and ask you if the Compound would do it before I submitted to an operation with Doctor’s tools, a thing I have not much faith in.” The company replied, “By all means avoid instrumental treatment for your trouble. Use the Compound as you have been using it—faithfully and patiently—and it will eventually work a cure. . . .” In the late 1890s the Pinkham company began increasingly to appeal to Victorian modesty to draw women away from doctors. “Do you want a strange man to hear all about your particular diseases?” asked one advertisement with the headline, “The Doctor Did No Good.” And the company promised, “Men NEVER See Your Letters.”122

From its founding, the AMA was at odds with the patent medicine business. It divided drugs into “ethical” preparations of known composition advertised only to the profession, and patent medicines of secret composition sold directly to the public. (Most “patent” medicines were actually not patented since a patent required disclosure of the formula; technically, they were “proprietary” drugs whose names were protected as trademarks.) Initially, the AMA rejected as unethical any secret formula or any private appropriation of medical knowledge or techniques, which it maintained ought to belong collectively to the profession. However, the AMA was powerless to enforce these views. In 1849 the association resolved to create a board to evaluate nostrums but proved unable to do so for lack of resources. In the late nineteenth century, proprietary drugs became more widely used, and professional concern about them intensified. Advertisements for such drugs filled the medical journals as well as the newspapers, and doctors, though often ignorant of their composition and effects, increasingly prescribed them. A survey of New York drugstores showed a steady increase in nostrums and machine-made tablets, as a proportion of physician prescriptions, from less than 1 percent as late as 1874 to 20 to 25 percent by 1902.123 In 1900 the AMA launched a campaign to make the “legitimate” proprietary remedies “respond to the ethics of medicine” by forcing their manufacturers to disclose all formulae and cease public advertising. Its Journal announced that it would stop publishing all notices of offending drugs when current advertising contracts expired. And it urged physicians not to prescribe, nor other medical journals to advertise, either secret preparations or drugs “advertised directly to the laity.”124 However, no major campaign materialized, and the drug companies continued to advertise in many medical journals, which, like newspapers, depended on them for revenues.

Between 1900 and 1910, three changes enabled the medical profession to wrest control of the flow of pharmaceutical information. First, and perhaps most important, muckraking journalists and other Progressives joined physicians in a crusade for regulation of patent medicines as part of a more general assault on deceptive business practices. Second, as a result of its growing membership, the AMA finally acquired the financial resources to create its own regulatory apparatus and to mount a major effort against the nostrum makers. And, third, the drug makers were forced to recognize that they depended increasingly on doctors to market their drugs because of the public’s increased reliance on professional opinion in decisions about medication.

Public reliance on professional opinion may have been stimulated by muckraking revelations about how dangerous many patent medicines were. Beginning about 1903, domestic magazines like the Ladies’ Home Journal continually warned women about the imprudence of self-medication. Edward Bok, the journal’s editor, pointed to drugs and syrups containing opium, cocaine, and alcohol, which unsuspecting mothers used themselves or gave to their children. “The physician’s fee of a dollar or two, which the mother seeks to save, may prove to be the costliest form of economy which she has ever practiced.”125

Probably the most famous investigations of the drug industry in American history began appearing in Collier’s Weekly in October 1905. In two series—the first on patent medicines, the second on medical quacks—the muckraking reporter Samuel Hopkins Adams explored the cynical deceptions of medicine makers and medicine men who sold dangerous and addictive drugs. Adams attacked 264 individuals and companies by name, giving detailed evidence, such as laboratory reports showing drugs were worthless and burial notices of people who gave testimonials to drug companies and then died from the diseases that were supposed to have been cured. In an article on headache powders containing the deadly drug acetanilid, Adams listed people who had taken them and died soon after, and he warned, “There is but one safeguard in the use of these remedies; to regard them as one would regard opium, and to employ them only with the consent of a physician who understands their true nature.”126 The message underlying the exposés was that commercial interests were dangerous to health and that physicians had to be trusted. In the first article of the series, Collier’s reprinted a poster from a Chicago drugstore showing two figures: a healthy workingman “before using” and a skeleton “after using” “Hoodwink’s Sarsaparilla or any other old ‘Patent Medicines.’” Below was written:

MORAL

Don’t Dose Yourself with secret Patent Medicines, Almost all of which are Frauds and Humbugs. When sick Consult a Doctor and take his Prescription: it is the only Sensible Way and you’ll find it Cheaper in the end.

The muckrakers utterly discredited the claims of the patent medicine companies to provide personal medical advice. In the Ladies’ Home Journal, a young reporter, Mark Sullivan, wrote about “How the Private Confidences of Women Are Laughed At” and “How the Game of Medical Advice Is Worked.” With devastating effect, Bok reprinted notices indicating that the patent medicine makers rented the letters of women seeking confidential advice to companies that compiled mailing lists. Next to a copy of an advertisement urging women to write Lydia Pinkham, he published a picture of her tombstone showing she had been dead twenty years.127 “The whole ‘personal medical advice’ business,” wrote Samuel Hopkins Adams, “is managed by rote, and the letter that you get ‘special to your case’ has been printed and signed before your inquiry ever reached the shark who gets your money.”128

The second part of Adams’ series, dealing with quack physicians, portrayed them as fakes and parasites on human misery, promising illusory cures for tuberculosis, cancer, and drug addiction. Some of these doctors, Adams suggested, used addictive drugs themselves. “How shall the public protect itself against quackery?” Adams asked.

Any physician who advertises a positive cure for any disease, who issues nostrum testimonials, who sells his services to a secret remedy, or who diagnoses and treats by mail patients he has never seen, is a quack. . . . Shut your eyes to the medical columns of the newspapers, and you will save yourself many forebodings and symptoms. Printer’s ink, when it spells out a doctor’s promise to cure, is one of the subtlest and most dangerous of poisons.129

Over the next five years, the AMA distributed more than 150,000 copies of “The Great American Fraud.” Adams’ series was to the proprietary drug makers and advertising doctors what the Flexner report five years later would be to the proprietary medical schools: a withering investigation of deceit by commercial interests that contributed to the consolidation of professional authority.

In 1906, on the heels of “The Great American Fraud” and Upton Sinclair’s novel The Jungle exposing adulteration in the meat-packing industry, Congress passed the Pure Food and Drug Act. The act marks the beginning of federal drug regulation, but the law affected only the most arrant fakes. It did not require the disclosure of all contents, except in the case of narcotics; it banned only statements on a drug’s label about its identity or composition that were false or misleading. This rule did not initially apply to claims about the effectiveness of drugs, nor to statements made in newspaper advertisements. After some initial caution, drug makers discovered they could resume making bold claims, even intimating that their drugs now met a federal standard of purity and effectiveness. But although the law initially amounted to little, another regulatory system was also being established at the time that would, for the next several decades, be more consequential.

In 1905, after definitively closing its Journal to patent medicine advertisements, the AMA established a Council on Pharmacy and Chemistry to set standards for drugs, evaluate them, and lead the battle against nostrums. As part of this effort, it set up a laboratory and maintained close contact with the federal Bureau of Chemistry, which tested products under the food and drug law. This was one of several new undertakings the AMA’s growing financial strength permitted. The council’s publication New and Nonofficial Remedies became widely used by medical journals in setting advertising policies and by doctors in prescribing. When one company refused to submit its products for examination, a member of the council remarked that its work would be simplified if it could “induce all the objectionable manufacturers to commit this form of suicide.”130

To have its drugs accepted, a company had to comply with the AMA council’s rules. Drugs were rejected not only when their manufacturers made false advertising claims or refused to disclose their composition; the council also would not approve any drug that was advertised directly to the public, or whose “label, package or circular” listed the diseases for which the drug was to be used. Companies thus faced a choice of markets: If they wished to advertise a drug to doctors, they could not advertise it to the public or instruct laymen in its use. For such drugs, the public would have to turn to physicians.

The AMA also institutionalized the work of the muckrakers. It set up an office to pursue fraudulent drugs and shame publishers of journals and newspapers into dropping all advertisements of patent medicines. The association denied that any distinctions could be made among patent medicines: “[T]here is no such thing as an unobjectionable ‘patent medicine’ advertisement in a newspaper,” the editor of its journal declared.131 The struggle to suppress such advertising put the profession in the position of demanding that newspapers sacrifice a lucrative source of income for the sake of public health and public respectability. It is a measure of the profession’s new authority that, despite the financial loss, many newspapers began to censor patent medicine advertisements and rule out those listed as frauds by the AMA. A few states passed laws making it illegal for newspapers to publish any advertisements for doctors. The magnitude of the AMA’s achievement was evident by 1919, when the U.S. Public Health Service sent out a circular to 20,000 periodicals and found that more than 19,000 refused to carry any advertisements for doctors.132

Neither federal regulation nor the AMA prevented proprietary drug companies from marketing drugs to the public; nor did they bar people from self-treatment. But the drug companies now labored under more rigid constraints about the claims they could make. The federal law was amended in 1912 to cover fraudulent claims of effectiveness and administratively extended in the 1920s to cover newspaper advertising as well as labels. In this period, the patent medicine makers beat a steady retreat. By 1915 the Pinkham company, for example, omitted any reference to prolapsed uterus in its advertising, and ten years later all mention of female disorders disappeared. The label now said it was “Recommended as a Vegetable Tonic in conditions for which this preparation is Adapted.” The AMA official in charge of the nostrum campaign suggested it might just as well read, “For Those Who Like This Sort of Thing, This is the Sort of Thing That Those People Like.”133 Before regulation, scientific medicine had to compete with the claims of patent medicine companies, and amid this cacophony its voice was not always audible. Drug regulation turned down the volume of patent medicine claims and allowed scientific medicine to be heard more clearly.

Recognizing that public opinion had shifted, the patent medicine companies became more deferential to the medical profession. In the 1919 edition of The People’s Common Sense Medical Advisor, Dr. R. V. Pierce, who had been one of the targets of Adams’ investigations, conceded that he was not so “presumptuous” as to claim his book could make “every man his own physician.” Urging his readers to consult a physician immediately in serious illness, he wrote, “No man can with advantage be his own lawyer, carpenter, tailor and printer; much less can he hope to artfully repair his own constitution.”134

As physicians became more authoritative, many drug companies found it wiser to address their appeals for new products to the profession. But to do so, they were obliged to comply with the AMA’s terms and withdraw advertising for those products from the public. In 1924 the AMA Council on Pharmacy and Chemistry ruled that a drug could be denied approval if a company derived much of its earnings from other products that were not in compliance with AMA guidelines.135 The council did not want to let companies play both sides of the street with different drugs. Consequently, companies had to opt entirely for one side or the other.

The AMA’s regulatory system did not merely augment the federal effort. The logic of the 1906 law was to improve the functioning of the market by making consumer information more accurate.136 The logic of the AMA’s regulatory system was to withhold information from consumers and rechannel drug purchasing through physicians. This shift meant a structural change in the market rather than simply an improvement in its functioning, and it gave physicians a larger share of the purchasing power of their patients.

The profession also extended its authority into other markets related to health. When manufacturers introduced infant food preparations in the late 1800s, they advertised widely in newspapers and magazines as well as in the medical press. The directions were simple: To prepare Nestlé’s Milk Food, introduced in the United States in 1873, a mother had only to add water. Like the patent medicine companies, the infant food producers represented an alternative to physicians in an area of decision making that doctors and reformers believed required professional rather than commercial control. “The proper authority for establishing rules for substitute feeding,” wrote a noted pediatrician in 1893, “should emanate from the medical profession, and not from nonmedical capitalists.”137

The shift to dependence on physicians in infant feeding followed the same pattern as in the use of drugs. Increasingly, the child-care literature counseled parents to consult a physician about their baby’s diet. In the 1910s manufacturers discovered that advertising exclusively to the medical profession on its terms could be a more efficient way to market their products than by attempting to reach a far more diffuse public. When Mead Johnson began selling a milk modifier called Dextri-Maltose in 1912, the company advertised it only to physicians; no directions were enclosed for the mother. The success of Dextri-Maltose and another such product, writes Rima Apple, “demonstrated to other companies that such advertising policies could result in a satisfactory compromise between the needs of the manufacturers to sell their products, and the desire of the physicians to control the distribution and use of the infant foods.”138 Introducing a new product in 1924, Nestlé advertised in the AMA’s Journal that it would be “sold only on the prescription or recommendation of a physician. No feeding instructions appear on the trade package.” Mead Johnson put the point directly in its medical advertising: “When mothers in America feed their babies by lay advice, the control of your pediatric cases passes out of your hands, Doctor.” Since Mead Johnson advertised only to doctors, it shared physicians’ interest in persuading mothers to follow professional advice.