Medicine and research play vital roles in maintaining the health of individuals and are thus fundamental to the overall functioning of societies. The division of medicine into specialties has been especially important in facilitating the diagnosis and treatment of both noncommunicable and communicable diseases. Likewise, clinical research, drug development, and the development of new diagnostic tools have greatly enhanced the ability of physicians to deal with a wide range of diseases. These developments have been further supported by investigations of drug resistance and by safety engineering and bioengineering, the latter of which encompasses the creation of devices such as hearing aids and prostheses that have significantly improved the quality of life for people with disabilities.
At the beginning of World War II it was possible to recognize a number of major medical specialties, including internal medicine, obstetrics and gynecology, pediatrics, pathology, anesthesiology, ophthalmology, surgery, orthopedic surgery, plastic surgery, psychiatry and neurology, radiology, and urology. Hematology was also an important field of study, and microbiology and biochemistry were important medically allied specialties. Since World War II, however, there has been an almost explosive increase in knowledge in the medical sciences as well as enormous advances in technology applicable to medicine. These developments have led to more and more specialization. Knowledge of pathology has been greatly extended, mainly through the use of the electron microscope; similarly, microbiology, which includes bacteriology, expanded with the growth of such other subfields as virology (the study of viruses) and mycology (the study of yeasts and fungi in medicine). Biochemistry, sometimes called clinical chemistry or chemical pathology, has contributed to the knowledge of disease, especially in the field of genetics, where genetic engineering has become central to the treatment of some of the most intractable diseases. Hematology also expanded after World War II with the development of electron microscopy. Contributions to medicine have come from such fields as psychology and sociology, especially in areas such as mental disorders and mental handicaps. Clinical pharmacology has led to the development of more effective drugs and to the identification of adverse reactions. More recently established medical specialties include preventive medicine, physical medicine and rehabilitation, family practice, and nuclear medicine. In the United States every medical specialist must be certified by a board composed of members of the specialty in which certification is sought. Some type of peer certification is required in most countries.
Expansion of knowledge both in depth and in range has encouraged the development of new forms of treatment that require a high degree of specialization, such as organ transplantation and exchange transfusion; the field of anesthesiology has grown increasingly complex as equipment and anesthetics have improved. New technologies have introduced microsurgery, laser surgery, and lens implantation (for cataract patients), all requiring the specialist's skill. Precision in diagnosis has markedly improved; advances in radiology, the use of ultrasound, computerized axial tomography (the CAT scan), and nuclear magnetic resonance imaging exemplify technological advances that demand specialized expertise within medicine.
To provide more efficient service, it is not uncommon for a specialist surgeon and a specialist physician to form a team working together in a field such as heart disease. An advantage of this arrangement is that they can attract a highly trained group of nurses, technologists, operating room technicians, and other staff, thus greatly improving the efficiency of the service to the patient. Such specialization is expensive, however, and has required an increasingly large proportion of the health budget of institutions, a situation that eventually has its financial effect on the individual citizen. The question therefore arises as to the cost-effectiveness of such specialized services. Governments of developing countries have usually found, for instance, that it is more cost-effective to provide more people with basic care.
The Industrial Revolution greatly changed, and as a rule worsened, the health hazards caused by industry, while the numbers at risk vastly increased. In the United Kingdom the first modest efforts to ameliorate the lot of workers in factories and mines began in 1802 with the passing of the first factory act, the Health and Morals of Apprentices Act. The Factory Act of 1833, however, was the first truly effective measure in the industrial field. It forbade night work for children and restricted their work hours to 12 per day. Children under 13 were required to attend school. A factory inspectorate was established, the inspectors being given powers of entry into factories and power of prosecution of recalcitrant owners. Thereafter there was a succession of acts with detailed regulations for safety and health in all industries. Industrial diseases were made notifiable, and those who developed any prescribed industrial disease were entitled to benefits.
The situation is similar in other developed countries. Physicians are bound by legal restrictions and must report industrial diseases. The industrial physician's most important function, however, is to prevent industrial diseases. Many of the measures to this end have become standard practice, but, especially in industries working with new substances, the physician should determine whether workers are being harmed and suggest preventive measures. The industrial physician may advise management about industrial hygiene and the need for safety devices and protective clothing and may become involved in building design. The physician or health worker may also inform workers of occupational health hazards.
Modern factories usually have arrangements for giving first aid in case of accidents. Depending upon the size of the plant, the facilities may range from a simple first-aid station to a large suite of lavishly equipped rooms and may include a staff of qualified nurses and physiotherapists and one or perhaps more full-time physicians.
In many societies special facilities are provided for the health care of pregnant women, mothers, and their young children. The health care needs of these three groups are generally recognized to be so closely related as to require a highly integrated service that covers prenatal care, the birth of the baby, the postnatal period, and the needs of the infant. Such a continuum should be followed by a service attentive to the needs of young children and then by a school health service. Family clinics are common in countries that have state-sponsored health services, such as the United Kingdom and other European countries. In some developed countries, such as the United States, family health care is provided for low-income groups by state-subsidized facilities, while other groups rely on private physicians or privately run clinics.
Prenatal clinics serve several purposes. The first is the care of the pregnant woman, especially if she belongs to a vulnerable group likely to develop some complication during the last few weeks of pregnancy or during delivery. Many potential hazards, such as diabetes and high blood pressure, can be identified and measures taken to minimize their effects. In developing countries pregnant women are especially susceptible to many kinds of disorders, particularly infections such as malaria. Local conditions determine what special precautions should be taken to ensure a healthy child. Most pregnant women, in their concern to have a healthy child, are receptive to simple health education. The prenatal clinic provides an excellent opportunity to teach the mother how to look after herself during pregnancy, what to expect at delivery, and how to care for her baby. If the clinic is attended regularly, the woman's record will be available to the staff who will later supervise the delivery of the baby; this is particularly important for someone who has been determined to be at risk. The same clinical unit should be responsible for prenatal, natal, and postnatal care as well as for the care of newborn infants.
Most pregnant women can be safely delivered in simple circumstances without an elaborately trained staff or sophisticated technical facilities, provided that these can be called upon in emergencies. In developed countries it was long customary for the delivery to take place in the woman's home, supervised by a qualified midwife or by the family doctor. By the mid-20th century, women, especially in urban areas, usually preferred to have their babies in a hospital, either a general hospital or a more specialized maternity hospital. In many developing countries traditional birth attendants supervise the delivery. They are women, for the most part without formal training, who have acquired skill by working with others and from their own experience. Normally they belong to the local community, where they have the confidence of the family, where they are content to live and serve, and where their services are of great value. In many developing countries the better training of birth attendants has a high priority. In developed Western countries there has been a trend toward natural childbirth, including delivery in a hospital without anesthesia, and toward home delivery.
Postnatal care services are designed to supervise the mother's return to normal health. They are usually given by the staff of the same unit that was responsible for the delivery. Important considerations are the choice between breast-feeding and artificial feeding and the care of the infant. Today the prospects for survival of babies born prematurely or after a difficult and complicated labour, as well as for neonates (newborn babies) with some physical abnormality, are vastly improved. This is due to technical advances, including those that can detect defects at the prenatal stage, as well as to the growth of neonatology as a specialty. A vital part of the family health care service is the child welfare clinic, which undertakes the care of the newborn. The first step is a thorough physical examination of the child on one or more occasions to determine whether it is normal both physically and, so far as can be judged, mentally. Later periodic examinations serve to determine whether the infant is growing satisfactorily. Arrangements can be made for the child to be protected from major hazards by, for example, immunization and dietary supplements. Any intercurrent condition, such as a chest infection or skin disorder, can be detected early and treated. Throughout the whole of this period mother and child are together, and particular attention is paid to educating the mother in the care of the child.
A part of the health service available to children in developed countries is that devoted to child guidance. This provides psychiatric help to maladjusted children, usually through the cooperative work of a child psychiatrist, an educational psychologist, and a schoolteacher.
Since the mid-20th century a change has occurred in the population structure in developed countries. The proportion of elderly people has been increasing. Since 1983, however, in most European countries the population growth of that group has leveled off, although it is expected to continue to grow more rapidly than the rest of the population in most countries through the first third of the 21st century. In the late 20th century Japan had the fastest growing elderly population.
The health care of the elderly, which is the concern of geriatrics, therefore places a considerable burden on health services. In the United Kingdom about one-third of all hospital beds are occupied by patients over 65; half of these are psychiatric patients. The physician's time is spent more and more with the elderly, and since statistics show that women live longer than men, geriatric practice is increasingly concerned with the treatment of women. Elderly people often have more than one disorder, many of which are chronic and incurable, and they need more attention from health care services. In the United States geriatrics is now generally recognized as a subspecialty of family medicine.
Support services for the elderly provided by private or state-subsidized sources include domestic help, delivery of meals, day-care centres, residential homes or nursing homes for the elderly, and hospital beds either in general medical wards or in specialized geriatric units. The accessibility of these services varies from country to country and within countries. In the United States, for instance, although there are some federal programs, each state has its own programs for the elderly, which vary widely. However, as the elderly become an increasingly larger part of the population, their voting strength provides increasing leverage for obtaining more federal and state benefits. The general practitioner or family physician, working with visiting health and social workers and with the patient's family, often forms a working team for the care of the elderly.
Countries in the developing world are largely spared such geriatric problems, but not necessarily for positive reasons. A principal cause, for instance, is that people do not live as long. Another major reason is that the extended family, still prevalent in developing countries, provides most of the care that the elderly need.
The physician working in the field of public health is mainly concerned with the environmental causes of ill health and with their prevention. Bad drainage, polluted water and air, noise and smells, infected food, bad housing, and poverty in general are all the public health physician's special concern. Perhaps the most descriptive title this type of doctor can be given is that of community physician. In the United Kingdom this physician has customarily been known as the medical officer of health and, in the United States, as the health officer.
The spectacular improvement in life expectancy in affluent countries has been due far more to public health measures than to curative medicine. These public health measures began operating largely in the 19th century. At the beginning of that century, drainage and water supply systems were all more or less primitive; nearly all the cities of the time had poorer water and drainage systems than Rome had possessed 1,800 years earlier. Infected water supplies caused outbreaks of typhoid, cholera, and other waterborne infections. By the end of the century, at least in the larger cities, water supplies were usually safe. Food-borne infections were also drastically reduced by the enforcement of laws governing the preparation, storage, and distribution of food. Insect-borne infections such as malaria and yellow fever, common in tropical and semitropical climates, were greatly reduced by the destruction of the insects responsible. Fundamental to this improvement in health has been the diminution of poverty, for most public health measures are expensive. The peoples of the developing countries still fall sick and sometimes die from infections that are virtually unknown in affluent countries.
Persons dissatisfied with the methods of modern medicine or with its results sometimes seek help from those professing expertise in other, less conventional, and sometimes controversial, forms of health care. Such practitioners are not medically qualified unless they combine these treatments with a regular (allopathic) practice, as is the case in osteopathy. In many countries the use of some forms, such as chiropractic, requires licensing and a degree from an approved college. The treatments offered in these various practices are not always subjected to objective assessment, yet they provide services that are alternative, and sometimes complementary, to conventional practice. This group includes practitioners of homeopathy, naturopathy, acupuncture, hypnotism, and various meditative and quasi-religious forms of healing. Numerous persons also seek out some form of faith healing to cure their ills, sometimes as a means of last resort. Religious scriptures commonly include accounts of miraculous cures. The belief in such curative powers was in part responsible for the increasing popularity of the television, or “electronic,” preacher in the United States, a phenomenon that involves millions of viewers. Millions of others annually visit religious shrines, such as the one at Lourdes in France, in the hope of being miraculously healed.
The remarkable developments in medicine brought about in the 20th century, especially since World War II, have been based on research either in the basic sciences related to medicine or in the clinical field. Advances in the use of radiation, nuclear energy, and space research have played an important part in this progress. Laypersons often think of research as taking place only in sophisticated laboratories or highly specialized institutions where work is devoted to scientific advances that may or may not be applicable to medical practice. This notion, however, ignores the clinical research that takes place on a day-to-day basis in hospitals and doctors' offices.
Although the most spectacular changes in the medical scene during the 20th century, and the most widely heralded, have been the development of potent drugs and elaborate operations, another striking change has been the abandonment of most of the remedies of the past. In the mid-19th century, persons ill with numerous maladies were starved (partially or completely), bled, purged, cupped (by applying a tight-fitting vessel filled with steam to some part of the body and then cooling the vessel), and rested, perhaps for months or even years. Much more recently, patients were prescribed various restricted diets and were routinely kept in bed for weeks after abdominal operations, for many weeks or months when their hearts were thought to be affected, and for many months or years with tuberculosis. The abandonment of these measures may not be thought of as involving research, but the physician who first encouraged persons with peptic ulcers to eat normally (rather than live on the customary bland foods) and the physician who first got patients out of bed a week or two after a minor coronary thrombosis (rather than insisting on a minimum of six weeks of strict bed rest) were doing research as surely as is the physician who first tries out a new drug on a patient. This research, by observing what happens when remedies are abandoned, has been of inestimable value, and the need for it has not passed.
Much of the investigative clinical work undertaken today requires only relatively simple laboratory facilities because it is observational rather than experimental in character. A feature of much contemporary medical research is that it requires the collaboration of a number of persons, not all of them necessarily doctors. Despite advancing technology, there is much to be learned simply from the observation and analysis of the natural history of disease processes as they begin to affect patients, pursue their course, and end, either in resolution or in the death of the patient. Such studies may be suitably undertaken by physicians working in their offices, who are in a better position than doctors working only in hospitals to observe the whole course of an illness. Disease rarely begins in a hospital and usually does not end there. Observational research, however, is subject to many limitations and pitfalls of interpretation, even when it is carefully planned and meticulously carried out.
The administration of any medicament, especially a new drug, to a patient is fundamentally an experiment: so is a surgical operation, particularly if it involves a modification to an established technique or a completely new procedure. Concern for the patient, careful observation, accurate recording, and a detached mind are the keys to this kind of investigation, as indeed to all forms of clinical study. Because patients are individuals reacting to a situation in their own different ways, the data obtained in groups of patients may well require statistical analysis for their evaluation and validation.
One of the striking characteristics of 20th-century medicine has been the development of new drugs, usually by pharmaceutical companies. Until the end of the 19th century, the discovery of new drugs was largely a matter of chance. It was in that period that Paul Ehrlich, the German scientist, began to lay down the principles of modern pharmaceutical research that made possible the development of a vast array of safe and effective drugs. Such benefits, however, bring their own disadvantages: it is estimated that as many as 30 percent of patients in, or admitted to, hospitals suffer from the adverse effects of drugs prescribed by a physician for their treatment (iatrogenic disease). Sometimes it is extremely difficult to determine whether a drug has been responsible for some disorder. An example of the difficulty is provided by the thalidomide disaster between 1959 and 1962. Only after numerous deformed babies had been born throughout the world did it become clear that thalidomide, taken by the mother as a sedative, had been responsible. In hospitals where clinical research is carried out, ethics committees often consider each research project. If the committee believes that the risks are not justified, the project is rejected.
After a potentially useful chemical compound has been identified in the laboratory, it is extensively tested in animals, usually for a period of months or even years. Few drugs make it beyond this point. If the tests are satisfactory, the decision may be made to test the drug in humans. It is this activity that forms the basis of much clinical research. In most countries the first step is the study of the drug's effects in a small number of healthy volunteers. The response, the effect on metabolism, and possible toxicity are carefully monitored and must be completely satisfactory before the drug can be passed for further studies, namely, with patients who have the disorder for which the drug is to be used. Tests are administered at first to a limited number of these patients to determine effectiveness, proper dosage, and possible adverse reactions. These searching studies are scrupulously controlled under stringent conditions. Larger groups of patients are subsequently involved to gain a wider sampling of the information. Finally, a full-scale clinical trial is set up. If the regulatory authority is satisfied as to the drug's quality, safety, and efficacy, the drug receives a license to be produced. As the drug becomes more widely used, it eventually finds its proper place in therapeutic practice, a process that may take years.
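The staged, gated character of this process can be illustrated with a brief sketch. The phase names, ordering, and pass criteria below are simplified assumptions for illustration only; they do not reproduce any particular regulatory framework.

```python
# Illustrative sketch of staged drug testing: a candidate advances only if it
# clears each gate in order. Phase names and criteria are simplified assumptions.
PHASES = [
    ("animal testing", "acceptable toxicity over months of observation"),
    ("healthy volunteers", "satisfactory response, metabolism, and toxicity"),
    ("small patient group", "effectiveness, dosage, and adverse reactions assessed"),
    ("larger patient groups", "wider sampling confirms earlier findings"),
    ("full-scale clinical trial", "quality, safety, and efficacy demonstrated"),
]

def evaluate_candidate(name, passes_gate):
    """Advance a candidate through each phase; stop at the first failed gate.

    passes_gate(name, phase) is a caller-supplied judgment, standing in for the
    real scientific and regulatory review at each stage.
    """
    for phase, criterion in PHASES:
        if not passes_gate(name, phase):
            return f"{name}: development stopped at {phase} ({criterion} not met)"
    return f"{name}: licensed for production"

# Example: a hypothetical compound that fails in the small patient group.
print(evaluate_candidate("compound X-1", lambda n, p: p != "small patient group"))
```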
An important step forward in clinical research was taken in the mid-20th century with the development of the controlled clinical trial. This sets out to compare two groups of patients, one of which has received some form of treatment that the other group has not. The testing of a new drug is a case in point: one group receives the drug, the other an inert product identical in appearance (a placebo). At the end of the trial, the results of which can be assessed in various ways, it can be determined whether or not the drug is effective and safe. By the same technique two treatments can be compared, for example a new drug against a more familiar one. Because individuals differ physiologically and psychologically, the allocation of patients between the two groups must be made at random; some method independent of human choice must be used so that such differences are distributed as evenly as possible between the two groups.
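As a concrete illustration of allocation independent of human choice, the following sketch randomly assigns a list of patient identifiers to a treatment group and a placebo group. It is a minimal example of simple randomization, not a description of any particular trial's method; in practice block or stratified randomization would be used to balance group sizes and known risk factors. The patient identifiers are invented.

```python
import random

def randomize(patient_ids, seed=None):
    """Randomly allocate patients to 'treatment' or 'placebo' by coin flip.

    Simple (unrestricted) randomization: each patient is assigned independently,
    so group sizes are equal only on average.
    """
    rng = random.Random(seed)
    allocation = {}
    for pid in patient_ids:
        allocation[pid] = "treatment" if rng.random() < 0.5 else "placebo"
    return allocation

# Hypothetical patient identifiers, for illustration only.
print(randomize(["P001", "P002", "P003", "P004", "P005", "P006"], seed=42))
```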
In order to reduce bias and make the trial as objective as possible, the double-blind technique is sometimes used. In this procedure neither the doctor nor the patient knows which of the two treatments is being given. Despite such precautions the results of such trials can still be biased, so rigorous statistical analysis is required. Many ethical, not to say legal, considerations clearly arise, and it is essential that all patients give their informed consent to be included. Difficulties arise when patients are unconscious, mentally confused, or otherwise unable to give informed consent. Children present a special difficulty because not all laws agree that parents can legally commit a child to an experimental procedure. Trials, and indeed all forms of clinical research that involve patients, must often be submitted to a committee set up locally to scrutinize each proposal.
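The kind of statistical analysis referred to here can be as simple as comparing the proportion of patients who improve in each arm. The sketch below, which assumes the SciPy library and uses invented counts purely for illustration, applies Fisher's exact test to a two-by-two table of outcomes.

```python
from scipy.stats import fisher_exact

# Hypothetical trial outcomes (counts are invented for illustration):
#                 improved   not improved
# treatment arm      30           20
# placebo arm        18           32
table = [[30, 20],
         [18, 32]]

# Fisher's exact test asks how likely a difference at least this large would be
# if the treatment actually had no effect on the chance of improvement.
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p-value = {p_value:.4f}")
```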
In drug research the essential steps are taken by the chemists who synthesize or isolate new drugs in the laboratory; clinicians play only a subsidiary part. In developing new surgical operations clinicians play a more important role, though laboratory scientists and others in the background may also contribute largely. Many new operations have been made possible by advances in anesthesia, and these in turn depend upon engineers who have devised machines and chemists who have produced new drugs. Other operations are made possible by new materials, such as the alloys and plastics that are used to make artificial hip and knee joints.
Whenever practicable, new operations are tried on animals before they are tried on patients. This practice is particularly relevant to organ transplants. Surgeons themselves—not experimental physiologists—transplanted kidneys, livers, and hearts in animals before attempting these procedures on patients. Experiments on animals are of limited value, however, because animals do not suffer from all of the same maladies as do humans.
Many other developments in modern surgical treatment rest on a firm basis of experimentation, often first in animals but also in humans. Among them are renal dialysis (the artificial kidney), arterial bypass operations, embryo implantation, and exchange transfusions. These treatments are but a few of the more dramatic of a large range of therapeutic measures that have not only provided patients with new therapies but also have led to the acquisition of new knowledge of how the body works. A controversial area of ongoing research is that of gene transplantation, which has the potential of providing cures for cancer and other diseases.
Developments in modern medical science have made it possible to detect morbid conditions before a person actually feels their effects. Examples are many: they include certain forms of cancer; high blood pressure; heart and lung disease; various familial and congenital conditions; disorders of metabolism, such as diabetes; and AIDS. The question raised by screening is whether such potential patients should be identified by periodic examinations. To do so implies, first, that the subjects should be made aware of their condition and, second, that effective measures can be taken to prevent the condition from worsening in those who test positive. Such specific screening procedures are costly, since they involve large numbers of people. Screening may lead many persons to change their lifestyle, but not all such changes have been shown in the long run to be fully effective. Although screening clinics may not be run by doctors, they are a factor of increasing importance in preventive health services. Periodic general medical examination of various sections of the population, business executives for example, is another way of identifying risk factors that, if not corrected, can lead to the development of overt disease.
Forensic medicine is an area of science that deals with the application of medical knowledge to legal questions. The use of medical testimony in law cases predates by more than 1,000 years the first systematic presentation of the subject by the Italian Fortunatus Fidelis in 1598. Forensic medicine was recognized as a specialty early in the 19th century.
The primary tool of forensic medicine has always been the autopsy. Frequently used for identification of the dead, autopsies may also be conducted to determine the cause of death. In cases of death caused by a weapon, for example, the forensic pathologist, by examining the wound, can often provide detailed information about the type of weapon used as well as important contextual information. (In a death by gunshot, for example, he or she can determine with reasonable accuracy the range and angle of fire.) Forensic medicine is a major factor in the identification of victims of disasters such as landslides or plane crashes. In cause-of-death determinations, forensic pathologists can also significantly affect the outcome of trials dealing with insurance and inheritance.
In the 19th century, two other forensic specialties arose, namely, forensic psychiatry (which is used to determine the mental health of an individual about to stand trial, and, thus, his or her blameworthiness) and forensic toxicology. The forensic toxicologist gives evidence on such topics as intentional poisonings and drug use. The toxicologist has played an increasingly important role in matters of industrial and environmental poisoning.
Biomonitoring is the measurement of chemical compounds or their metabolites (breakdown products) in biological specimens. Biomonitoring measurements can be conducted on nonhuman biological samples, such as plants and animals, but use of the term is primarily associated with measuring foreign compounds in humans.
Biomonitoring is used in occupational settings, where workers are monitored for unsafe levels of toxic chemicals, often on a regular schedule. It is also used in clinical practice and, more commonly, in public health research, where it serves as a source of information on community exposures. Well-known examples of human biomonitoring include measuring alcohol content in exhaled breath with a breathalyzer, a device used in law enforcement; testing for drugs in urine via urinalysis, a method also used in law enforcement and in a variety of occupational settings; and measuring concentrations of lead in blood or of arsenic in nails or hair as part of exposure assessments.
The ability of a chemical to be detected depends on the characteristics of the substance, how it is metabolized by the human body, and whether methods are available to measure it accurately. Some compounds are almost entirely excreted from the body in a short period of time, whereas others accumulate in body tissues for decades or are transformed into more toxic compounds.
Since the 1990s, advances in instrumentation have made it possible to detect a greater number of chemicals in humans at lower costs, at lower levels, and using less-invasive procedures. From 1970 to 1992 the U.S. Environmental Protection Agency (EPA) ran the National Human Adipose Tissue Survey to test people for levels of fat-soluble environmental contaminants. The EPA used a minor surgical procedure to obtain fatty tissue from living persons and also analyzed postmortem fat specimens. Today fatty components of blood can be used to analyze the same compounds in humans at very low levels. For example, it is now possible to detect certain chemicals in parts per billion, parts per trillion, and, in some cases, parts per quadrillion.
Many biomonitoring studies have focused on the extent of community exposure to a specific environmental contaminant. In 1999 a study conducted as part of a breast-milk surveillance program initiated in Sweden in the early 1970s revealed that human milk levels of polybrominated diphenyl ethers (PBDEs), chemicals used as flame retardants in many consumer products, had doubled every five years through 1997. The chemicals were also detected in breast milk from women in Japan, Germany, the United States, and Canada, as well as in killer whales and polar bears in the wild. Although the human health effects of exposure to PBDEs were not clear, the European Union (EU) approved a ban on two forms of PBDEs (penta-BDE and octa-BDE) that went into effect in 2004. On July 1, 2006, the EU also effected a ban on a third form of PBDE, known as deca-BDE. The implementation of these bans was based on the results of tests in animals and on the results of biomonitoring in humans.
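The reported doubling every five years corresponds to exponential growth. The brief calculation below is illustrative only and is not drawn from the study's actual measurements; it simply shows how quickly such a trend compounds over a roughly 25-year monitoring period.

```python
def fold_increase(years, doubling_time=5.0):
    """Fold increase in concentration after a given number of years,
    assuming a constant doubling time (exponential growth)."""
    return 2 ** (years / doubling_time)

# With a five-year doubling time, levels rise about 32-fold over 25 years.
for years in (5, 10, 25):
    print(f"after {years:>2} years: {fold_increase(years):.0f}x the starting level")
```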
In the United States, a national biomonitoring surveillance program was initiated by the Centers for Disease Control and Prevention (CDC) as part of the National Health and Nutrition Examination Survey (NHANES). The NHANES, which has been performed annually since 1999, is used to obtain information on the health and nutrition of a random sampling of approximately 5,000 representative Americans. It also collects blood and urine samples from most participants. The CDC’s National Report on Human Exposure to Environmental Chemicals provides information on Americans’ exposure to different chemicals, with data sorted by age, sex, and ethnicity. The First Report, published in 2001, tested for 27 chemicals. The Fourth Report, published in 2009, contained results for 212 chemicals and included data from all the national health surveys conducted since 1999.
How biomonitoring data should be interpreted and the extent to which policy decisions should be based on data from reports such as those of the NHANES are matters of debate. For the vast majority of measured chemicals there is no clear standard for what levels may be considered “safe,” and scientists in academia and industry disagree on whether, and which, chemicals may be harmful at low-level chronic exposures. With tens of thousands of chemicals in commerce, scientific knowledge regarding these questions is far from complete.
In the United States, environmental health activist organizations, such as the Environmental Working Group and the Toxic Free Legacy Coalition, have released a number of reports showing the biomonitoring results of hundreds of chemicals in human blood, in umbilical cord blood from newborns, and in human breast milk from small but not necessarily representative samples. The results have been used to call for the reform of chemical exposure laws in the United States.
There are a variety of ethical questions raised by biomonitoring. For example, given the scientific uncertainty regarding the health effects of exposure to many compounds, it is unclear whether information from biomonitoring should be used in guiding policy decisions. In the instance of NHANES and other research programs, there is debate about whether participants should have the right to know the results of the tests. Likewise, the communication of the results of biomonitoring studies to the public, particularly given the lack of knowledge about health effects, is also an area of concern.
Antibiotic resistance is the loss of susceptibility of bacteria to the killing (bactericidal) or growth-inhibiting (bacteriostatic) properties of an antibiotic agent. When a resistant strain of bacteria is the dominant strain in an infection, the infection may be untreatable and life-threatening. Examples of bacteria that are resistant to antibiotics include methicillin-resistant Staphylococcus aureus (MRSA), penicillin-resistant Enterococcus, and multidrug-resistant Mycobacterium tuberculosis (MDR-TB), which is resistant to two tuberculosis drugs, isoniazid and rifampicin. MDR-TB is particularly dangerous because it can give rise to extensively drug-resistant M. tuberculosis (XDR-TB), which requires aggressive treatment using a combination of five different drugs.
The potential for antibiotic resistance was recognized in the early 1940s, almost immediately after the first large-scale clinical applications of penicillin, the first antibiotic. Mass production of penicillin was part of the greater war effort of World War II, when the drug was used widely by military populations and by some small civilian populations. Along with penicillin’s effectiveness in the treatment of the wounded, the drug was lauded for lowering the rate of venereal disease among military personnel, since it was particularly potent against the bacterial organisms notorious for causing syphilis and gonorrhea. However, even before the war had ended, resistance to penicillin was already reported—first in 1940 by British biochemists Sir Ernst Boris Chain and Sir Edward Penley Abraham, who published a report about an enzyme capable of destroying penicillin, and again in 1944 by several scientists working independently, who reported a penicillin-inactivating enzyme that was secreted by certain bacteria. In the following decades, overuse and repeated exposure to antibiotic agents favoured the selection and replication of numerous strains of antibiotic-resistant bacteria.
There are several genetic mechanisms by which resistance to antibiotics can develop in bacteria. These mechanisms give rise to resistance because they result in biochemical modifications that alter certain bacterial cell properties that normally render the cell sensitive to an antibiotic. Examples of biochemical modifications that lead to resistance include the production of enzymes that inactivate the drug; the alteration of the protein, enzyme, or receptor targeted by the drug; the activation of drug efflux pumps that actively remove the drug from the cell; and the alteration of cell-wall proteins that inhibit drug uptake.
There are two important types of genetic mechanisms that can give rise to antibiotic resistance: mutation and the acquisition of new genetic material. In the case of mutation, the rate at which resistance develops can be attributed to the rate at which bacteria mutate. A mutation is a permanent change in an organism's genetic material. Mutations occur naturally when cells divide. Bacteria are especially prone to mutation because their genome consists of a single chromosome and because they have a high rate of replication. The more rounds of replication that occur, the greater the chance that a mutation will arise. The acquisition of new genetic material also is a naturally occurring process in bacteria. This process appears to be the most common mechanism by which resistance develops; it is facilitated by the fact that bacteria are prokaryotic organisms (which means that they do not have a nucleus protecting the genome) and by the presence of small pieces of DNA called plasmids that exist in a bacterial cell separate from the chromosome. Thus, the genetic material of bacteria is free-floating within the cell, making it open to gene transfer (the movement of a segment of genetic material from one bacterial cell to another), which often involves the transmission of plasmids.
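The relationship between replication and the chance of mutation can be made concrete with a small calculation. Assuming, purely for illustration, a fixed per-replication probability that a resistance-conferring mutation occurs, the chance that at least one such mutation arises grows rapidly with the number of replication events:

```python
def p_at_least_one_mutation(mu, replications):
    """Probability of at least one resistance mutation after a given number of
    replication events, assuming a fixed per-replication probability mu.

    P = 1 - (1 - mu) ** n, treating mutation events as independent.
    """
    return 1.0 - (1.0 - mu) ** replications

# Illustrative numbers only: mu = 1e-8 per replication event.
for n in (10**6, 10**8, 10**10):
    print(f"{n:>14,d} replications -> P = {p_at_least_one_mutation(1e-8, n):.4f}")
```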
In nature, the primary mechanisms of bacterial gene transfer are transduction and conjugation. Transduction occurs when a bacterial virus, called a bacteriophage, detaches from one bacterial cell, carrying with it some of that bacterium’s genome, and then infects another cell. When the bacteriophage inserts its genetic content into the genome of the next bacterium, the previous bacterium’s DNA also is incorporated into the genome. Conjugation occurs when two bacteria come into physical contact with each other and a plasmid, sometimes carrying a piece of the chromosomal DNA, is transferred from the donor cell to the recipient cell. Plasmids often carry genes encoding enzymes capable of inactivating certain antibiotics. The original source of the genes for these enzymes is not known with certainty; however, mobile genetic elements, called transposons (“jumping” genes), may have played a role in their appearance and may facilitate their transfer to other bacterial species. Because many of the plasmids carrying antibiotic-resistant genes can be transferred between different species of bacteria, widespread resistance to a specific antibiotic can develop rapidly.
The transmission of plasmids during conjugation has been associated with the generation of many different types of antibiotic-resistant bacteria. For example, conjugation involving a plasmid carrying the gene for resistance to methicillin (an antibiotic derived from penicillin) is suspected to have resulted in the generation of MRSA. Penicillin and methicillin work by weakening the wall of the bacterial cell; when the wall is compromised, the osmotic gradient between a bacterial cell’s cytoplasm and its environment forces the cell to lyse (break open). In MRSA the gene acquired through conjugation encodes a protein capable of inhibiting methicillin binding, preventing the drug from attaching to and disrupting its target protein in the bacterial cell wall. Another example is a plasmid carrying a gene that encodes the enzyme beta-lactamase. Beta-lactamase alters the structure of the penicillin molecule, rendering it inactive.
Transduction and conjugation result in a process called recombination. The new bacterial genomes that are produced from genetic recombination are called recombinants. Antibiotics do not create recombinants; antibiotic-resistant recombinants exist naturally by way of normal gene transfer events. However, antibiotics, and particularly the improper use of these drugs, exert selective pressure on bacterial populations, whereby the most sensitive organisms are killed quickly and the most resistant organisms are able to survive and replicate.
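A toy simulation can illustrate this selective pressure. The population sizes, kill rates, and growth rate below are invented solely for illustration; the point is only that when an antibiotic kills sensitive cells far more efficiently than resistant ones, the resistant fraction of the surviving population rises generation by generation.

```python
def simulate_selection(sensitive, resistant, generations,
                       kill_sensitive=0.9, kill_resistant=0.1, growth=2.0):
    """Toy model of antibiotic selection (all parameters are illustrative).

    Each generation the antibiotic kills a fraction of each subpopulation,
    and the survivors then replicate.
    """
    for g in range(1, generations + 1):
        sensitive = sensitive * (1 - kill_sensitive) * growth
        resistant = resistant * (1 - kill_resistant) * growth
        total = sensitive + resistant
        print(f"generation {g}: resistant fraction = {resistant / total:.3f}")

# Start with mostly sensitive cells and a small resistant minority.
simulate_selection(sensitive=1_000_000, resistant=10, generations=6)
```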
The prospects of scientists developing new antibiotics as fast as bacteria develop resistance are poor. Therefore, other measures have been undertaken, including educating the public about the proper use of antibiotics and the importance of completing a full regimen as prescribed. Improvements in diagnostic equipment to facilitate the isolation and detection of resistant bacteria such as MRSA in hospital settings have enabled rapid identification of these organisms within hours rather than days or weeks. In addition, although efforts to fight bacteria by targeting them with bacteriophages were largely abandoned with the discovery of penicillin and broad-spectrum antibiotics in the 1940s, the growing presence of resistance has renewed interest in these methods. A significant amount of phage-therapy research was conducted throughout the 20th century in regions of the former Soviet Union. As a result, today in Georgia, which was once under Soviet rule, bandages saturated with bacteriophages against staphylococcus are commercially available as topical treatments for wounds and burns. In the 21st century, researchers worldwide were working to develop other topical and systemic phage therapies.
A practical and extremely effective tool against the spread of antibiotic resistance is hand washing. The importance of hand washing was first realized in the 1840s by German-Hungarian physician Ignaz Philipp Semmelweis. Today, hand washing among medical personnel still is not as routine and thorough as it should be. In the early 2000s American critical-care physician Peter Pronovost developed a checklist for intensive care units that attending personnel could follow to ensure that every hand washing, antiseptic scrub, and surface disinfection required during medical procedures was performed, in order to prevent the spread of infection to hospitalized patients. Hospitals that have adopted these methods have lost fewer patients to complications caused by bacterial infections.
Safety engineering is the study of the causes and the prevention of accidental deaths and injuries. The field of safety engineering has not developed as a unified, specific discipline, and its practitioners have operated under a wide variety of position titles, job descriptions, responsibilities, and reporting levels in industry and in the loss-prevention activities of insurance companies. The general areas that have been identified as the major functions carried out by the professional safety engineer or safety professional are: the identification and appraisal of accident-producing conditions and practices and the evaluation of the severity of the accident problem; the development of accident and loss-control methods, procedures, and programs; the communication of accident and loss-control information to those directly involved; and the measurement and evaluation of the accident and loss-control systems and the modifications that are required to obtain optimum results.
The most recent trends in safety engineering include increased emphasis on prevention by the anticipation of hazard potentials; changing legal concepts with regard to product liability and negligent design or manufacture, as well as the developing emphasis on consumer protection; and the development of national and international legislation and controls, not only in the areas of transportation safety, product safety, and consumer protection but also in occupational health and environmental control.
Bioengineering is the application of engineering knowledge to the fields of medicine and biology. The bioengineer must be well grounded in biology and must have engineering knowledge that is broad, drawing upon electrical, chemical, mechanical, and other engineering disciplines. The bioengineer may work in any of a large range of areas. One of these is the provision of artificial means to assist defective body functions, such as hearing aids, artificial limbs, and supportive or substitute organs. In another direction, the bioengineer may use engineering methods to achieve biosynthesis of animal or plant products, as in fermentation processes.
Before World War II the field of bioengineering was essentially unknown, and little communication or interaction existed between the engineer and the life scientist. A few exceptions, however, should be noted. The agricultural engineer and the chemical engineer involved in fermentation processes have always been bioengineers in the broadest sense of the definition, since they deal with biological systems and work with biologists. The civil engineer specializing in sanitation has applied biological principles in this work. Mechanical engineers have worked with the medical profession for many years in the development of artificial limbs. Another area of mechanical engineering that falls within bioengineering is air conditioning. In the early 1920s engineers and physiologists were employed by the American Society of Heating and Ventilating Engineers to study the effects of temperature and humidity on humans and to provide design criteria for heating and air-conditioning systems.
Today there are many more examples of interaction between biology and engineering, particularly in the medical and life-support fields. In addition to an increased awareness of the need for communication between the engineer and the life scientist, there is an increasing recognition of the role the engineer can play in several of the biological fields, including human medicine, and, likewise, an awareness of the contributions biological science can make toward the solution of engineering problems.
Much of the increase in bioengineering activity can be credited to electrical engineers. In the 1950s bioengineering meetings were dominated by sessions devoted to medical electronics. Medical instrumentation and medical electronics continue to be major areas of interest, but biological modeling, blood-flow dynamics, prosthetics, biomechanics (dynamics of body motion and strength of materials), biological heat transfer, biomaterials, and other areas are now included in conference programs.
Bioengineering developed out of specific desires or needs: the desire of surgeons to bypass the heart, the need for replacement organs, the requirement for life support in space, and many more. In most cases the early interaction and education were a result of personal contacts between physician, or physiologist, and engineer. Communication between the engineer and the life scientist was immediately recognized as a problem. Most engineers who wandered into the field in its early days probably had an exposure to biology through a high-school course and no further work. To overcome this problem, engineers began to study not only the subject matter but also the methods and techniques of their counterparts in medicine, physiology, psychology, and biology. Much of the information was self-taught or obtained through personal association and discussions. Finally, recognizing a need to assist in overcoming the communication barrier as well as to prepare engineers for the future, engineering schools developed courses and curricula in bioengineering.
The major branches of bioengineering are listed here.
• Medical engineering. Medical engineering concerns the application of engineering principles to medical problems, including the replacement of damaged organs, instrumentation, and the systems of health care, including diagnostic applications of computers.
• Agricultural engineering. This includes the application of engineering principles to the problems of biological production and to the external operations and environment that influence this production.
• Bionics. Bionics is the study of living systems so that the knowledge gained can be applied to the design of physical systems.
• Biochemical engineering. Biochemical engineering includes fermentation engineering, the application of engineering principles to microscopic biological systems that are used to create new products by synthesis, including the production of protein from suitable raw materials.
• Human-factors engineering. This concerns the application of engineering, physiology, and psychology to the optimization of the human–machine relationship.
• Environmental health engineering. Also called bioenvironmental engineering, this field concerns the application of engineering principles to the control of the environment for the health, comfort, and safety of human beings. It includes the field of life-support systems for the exploration of outer space and the ocean.
• Genetic engineering. Genetic engineering is concerned with the artificial manipulation, modification, and recombination of DNA or other nucleic acid molecules in order to modify an organism. The techniques employed in this field have led to the production of medically important products, including human insulin, human growth hormone, and hepatitis B vaccine.
A hearing aid is a device that increases the loudness of sounds in the ear of the wearer. The earliest aid was the ear trumpet, characterized by a large mouth at one end for collecting the sound energy from a large area and a gradually tapering tube to a narrow orifice for insertion in the ear.
Modern hearing aids are electronic. The principal components are a microphone that converts sound into a varying electrical current, an amplifier that amplifies this current, and an earphone that converts the amplified current into a sound of greater intensity than the original. Early models were quite large, but when transistors replaced amplifier tubes and smaller magnetic microphones became available in the 1950s, it became possible to build very small hearing aids, some of which were constructed to fit within the frames of eyeglasses and, later, behind the ear or within the external ear.
Hearing aids have widely differing characteristics; requirements for suitable aids have been extensively investigated. The two characteristics of a hearing aid that most influence the understanding of speech are the amplification of the various components of speech sounds and the loudness with which the sounds are heard by the wearer. As regards the first characteristic, speech sounds contain many components of different frequencies, which are variously amplified by a hearing aid. The variation of amplification with frequency is called the frequency response of the aid. An aid need amplify sounds only within the range of 400 to 4,000 hertz, although the components of speech cover a much wider range. With regard to the second characteristic—the loudness with which sounds are heard—too loud a sound can be as difficult to understand as one that is too faint. The loudness range over which speech is understood best is wide for some users and narrow for others. Hearing aids with automatic volume control vary the amplification of the aid automatically with variations of the input. A binaural hearing aid consists of two separate aids, one for each ear. Such an arrangement can benefit certain users.
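In signal-processing terms, the two characteristics described above correspond to a frequency response (approximated here by a band-pass filter spanning roughly 400 to 4,000 hertz) and a level-dependent gain (a crude automatic volume control). The sketch below assumes the NumPy and SciPy libraries and uses an arbitrary sample rate, gain, and limit; it is only a schematic of these two ideas, not a model of any real hearing aid.

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 16_000  # sample rate in hertz (arbitrary choice for this sketch)

def hearing_aid(signal, low_hz=400, high_hz=4000, gain=100.0, limit=0.5):
    """Band-limit the input to the speech range, amplify it, and apply a crude
    automatic volume control that scales overly loud output back down."""
    # Frequency response: keep only components between low_hz and high_hz.
    b, a = butter(4, [low_hz, high_hz], btype="bandpass", fs=FS)
    shaped = lfilter(b, a, signal)

    # Fixed amplification.
    amplified = gain * shaped

    # Automatic volume control: if the output is too loud, scale it down.
    peak = np.max(np.abs(amplified))
    if peak > limit:
        amplified *= limit / peak
    return amplified

# Demo: a quiet 1 kHz tone mixed with an out-of-band 100 Hz hum.
t = np.arange(0, 0.1, 1 / FS)
mixture = 0.01 * np.sin(2 * np.pi * 1000 * t) + 0.05 * np.sin(2 * np.pi * 100 * t)
out = hearing_aid(mixture)
print(f"input peak {np.max(np.abs(mixture)):.3f} -> output peak {np.max(np.abs(out)):.3f}")
```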
A prosthesis is an artificial substitute for a missing part of the body. The artificial parts that are most commonly thought of as prostheses are those that replace lost arms and legs, but bone, artery, and heart valve replacements are common, and artificial eyes and teeth are also correctly termed prostheses. The term is sometimes extended to cover such things as eyeglasses and hearing aids, which improve the functioning of a part. The medical specialty that deals with prostheses is called prosthetics. The origin of prosthetics as a science is attributed to the 16th-century French surgeon Ambroise Paré. Later workers developed upper-extremity replacements, including metal hands made either in one piece or with movable parts. The solid metal hand of the 16th and 17th centuries later gave way in great measure to a single hook or a leather-covered, nonfunctioning hand attached to the forearm by a leather or wooden shell. Improvement in the design of prostheses and increased acceptance of their use have accompanied major wars. New lightweight materials and better mechanical joints were introduced after World Wars I and II.
One type of below-knee prosthesis is made from plastic and fits the below-knee stump with total contact. It is held on either by means of a strap that passes above the kneecap or by means of rigid metal knee hinges attached to a leather thigh corset. Weight bearing is accomplished by pressure of the prosthesis against the tendon that extends from the kneecap to the shinbone. In addition, a foot piece is commonly used that consists of a solid foot and ankle with layers of rubber in the heel to give a cushioning effect.
There are two main types of above-knee prostheses: (1) the prosthesis held on by means of a belt around the pelvis or suspended from the shoulder by straps and (2) the prosthesis kept in contact with the leg stump by suction, the belt and shoulder straps being eliminated.
The more complicated prosthesis used in cases of amputation through the hip joint or half of the pelvis usually consists of a plastic socket, in which the person virtually sits; a mechanical hip joint of metal; and a leather, plastic, or wooden thigh piece with the mechanical knee, shin portion, and foot. A great advance in fabrication of functional upper-extremity prostheses followed World War II. Arm prostheses came to be made of plastic, frequently reinforced with glass fibres.
The below-elbow prosthesis consists of a single plastic shell and a metal wrist joint to which is attached a terminal device, either a hook or a hand. The person wears a shoulder harness made of webbing, from which a steel cable extends to the terminal device. When the person shrugs the shoulder, thus tightening the cable, the terminal device opens and closes. In certain cases the biceps muscle may be attached to the prosthesis by a surgical operation known as cineplasty. This procedure makes it possible to dispense with the shoulder harness and allows finer control of the terminal device. The above-elbow prosthesis has, in addition to the forearm shell, an upper-arm plastic shell and a mechanical, locking elbow joint. This complicates its use, inasmuch as there must be one cable control for the terminal device and another control to lock and unlock the elbow. The most complicated upper-extremity prosthesis, that used in cases of amputation through the shoulder, includes a plastic shoulder cap extending over the chest and back. Usually no shoulder rotation is possible, but the mechanical elbow and terminal device function as in other arm prostheses.
A metal hook that opens and closes like two fingers is the most commonly used terminal device and the most efficient. After World War II the APRL hand (developed at the U.S. Army Prosthetic Research Laboratory) was introduced. This is a metal mechanical hand covered by a rubber glove of a colour similar to that of the patient's remaining hand. Many attempts have been made to use electrical energy as the source of hook or hand control. This is done primarily by building into the arm prosthesis electrodes that are activated by the patient's own muscle contractions. The electric current generated by these muscle contractions is then amplified by means of electrical components and batteries to control the terminal device. Such an arrangement is referred to as a myoelectric control system.
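The basic logic of such a myoelectric control scheme can be sketched in a few lines: the electrical activity picked up from the muscle is rectified and smoothed into an envelope, and the terminal device opens whenever that envelope crosses a threshold. The signal values, smoothing window, and threshold below are invented for illustration and bear no relation to any real device's calibration.

```python
def smoothed_envelope(emg_samples, window=5):
    """Rectify the raw muscle signal and smooth it with a moving average."""
    rectified = [abs(x) for x in emg_samples]
    env = []
    for i in range(len(rectified)):
        start = max(0, i - window + 1)
        chunk = rectified[start:i + 1]
        env.append(sum(chunk) / len(chunk))
    return env

def hook_commands(emg_samples, threshold=0.3):
    """Command the terminal device to open while the envelope exceeds the threshold."""
    return ["open" if level > threshold else "close"
            for level in smoothed_envelope(emg_samples)]

# Invented signal: rest, a deliberate contraction, then rest again.
emg = [0.02, -0.03, 0.05, 0.6, -0.7, 0.65, -0.55, 0.04, -0.02, 0.01]
print(hook_commands(emg))
```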
Breast prostheses are used after mastectomy. External prostheses may be worn, but surgical reconstruction of the breast, involving implantation of a prosthesis, became increasingly common from the 1970s.