THE REFLECTED WORLD INSIDE US

One time a mathematics student from Warsaw turned up at our hospital and told us the following story: it was summer, and he was traveling on a local train across a meadow, in which, far away by the woods, a horse was grazing. As the train rolled slowly past the field, our student felt his nose itch and began to sneeze; tears came to his eyes, and before he had left the meadow behind him he was overcome with breathlessness. I had seen this sort of thing several times before: people who have come into a room where the day before, only briefly and for the first time ever, a cat had been playing, and have had a violent attack of asthma; others who have felt weak or even fallen to the floor after accidentally breathing in penicillin prepared for injections; and I have even read about people who have been covered in hives at the mere sight of a flower in a picture. All of them had allergies. But can an allergy really be as severe as in the story about the horse?

We performed the typical test in such cases, injecting an extract of horsehair under the skin. Just in case, we used a dilution ten thousand times weaker than recommended. After a few minutes
the student’s arm began to swell violently, and before we had time to realize what was happening, the swelling was up to his armpit. We hurriedly applied tourniquets to protect the student from the consequences of the reaction.

After a while we invited the patient to come in for desensitization treatment. For many weeks we gave him intracutaneous injections of increasingly concentrated extracts of horsehair. I do not have to say what sort of dilutions we started with, or what precautionary measures we took. We repeated this treatment in the following years. The result was a remarkable lessening of his symptoms when exposed to horses, though our student never became a jockey.

In taking this action we were following in the footsteps of Mithridates VI, king of Pontus. This tyrant, famous for his cruelty, imprisoned his mother, killed his brother and married his sister. On his orders, issued from Ephesus, all the Italic tribes in Asia Minor were murdered in a single day, almost a hundred thousand people altogether. Mithridates was also a polyglot, who apparently knew twenty-two languages; he had a reputation as a patron of the arts and sciences and a friend of artists. Like all tyrants, he was afraid of being poisoned. Wishing to prevent this from happening, he conducted numerous experiments on his relatives and subjects to research poisons and their antidotes. Then every day for years, in ever-increasing doses, he drank an infusion containing the fifty-four most deadly poisons. His whole life he fought against Rome. Defeated by Pompey in 66 BC, he never gave up the fight. But when his beloved son rose against him, he sank into despair and took a powerful draught of poison. However, it did not work —over the years the king’s organism had become immune. Then he ordered a slave to run him through with a sword, and fell dead on the spot.

♦ ♦ ♦

Following Mithridates’s example, we desensitize patients who are oversensitive to grass pollen, domestic dust, insects and animals. They are allergy sufferers. They are characterized by amazingly strong, dangerous reactions to substances in the outside world that are harmless to the rest of us. Desensitization was introduced into medicine in the early twentieth century. The first person to apply it on the North American continent was Robert A. Cooke. He had reasons for this —not just professional, but personal too. Cooke was brought up on a farm in New Jersey, and suffered from severe asthma from childhood onwards. Years later he recalled how he was “continuously ... inhaling little volcanoes of (burning) Himrod’s asthma powder and vomiting from syrup of ipecac.” Adrenalin was not yet known, nor was the word “allergy.” When he went to school and lived in a boarding house, the asthma died down, but it returned whenever he went back to the farm, to his family home. He became convinced that it was triggered by contact with the horses kept in the stables. However, in those days it was hard to avoid horses. Towards the end of his studies, on a graduate traineeship, he was obliged to spend six months working as a traveling doctor for the emergency service. In the early twentieth century the New York ambulances were horse-drawn vehicles. Every trip to see a patient ended for Cooke in an attack of breathlessness. Before saving the patient he had to save himself, which he did with recently introduced shots of adrenalin. “I put as much adrenalin under my skin as any human being,” wrote Cooke. But fate had an even bigger test in store for him. In 1907 he was helping to perform a tracheotomy. The operation was being carried out as a matter of urgency to save the life of a patient suffering from diphtheria, in those days a common and deadly illness. Just before the operation, as a prophylactic measure the operators were given an equine antitoxin against diphtheria, obtained by inoculating horses with diphtheria germs. Cooke fell to the ground on the spot and remained
unconscious for ten hours. Is it any surprise that after these experiences Robert Cooke became fully preoccupied by the new discipline that was being born right before his eyes —allergology? He established the first big department for asthma sufferers in New York, made important clinical observations and ranked among the fathers of American allergology. “First be a doctor, then an allergist” — his favorite saying could apply to all medical specialties.

In treating allergies, and many other illnesses that have inflammation at their core (and, nowadays, even arteriosclerosis is regarded as an inflammation!), the most powerful defenses are hormones produced by the outer casing (the cortex) of some small endocrine glands, the adrenal glands. We call them corticosteroids. At the root of their discovery lay a question that in 1936 Philip Showalter Hench, an internist from the Mayo Clinic, put to chemist Edward Calvin Kendall. The question related to Hench’s observation that in certain circumstances, even severe rheumatoid arthritis subsides on its own, without any drugs. Apparently Hench asked Kendall “if he could find a metabolite that was increased during pregnancies and during jaundice in view of his observation of the alteration of rheumatoid arthritis or asthma associated with these conditions.” Kendall took up the challenge, and the rest is history. After more than ten years of research, he isolated the substances in question and demonstrated that they are hormones produced by the adrenal cortex. When the purified hormones were injected into some patients, one of the greatest miracles known to medicine occurred. People with twisted joints, groaning with pain and chained to their beds, stood up smiling, and people suffering from asthma, suffocating day and night, could breathe freely for the first time in their lives. In Stockholm there was little delay —two years later, both the questioner and the man he questioned received the Nobel Prize.

♦ ♦ ♦

There are no roses without thorns. Corticosteroids, which revolutionized medicine in the blink of an eye, carried some dangerous side effects. Some patients suddenly grew fat, others bled from the stomach, and yet others developed diabetes. Long years had to pass before we learned, and only partly, how to avoid these complications or neutralize them. So, not surprisingly, there has been no letup in the search for asthma drugs. The most original researcher was a young doctor named Roger Altounyan. He worked for a small British pharmaceuticals firm and tested new drugs on himself— not just once, or a few times, not just for a week or a month, but thousands of times over and for years on end. He had suffered from asthma during his medical studies and told himself he was a much more genuine model of the illness than the sensitized mice everyone tested potential pharmaceuticals on in those days.

He was allergic to many substances, including guinea pig hair, which became the main ingredient in his famous “hair soup.” For four days he soaked the guinea pig hair, and then filtered the solution, thickened it and inhaled its vapors with the help of a nebulizer. Applied like this, the “hair soup” inevitably triggered asthma attacks in him, during which he made precise observations and took sensitive spirometric measurements. He conducted the experiments regularly, three times a week. He began in the early afternoon and finished in the evening. He achieved perfect repeatability and standardization of his method, though sometimes the attacks he provoked were terribly dangerous and went on long into the night. Before administering his “soup” to himself, he would inhale new chemical compounds that were suspected of being able to weaken the attack or even prevent it. He was especially intrigued by extracts of an herb called Ammi visnaga, containing khellin, which was used to treat renal colic, because it relaxed the smooth muscles of the ureters. Why did this particular plant, used in natural medicine for centuries by the peoples of the Mediterranean Sea basin, attract Altounyan so strongly? Did it remind him of
the exemplary hospital in Aleppo, Syria, run by his doctor-father, who even treated Lawrence of Arabia there? In an eight-year period starting from 1957, Altounyan triggered over a thousand asthma attacks in himself and tested two hundred and two new chemical compounds. Finally, when he was sure he had hit upon the right form of khellin, he gave it to several patients and suffered a total failure —it had no effect at all. Only a few weeks later did he discover that, for unknown reasons, the wrong chemical compound had been prepared for him. He repeated the experiment, this time with complete success. The drug, called Intal, was introduced to the market in the late 1960s and was for a long time a best seller in asthma treatment. Altounyan also designed an ingenious manual inhaler for controlling intake of the drug. Without a doubt, his experience as a pilot helped him with this —he flew fighter planes and bombers in the RAF, won the Air Force Cross and was a pilot instructor until the end of the war. The soul of the inhaler was a tiny propeller, and the device itself was called not a Spitfire, but... a Spinhaler.

I once spent a pleasant summer evening talking to Altounyan and his wife at my flat in Krakow. There opposite me sat someone extremely familiar to me from my childhood games, because a boy named Roger had been the hero of my favorite children’s novel, Swallows and Amazons. With bated breath I used to follow in the wake of his little sailing boat as he and his peers set off on mysterious escapades to search for adventure and discover new lands. The amusements I had joined in with as a reader took place on Coniston Water in the Lake District in northern England, where Roger Altounyan’s grandparents lived. Their friend and neighbor Arthur Ransome used to keep a close eye on the children’s games on the water, and later depicted them in Swallows and Amazons.

Roger laughed at my account of those “joint” expeditions of ours from the distant past, but finally we had to go back to asthma.

I decided to take the bull by the horns, and asked him exactly how the drug he discovered works. “Why does it work in asthma?” And then he replied: “I’ll give you an answer if you can tell me what asthma is.”

Roger Altounyan found out what asthma is when, as a medical student at Cambridge, he went to see a doctor with the first attack of breathlessness he had ever had in his life. The doctor made his diagnosis and prescribed phenobarbital, a soporific sedative, and ephedrine, which relaxes the smooth muscles. Then he pointed to his head and said: “The whole illness is in here.” That was the prevailing belief at the time —the paradigm, as we would say nowadays. Doctors believed asthma was a disease of the central nervous system. Years later, when Altounyan and I had our conversation at my flat, he was teaching his students in London and I was teaching mine in Krakow that at the heart of asthma lay a spasm of the muscles of the bronchial tubes, their oversensitive reaction to an extremely varied range of stimuli. No one said a word about the central nervous system. Nowadays, more than thirty years since that meeting, we teach something completely different: that asthma is a chronic inflammatory process of the airways. So here we have three radically different views of one of the most common illnesses. Doesn’t medicine walk on shifting sands? A pragmatist would respond by saying that even if we have failed to capture the essence of the disease in a net of concepts, over the years we have taken great strides in treating asthma. It is true. Anyone trying to justify the conceptual difficulties and trouble in understanding asthma, and the variability of theories preached from the lectern, would certainly point out the diversity of the illness, which has been compared to love, because it is hard to define although everyone recognizes its symptoms.

For asthma is a thoroughly heterogeneous illness that finds its reflection in a wide variety of types, each individually identified
within medicine. And so we speak of asthma that is allergic or nonallergic, seasonal or all-year-round, exercise-induced, brittle, nocturnal, occupational or corticosteroid-resistant, as well as other kinds of asthma. These names alone indicate that we lack a single criterion of identity. The illness is variable and capricious — in some people it can run for a long time without any symptoms and not show up at all in the most sensitive clinical tests, while suffocating others for months, making it impossible for the doctor to seek the causes. In addition, the forms of asthma briefly mentioned here quite often change their shape, like clouds in the sky, combining common elements, and then turning into other ones again. One of the most common elements determining asthma is atopia. Watch out, Reader, because it could affect you too. Every fifth European or American has features of atopia. Fortunately, that is by no means synonymous with having the illness, but it indicates a very common occurrence of this genetic trait.

The word “atopia” was used in New York in the early 1920s to define a family tendency towards allergic reactions —unexpected and incomprehensible, and sometimes having a dramatic course, to the horror of the patient and those around him. These reactions and the related illnesses were completely baffling, even for medics who over the centuries have grown accustomed to the oddities of nature.

Atopos: the word is aptly chosen. It means different, separate, deviating from the norm, unusual. Alcibiades uses this word in Plato’s Symposium to describe Socrates. Were the New Yorkers who decided to introduce the word into medicine aware of its ancient history? We do not know, but we do know that at almost exactly the same time, two doctors in what was then the German city of Breslau conducted an experiment that half a century later gave rise to the isolation of the causative factor of atopia. Heinz Küstner,
who was starting work at the university’s Hygiene Unit, told his supervisor Otto Prausnitz that about fifteen minutes after eating cooked fish, he became covered in hives. As the same symptoms occurred in several of his blood relatives, the two scientists began to suspect that the mystery might lie in the blood. So they decided to give a subcutaneous injection of several drops of Küstner’s blood serum to Prausnitz, who ate fish with relish and no difficulty. Nothing happened. But when twenty-four hours later they injected a trace of extract of cooked fish in the same spot, his arm went red and became covered in hives. Other healthy volunteers reacted in a similar way. Thus the sensitivity could be transferred to a healthy person, though not to guinea pigs. The two experimenters drew the conclusion that a person who is sensitive to fish carries antibodies in the blood (which came to be known as reagins) against fish antigens. Reagins injected into a healthy person attach themselves under the skin (nowadays, we know that they fix onto the membrane of special “explosive” cells called mastocytes), and wait for the antigen, the extract of fish meat, to come through. Then they combine with these antigens to cause an explosive reaction. Reagins proved extremely difficult to isolate; it took forty-five years for them to be obtained, highly purified, from the blood. They were found to be a protein in the immunoglobulin group, and were identified with the letter E. Thus atopia is a genetic tendency towards overproduction of immunoglobulin E. Reagins are often aimed at common antigens, and when additional factors are at work, they can lead to the development of hay fever, hives or asthma. Our math student and Doctor Cooke, who were both so severely allergic to horses, must have been producing special antibodies against equine tissues in high concentrations, whereas hay fever sufferers produce immunoglobulin E against the pollen from grasses, weeds and some trees, and those who are allergic to dust produce it against house dust mites.

♦ ♦ ♦

It is possible to cure illnesses using antibodies. Emil von Behring adopted this idea: by injecting animals with diphtheria bacteria, he obtained from them antibodies that neutralized the diphtheria toxin. Given to patients at a severe stage of the illness, which otherwise often ended in death, they saved their lives. For this research, in 1901 Behring was the first person in history to win the Nobel Prize for medicine. In its statement, the Nobel Committee wrote that he had “placed in the hands of the physician a victorious weapon against illness and deaths.” The eminent continuer of this research, Paul Ehrlich, said that “the immune substances ... in the manner of magic bullets, seek out the enemy.” Moved by these events, George Bernard Shaw wrote a play called The Doctor’s Dilemma, in which the doctor of the title claims that the future of scientific therapy belongs to immunology, whereas “drugs are a delusion.”

The antibodies acquired by inoculating horses and other animals were heterogeneous, however, and as a result they sometimes caused severe allergic reactions in the patients. Scholars and doctors began to dream of producing pure immunoglobulins on a large scale. To make this dream come true, it proved helpful to introduce the in vitro cultivation of cells of a particular tumor (multiple myeloma, or plasmacytoma), which produces immunoglobulins, but ones without any defined specificity. In the mid-1960s an ingenious technique was devised for combining two cells into one. And so from one myeloma cell and one immune system cell that secretes a specific immunoglobulin, a hybrid was obtained that produced an unlimited quantity of pure, homogeneous antibodies of a defined specificity. They were called monoclonal antibodies. Efforts began to seek applications for them in diagnostics and therapy. As always, the first success boosted the research. Rituximab, a drug used to treat non-Hodgkin’s lymphoma — malignant tumors in the lymphatic system —proved a spectacular success. Several others followed. More and more often, instead of the
full antibodies, which are large molecules shaped like the letter Y, small fragments of them are applied. So, for example, a short section of antibody, one that recognizes the “signatures” of tumors, in other words the individual receptors on their surface, is fastened onto an anti-cancer drug. Thus the antibody becomes a navigator for a shell full of the drug, guiding it accurately to its target. There are hundreds of these and similar antibodies in pre-clinical or clinical trials. Although they have not yet become a magic weapon, if he were still alive George Bernard Shaw could boldly write his next play about them.

Over the last few pages the words “antigen” and “antibody” have appeared a number of times. Alongside lymphocytes, they are the actors on the stage of this chapter, which talks about allergy, a field that grew out of the study of immunity, in other words immunology. The antigen comes to us from the outside world and encounters an antibody produced by the lymphocytes, which extinguishes and inactivates it. We call the sub-group of antigens that trigger allergy “allergens.” They are a small part of a vast set, including bacteria, viruses and everything else that is alien to us, or not part of our “self.” And the “self” is whatever the immune system recognizes as its own, as belonging to its body. This distinction protects us from being invaded by the world, and is crucial to our existence. Yet there are situations in which the doctor uses powerful drugs to stupefy the immune system. So it is in transplant medicine, where we put glasses on the immune system to make it start to see an alien, transplanted organ as its own. Then instead of sending millions of killer cells to attack the new arrival, it accepts it as an integral part of its body. The range of organs or tissues being transplanted is growing rapidly, as is the number of operations. The most spectacular transplants involve the heart. How can we fail
to admire forty-two-year-old American Kelly Perkins, who eight years after receiving a new heart climbed to the top of the Matterhorn and came down again, without any medical care? Or the Canadian man who had a heart transplant at the age of twenty-six, and twenty years later completed the Olympic triathlon distance in three hours and twenty minutes, coming seventy-fifth out of a hundred and twenty-six starters?

Of course, a transplant carries risks. In 1968 Andrzej Wajda made an extremely funny film about them, with the English title Roly Poly. It was based on a story by Stanislaw Lem, and the main role was played by Bogumil Kobiela. The hero is a race car driver who, after a series of crashes on the racetrack, has various organ transplants. We see his psyche gradually changing, until in the final scene, when the word “bone” crops up in the conversation, he attacks the man who said it and bites his hand, because he has some organs taken from a dog inside him too. In a foreword added to this novella much more recently, Lem explained that more than twenty-five years earlier he could not have known that a transplant from dog to man is impossible. But he seemed not to have noticed (though can he really have failed to notice anything?) that he had foreseen with amazing insight how features transfer from donor to recipient. Not long ago, a leukemia patient was described as having acquired hay fever along with a bone marrow transplant; he had never suffered from it before, but it was the bane of the donor’s life. However, considering the bone marrow saved his life, perhaps he found it easy to come to terms with the hay fever, which turned up every spring from then on.

The definition of the “self” —which in psychology has replaced the soul, without removing the semantic, physiological and existential problems related to it —in terms of immunology is simple.

Let us repeat: the “self” is whatever the immune system accepts as its own, as belonging to its body. All right, but what about the rest of the world? How do we recognize it? How do we keep it at a distance? How do we come by this unimaginable wealth of antibodies, each of which is directed only and exclusively at one single antigen among billions?

For decades it was believed that the antibodies we carry inside us —whether free ones in the form of gamma-globulin, or those anchored in the lymphocytes —form their individuality in an encounter with an alien antigen when it gets inside us. In other words, we thought they were characterized by flexibility, or plasticity, as if they were made of plasticine, on which an alien invader (the antigen) then stamped its mark and molded them to make them fit it exactly, as a key fits a lock. And then, by multiplying, they bonded with it closely and inactivated it.

This theory reached its apogee in the 1950s. It was the paradigm supported by the great Linus Pauling, among others, who was later to win two Nobel Prizes, but it collapsed like a house of cards thanks to a Dane named Niels Jerne. He worked for a Copenhagen producer of serums and vaccines, where he was involved with anti-diphtheria serums. When a horse was inoculated with diphtheria germs, its large organism produced significant amounts of antibodies, which were then given to patients as a serum — such as the one, we remember, that was given to Cooke, who was sensitive to horses and almost died. Jerne’s task was to standardize immunological serums. He hit upon a fundamental problem for standardization: a lack of proportion between successive dilutions of serums and the strength of their action. Each time the dilutions came out slightly different. He then formed a hypothesis that the equine serums he was testing contained special antibodies before the horse was inoculated. The inoculation merely caused a tremendous multiplication of these antibodies. Jerne then demonstrated that his
supposition was true. The phenomenon he discovered could have two explanations. The first one was obvious and convincing: earlier on, the horse had already come across the bacteria with which it was later inoculated, and preserved the memory of that contact in the form of a small number of antibodies in its blood. This seemed all the more obvious since horses and other animals used in experiments were especially prone to exposure to bacteria. Don’t we all carry such memories inside from unnoticed encounters with bacteria or viruses?

But Jerne thought of another eventuality. Perhaps, he wrote, the organism of a horse or a human being comes into the world already equipped with antibodies against all possible microorganisms? Maybe it has them ready in advance, before encountering any potential invader? And this proved to be the truth; eventually it was sealed with a Nobel Prize. And so the immune system is able to react individually to practically every microorganism, and moreover, to every antigen in the world around us, because it is suitably equipped for this purpose in the mother’s womb. Hundreds of millions of B lymphocytes, each with a different antibody on the surface, are just waiting to encounter their antigens. The fit of these antibodies does not have to be perfect at first, just good enough to bind the antigen; it rapidly improves within the progeny of those first lymphocytes. In recent years scientists have explained the molecular mechanism of this seemingly paradoxical phenomenon: we have a limited yet fairly large number of genes at our disposal, and with them we are able to produce billions of different antibodies.
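The arithmetic behind that last sentence can be sketched in a few lines of Python. The gene-segment counts and the junctional factor used here are rough, commonly cited approximations rather than figures from this chapter, so the output should be read as an order-of-magnitude illustration of how a few hundred inherited segments can yield billions of distinct antibodies:

```python
# A rough, illustrative estimate of antibody diversity arising from a limited
# pool of gene segments (combinatorial V(D)J joining). The segment counts and
# the junctional factor are approximate textbook figures assumed for this
# sketch; they are not taken from the chapter.

HEAVY_V, HEAVY_D, HEAVY_J = 40, 25, 6   # heavy-chain gene segments (approx.)
KAPPA_V, KAPPA_J = 40, 5                # kappa light-chain segments (approx.)
LAMBDA_V, LAMBDA_J = 30, 5              # lambda light-chain segments (approx.)

# Each B lymphocyte assembles one heavy chain and one light chain.
heavy_combinations = HEAVY_V * HEAVY_D * HEAVY_J
light_combinations = KAPPA_V * KAPPA_J + LAMBDA_V * LAMBDA_J
combinatorial_diversity = heavy_combinations * light_combinations

# Imprecise joining at the segment boundaries multiplies the total further;
# a factor of about a thousand is a deliberately conservative guess here.
junctional_factor = 1_000
total_estimate = combinatorial_diversity * junctional_factor

print(f"heavy-chain combinations:   {heavy_combinations:,}")
print(f"light-chain combinations:   {light_combinations:,}")
print(f"combinatorial diversity:    {combinatorial_diversity:,}")
print(f"with junctional variation:  {total_estimate:,}")
```

Even with these cautious numbers the count lands in the billions, which is the scale the clonal selection picture requires.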

How is such a highly original theory born? Where does the inspiration come from? Ten years after making this discovery, Jerne described an evening in Copenhagen, when he was on his way home from the laboratory. He was considering how 10^17 gamma-globulin molecules are able to provide the specificity of antibodies. Before he reached home, the theory now familiar to us had fallen into place.
Nietzsche would surely have applauded this reasoning, because he wrote: “Gradually it has become clear to me what every great philosophy so far has been: namely, the personal confession of its author and a kind of involuntary and unconscious memoir.”

When we think of Jerne’s theory— its scientific name is clonal selection—we are overcome with admiration and amazement, similar to what we feel as we gaze at a starry sky. This theory implies a connection with the starry sky and also with the closer world around us. It tells us that we are its reflection. The wise men and great poets sensed this, and their intuition anticipated science. Origen, the most famous theologian of the East, who “built the edifice of Christian knowledge with Greek bonding,” said: “Understand that you are another world in miniature and that in you are the sun, the moon, and also the stars.” Leibniz reckoned that any “individual substance” must contain a complete image of the universe, like a seed that conceals inside (if only as an image) the entire being that will later grow from it. Whereas Rainer Maria Rilke believed we are so similar to this world that if we were to sit still and keep quiet, it would be impossible to tell us apart from it. And so we have nothing to fear. It is not inconceivable, he says, that it is just as with those dragons from the oldest myths known to man, which at a decisive moment change into princesses. “Perhaps all the dragons of our life are princesses, who are only waiting to see us once beautiful and brave.” And finally Czeslaw Milosz, who wrote: “Perhaps the world was created by the Good Lord to reflect itself in the infinite number of eyes of living creatures, or, what is more probable, in the infinite number of human consciousnesses.”

I imagine that the molecules that form antibodies, in bonding with an antigen, must vibrate according to the laws of physics, and thus they resound inside us. So, as in music, all together they create

a canon, a mirror image canon, in which the second, following voice imitates the first voice perfectly. So perhaps we are playing the canon of the universe? And perhaps, like the harmony of the celestial spheres, this music resounds inside each of us, but none of us can hear it, except for the poets, to whom certain themes and inner rhythms break through from the depths of the body and dictate the meter of the poetry to them like a musical daemon. From the moment we are born we carry the world inside us. We are its matrix, and if the world were ever to perish, it could find itself within us and regenerate from us.

Ernst Haeckel, Cystoidea

An illustration from the book Kunstformen der Natur, 1904


In Mikhail Bulgakov’s novel The Master and Margarita there is a scene where Christ, who in this story is called Yeshua Ha-Notsri, is standing before Pontius Pilate. All morning the Procurator of Judea has been suffering from a painful headache. He has a hemicrania, a migraine, which he has never confessed to anyone. Then comes the famous question: “What is truth?”

Ha-Notsri replies: “At this moment the truth is chiefly that your head is aching and aching so hard that you are having cowardly thoughts about death.... But the pain will stop soon and your headache will go.”

“Tell me,” says Pilate softly in Latin, “are you a great physician?”

Here we have an instant diagnosis, read from the patient’s face —an attribute of divinity, of course. But also the ideal to which the doctor should aspire. Does this connotation sound arrogant or even blasphemous? I do not think so. In various world religions the gods have the power to pervade man, to look into his deepest secrets, as part of the gift of healing and reviving, skills in the sphere of the miraculous. The Bible gives us some examples. And did not
Asclepius have the art of resurrecting the dead? Who other than Isis, the most powerful goddess of ancient Egypt, brought Osiris back to life, although he had been chopped to bits? Echoes of these miraculous diagnoses and cures lie hidden in the doctor’s vocation.

The desire to perform a miracle, to break the chains of everyday normality, to be free of the laws that bind us, was there at the dawn of medicine. This was what magic was for — the common stem from which both medicine and art originate. Magic was a system based on the omnipotence of the word: “A correctly uttered magic spell can bring health or death, rain or drought, call up the spirits or reveal the future.” At a later point in time the art of the word, aided by experience, was joined by a second element: thought, an attempt to understand. From then on reason became a companion to the art of healing, and went along with it for thousands of years, until it produced science, which led to the flourishing of biology and medicine. Naturally, it influenced the art of healing, though it has never supplanted or replaced it. Nowadays, of course, in the era of technological revolution, we sometimes tend to assume it has.

Efforts to understand the nature of things came earlier. It was at the turn of the seventh and sixth centuries BC that practical skills were converted into scientific theory. This was apparently achieved, in the Greek colonies on the coast of Asia Minor, by one of the seven wise men, Thales of Miletus. Whenever the Greek philosophers sought their progenitor, they looked back in time and came to him. This was as far as they could go into the past — Thales was the first person to think of the unity of the universe, because the object of his investigations was nature. He was chiefly interested in its origins, so he asked from what bodies (nowadays, we would say matter) it had hatched and developed. Finding an answer was hopelessly difficult, because how could Thales know what was there at
the beginning of the world? However, he deserves credit not for providing answers, but for posing questions, something we value highly in science to this day. He also aspired to explaining some phenomena. This had already been done by mythology before him, but the point is that his method of explaining was different.

A story has survived about him that tells how one night he was in the garden, staring at the stars. Absorbed in his observations, he was so deeply lost in thought that he fell into a well. Then he heard laughter —loud, resonant laughter. It was a Thracian servant girl, who was laughing at him for seeking paths in the sky, but failing to walk the Earth.

A few years later, in nearby Ephesus, a memorable scene took place. It was night, and there in the temple before the goddess stood Heraclitus. He had brought the fruits of his life’s work, a great book of wisdom. The goddess looked at the visitor. A copy of her statue, kept at the Naples Museum, allows us to envisage this scene more vividly. Entirely made of black wood, she has collars and necklaces around her neck and is wrapped in ethereal robes; from the waist down she is encased in cloth, richly adorned with pictures of flora and fauna. She is stuck inside it, as in a box, with only the tips of her black toes protruding. Her hands are turned outwards towards the visitor, as if she wants to greet him, present or show something to him. Her torso is naked, covered in several rows of ample breasts. She is Artemis, equated with the Egyptian Isis, the goddess of nature, a powerful queen of the Mediterranean world and the source of creative force —the Mother of all Nature, as Apuleius calls her. She looms out of the darkness of prehistory— mysterious, strange and inscrutable. We see her, centuries later, in pictures, frescoes and statues. In Raphael’s work she has been doubled and supports both sides of the throne of Philosophy in the fresco called
Philosophy, on the vault of the Stanza della Segnatura in the Vatican Palace, just above The School of Athens. She adorns the covers of famous books by Antony van Leeuwenhoek, whose treatise on microscopy describes the interiora rerum, the insides of things. She opens an eighteenth-century French translation of Lucretius’s De Rerum Natura, published at the Sorbonne as De la Nature des Choses. As drawn by Bertel Thorvaldsen, she occupies the title page of a work that Alexander von Humboldt gave to Goethe. And finally she gazes at us from the cover of a long epic poem by Erasmus Darwin, grandfather of Charles, The Temple of Nature, or, the Origin of Society. In these and many other images the veil covering her has been drawn aside or even removed. This is done by the hand of Apollo, the spirit of Epictetus or, more and more often as the centuries go by, Science, represented as her high priestess. And so the secrets of Nature are discovered. There before us stands the naked truth.

We do not know if Artemis-Isis drew aside the hem of her veil before Heraclitus in the temple. Nor do we know the title of the work he laid at her feet. However, we do have reason to believe it contained the powerful concepts that he introduced into philosophy: the idea of divine reason pervading an ever-changing world full of contrasts. There in the temple he spoke the words written in his book, saying: “Nature likes to hide” (φύσις κρύπτεσθαι φιλεῖ). This laconic remark, pregnant with meaning, was the object of numerous interpretations. What did “Nature” mean to Heraclitus? It does not seem to have signified a large set of natural phenomena subject to laws. This meaning took shape later on. Whereas he may have thought of nature as meaning the essence of things, their main constitutional feature. And he may also have thought about the origin of things, about their beginnings, and how they came to be. Experts on ancient Greek maintain that for Heraclitus “the word
φύσις (phusis) could designate birth, while the word κρύπτεσθαι (kruptesthai), for its part, could evoke disappearance or death.” And so this aphorism may have concealed Heraclitus’s characteristic amazement at the transience of things and people, who come into being and disappear, are born, and then die. This is the firm belief that the structure of the world is woven from contradictory though mutually supporting elements. The Stoics related his words to the gods, hidden within myths. Later on they served as explanations for the difficulties of the natural sciences, justified the exegesis of biblical texts or defended pagan beliefs and pointed out the violence inflicted on Nature by the mechanization of the world. The closer to the modern day, the more the mysteries hidden behind Isis’s veil came to be related to the mystery of existence.

Heraclitus’s words, like Nature of whom they spoke, were hidden within themselves. They were an enigmatic aphorism, a riddle, a wise saying from ancient Greece, sounding like a prophecy by the Delphic oracle. They formed an extremely powerful metaphor that was preserved in language. They shaped the concept of Nature, of which we are actually a part; they spoke of discovering her laws and seeking the truth about the character of science. And there were scholars, including Immanuel Kant and especially Francis Bacon, who put Nature before a tribunal, announcing that the truth should be wrung out of her, “under the torture of experiments.” At the other extreme stood Goethe, who warned of the danger concealed behind the veil of “Nature the Sphinx.” Whereas Nietzsche, jokingly pointing out that decency demands us not to want to see everything naked, wrote: “One should have more respect for the bashfulness with which nature has hidden behind riddles and iridescent uncertainties.” And then he added: “Perhaps truth is a woman who has grounds for not showing her grounds?”

In the Hippocratic Corpus, dating from the fifth century BC, we read that to know Nature, “one cannot know anything certain ...
from any other quarter than from medicine,” and that “this knowledge is to be attained when one comprehends the whole subject of medicine properly, but not until then; and I say that this history shows what man is, by what causes he was made, and other things accurately.” The author was convinced that by means of art one could wring signs out of Nature, clinical symptoms — but “without damage.” In these words we hear Greek moderation and the Hippocratic command, “Above all do no harm” (Primum non nocere). “Because moderation,” as Pythagoras said, “means not doing harm.”

In the Musée Royal des Beaux-Arts in Brussels there is a painting by Pieter Brueghel the Elder called The Fall of Icarus. It is painted in shades of green, reflecting the illuminated green of the sea picking up flashes of the sun, which is setting beyond the horizon. We gaze at the sea from the gentle shore above a small bay. In the foreground a ploughman is following his plough, lower down a shepherd is tending his sheep and a fisherman is leaning forwards right at the water’s edge. An elaborate three-masted ship is sailing across the bay. The fall of Icarus is not disturbing this peaceful scene. Close to the shore, between the ship and the fisherman, all we can see of the boy who has fallen into the sea is his legs sticking out of the surface and an outstretched hand. There are a few feathers floating in the air. No one seems surprised; no one is taking any notice or reacting. Except perhaps a partridge that has perched on a branch over the water behind the fisherman’s back, and has fixed its gaze on the disappearing boy. This indifference, this lack of understanding, astounds W.H. Auden:

In Brueghel’s Icarus, for instance: how everything turns away
Quite leisurely from the disaster; the ploughman may
Have heard the splash, the forsaken cry,
But for him it was not an important failure; the sun shone
As it had to on the white legs disappearing into the green
Water; and the expensive delicate ship that must have seen
Something amazing, a boy falling out of the sky,
Had somewhere to get to and sailed calmly on.

Icarus does not die alone; he dies among people. Brueghel found a brilliant way to express the truth about man, about “the phenomenon of the world’s indifference, which belongs to fundamental experiences.” We do not want to see suffering, so we turn our backs on misfortune. It always comes at the wrong time; it obstructs and nags like a thorn, even though it is not stuck under our skin. It catches us off guard. And only the doctor, the nurse or the hospital chaplain comes out early to meet it. They have to preserve their sensitivity to avoid becoming participants in the scene depicted by Brueghel.

Sensitivity has a special place in medicine. On the one hand, as doctors, we have to put on a layer of armor; otherwise, we would never be able to cope with all the misery and suffering around us: the doctor would start to cry with the patients, after an hour’s work he would be good for nothing, and the surgeon would break down at the operating table. We put on this armor every day, both doctors and nurses.

On the other hand it carries a risk, because over time it can breed a lack of empathy, a feeling of indifference. But in fact it is emotion that sends the first impulse to stir the doctor into action. I imagine, Reader, that if we were walking along together, and someone were to fall over, we would both instantly bend down to give him a hand and raise him to his feet; at most, I might know a bit more about how to help him. That’s an ordinary, human reflex, isn’t it? Something is bound to react inside us. It should be automatic, but life does not depend on reflex responses alone, because
we are not machines. You have to nurture this sensitivity in yourself, to have a sensitive heart. It is not often talked about, because sensitivity is mainly expected of artists. Perhaps the connections between medicine and art are apparent at this level too.

Being sensitive enables us to be open towards another person, ready to admit him. The sick “open up spaces for mercy. By their illness and suffering they call forth acts of mercy and create the possibility for accomplishing them.” But how difficult they can be! Imagine it is night-time, and the ambulance brings in yet another gibbering, drunken patient. Or the hospital corridor is teeming with people in wet coats who have been waiting hours to be admitted and examined. They are fed up to the back teeth, but are they the only ones? And yet how much each of them can bring into our lives! What a happy surprise there is in store for us when a quiet person, whom we have looked down on in our arrogant way, turns out to be wonderful! I once had an experience of this kind, and it had a medical connection.

It happened during the 1981-1983 martial law period in Poland, which was also known as “the Poland-Jaruzelski War.” In Solidarity we prepared a First of May counter-demonstration to spite WRON —the Military Council for National Salvation, as the ruling body headed by General Jaruzelski was called. The Polish word wrona also means “crow,” so people kept singing street songs about “the green crow,” because of the green military uniforms worn by its leaders, the people responsible for martial law, as well as their mouthpieces, the television presenters.

It was 1 May 1983, and so for May Day the authorities had organized speeches in support of martial law in Krakow Marketplace,
next to the Town Hall Tower. The Soviet consul and some other government people were already on the tribune. The clock struck ten, and from the forest of loudspeakers surrounding the Marketplace came a solemn announcement: General Jaruzelski, First Secretary of the Central Committee of the Polish United Workers’ Party, president of the Council of Ministers and chairman of WRON, was going to make a speech. There was a moment of silence and anticipation, but instead of the general’s voice from the loudspeakers out came ... a long, raucous noise of crows cawing. It had worked! The cawing filled the Marketplace, echoing down the neighboring streets and carrying a long way, as if it were only going to stop at the much-hated headquarters of WRON, in Warsaw. Hordes of people just then emerging from Saint Mary’s church were overcome with a frenzy of joy. In a broad wave we set off towards the party tribune, which the dignitaries were starting to abandon in confusion. We had no bad intentions; we were just carried away by joy and elation. However, I did not know that the head of the procession was being filmed by hidden cameras. A few days later, in Warsaw, I was dismissed from my job as deputy vice-chancellor of the Medical Academy and forbidden to teach my classes. “And now,” said the Minister of Health, “you can look forward to a trial for inciting riots against the people’s authority.” Crowds of Cracovians came to the trial. We stalled for time, because there was talk of an amnesty likely to coincide with the approaching anniversary of the PKWN Manifesto on 22 July (the PKWN was the Polish Committee of National Liberation, a Soviet-sponsored body that governed areas of Poland newly liberated from German occupation in 1944).

Finally, the decisive moment came: identification of the accused on the basis of some photographs, apparently not in very sharp focus. A witness was brought into the courtroom, and in came Captain Mieczyslaw Dec from the Medical Academy’s Military
Studies department, whose head was the academy’s military commissar —during martial law Jaruzelski imposed one on every academic institution, to keep an eye on their senior staff. “Oh no, it’s him —it couldn’t be worse,” I thought. And back came the memory of the morning long ago when he had caught three of us medical students playing cards on the back bench during a talk in the jam-packed Military Studies lecture hall. An hour later we were standing at the front of the assembly. “What have you got to say?” asked Lieutenant Dec. “The ace of hearts is missing, Sir,” replied my friend Janek, holding our pack of cards that had just been returned to him.

Do I have to write what happened after that? Do I have to say that he could never forget us and our arrogance for all the years that followed? Because we had Military Studies classes once a week for the entire course of our medical studies.

And here he was in the courtroom, years later, in a captain’s uniform by now, as a witness before the tribunal. One of the three judges showed him the photographs and asked: “Who do you recognize here?” After a long pause the Captain replied: “I can’t see, I didn’t bring my glasses —I didn’t know they’d be needed.” “Take mine!” cried the judge, removing his glasses from his nose. The Captain calmly took them, tried them and replied: “Too weak.” They had to take a break in the trial. And although I was eventually convicted, it happened with some delay, three days before the amnesty, and I managed to avoid prison. None of us had expected such courage and ingenuity on the part of the captain, who took a very big risk. I have great admiration for him to this day.

This story had a third and final act. It was 1990, and I was starting a new job as vice-chancellor of the Medical Academy, having been freely elected to the post after the fall of communism. An application came in for a small salary raise before retirement, which the applicant was due to take soon. It was signed “Colonel Dec.”

Dec? I had not seen him since that day at the trial. I had thought of meeting with him to say thank you, but I knew that at the time it would have been incriminating for him, and later on ... I forgot. “Please ask the Colonel to come and see me,” I said to the office manager, who shot me a glance from over a pile of correspondence without hiding her surprise. The matter was self-evident, trivial, the raise was due...

There we sat face-to-face. “Colonel,” I said, “after all these years I want to say thank you and tell you how much I admire your courage.” For a while he said nothing, and then he looked me in the eye and quietly asked: “Citizen Vice-Chancellor, may I have permission to go now, Sir?” “Colonel,” I boomed, “permission granted!” He stood straight as a ramrod and clicked his heels together. About turn, and Colonel — Captain — Lieutenant Dec left the final act of the story that had brought us together.

This tale in three parts reminds me of the Solidarity union we formed and that we lived by throughout the 1980s. We were the latest generation of Poles to fight for independence, a fact that none of us doubted in the slightest. Of course, the words “trade unions” were always tripping off our tongues, but it was clear the aim was liberation. We had gulped down the breath of freedom when Solidarity first erupted. And even in the later, increasingly dismal 1980s, when the regime did everything it could to “atomize” society, we experienced the “unity of hearts” we heard about from our Pope, without whom none of it would have happened. It was to him, to the Vatican, that I twice flew from Krakow in 1981. One time a friend came with me, a brilliant scholar, who was later to prove his integrity during martial law. But at that point, when there were shortages of everything in Poland, and western universities were opening their doors to us, he was troubled by the question of
whether to stay in the West. He sought an answer from the Pope, and asked the question. John Paul II looked deep into his eyes and answered: “Isn’t it already enough that I have had to stay here?”

In 1984, a renowned Italian professor of pharmacology named Rodolfo Paoletti sought me out, and in a concerned tone spent a long time explaining: “Give it a rest. You have no chance at all. You are in for the fate of the Balkans under the Turks — Soviet occupation for the next three centuries. Better send your children to Russian schools.” Although his motives were well-meant and realistic, I replied with the traditional Italian bent elbow. Did any of us really expect communism to collapse in our lifetime? But it did, and we won. The independence that followed proved difficult, and our everyday world was crippled. The national capacity for resistance, the ability to put up opposition, ceased to be of any use. Suddenly we felt a lack of people —in politics, the courts and schools; in every area of life — following the terrible devastation of war, as well as the devastation that had gone on for decades ever since the war had ended. Personal interests began to grow, and unrestrained selfishness. We soon came to believe that “triumph over the powers of evil does not make a good person of anyone,” as Joseph Brodsky put it. If only we had had as much success in the years of work on internal renewal as we did with independence.

For those of us living here in the 1980s, medicine was different. Sometimes a patient on a hospital ward would steal his neighbor’s medicine because he knew there was not enough of it for him. The faucets were always disappearing, because they were unobtainable in the city. Not to mention that food was apportioned on ration cards, or that toilet paper was always in short supply all over the country, and then there was the gray, ashen dust that got into everything — it was under our eyelids, between our teeth, in the hospital, on the streets and in the houses. We organized the distribution of medicines and clothes that humanitarian organizations
sent by truck from abroad (it was French and German people in particular who brought them to Krakow, brave, courageous men and women who subjected themselves to various forms of mortification), we bandaged and hid people who had been injured in street demonstrations, we printed and distributed leaflets and forbidden books and we built up an underground organization at the local and national level. For many of us, there was nothing more important than Solidarity.

But meanwhile, in the world outside, medicine did not stop and wait for us. It was busy making amazing advances and tackling illnesses. It was probing deep inside the human body, turning more and more often towards the exact sciences in order to understand its own discoveries.

In the past two centuries the final judgment on illness was played out on the stage of the theatrum anatomicum. The autopsy, supported by microscope research, established a verdict that was irrevocable. It confirmed, or refuted, the clinical diagnosis, revealing new areas of knowledge of mankind. This was still the case in our grandfathers’ and fathers’ day. But by the time I was a student, biochemistry was taking the lead, and the causes of illnesses were being discerned in the transformation of chemical compounds during metabolism and in its disorders. Increasingly often, mysteries were being unearthed from inside the human body —without opening it up. Methods for producing images of the organs flourished. The large-scale collection of tissue samples from live patients also began, to be used to recreate the picture of an illness. Today, several decades on, we seek explanations for the phenomena occurring in the human body at the molecular level, by examining the molecules that form proteins and DNA ... And so from inspecting the entire body we are penetrating deeper and deeper, into the ever
tinier elements that form each of us. What may be in store for us at the deepest level? Mathematical equations that describe the physical world of man — that is how some would answer.

It is not the first time doctors have turned to mathematics. In the seventeenth century the connection between these two disciplines seemed so close that on the title pages of their works (and also on their epitaphs), many doctors wrote the honorable title “medicus mathematicus” after their names. So, for example, at Saint Wojciech’s church in Krakow, on a late Renaissance tombstone we read: “Valentino Fontano medico math[ematico].” Shakespeare calls the doctor who examines Lady Macbeth as she tries to wash the blood from her hands in a sleepwalking trance a “Doctor of Physic,” and in my childhood the old people in Subcarpathia used to say “physicist” instead of “doctor.”

Where did it come from? From a fascination with the progress of mathematics, physics and mechanics — from admiration for Newton, Descartes and William Harvey. The symbol of a turning point in science was Galileo. Until then, to explain the phenomena of nature, scholars simply cited the traditional authorities. From his time onwards they began to rely on their own observations and experiments. Admiration for Galileo became so widespread that in 1737, when his remains were transported to the upper nave of Santa Croce church in Florence, the middle finger of his right hand was separated from the rest of his body to be kept as a holy relic. Now it is displayed in the Museo di Storia della Scienza. On the cylindrical alabaster foot of the chalice in which it lies is the inscription: “Do not spurn the remains of this finger, whose right hand showed mortals paths on the horizon and celestial bodies never seen before.” Indeed, “we see ... Galileo’s finger dabbling in all our current scientific pies.” Galileo was also the spiritual father of entire schools of iatromechanics and iatromathematics, which in the sixteenth and seventeenth centuries tried to make medicine into an exact science.

In this period a doctor and professor in Padua and Venice named Santorio Santorio constructed not only thermometers, pulsimeters and hygrometers, but also scales so large that “he sat in them himself, and even lived there (with a bed, a desk, etc.), while weighing and measuring everything, like Galileo.” In time, these ideas and measurements of his led to the study of metabolism. Santorio’s pupils, the iatrophysicists, worked on the mechanics of the body, comparing the blood circulation system to hydraulic machines and reckoning that the nerves were small tubes in which juices circulate.

But does biology, and with it medicine, have its own universal laws? Or is that the exclusive domain of physics? The views of biologists and physicists have taken a different shape in this regard. Let us consider the laws of Mendel, the so-called central dogma of genetics, or the “law” of natural selection. Exceptions to these laws have not caused alarm or sent biologists back to the drawing board to formulate new laws that cover the exceptions. Instead, they have come to be a reminder of how complex biology really is.

Nowadays, however, when physicists, mathematicians, IT experts and engineers are sought out by biologists and doctors, when university systems biology departments are springing up like mushrooms after rain, questions about principles and laws are taking on practical significance. Can one really speak of the emerging principles, or maybe even laws, of networks? Of a kind that would cover some incredibly complex systems, such as metabolic reactions, or the paths of signals inside a cell? Can there really be a “universal architecture,” representing “one of the very few universal mathematical laws of life”?

Perhaps, as they enter all this virgin territory, the physicists will develop new analytical tools better suited to biological systems. Or maybe on the contrary, their concentration on biology will lead to
a change in epistemological aims and an abandonment of the search for universal laws.

We are going down to deeper and deeper levels of knowledge that describe mankind more and more accurately and precisely, and we are approaching the deepest level, the very foundations. But does this lowest level really exist? Isn’t it like numbers? For example, if we look at the set of positive real numbers, there is no smallest number in it. Beyond the one that seems infinitely small, there is always an even smaller one waiting. It may be similar with the hierarchies of knowledge. We are doomed to keep descending into the depths, but we will never be sure we have reached the limits, because there may not be any.
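
Put formally: the set of positive reals has no least element, because

$$\text{for every } x \in \mathbb{R}_{>0}, \qquad 0 < \tfrac{x}{2} < x,$$

so any candidate for the minimum is immediately undercut by its own half.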

When I was a student, the proton was regarded as an elementary particle. Then quarks were discovered, and in comparison the proton became a complex structure. Some people suspect that even the electron is composite. So is matter infinitely divisible? Will we keep on finding smaller and smaller particles, more and more elementary, because there is no limit where the division stops? Maybe the elements that form the fabric of the world are not particles, but are like strings. Multidimensional space at the deepest level would contain nothing but taut membranes — quantum fields. Vibrating continually and breaking the symmetry, they would emit particles and rays. Each field would send out its own vibrations and the universe would be one super-rich polyphony. It would resound, if not with Plato’s “harmony of celestial spheres,” then with the modern “quantum canticle.”

From behind these elevated concepts that lie beyond the reach of everyday intuition peeps Greek thought. The Greeks consolidated their enchantment with the order and harmony prevailing in nature by extending the expression kosmos to mean the entire
universe. The word kosmos originally meant “beautiful,” or “decorative,” evidence of which remains in the word “cosmetics.” The Greeks noticed this unusual harmony in the construction of living organisms, and they sensed that “the world was an organism rather than anything else.” What is the arche of the world? they asked. And so they instilled in research into nature the instinct to look for the source, the beginning, the very first principles. The physicians’ dreams of a single, unifying theory, with a hidden equation at the very bottom of mankind, are an echo of those dreams. Just like the dreams of modern physicists about the theory of everything, in other words, a theory that focuses all the basic laws about the forces of nature in a sequence of equations. Though the majority regard them as a utopia, these dreams live on. This reverie about the ultimate theory returns like an echo in John Donne’s words:

If ever any beauty I did see,

Which I desired, and got, ’twas but a dream of thee.

In the sphere of music, in a sense, Richard Wagner came close to a unifying theory with his idea of a Gesamtkunstwerk. In his work music grew out of poetry, and the two of them found their combination in the theater. Thus a space was created where the performers and the audience were united. And Bayreuth —endlessly permeated with music, the gravitation of leading tones and chromatic harmony to an unprecedented degree —was his Gesamtkunstwerk come true.

From the dawn of philosophical thought, the question of whether we can understand the world by breaking it down into its elemental parts has been the subject of heated debate. Various answers have been provided, but it is hard to deny that the flourishing of the natural
sciences is closely connected with reductionism. The tempestuous development that began with the introduction by Galileo of the isolated system continues to this day. Just as in physics and chemistry, reductionism has contributed to the phenomenal development of biology and medicine, reflected in the number of clinical specialties. A hundred years ago neurology and dermatology had already begun to emerge from internal medicine, the queen of medical sciences, and half a century ago cardiology, gastrology, nephrology and many others budded forth. When I was a student, my professors and lecturers used to say proudly: “I am an internist,” or “I am a surgeon,” and they not only examined the patient, but (after a case conference, if need be) they were not afraid to undertake his treatment. Nowadays, if a patient with knee pain goes to the family doctor, he does not even tell him to roll up his trouser leg, but sends him to an orthopedist, who sends him to a rheumatologist, who in his turn sends him to a physiotherapist, etc. Although a narrow specialty is necessary, especially at the university level, it should not obscure a thorough look at the whole of the patient’s problems.

Only recently did doubt appear as to whether reductionism was approaching the limits of its potential, because it is an excellent way to research linear reactions, but only those. Meanwhile, dynamic, highly complex, self-organizing systems are entering the realm of scientific interest. We are perceiving them in the theory of evolution, chaos theory and quantum mechanics, in synergistic effects (which are indeed frequent in medicine), and in the growing field of systems biology. They slip through the net of concepts in which we try to catch them, and dodge out of the way before being trapped in mathematical algorithms. Their components, so abundant that they are hard to count, are internally linked in a non-linear way. Reductionism fails to describe them. We are starting to look for principles that would raise us up, from the deep levels of knowledge we
have already achieved to a point where we can embrace the whole, which is something more than the sum of the individual elements. The movement is changing direction, turning back the opposite way. We are talking about “emergence” and looking out for the principles of non-linear thermodynamics, known as “the physics of creative processes.” The conviction is spreading and taking root that only non-linear equations can describe the birth of novelties, comprehend and perceive the structures of a whole that is richer than the sum of its parts. Will this really be possible? Let us not forget Democritus’s warning: “Do not try to understand everything, because you will find everything incomprehensible.” And there is a lot of truth in the tongue-in-cheek remark that natural scientists—physicists, chemists and biologists —work effectively by day as reductionists, and “by night they devote themselves to dreaming about the theory of everything.”

The staggering achievements of medicine are inarguably the result of its metamorphosis into a science, enclosing biological research in the rigors that apply to the exact sciences. Even in purely clinical departments where it would seem especially difficult, “evidence-based medicine” is flourishing. Ways of testing drugs in the hospital and of evaluating their efficacy have been strictly established, while also preserving objectivity and impartiality. These standards have been extended to other forms of treatment too, including surgery, rehabilitation, etc. The test results are subjected to severe, critical analysis by groups of specialists, even entire institutes recruited for this purpose, and ultimately —once consensus has been achieved —proclaimed to be recommended standards, principles for medical procedure. Increasingly often, they have the adjective “global” in their name, implying an ambition to cover all the countries in the world. They are regularly amended, and
have unquestionably contributed to a crucial rise in the standards of medicine. Some people are offended by their schematism, especially clinicians with a flair for science, though they are not in fact principally aimed at them. Among them we find some rather drastic opinions, such as: “There is no such thing as consensus science. If it’s consensus, it isn’t science. If it’s science, it isn’t consensus. Period.” Of course, standards, recommendations and consensus all collapse in ruins the moment there is a breakthrough discovery, as happened recently in gastrology.

In the late 1970s Robin Warren, a pathologist at Royal Perth Hospital in Australia, working independently, noticed the presence of tiny, curved bacteria in the stomachs of deceased patients. Where there were most of them, the mucous membrane showed features of inflammation. Warren involved a young doctor called Barry Marshall in his research. After laborious attempts they succeeded in cultivating the bacteria in the laboratory. They christened it Helicobacter pylori.

No one wanted to believe them. How could bacteria live in the most unfriendly place in our bodies, full of a water solution of hydrochloric acid (pH 1.0-2.0) and digestive enzymes? They gathered a lot of proof, including evidence that Helicobacter is the cause of ulcers. But the reviewers were extremely critical. The Lancet and other journals refused to accept their work for publication. At this point, after an initial gastroscopy that showed he had a healthy stomach, Marshall drank a small glass of freshly cultivated bacteria. His stomach ached, he felt dizzy, he vomited and his breath smelled like a sewer. A gastroscopy showed inflammation and ulceration, which was cured with antibiotics. Then The Lancet accepted the work for publication, and in 2005 the two scholars won the Nobel Prize for discovering the cause of stomach and duodenal ulcers, as
well as a new, effective way of treating them. It is worth adding that Helicobacter has been close to us since the dawn of time. When our ancestors emigrated from Africa, they already carried these bacteria inside them. Like a hitchhiker it has moved about with us for at least sixty thousand years.

The blindness of The Lancet can be explained by exaggerated caution. Is it really exaggerated? Indeed, not so long ago the world was gazing in admiration at the Korean scientist Hwang Woo-suk. But after being published in the top scientific journals, the results of his research on how we can obtain stem cells from the cloned cells of human embryos turned out to have been faked, entirely fabricated. At roughly the same time an article appeared in The Lancet from Oslo about a new potential treatment for cancer of the mouth. It described observations made by studying nine hundred patients, and proved entirely fictitious. The lead author admitted fakery. What role his thirteen co-authors played, including his own twin brother, we can only guess. But let us take pity on The Lancet and stop listing its disastrous errors at that.

The more prestigious the journal, the finer the sieve of reviewers through which a paper must force its way before it appears in print. Harsh reviewers feel safe because they are protected by anonymity. Yet they do make some incredible mistakes. One of these involved the story of Jacques Benveniste. When I first met him, towards the end of my studies, he had already had a racing accident, but he enthusiastically encouraged me to drop medicine and take up motor racing. Some fifteen years later he was head of a laboratory at the Pasteur Institute in Paris. Brilliant and charismatic, by then he had the discovery of the platelet-activating factor to his name. One day I was extremely surprised to read a work of his in Nature, the leading scientific journal, about “water memory.” He
claimed that water serving as a solvent preserves a memory of the substances that have reacted in it. The memory continued to last when all trace of the reacting substances had long since gone, and the water itself had been diluted so much that no more than a droplet of the original solution could have remained. The U.S. National Institutes of Health sent three investigators to Paris, specialists in relentlessly tracking down scientific fraud. But they failed to repeat their usual experience at Benveniste’s laboratory. He himself spent many long years conducting a debate on the pages of the journals in his efforts to obtain proof of “water memory.”

A similar verification of research took place three hundred years ago. In the archives of the Royal Society in London, a fierce debate has been preserved in the form of correspondence between one of the Society’s first members — Hevelius, the famous Polish astronomer from Danzig —and a member of the Society’s board —Robert Hooke, the brilliant experimenter who was the first person to see cells under a microscope. Hooke had tried to prove that Hevelius’s astronomical observations could not be precise, because he did not use a telescope equipped with sights or a micrometer. To settle the dispute, Edmund Halley was sent from London to Danzig; one of the youngest members in the history of the Society, he was the man whose name was later given to the famous comet. After two months of testing Hevelius’s observations in minute detail, Halley confirmed their reliability. And so the finest atlas of the sky in Europe, one copy of which Hevelius gave to the king of Poland, Jan III Sobieski, and another to the king of France, Louis XIV, was genuine. No documents have been preserved at the Royal Society showing how Hooke reacted to “the verdict confirming that his adversary was right.”

Nowadays, expenditure on scientific research in biology and medicine is reaching astronomical sums and the armies of scientists are growing in arithmetical progression. Apart from the brilliant
exceptions, how can we tell who is who among the masses of “scientific workers” out there? Who should be given grants for their research, and who should be promoted? In short, how do we measure recognition in science? How to weigh up success is the question on everybody’s lips. In the global village everyone bares all in the search for applause. If you cannot hear the bravos you might as well not be alive at all. Here is what the Polish poet Cyprian Norwid wrote about it:

These days success is an idol—he has spread wide
His sorcery like a map of planet Earth;

Even ancient victory has now stepped aside,

In spite of her eternal worth!

A hundred years ago David Hilbert set the criterion for excellence for a scientific work. It is proportional to the number of works that have been rendered completely irrelevant, or that can be skipped as a result of this new, outstanding work appearing. Thus it creates a higher standard for examining a scientific issue. Theoretical physicists look forward to this sort of work. The brilliant physicist Andrzej Staruszkiewicz writes as follows about the Copenhagen interpretation of quantum physics: “Here we have a real intellectual logjam that has worn everybody out and has had a disastrous effect on the whole of theoretical physics, which has lost its ontological clarity of world vision; anyone who contributes to removing this logjam will be doing humanity a great service.”

For several decades the measure of success has been the response, the publicity that a scientific publication receives. It can be counted up and presented numerically as the number of citations gained, in other words, how often a given publication has been cited in the professional journals, especially the most prestigious ones. A whole new trend is developing called bibliometrics, or scientometrics. Comparing the frequency of citations by authors in
various fields of science is risky; the so-called h-index (“h” for Hirsch), which was introduced in 2005 and rapidly gained popularity, was designed to streamline such comparisons. But even within the scope of a single field, doubts can exist. How are we to take the words of the great contemporary mathematician, winner of the 1998 Fields Medal (the equivalent of the Nobel Prize in mathematics), Timothy Gowers, who declared: “Most mathematical publications are incomprehensible to most mathematicians”?
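
The index itself can be stated in one line (a paraphrase of Hirsch’s definition, not his original wording): an author’s h is the largest number k such that k of his or her papers have each been cited at least k times,

$$h \;=\; \max\bigl\{\,k : \text{at least } k \text{ papers have} \geq k \text{ citations each}\,\bigr\}.$$

Thus an author whose five papers have been cited 10, 7, 5, 2 and 1 times has an h-index of 3.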

The huge popularity of the citations index can be explained by the fact that it appeals to a major feature of human nature — vanity. “Vanity,” wrote Blaise Pascal, “is so anchored in the heart of man that a soldier, a soldier’s servant, a cook, a porter brags and wishes to have his admirers. Even philosophers wish for them. Those who write against it want to have the glory of having written well; and those who read it desire the glory of having read it. I who write this have perhaps this desire, and perhaps those who read it ...” With the radicalism typical of his thinking, Pascal suggested that scientific works should be published anonymously, without giving the author’s name, cutting them off from the anticipation of applause, from vanity and all-embellishing amour propre. One of his contemporaries made the following comment on his proposal: “It’s easy for him to speak. If he publishes a work anonymously, even so everyone in Europe will know who it is by!”

Zbigniew Herbert’s words sound like an echo of Pascal:

The Old Masters

did without names.

their signatures were

the white fingers of the Madonna

Let us leave the index of citations in peace, although it would be a very good thing in Poland if the ministries, universities and scientific institutes chose to make regular use of it in assigning research
grants. But let us try to find other ways of measuring the value of scientific work. Nothing is better than time, that is for sure. Time separates the wheat from the chaff. But we refuse to wait, we simply cannot, because we won’t be here any more when the truth is revealed. We want it all here and now. However, there are no recipes for scientific discovery or for success. Max Delbrück, a brilliant physicist who introduced scientific thought, analytical and quantitative, to biology, reckoned that in performing an experiment we should admit a certain degree of freedom, some flexibility, in order to perceive the unexpected, the surprise that is worth far more than the expected result. He called this “the principle of limited sloppiness.” The British and the Americans use the word “serendipity,” a lucky hit, an unexpected discovery, which does not mean an accidental one at all. That was how Ryszard Gryglewski discovered prostacyclin. Professor John Vane had given him a sample of unstable chemical compounds (professionally called endoperoxides PGG2 and PGH2) to investigate in which organs (and whether at all) they produce thromboxane A2, a substance that had just been discovered in the small blood cells, thrombocytes. So, with varied results, Gryglewski tested ground-up cells (homogenates) from various animal organs. When he added the test compounds to some arterial homogenates (microsomes of the aorta), thromboxane was not produced and nothing happened. Most of us would have regarded that as a negative result and moved on. But he noticed that, even so, less of the added compounds remained. Could they have changed into something else? He sought the advice of chemists and looked in various books, but he couldn’t find the answer. Then he had a brilliant thought. Maybe something so unstable was produced that it immediately decomposed at room temperature? So he set a trap for that “something” by repeating the experiment on ice. This time the detector system showed the appearance of something that was unknown. It was prostacyclin. In the British laboratory where we were working, at first no one wanted to believe in
this “Polish hormone which sort of is, but isn’t,” as they called it. In a series of quick, ingenious experiments Gryglewski and his colleagues provided proof of the existence of prostacyclin — an important natural defensive substance that protects our arteries. Its effect on the human system was then defined and introduced into therapy here in Krakow.

Intuition helps us to sense reality, as it were, to guess at it, or even see it. We value it highly, though it eludes definition.

The word has its origins in the Latin intueri, meaning “to examine.” It is this intuitive flash of the subconscious, this “short-cut around reason” that sometimes lets the doctor see what is going on in the patient’s body. We experience a sort of inner revelation, an insight into the heart of the matter, into things that had seemed hidden from sight... Looking at my own everyday medical work, I am sure I do not devote a lot of time to thought, like a mathematician or a physicist. No —medical practice is not just a rational act. As I listen to the history of the illness, as I make observations, perform tests, make my diagnosis and implement treatment, I feel like an elk moving through a forest it knows pretty well: it catches scents and sounds, and looks for clues ... It tracks these impressions and uses them to build up a picture that will enable it to react as precisely as possible. There is often something that makes me wonder, because it diverges from the norm and doesn’t form a familiar shape. Then the mystery works away inside me, only to come back unexpectedly at night, or a few days later, sometimes again and again, with a solution that may not be right at all.

And as for rationalism ... One morning I am on my daily ward round. I enter a ward where a forty-year-old female patient is lying in bed. She complains of sleeping badly — she cannot get to sleep before four a.m. “Then I read your book, Catharsis,” she says. “Aha, and it sends you to sleep,” I say. “No, it doesn’t,” the
embarrassed patient tries to defend herself. “Maybe it’s this pillow that’s stopping you from sleeping?” She is lying on a pillow embroidered with a drawing by Polish satirical cartoonist Andrzej Mleczko, showing the devil tempting a woman. The patient looks at me incredulously.

A week later I come to see her again, in the same bed, in the same ward. “I’m sleeping well now,” she says. “I changed my pillow.”

When a new scientific discovery is made, we see something no one has ever seen before us, sometimes something that no one has even imagined might exist. Whereas in clinical practice there are times when we dream of noticing something we have already seen before. This happens when days or even weeks go by, but the scattered symptoms of an illness refuse to form a pattern, we cannot make a diagnosis and we do not know what is wrong with the patient. But maybe we have seen it before? Isn’t it a repeat, a rerun, but of what?

When my younger son reached the age of five, he started taking an interest in everything I was doing. One evening I was working on a lecture. He wanted me to tell him about it. “It’s a good thing, that sort of lecture,” he said once he had heard me out. “You put it together once, and then you keep repeating it to the students every year.” “I never repeat myself!” I replied indignantly. Next day he came up to me with a clean sheet of paper. “Sign here, please.” I signed. Then from behind his back he pulled out my ID card and showed me the signatures, on the card and on the sheet of paper. They were identical. Then he asked triumphantly: “So you never repeat yourself?”

So it is as if the doctor uses one hemisphere of the brain —the creative one —to seek new solutions, while using the other to watch out for repetitions. And so he develops his art, his skills. When we
talk of first-rate skills, of the heights of the medical art, we think of the virtuosity of a surgeon. Comparing him with a virtuoso pianist or violinist does not seem far off the mark. And just as in music, this excellence sometimes has family roots in surgery too. Alexis Carrel is a good example. He developed vascular surgery, creating the foundations for organ transplants. His work was crowned with the Nobel Prize for medicine in 1912, the first time it had been awarded to a scientist working in the United States. Operating on the blood vessels is a fine art. To prick small arteries without causing bleeding and then join their ends together, with a straight stitch and diagonally, pulling the thread gently but firmly —all this demands exceptional dexterity. And what if the operating field shrinks, as we operate on small creatures — mice, guinea pigs, or ... babies? It is as intricate as making lace. Alexis Carrel, a Frenchman who emigrated to the United States, learned this art from the lacemakers of Lyons, one of whom was his mother.

Watching a brilliant surgeon at work, actively participating in an operation, can be a kind of aesthetic experience. There is not a single superfluous movement, just fluency, confidence and rhythm. This rhythm takes over the entire company, and the whole team starts to move as if in a trance. It looks as if they could close their eyes and the operation would keep going on its own. But if an unforeseen obstacle arises, followed at once by another, and then yet another — each of which is capable of ruining everything in an instant —you have to face up to them and deal with the unexpected. The surgeon must have the presence of mind to take a leap into the dark, a split-second decision. Whether or not to jump into deep water —like the question that suddenly confronted Lord Jim in Joseph Conrad’s novel and the narrator in Albert Camus’s The Fall. Because “the summit of surgery,” as a talented young cardiac surgeon once told me, “is not achieved by phenomenal dexterity.” On the summit, as at Wimbledon, we find the very best players.
Each of them hits the ball perfectly and knows all the tricks of tennis. But the one who is master of the situation wins. He never loses his head, he never gets lost and he never lets himself get confused. He sees the inevitability and irreversibility of the situation created by his movements, and controls the field of play. He is like a sculptor who aims to extract a figure from a block of marble, but knows that one wrong move of the hammer and chisel will write off the entire operation.

There was a time, about two hundred years ago, when science and art were woven together in biology by thick threads, natural observations and speculations that were sometimes hard to distinguish. This rich conglomerate produced Goethe’s and the German Romantics’ Naturphilosophie. They were fascinated by the profound, mysterious similarity between forms of Nature that lay hidden within its diversity. To explain the shapes and morphic features repeated in countless variants, they coined the concept of the archetype. And they examined the archetype, this “conception of Nature,” in its transformations and metamorphoses, uniting poet and scholar, and both of them with Nature. Darwinism put an end to this philosophy of Nature, although Darwin himself was a great admirer of Humboldt, who advocated Naturphilosophie and was a friend of Goethe. Humboldt was convinced there were patterns, basic forms concealed behind the richness of the world of plants. He sought them out, brought them to light and even defined their number as sixteen. For him they were like recurring musical themes, in which the species and families of plants played the variations. Alexander von Humboldt —traveler, romantic adventurer, author of a powerful synthesis of natural history knowledge about the Earth and the universe —preceded Darwin on a scientific voyage to the sub-equatorial countries of South America, a voyage that
also took him five years. Darwin read his famous diaries from the voyage when he was still a student at Cambridge. He took the first two volumes with him on the Beagle. In his notes from the expedition he wrote: “I am at present fit only to read Humboldt; he like another Sun illumines everything I behold.” And years later he added: “I never forget that my whole course of life is due to having read and reread as a Youth his Personal Narrative.” He remained enchanted by Humboldt’s prose and shared his delight in Nature. He unconsciously imitated his style to such an extent that his sister Caroline pointed it out to him: “you have without perceiving it got to embody your ideas in his poetical language.” It is worth remembering this Darwin — sensitive, romantic and passionate. How different he is from the image history has handed down to us, in the widely distributed posthumous portrait, from which a venerable old man glares at us like a terrifying Old Testament prophet.

A hundred years later, the “philosophy of Nature” came back like an echo in the work of Ernst Haeckel, a professor of zoology in Jena who was an oceanographer. Haeckel had wanted to be a landscape painter, but he became obsessed with sea creatures. He spent twelve years studying amoebae, documenting four thousand species of them, a truly astounding number. He made no plans to write a manual based on his own scientific research, but presented his discoveries in the form of art. He was fascinated by the mysterious forms of Nature, including those that are only visible under a microscope, and devoted hundreds of drawings to them. He drew and painted in color, decoratively, in Art Nouveau style, and in 1904 he published a work entitled Kunstformen der Natur (“The Shapes of Art in Nature”), which delighted all Europe. Amid a wealth of shapes and colors, our vision is usually drawn to the centrally positioned specimen, from which the other varieties radiate in the shape of a fan. It is the core form that contains the essence of the structure. This is “organic crystallography,” according to
Haeckel, who reckoned all forms of increasing complexity were derived from a common model. From his illustrations, which are among the very finest ever produced on the topic of nature, order, symmetry and hierarchy shine out. They reveal a logic and a purpose that we would seek in vain within the struggle for existence, within natural selection. Haeckel’s famous theory (known as “biogenetic law”), that evolution recurs in the embryonic development of organisms, was an attempt to find a unifying formula within the world of Nature. Nowadays, some people accuse him of lending symmetry to creatures viewed under the microscope as well as with the naked eye, of creating ideal, Platonic beings. “Their very beauty betrays them,” they say. As if beauty could not exist within Nature, or outside art.

Even today, when genetics provides us with rational explanations for hidden though striking similarities, we can hear a note that sounded for the first time in Jena. Doesn’t evolution converge, doesn’t it reach for the same solutions, even in species that are distant from each other? some people ask. And they claim that the convergence of solutions is universal, despite immeasurable genetic possibilities. And so despite an endless number of roads, “life navigates to precise end-points.” We should say that these original views are isolated, and as they break free of the standard convictions of the evolutionists, they are criticized for smacking of creationism. Yet something is changing as a result of the most recent discoveries, concerning milk intolerance, for example. In childhood milk is our most important food. But in adulthood most people (except for those of European origin) cease to assimilate lactose, which is the sugar in milk. It happens because the gene that oversees production of the enzyme that breaks lactose down into simple, assimilable components goes quiet and gets switched off. Without it milk and its products are hard to absorb, so they not only cease to taste good to us, they even irritate the alimentary
canal. In 2002 a mutation was described among the Finns, which causes this same gene controlling the synthesis of lactase to remain active after childhood, rather than dying out. So adult Finns can consume milk without any trouble. Two years later, among those inhabitants of Kenya and Tanzania who assimilated milk well, the identical “Finnish mutation” could not be found. Yet the same gene as in the Finns had changed and mutated in them too, except that it happened at an adjacent point. And so one single gene underwent various mutations in different parts of the world, resulting in exactly the same effect. This has been recognized as “the best example of convergent evolution in human beings.”

Simon Bening, Antonio de Holanda, The Genealogical Tree of the Kings of Aragon, 1530-1534, British Library, London


Picture #22
Picture #23

Letum non omnia finit, or “Death does not end everything,” we read on the grave of Joseph Brodsky at San Michele cemetery in the Venetian lagoon. Franz Kafka defined death lightheartedly, almost jokingly, as “an apparent end that produces real suffering.” Both these comments hint at the idea of an indestructible element, the eternity that a man carries inside him and that does not die with him. Ovid simply called it the soul, for only “souls are all exempt from power of death” (morte carent animae). The English poet Thomas Hardy also considered the capacity of physical, bodily features to survive. He saw them recurring from generation to generation—the shape of the eyelids, the line of the lips, a smile, the timbre of a voice —they said “no” to impermanence and conquered time. And in a poem entitled Heredity he wrote as follows:

I am the family face;

Flesh perishes, I live on,

Projecting trait and trace
Through time to times anon,

And leaping from place to place
Over oblivion.

One of the most famous genetic features to manifest itself for eighteen generations was the conspicuous protruding lower lip of the Habsburgs. Yet it cannot have deprived the family’s princesses of charm, as six of them married Polish kings, from Kazimierz the Jagiellonian to Zygmunt III. They were the personification of the maxim “Bella gerant alii, tu, felix Austria, nube!” (“Let others wage war, you, fortunate Austria, marry!”). For centuries marriages extended the hereditary estates of the Habsburgs far and wide, giving them the Netherlands, the Spanish throne, and the crowns of Hungary and the Czech lands.

Hanging on the dining room wall in our apartment is a portrait of my mother in early youth. We had looked at it all our lives without actually seeing it, and certainly without connecting it in any way with my daughter Ania. But when Ania turned fourteen, one day we noticed that she was starting to resemble the portrait! Over the next few days the similarity became striking. The young woman in the portrait and the girl in the room facing her looked like mirror images of each other. So it was for a fortnight, but then they began to draw apart and diverge, like clouds passing over the
mountains. And just as it had appeared, the similarity vanished, never to return again.

Charles Darwin explained heredity as follows: each mature organ produces tiny particles, “gemmules,” in which its essence is contained. They amass in the reproductive cells and are passed on to the progeny, recreating within it the organs from which they grew. In the early twentieth century, when Thomas Hardy wrote his poem Heredity, people were mindful of the experiments conducted
by the Moravian monk, Gregor Mendel; by crossing peas in the monastery garden he had discovered the basic laws of genetics. At the same time August Weismann established that we carry two cell lines within us: somatic and embryonic, separate lines between which there has always been a partition. And so he revealed to us an incredible continuum, a continuity that reaches back from every cell living today into the abyss of time and indicates a relationship between us and all living creatures inhabiting the Earth, both in the animal and the plant worlds.

Fifty years later came the discovery of the double helix of DNA, the treasure store of the genetic code, hidden deep within the nucleus of the cell. And in another fifty years, at the start of the current century, the code was deciphered, letter by letter, three billion letters in all, and ... posted on the Internet. This spectacular result, the fruit of the work of hundreds of scientists, concentrated in two rival research groups, has become the symbol of our era, the third millennium. It has been showered in high-flown epithets, which echo with such phrases as “the book of life” and “the Bible of Nature.” One Nobel Prize winner even declared: “we will know what it is to be human,” to which another one retorted ironically: “We have the sequence now (or most of it, at least), and what we know principally is that we are stunningly similar to chimpanzees in the makeup of our genes.”

Fifty-five years after the discovery of the structure of DNA and its publication in the journal Nature, James Watson had his own genome decoded, and the results were published in the same journal. It was he, together with Francis Crick, who by cutting out pieces of cardboard to make three-dimensional models of DNA, imagined that its strands are arranged in the shape of a double helix. The fact that it was the genome of the great Nobel Prize winner that was “decoded” has powerful symbolic significance. However, more symbolic than biological, because having a recording of the three billion letters of his
DNA sequence does not even allow us to work out such simple features of Watson as his height, not to mention his predisposition to certain illnesses. Certainly, as time passes we often refer to this publication. But if we want to talk about Watson himself, we will have to rely — in the manner of modern historians — “on what Watson wrote, said and did during his lifetime, rather than on the order of the base pairs in his genome.”

At the beginning of 2008 only two people in the world had had their individual genetic sequences decoded (the other person apart from Watson was the American geneticist Craig Venter). It is estimated that at the turn of 2011 and 2012 there were about 30,000 people in this category. The phenomenal rise in their numbers is the result of technological progress. Powerful, ultra-fast sequencers are being introduced, in other words, machines that can read the order of the letters in a genome, known as next-generation sequencing technologies. The price of decoding is falling, and is at present less than ten thousand dollars. And therefore shall we too soon be “decoded,” shall we all become the owners of our “personal genetic evidence”? And will medicine take on “personalized” colors? Does this mean that by sequencing our own genome we shall be able to draw conclusions about our susceptibility to particular illnesses, and if we do fall sick, shall we be able to apply “personalized” drugs, in other words, drugs tailored to the individual patient? This vision of the future has many supporters within contemporary medicine.

After the decoding of the genome, doctors and biologists — and in their wake patients and their families — believed we would succeed in finding within the genome the loci, or sites, of illnesses, especially some common ones. This would be a specific configuration of genes, perhaps mutant, which would mean a predisposition to the development of frequent multifactor diseases. But this did not happen. The entire genome was thoroughly screened,
and associations were sought between certain sections of it and a given illness. It was laborious, lengthy research, involving thousands or even tens of thousands of patients. Certain connections were found between genetic variants and particular illnesses, but their force, the strength of the links, was not great. They did not determine the development of the disease, or enable an earlier diagnosis. These associations build up into more and more complex images of the development of illnesses, but they do not bring any decisive answers. Neither arteriosclerosis (in particular coronary disease), nor asthma, rheumatoid arthritis, schizophrenia, Alzheimer’s disease or multiple sclerosis —the plagues of modern civilization — revealed their essential secrets (we shall shortly talk of cancers separately). They differ in this respect from rare congenital diseases such as hemophilia, cystic fibrosis or alpha-1 antitrypsin deficiency —in all of which it has been possible to show the mutations of individual genes and their definitive causal connection with the disease.

Nevertheless, in the United States prediction of the risk of common diseases, based on variations in the genome (called “single nucleotide polymorphisms,” or SNPs) is available on the Internet from over sixty companies. Direct sale of kits in retail stores has been blocked by the FDA, which has taken the view that consumers should only be able to access clinical genetic tests through a doctor in order to avoid any misunderstanding about the significance of the results. At present, for most conditions “an individual’s risk will be changed only slightly by the results of the tests.” However, with technology rapidly improving, “an all-inclusive price of $1,000 per genome is soon likely to become a reality.” Interpreting anyone’s complete genome sequence will be a great challenge, since rare variants seen only in that person will carry uncertain consequences.

Ten years after the decoding of the genome, it was judged that the clinical repercussions of the discovery were “modest.” So has this epoch-making discovery failed to translate into clinical medicine? Hasn’t it left its mark on medical practice? Is it different from the other great discoveries? Indeed, after Ernst Chain and Howard Florey described the curative effectiveness of penicillin in 1941, it saved the lives of thousands of people. The same thing happened with hormones of the adrenal cortex, synthesized in 1946. So is it fair to say that transferring the great discovery connected with the decoding of DNA into medical practice will take several generations? This claim seems overly pessimistic. The matter really has proved extraordinarily complicated, and has caused many difficulties and disappointments. Fascinated by the great discovery, we have let ourselves believe that the goal has been reached and the summit climbed, as if the way forward is well mapped out and visible. Meanwhile, science always takes us on a journey where no point that we reach is the end of the line, but just a stop that reveals a road winding on into the unknown. In the case of genetics there have been many surprises in store along that road.

Shortly after the decoding of the genome, it became clear that genes represent at most two percent of DNA, so they were compared to oases scattered among the desert sands. This desert consists of motifs that are repeated tens of thousands of times over. The simplest ones are tandems, built of two letters in the code, for example, GT, GT,..., GT. Others are incomparably more complicated. Many of them are mobile and move from place to place, like the shifting sands of the desert. No one knows why we have this desert inside us. The molecular biologists have found themselves in the same situation as the astronomers —for about twenty years they have known that the universe is not nearly as empty as it seems, but
is filled with strange “(non-baryonic) matter that is different from ours,” and also a mysterious energy. Invisible to our eyes, they are defined by the adjective “dark.” We conclude that they exist on the basis of their dual effect: on the movement of distant galaxies and on the accelerating expansion of the universe. The ignorance of the molecular biologists is comparable with the astronomers’ ignorance. Just as over ninety-five percent of the matter and energy in the universe is “dark” and mysterious, so too “desert” DNA remains a mystery, hidden in impenetrable darkness.

This “dark” DNA —to use the astronomers’ adjective —is often called “junk,” in other words, scrap or even garbage. Until recently it did not attract any attention. Who would want to rummage in the garbage? Some sort of molecular vagrant, perhaps? Yet we can assume the number of these vagrants is going to grow fast, because there is treasure hidden in the garbage heap. It might be a storage container for ready-to-use segments for “nature’s evolutionary experiments.” Consider the duplication of genes, for example. One gene remains unchanged, while the other undergoes a mutation. Does this cause changes in the organism, and of what kind? We would have to wait for a repeat of the same experiment. But if mutant genes of exactly this kind are ready and waiting within the scrap, in easy reach and instantly available for use, the entire process is speeded up. There may also be elements of the highest rank hidden within the scrap heap which regulate the expression of genes.

Transposons are candidates for this specific role. What amazing ... creatures, I would like to say! Transposons are mobile genetic elements that cut themselves out of the genome of their own accord and move, sometimes a long way, down the thread of DNA, and then inscribe themselves within it again in a different place. They can also jump horizontally, from cell to cell, or even from one
species to another, especially among the simplest organisms. They encode their own “scissors” (transposases), which guarantee a precise excision from the thread of DNA, and their own “glue,” which allows them to re-embed themselves again, sometimes after a very long journey. Sometimes they are called “sailors,” because they sail along the genome by themselves, without the complicity or help of other proteins.

In man they form almost half the genome, but only about one percent of them have preserved their mobility and can still jump from point to point. The rest are dormant. On the other hand, in simple organisms, such as bacteria and fruit flies, the percentage of these active elements is much higher. Presumably, transposons were the motor of evolution. Indeed, their wanderings about the genome are not neutral. By inscribing themselves in a new region of the DNA, they caused the genes to mutate or the chromosomes to be rearranged, and if these features were advantageous to the host, they were retained in succeeding generations. Once they had colonized vertebrates, the transposons were rendered inactive, put to sleep. Yet this one percent in man, the one that is not dormant, may be the cause of illnesses. And so it is suspected that hemophilia, Duchenne muscular dystrophy and cancers — of the esophagus as well as the breast and ovaries — “might develop as a result of SINE or LINE inscribing themselves in certain genes or their close neighborhood.” These abbreviations stand for categories of transposons; for example, the genome of each of us contains about half a million copies of LINE (“long interspersed nuclear elements”), of which from fifty to a hundred are still mobile, capable of moving.

The gene first emerged as an idea, to materialize years later. It is still with us, though sometimes we start wondering whether we shouldn’t send it back into the world of ideas. It was first heard
of in 1909, from the lips of a Danish botanist called Wilhelm Johannsen, who coined the name “gene” for a hypothetical element conditioning an innate feature. Soon after, scientists discovered, in fruit flies, that chromosomes determine inheritance, and they imagined that genes are inherent within them, like beads strung on a thread. The next fact to be accepted was that they are built of DNA, whose structure was discovered in 1953 by James Watson and Francis Crick. Inheritance can be summarized in shorthand as follows: an organism has the instructions for how to create its successor enclosed within the capsules of the gametes. These instructions are passed on to the fertilized egg cell and gradually reveal themselves within it, until a descendant comes into being. The instructions are in the shape of the double DNA thread, are located in the chromosomes and are written in a four-letter code that defines the shape of the organism and all its functions. Individual sentences from these instructions are rewritten as orders, which are carried by special envoys — called mRNA, where “m” is for messenger —from the nucleus of the cell to the cytoplasm, where proteins are formed out of amino acids. The flow of information is transparent and in one direction: from DNA to RNA and onwards —to the site of protein synthesis. The assignment is unambiguous: one gene creates one enzyme (one protein).

In recent years this lucid pattern, known as the central dogma of molecular biology, has started to be filled with such a wealth of unexpected details that at times it looks as if its central line is becoming obliterated. Let us mention at least three discoveries that have shaken it to its foundations. Firstly, each gene is built of a few to about a dozen blocks, which can be put together in various configurations during their conversion, their “rewriting” into mRNA. And so from one gene various amino acids and proteins
can arise. The unambiguity of the instructions — within the thread of DNA —gets blurred. Secondly, there is no rule saying that the activated messengers (mRNA) will reach their destination with the message. On the contrary, many of them will go quiet before setting off on the journey, because the DNA thread will send short segments after them, called micro-RNA, which will lock onto them and silence them. Thirdly, mRNAs are not just messengers carrying instructions passed down from above; they can also carry messages of their own, which they hand on from generation to generation, without the involvement of headquarters, i.e., the DNA. And finally, there are the genes themselves. We are ceasing to regard them as closed, limited units of inheritance, even if they are made up of small blocks inside. They do not seem to have a start or end, they overlap over extensive areas and we cannot see their limits because they form a continuum. These views are reflected in the definition of a gene recently proposed by twenty-five eminent specialists: “a locatable region of genomic sequence, corresponding to a unit of inheritance, which is associated with regulatory regions, transcribed regions and/or other functional sequence regions.” We have come a long way from the simple definition of a gene from almost a hundred years ago.

The primitive aborigines of New Guinea have contributed to undermining the accepted laws of genetics. They have a custom of eating the brain of a dead relative at the wake, which causes them to contract a dangerous neuro-degenerative disease called “kuru.” The disease is triggered by particular proteins, called “prions.” They are also the cause of mad cow disease and scrapie in sheep. Prions do not contain nucleic acids; they are neither DNA nor RNA. Thus they are different from all previously known infectious factors — bacteria and viruses. Yet they are carriers of information, and are able to pass this information on to the cells. They belong to the elements of inheritance that are not subject to Mendel’s laws. They
operate by changing the recipient’s spatial structure. And so one prion protein can affect another, causing a change in its conformation (in other words its three-dimensional, spatial shape), and this change is then passed on to other proteins. This new mechanism regulates the activity of some enzymes and cooperates in the creation of long-term memory. However, it does not seem to play a role in the development of cancers.

Among the modern breakthrough discoveries in genetics on which doctors are pinning particular hopes are the small particles of RNA known as micro-RNA. It was for them that in 2006 Andrew Z. Fire and Craig C. Mello won the Nobel Prize. They develop within the DNA in a natural way, but we already know how to synthesize them in a test tube. With the help of micro-RNA we can switch off individual genes, changing the flow of genetic information. Until now, such possibilities were limited. At one end of the information flow we have corticosteroids at our disposal (natural or synthetic hormones of the adrenal cortex). However, their action is not selective. They shape the transcription process (the “rewriting” of RNA from DNA), suppressing some genes while stimulating others. At the other end of the information chain, in other words, the translation of mRNA into polypeptides (the start of protein synthesis), the intervention possibilities have already been exploited to a large degree. So, for example, antibiotics suppress translation on bacterial ribosomes, which fortunately are different from human ones.

Perhaps in about a dozen years we will have drugs at our disposal that work by using micro-RNA to silence genes. Given to the patient, they would block the expression of proteins that cause incurable illnesses. There are many genes whose silencing could bring therapeutic benefits. The problem is that although micro-RNAs are so small, as their name emphasizes, they arouse the
organism’s defensive immune response, which in this case is undesirable. Several ingenious strategies have already been devised to bypass this defense. Preliminary, cautious clinical tests were undertaken in 2007 in two illnesses where topical application is advisable. These are macular degeneration (direct injection into the vitreous body of the eye) and infection with respiratory syncytial virus, which is common in children (application by nasal spray).

When asked if cancer is hereditary, we reply — usually with relief — “But of course not!” Naturally, we may have heard of some unusual family blighted with cancer, and physicians have a lively interest in this sort of family. But those are rather exceptional cases. It is also possible, though it happens extremely rarely, for a pregnant woman suffering from a cancer to pass the illness on to her fetus (e.g., leukemia, melanoma). In these cases, however, the cancer cells pervade the fetus directly through the placenta. So can we be sure cancer has nothing to do with genetics? The most important achievement in oncology in the past few decades has been to prove the falsity of this claim and to demonstrate beyond all doubt that cancers do originate within the genes —in altered, mutant genes. In the substantial majority of cases they are somatic mutations; in other words, they arise during the individual’s lifetime, and do not involve the embryonic line, so they are not passed on to descendants.

Many factors cause cancerous mutations of the DNA. They pervade us by various routes from the surrounding world. As early as 1761, one London doctor conjectured that taking snuff caused nasal cancer to develop. Fourteen years later, his colleague Percivall Pott, who has gone down in the annals of medicine for all time, noticed the frequent appearance of cancer of the scrotum in British chimney sweeps and ascribed it to contact with soot, which
contains not fully burned carbon residues. With the dawning of the Industrial Revolution, and then the technological one in the twentieth century, these first observations, which nowadays we would call epidemiological, were joined by similar ones concerning coal dust, asbestos, aniline dyes and many other substances. In the early 1950s the first observations were published in the United States indicating that in habitual smokers the likelihood of developing lung cancer is forty times higher. Half a year later they were confirmed in Britain, and so began the crusade against smoking that is still going on today.

Scientists needed an experimental model for inducing cancer. In 1915 it was provided by two Japanese researchers, who discovered that all you had to do was to rub tar into the ear of a rabbit. They were lucky in choosing a species of animal that is particularly sensitive to this carcinogen as well as the right spot to apply it. But they also deserve credit for the persistence of their research. To trigger the cancer, they regularly smeared the tar in exactly the same spot, every other day for one hundred days. In his elation after making the discovery, one of the two experimenters, Katsusaburo Yamagiwa, wrote a haiku in superb calligraphy: “I have made cancer. / I have proudly proceeded / several steps forward.” History judged that he had a legitimate reason to be proud, because he showed the way forward towards identifying carcinogens and proved that tumors develop as a result of events that are repeated many times over.

The history of breakthrough research in cancer abounds in unusual characters. Let us offer three examples. In 1903, twenty-five-year-old Walter Sutton, a medical student at Columbia University, published a paper entitled “The Chromosomes in Heredity.” While keenly researching grasshoppers, he came to conclusions that are nowadays at the core of biology teaching in every high school: 1) all multicellular organisms contain double sets of distinctive chromosomes; 2) as a result of division, descendant cells receive one chromosome from the father and one from the mother; and 3) sperm and egg cells get only one chromosome of each pair, but this number doubles after fertilization. On the basis of these observations, he realized that chromosomes are the carriers of hereditary traits, and this breakthrough discovery puts him on a par with Mendel and with Watson and Crick. Sutton created the entire science of cytogenetics and identified the genetic system of cells. All this in one student publication, because he never published another. He gave up scientific research in favor of surgery, and died young, at the age of thirty-nine.

A quarter of a century after him, Hermann Joseph Muller, working in Texas, discovered that X-rays, a recognized carcinogen, cause mutation of the genes in fruit flies (Drosophila melanogaster). But this time there was no haiku — quite the opposite. The forty-one-year-old American’s work was given an extremely critical reception that, combined with a difficult period in his marriage, drove him to depression. Without much delay, he took a large dose of sleeping pills and wandered into the woods outside the city of Austin. Like many other would-be suicides, he had a note on him, which read: “My period of usefulness, if I had one, now seems about over.” Alarmed by his disappearance, his colleagues frantically combed the woods, and the next day they found him, in a coma, but alive. Fifteen years later, in 1946, Muller won the Nobel Prize for the discovery of experimental mutagenesis.

And finally, a visionary called Theodor Boveri, who was a professor of biology at the University of Würzburg. In his first papers he had already offered the conjecture that cancer might be caused by damage to, or a lack of, chromosomes in the cell. It is not entirely clear where he got this idea from, as he had not been researching cancerous chromosomes. His work went unnoticed, but he held on to his idea and developed it fully in a book published in 1914, entitled The Origin of Malignant Tumors (Zur Frage der Entstehung maligner Tumoren), whose significance for biology and medicine has been compared to that of Newton’s Principia for classical physics. The book’s main thesis went: “The unrestrained tendency of cancer cells towards rapid proliferation may arise from the domination of the chromosomes that promote cell division. Another explanation for cancer is the existence of specific chromosomes that suppress cell division. Cancer cells would set off on the path of development if these restraining chromosomes were eliminated.” For modern biologists, geneticists and oncologists, these are “landmark conclusions” whose accuracy was confirmed over half a century later.

For nowadays we know that every human tumor that has been thoroughly examined involves a combination of changes in two types of genes: proto-oncogenes and suppressor genes. They work in opposite ways. If the molecular reactions that control the life of a cell can be compared to a network of complex electrical circuits, there are two kinds of switches within it. Stimulating one kind leads to a short circuit that, by persisting, violently accelerates the flow of current (reaction). This is what the mutation of proto-oncogenes involves. The second kind are brakes that slow down the circulation. Eliminating them frees the current (reaction) of obstacles, and it takes on unusual speed.

The sheer number of mutations is staggering. When the entire genetic record of a cancer cell from a lung cancer patient was read, 22,910 somatically acquired substitutions were identified in the genetic code (somatic mutations), while in another patient, with a malignant melanoma, this number totaled 33,345. And on top of this come the disappearance of entire blocks of the recording (deletions) and the unexpected inclusion of new “words” (insertions). In this panorama, it is no longer possible to speak of any kind of harmony of cells. The mutated genes blow up the tissue with their primitive energy, ultimately leading to the destruction of the organism.

These staggering discoveries in the molecular biology of cancers, which are revealing to us their origin and development, have not yet found their reflection in therapy. Mortality from cancer is today almost the same as it was fifty years ago (though in 2006 it showed a small downward trend for the first time), whereas mortality from heart diseases, cerebrovascular diseases and infectious diseases has decreased by nearly two-thirds! The brilliant oncologist Harold Varmus reckons that surgery, chemotherapy and radiotherapy will remain the basis of cancer treatment for many years to come. They are becoming increasingly effective thanks to a technological revolution. It is all to do with aiming the drugs or radiation straight at the cancer without disturbing the surrounding area. So, for example, by using radioactive implants we are able to bombard the tumor directly, without touching the organs next to it. And what if we were to remove a healthy, delicate organ from the neighborhood of a tumor, an organ that would certainly be irreversibly damaged by the planned treatment, preserve it outside the body during the treatment, and once it was over, implant it back in its place? This actually happened in France recently, where a young woman suffering from cancer of the lymphatic system was obliged to undergo a draconian course of treatment, using radiation and drugs. There was no doubt at all that the proposed therapy would have destroyed her ovaries. So they were removed from her body and placed in liquid nitrogen at a temperature of minus one hundred and ninety-six degrees Celsius. Then she underwent the treatment, which lasted for months on end but proved successful. At this point the patient’s ovaries were re-implanted in her abdominal cavity. The thirty-two-year-old woman became pregnant, and — cured of her cancer — gave birth to a healthy child!

However, the results are not always so extraordinary. Sometimes oncological treatment is cut short because each new stage brings more suffering, or it is too late, and the tumor can no longer be cured. There is no textbook standard here, no stop sign. Where should we look for it? “In experience, understood as the memory of errors made.” This is what tells us when to take a risk or even to act in defiance of the accepted protocol, and when to abandon treatment. Another invaluable element is the patient’s own determination, his desire to fight the cancer, which the doctor can influence.

Conventional therapies, whether using chemical drugs, radiation, or surgery, aim to destroy the cancer cells, or at least limit their development. But they do not get to the source of the cancer; they do not try to attack or eradicate its cause. This sort of effective, causal action has only been undertaken in recent years, and at the molecular level. It involves using drugs to intervene in the network of intracellular signals. What sort of a network is it? On the surface of the cell, in the membrane surrounding it, there are docking places, fastening points, for a great many chemical substances, such as hormones or drugs. The chemical substances flow in from the bloodstream, or via the lymph or nerves, or develop locally. They anchor in the cell membrane in these “docks” specially prepared for them, called receptors. The impetus of their “docking” releases a signal that travels from the surface of the cell to the nucleus, where it arouses selected genes within the DNA. However, it does not take a direct route. The signal is passed on by specific chemical molecules that create a sort of chain of transmitters. These “chains,” the paths along which the signals travel, are extremely numerous. They intersect and merge together; in short, they create a virtual network — a network of intracellular signals. Sometimes an individual transmitter, a component within this extremely complex chain, is faulty because an error, a mutation, has developed in its DNA recording. In chronic myeloid leukemia the error is so large that it can be seen under a microscope as a shortening of one of the chromosomes, which has been given the name Philadelphia. As a result, a faulty transmitter protein is formed that places itself at the very beginning of the chain of transmitters. It causes violent and permanent acceleration of transmission along several paths. The signals bombard the cell nucleus, driving the bone marrow into overproduction of white blood cells. The crazed transmitter can be suppressed, and the drug that can do it is called imatinib. It was introduced into therapy at the start of the twenty-first century, and ever since, the fate of patients with this relatively common form of leukemia has changed. The precision of imatinib’s action is astounding. It targets just one single molecule out of hundreds in the network enclosed within the bone marrow cells and suppresses it, which translates into the clinical symptoms subsiding and a remission of the illness.

These sensational results in treating myeloid leukemia have opened the gates wide for further experiments of this kind. Particular attention is being paid to other proliferative blood diseases, grouped under the term “myeloproliferative syndromes.” They are characterized by excessive production within the bone marrow of mature erythrocytes (polycythemia vera), thrombocytes (essential thrombocythemia) or collagen fibers (myelofibrosis). In 2005-2007 it was discovered that at the foundation of these long-recognized, classic diseases lies an acquired mutation of the gene that codes a certain element of the signals’ paths. In technical terms, this element is the protein kinase Janus 2 (JAK2), and thus a molecule from the broadly understood family of transmitters whose malfunction we encountered in myeloid leukemia. We have not yet identified a drug that might suppress it or regulate its connections with other paths (as implied by its “Janus” face). But once such drugs become available, the treatment of blood cancers will really be transformed into selective, targeted therapy. Meanwhile, the search continues for mutations in other kinases. In 2007, five hundred and eighteen genes coding protein kinases in two hundred and ten cancer patients were examined, and about a thousand mutations were discovered. However, it is incredibly hard to judge their role in initiating cancer, because the tumors that we examine once the illness has already been diagnosed contain millions of cells with mutant DNA. Some of these mutations have arisen under the influence of the first malfunctions and play a role in the growth and spread of the tumor; they are the “drivers.” Others are accidental mutations, and they are called “passengers.” In some forms of lung cancer (the so-called non-small-cell form) permanent damage occurs to a particular receptor on the cell surface. Drugs are already in clinical trials that aim to curb this hyperactive receptor, known for short as EGFR (epidermal growth factor receptor). In some patients an improvement is successfully achieved; in others, after some time, resistance develops and a new mutation emerges. Such improvements are still relatively rare and short-lived. This field of activity is the focus of interest for the pharmaceutical industry, because the receptor stands at the beginning of the signals’ paths.

Master Bertram, Creation of the Animals, 1383, Kunsthalle, Hamburg