8
THE DARK SIDE OF ADDICTION
IT WAS A slow Thursday afternoon. Perched in my office, I had finished up the paperwork earlier than expected and hoped to get to work on our latest research report. But a kind of weariness held me back. Instead I remained seated behind my desk, watching the gray daylight wane and turn into November dusk. The street below my window got busier and busier as the afternoon neared its end. There was a feeling of emptiness to what I was doing, one that has remained with me through the years and easily comes over me if I don’t keep busy enough. It was never that way when I was a resident. Life could certainly be hell in those days, whether it was because one had to handle the admissions and discharges on a hectic ward or manage the flow of patients in the emergency room, but it never was boring. Preparing a report for the county’s health care board sure was. I sighed and got my feet down from the desk. One of our doctors had just come by and said that the emergency room was swamped. I took the elevator two floors down, then walked through an empty clinic corridor to the back door of the emergency room. At first the resident working the shift did not look particularly happy to have the director come by and create a disruption. But once I looked in the ledger at the nursing station and quietly said I could take care of the next patient, there were no objections.
The patient has stuck with me to this very day. In a way, it is somewhat embarrassing that I peg my knowledge to individual patient experiences this way. After all, I have for more than twenty years passionately taught and promoted approaches to addiction treatment that are firmly based on systematically collected data, not the kind of anecdote that doctors or therapists frequently substitute for solid evidence. And yet this is probably the time to fess up. Without the archetypical patient who embodies the data, mechanisms, and ideas, I would not be able to conceptualize any of it. This is a quiet declaration of love: love of real clinical work, of the critical clinical observation. Love of that almost intangible perception that on its own may be entirely flawed and merely reflect the physician’s preconceived notions but can also lead to systematic data collection and new knowledge.
The patient was a heroin addict with a scarf hiding injection marks on his neck. With that, I knew he had exhausted most of the veins we think of as places to inject. I could just imagine the rest. For a man, what is left at that stage is to inject into his genital veins. His general condition also reflected how far he had descended into misery, with dirty, disheveled clothes. There were pustules on his arms. A couple of them were open, and I realized that if I would ever be exposed to multiresistant staphylococci, this would be the day. He was shaking slightly even though we kept the exam rooms quite warm, and his nose was beginning to run with a thin discharge. Clearly he was beginning to go into opioid withdrawal. I got started on the somewhat obsessive ritual I always follow to obtain a history and do a physical. More often than one would think, this is what identifies signs or symptoms that are not obvious on a first look yet signal something that requires attention. Almost as important, the ritual itself helps establish a connection, a relationship that is a tool without which I frankly don’t think any patient can or should be treated. At least not a patient that is awake.
Because I was just pitching in to help, there was no rush. No other patient was waiting for me. I took my time, and because of that I was struck by things we otherwise don’t pay much attention to, things too philosophical to allow all the practical work to be done efficiently on a regular day at work.
The overarching question was simple. The extent of suffering experienced by my fellow human on the gurney was incomprehensible. He had already been hospitalized for overdoses several times. He was clearly well aware that next time could be the last if no one happened to be around to call an ambulance or do CPR. And if an overdose didn’t kill him, sharing needles the way he did, he could be sure to contract HIV sooner or later. This was after the retroviral medications had transformed an HIV diagnosis from a death sentence to a chronic disease for most patients. But as my colleague who ran the HIV clinic used to point out, treatment works only if the patient has a place to put the medication bottles. Among homeless heroin addicts, death from AIDS remained a reality then and still does. I could go on, but the big question should be obvious. Because of the slow pace of the exam and the good rapport we were developing, I did ask. Why? Why would anyone in his right mind expose himself to this misery? The patient was, after all, a sensible person, someone with whom I could have a coherent, intelligent conversation. After all my years in addiction medicine, it remained a mystery to me how someone who seemed so reasonable could keep making choices that were so terrible, over and over again. Was the high from heroin really so overwhelming that it could just not be resisted? “Hell no. You know, Doc … here’s the thing: when you start doing heroin, you really don’t have to take it. But it is true, you just love what it does for you. Now, after all these years? I hate this shit, and it doesn’t give me much of a high. It is just that somehow, it seems I can’t be without it.”
Despite the large number of scientific papers written on the topic, no one has in my experience captured what this chapter is about better than that heroin-addicted patient back in Stockholm.
“Give … wine unto those that be of heavy hearts.”1
The notion that people take drugs to alleviate emotional pain rings intuitively true with patients, treatment providers, and many others. Often referred to as a “self-medication” view of addiction, this notion has had interesting ups and downs through the years. Following it through these cycles will allow us to better understand the flaws inherent in a naive, original version of the theory. It will also pave the way for a better informed and more useful modern interpretation.
Two men are usually credited with a version of the self-medication theory of addiction in the scientific sense of the word. They arrived at the concept from positions that were as different as the two men themselves seem to have been. Edward J. Khantzian, a psychoanalyst at Harvard Medical School, thought that many heroin-dependent patients had experienced difficulties with aggression, and that these difficulties seemed to precede their use of addictive drugs. According to Khantzian, these patients often reported that heroin gave them relief from dysphoric feelings. Based on these clinical observations, he concluded that a predisposition to become addicted to heroin resulted from problems with controlling and directing aggression.
In psychoanalytical terms, the inability to control aggression was seen as a result of “ego weakness” and was thought to pave the way for opioid use as a means of coping with poorly controlled aggressive drives. It did so, however, at the cost of producing physical dependence. Khantzian claimed that methadone maintenance could effectively manage heroin addiction not only because it controlled withdrawal symptoms and drug cravings, but also because it relieved the dysphoric feelings that resulted from unsuccessfully trying to cope with aggression. Based on this theory, in 1974 Khantzian and his colleagues published the original formulation of what would become the self-medication hypothesis of addiction. Over subsequent years, Khantzian went on to incorporate into his theory cocaine addiction, which he held to be a way of coping with depressed mood. He ultimately also included alcohol, which he thought offered closeness and affection to people who were otherwise not in touch with their feelings.2 Overall, in this view, an addict’s choice of drug was not a coincidence. Instead it reflected selecting a drug that offered opportunities to make up for failures to, as psychologists call it, “self-regulate.”
A very different take on the self-medication theme originated with David F. Duncan of the Texas Research Institute of Mental Sciences. Duncan first noted that among individuals who engage in use of any addictive substance, only a minority go on to develop dependence. As discussed in a prior chapter, this observation was subsequently confirmed by the epidemiologist James Anthony at the Johns Hopkins School of Public Health. Duncan applied a rather straightforward behaviorist perspective to the use of addictive drugs but tried to sort out the different motivations that lead to recreational use or abuse, on one hand, and dependence, on the other. According to this perspective, use or abuse was driven by positive reinforcement, as also implied by studies showing that addictive drugs could activate and hijack brain reward systems. In dependence, however, drug use was instead thought to be driven by negative reinforcement.
Negative reinforcement is very different both from positive reinforcement and from punishment, with which it is often confused. To recap, positive reinforcement promotes behavior because an action is followed by desirable consequences. Punishment, of course, suppresses behavior by having it trigger aversive consequences. In contrast, negative reinforcement promotes a behavior because when that behavior is carried out, an unpleasant, aversive state is eliminated.3 If my head hurts and I feel irritable arriving at work, I have a powerful incentive to press the buttons that deliver coffee from the espresso machine in our lab. Alleviating the aversive states associated with caffeine withdrawal by resuming drug intake is a classic case of negative reinforcement. Do note, at this point, an important distinction. Positive reinforcement is for the most part an equal opportunity motivator. In contrast, negative reinforcement practices affirmative action of sorts. It only affects someone who is already in an aversive state that can potentially be alleviated.
In any of these flavors, a self-medication view of addiction assumed that people who go on to develop addiction had suffered from preexisting conditions that involve negative emotional states, such as depressed mood, anxiety, or dysphoria. That is a testable hypothesis. Once it became the subject of systematic research rather than theorizing, the data simply did not seem to support it. In large epidemiological studies, addictive disorders such as alcoholism were much more strongly associated with antisocial personality disorder, a condition characterized by impulsivity and callousness, than with depression or anxiety. People with bipolar disorder, or what used to be called manic depressive illness, did clearly have a dramatically increased risk of addiction, but they are less than 1 percent of the population, so their increased vulnerability to addiction could not account for more than a small fraction of the 10–15 percent of people in the general population who develop addiction at some point in their lives. And people with bipolar disorder run their highest risk of drinking or using drugs when they are manic, not depressed.4 Overall the bulk of addiction could simply not be attributed to conditions characterized by low mood or high anxiety.
Meanwhile clinical researchers found that use of drugs such as cocaine, heroin, or alcohol seemed to be producing anxiety and low mood, not the other way around. Negative emotions were most pronounced during acute withdrawal, which is when most patients are seen by doctors. If an alcohol-addicted patient managed to stay sober for somewhere between three and six weeks, the initially high levels of anxiety seemed to wear off in most cases. There was still a sizable fraction of people in whom these symptoms remained, but that fraction was no higher than that in the general population, where mood and anxiety disorders are quite common as well. Similar to the epidemiological data, these findings were not compatible with a view that conditions in which people suffer from negative emotions are a major predisposing factor for addiction. Instead it seemed that much of the association that clinicians and patients had observed had been the result of two things: transient withdrawal symptoms that were difficult to distinguish from independent anxiety or depression, and rates of genuine depression and anxiety that were no higher than in the general population.5 To make studies even harder, there was frequently a distortion that easily happens when people are asked to report things in their past. This well-known “recall bias” is particularly strong when dealing with questions of addiction because of the stigma associated with substance use. Patients are simply more likely to report anxiety and depression as causes of substance use than the other way around.
By the time I was finishing my clinical psychiatry training in the mid-1990s, it had become widely accepted among practitioners of addiction medicine that talking about self-medication was mostly a distraction from maintaining sobriety or staying off drugs. General psychiatrists, patients, and the public of course continued to talk about self-medication. But then again, we knew that they were just not familiar with the data.
George Koob is quite a character. It took only a couple of weeks after I joined his lab in 1990 before he yelled at me in public. I wasn’t doing rat surgeries the right way, meaning, of course, his way. To reinforce the message, pinned on the wall in his office was a cartoon, given to George by a former trainee, in which a Koob-like character said, “I’m always right!” surrounded by rats that were rolling with laughter. A couple of things made up for this style. First, George is one of the most brilliant neuroscientists I have ever met. Second, he is a great mentor who has continued to support his former trainees long after we have left his lab. The list of scientists he has successfully helped launch is long. When he received the Marlatt Mentorship Award from the Research Society on Alcoholism in 2012, it was after many generations of former trainees had come together in nominating him.
I had arrived at the high-energy microcosm of the Koob lab at the Scripps Research Institute in La Jolla, California, mostly by chance. My research love was stress, anxiety, and the brain systems that mediate these states. I had spent my graduate time studying a single neurotransmitter, recently discovered at the time. This work led to some important discoveries and revealed a previously unknown antistress system in the brain. It was a great training experience, but once I completed my degree, I wanted to broaden my views. I was also uncertain about my next steps, because even though I loved psychiatry, I enjoyed medicine just as much. Studying the role of stress in immunity seemed relevant to both and allowed me to put off the choice between the two specialties. I was therefore thrilled to land a prestigious Norman Cousins fellowship from UCLA to study stress and immunity.
Unfortunately, a few months after my arrival, things were not working out at the famous monkey laboratory I had joined. The lab was in decline, with its director, a wonderful psychiatrist, in ill health. I was simply not getting anything done. To make it worse, I found Los Angeles to be claustrophobic and depressing. As a last resort before throwing in the towel and returning home to do clinical work, I wrote George, on the off chance that he might have an opening on short notice. His response was characteristically Koobian. “I am booked with post-docs for the next five years. But do come and give a talk.” I started in the lab only a few weeks later.
Two years prior to my arrival at Scripps, George and Floyd Bloom, neuroscience legend and department chair at the time, had jointly written a paper on the biological mechanisms of addiction that is still worthwhile reading.6 It reviewed the accumulating data on the role of brain reward systems in addiction but also introduced the concept that prolonged drug use itself triggered long-term changes, or “adaptations,” in brain function. It put forward the notion that these neuroadaptations fall into one of two broad categories. I have already touched on the first type, which affects the brain circuits that addictive drugs themselves primarily act on to produce their rewarding effects, such as the mesolimbic dopamine circuitry. Over time these systems become less responsive to drugs and also to natural rewards, creating, or perhaps exacerbating, a reward deficit syndrome. An example in this category would be the decrease in dopamine receptors discussed in the previous chapter. The authors called this type of change “within-systems adaptations,” which I still find to be a helpful and intuitive term.
But with prolonged drug use, another category of adaptations, they claimed, would get under way in other brain systems. These would come online when addictive drugs caused excessive activity of brain reward circuitry and would attempt to counter the rewarding drug actions. The prediction was based on the idea that an organism attempts to maintain a reasonably neutral set point, or at least a range around a set point, for most of its important physiological functions, such as body temperature or blood pressure. Perhaps it did so for psychological processes as well? Perhaps prolonged and excessive euphoria simply is not a good idea? If, as discussed, pleasurable states and their anticipation once evolved to guide behavior in pursuit of things we need, having reward systems turn on and stay on in prolonged overdrive clearly defeats that purpose. Other brain systems would almost have to kick in and attempt to bring down mood to a normal range or else one would no longer be able to let good outcomes guide behavior. These normalizing forces, called between-systems adaptations because they bring online systems other than those directly activated by drugs, could be viewed as an attempt by the organism to keep emotions around a reasonable baseline or, in the words of scientists, maintain “affective homeostasis.”
The notion of between-systems adaptations drew heavily on an influential psychological theory developed more than a decade earlier by the psychologist Richard Solomon, called the opponent process theory of motivation,7 and generously credited by Koob and Bloom. Over the quarter century that has followed, Koob has progressively developed the concept of these opponent, counteracting processes. One important advance was the realization that the time course of these processes is critical. The “b-processes” that in Solomon’s terms oppose the pleasurable drug actions frequently have a dynamic that is slower than that of the initial “a-process” of drug-induced high itself. When intoxication wears off and the influence of the a-process comes to an end, the opposing b-process is still active and pushes emotional balance below its original baseline. This activity results in the opposite of the pleasurable effects obtained during intoxication. In the absence of drug, there is now low mood and elevated anxiety. This is of course routinely experienced during alcohol or drug withdrawal. A bad hangover is, in miniature, an example of this “overshoot.”
But there is a fundamental difference between a healthy brain and one that is addicted. In a healthy brain that is not soon exposed to the drug again, the b-process will ultimately dissipate over time as well, and affective balance will be restored. Mood will return to a neutral level, anxiety will wear off, and sleep will normalize. The ability to guide behavior by its anticipated pleasurable outcomes will be restored. But this takes some time. If we take alcohol as an example, physical withdrawal peaks around three days and is gone by a week, but the negative emotions after a period of heavy use last quite a bit longer, often around a month, and longer in women than in men. If, in the meantime, the person attempts to improve the low mood by resuming drug use, he or she will be successful in the short term. But this will be at the cost of an even deeper suppression below the neutral baseline of emotionality once the drug effects come to an end. By now the intensity of the b-process has been further strengthened. What George Koob ultimately called “spiraling distress” is now under way, and in the absence of drug the patient is miserable. This misery can temporarily be alleviated by resumption of drug use, but each time, that is at the expense of further pushing along the process that makes the patient suffer even more in the absence of drug.
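For readers who like to see such dynamics spelled out, the essential interplay can be captured in a toy simulation. This is my own illustrative sketch, not a model from Solomon’s or Koob’s papers, and every parameter in it is invented: a fast-decaying a-process (the high), a slow-decaying b-process that strengthens with each dose, and net affect as their difference.

```python
# A toy opponent-process model (illustrative only; all parameters are invented):
# each "dose" adds a fast-decaying pleasurable a-process and a slower-decaying
# opposing b-process whose strength grows with repeated use.

def simulate_affect(dose_times, steps=200,
                    a_decay=0.6, b_decay=0.95,
                    b_size=0.5, b_growth=1.2):
    """Return net affect (a - b) at each time step; 0 is the neutral baseline."""
    a = b = 0.0
    b_strength = 1.0
    affect = []
    for t in range(steps):
        if t in dose_times:
            a += 1.0                   # fast a-process: the drug high
            b += b_size * b_strength   # slower opponent b-process
            b_strength *= b_growth     # repeated use strengthens the b-process
        a *= a_decay                   # the high fades quickly
        b *= b_decay                   # the opponent process outlasts it
        affect.append(a - b)
    return affect

trace = simulate_affect(dose_times={10, 40, 70})
# Each dose lifts affect briefly; the lingering b-process then pushes it
# below baseline, and the troughs deepen with every repetition. Left alone
# long enough, affect drifts back toward the neutral baseline.
```

Crude as it is, the simulation reproduces the qualitative pattern described above: a brief high, an overshoot below baseline once the high fades, deepening misery with repeated doses, and a slow return to balance if drug use stops.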
Finally, Koob adopted a concept from physiology, which outlines how organisms under challenge from changing demands of the environment can achieve stability by changing their original set point, but do so at the expense of increased wear and tear on their machinery.8 In this he followed in the footsteps of one of the stress researchers we both may admire most, Bruce McEwen of Rockefeller University. Discussing a similar balance between the brain’s stress and antistress systems, McEwen had picked up the “allostasis” concept from physiology. According to this notion, stability of emotions after prolonged stress or drug use can be maintained at a level different from that previously held, for instance, with more sensitivity to stress or lower spontaneous mood. Stability at this new level is achieved at a cost, called allostatic load. This is the wear and tear to the system that occurs when affective opponent processes are chronically turned on in a manner that is not supposed to happen in health, for instance the pleasure systems through chronic drug use, and the aversion systems in an attempt to counteract the chronic drug effects. This may all sound quite complicated, but think of it this way: affective balance is like a seesaw. The seesaw is balanced when children of approximately the same weight sit on each end. But it could just as well be balanced, in the same position, with a baby elephant on each end. Although the position of the seesaw would be the same in both situations, the strain on the teeter board, or the allostatic load it would be subjected to, would clearly be different in these two cases.9
These concepts were fascinating, but there was one problem in the early days of these advances. Despite the elegant theorizing, no one knew any specific biological process that would qualify as an opponent process whose activity could result in an allostatic shift of emotionality after prolonged drug use.
Rats and mice are just not great party animals. Most of them dislike the taste of alcohol and will not achieve blood alcohol levels beyond mild intoxication when offered free access to an alcohol solution. High levels of intoxication are an inherent element of alcohol addiction, so it is not clear how the low spontaneous consumption by a regular laboratory rat can be helpful for developing an understanding of alcoholism, let alone identifying medications to treat this condition. Medications that decrease the reward from alcohol can perhaps be expected to reduce motivation to drink whether an individual is dependent or not, so they may still show activity in the simple laboratory models. Later we will see that this is indeed the case with the approved medication naltrexone. But if it is true that the alcohol-addicted brain develops a set of adaptations that alter its function, and if these adaptations are important for maintaining addiction through negative reinforcement, then it is unclear how studying a normal, nonaddicted brain can be of much help. Yet that was the way the vast majority of alcohol research was carried out for a long time.
Research on other addictive drugs faces somewhat less of a challenge in this respect. Compared with alcohol, heroin and cocaine are so strongly reinforcing that animals will spontaneously self-administer them at levels clearly sufficient for intoxication. Ironically however, much of that advantage was lost for a long time because of the way research on these drugs was carried out. Self-administration was typically studied in short sessions of 30 to 120 minutes, during which animals quickly established stable rates of self-administration and could go on with daily sessions without many apparent adverse consequences for months.
We will revisit heroin and cocaine in a while, but alcohol offers the greatest challenge to achieving relevant levels of intoxication. As already discussed, it is less potent in its ability to activate brain reward systems. It is also taken orally, so there is a delay between intake and the psychotropic effects that weakens reinforcement. Also, similar to humans, animals that first encounter alcohol don’t like its taste, which, at least initially, further limits intake. There have been different attempts to address these challenges. One strategy has been to breed rats or mice that have a high preference for alcohol, just like dogs can be bred that are particularly keen to retrieve a bird or herd sheep. Alcohol-preferring rats and mice have successfully been bred in several laboratories and can be useful in searching for genetic factors that contribute to alcoholism risk.
But selectively bred animals don’t tell us much about the biological events that occur as addiction develops in the brain. One approach that has been tried to get at this neuroadaptive process has been to let large numbers of rats have access to alcohol for a long time. After a year or so, about 10 percent of these animals begin to escalate their consumption in a way that appears similar to what happens among humans. That approach has a certain appeal, given that about the same percentage of humans given access to alcohol will develop problems. But even if the long-term-access model were to reflect processes similar to those that play out in people developing alcoholism, it is highly impractical as a research tool, and even more so for medications development. We need large numbers of animals to screen for new potential medications, and we need animals whose brains have been exposed to levels of alcohol similar to those experienced by patients. Both the level and the duration need to be sufficient to trigger the long-term changes in brain function that accompany the transition from occasional to heavy, addictive use.
Several ways have been developed to achieve adequate levels of intoxication that allow these neuroadaptations to be studied. Both my own lab and that of George Koob have largely settled on a method developed decades ago by a remarkable scientist and one of the pioneers of alcohol research, Dora “Dody” Goldstein. A graduate of the first class of female students admitted to Harvard Medical School and later professor of pharmacology at Stanford University, Goldstein invented a clever method to get mice physically dependent on alcohol. Alcohol was simply vaporized, and the vapor pumped into the enclosures in which the mice lived. This way they breathed alcohol vapor, which did not seem to bother them at all. They bypassed the limitations of taste aversion and easily became intoxicated. Goldstein was able to determine that a real withdrawal reaction occurred after intoxication was turned off, and she could study the detailed course of this reaction.10 In Goldstein’s experiments, the mice were made intoxicated for only three days, which was enough to produce physical dependence and withdrawal symptoms. Once symptoms had worn off, the mice were seemingly normal. But what if intoxication were maintained for a long time? Would there be consequences for things that matter for alcohol addiction?
Since 2000 a flurry of papers from several laboratories, including Koob’s and my own, has established that this is indeed the case. Animals that have a long enough history of dependence develop a constellation of behavioral characteristics that seem to mimic core features of clinical alcohol addiction. First, they dramatically escalate their voluntary alcohol consumption when given a choice between alcohol and water. They are also willing to work harder for alcohol than animals without a history of dependence, indicating a higher motivation to obtain the drug. Their voluntary intake of alcohol that is offered to them remains escalated for many months, which is a large part of the life of a rat. Maybe this really is the rat or mouse version of the claim that “once an alcoholic, always an alcoholic.”
Second, animals with a long history of physical dependence are more sensitive to stress. When tested a month or two after they have gone through physical withdrawal, they may at first glance not seem any more anxious than nondependent controls. One might in fact be tempted to think that they are back to normal, in parallel to what we found for alcoholic patients after a month or so of sobriety. But that is the static view. The picture changes dramatically if we apply a dynamic perspective and look at responses to stressors. At that point we will see something quite remarkable. A stressor of an intensity that does not at all bother nondependent animals will be enough to trigger a profound anxiety response in those that have a history of dependence. This seems to happen in people as well. When exposed to deeply disturbing images in a brain scanner, alcoholics activate brain circuits that process aversive stimuli to a much higher degree than do normal, nonaddicted people.
So it seems that both in rats and humans with alcohol dependence, a ticking bomb is present. It may not make much noise at rest but is set off when the individual is faced with a stressful challenge.11 To patients, their families, and clinicians, this has a familiar ring. We have all experienced, over and over again, patient reports of how little things that once didn’t bother them now totally throw them off.
This brings us to the final behavioral characteristic that develops after a history of dependence, which I think is important to highlight. To understand this characteristic, we first need to address the widespread notion that stress makes people drink and take drugs. A great deal of animal research has been carried out in the expectation that the same would be observed in rats and mice. But the notion that stress uniformly promotes alcohol intake is simply not true. Animals that do not have a history of dependence do not necessarily increase their alcohol or drug consumption when they experience stress. People who are not alcohol dependent don’t crave alcohol or increase their consumption when stressed either.
But as a reflection of the all-important neuroadaptations that are at the core of the addictive process, this changes dramatically if there is a history of dependence. Rats with such a history not only start out with higher alcohol consumption than those that have not been made dependent. When exposed to stress, they escalate their intake further. And what is perhaps most striking is that they remain at this high level even after stress exposure has come to an end. This, too, seems to have its parallel in humans. As I will discuss in more detail in a later chapter, people with alcohol or drug addiction experience profound cravings in response to stress, and stress is one of the major triggers of relapse in addicted patients. All this research seems to indicate that a history of dependence sets up a connection between stress and a motivation to consume alcohol that otherwise does not exist.
Although these mechanisms have been best worked out for alcohol, they seem to apply to other addictive drugs as well. For instance, it turns out that if laboratory rats are allowed to self-administer heroin for a long time every day instead of having the typical short access session, they will also escalate their self-administration rates over time, increase their motivation to work for the drug, experience an allostatic shift of their reward baseline, and turn up their sensitivity to stress.12 Arming the ticking bomb of enhanced stress responses and negatively reinforced drug taking seems to be a phenomenon that happens with most drugs and affects both humans and lab animals. George Koob once gave it a catchy label: “the dark side of addiction.” The question is, what neuroadaptations might be causing this shift in emotional state? If we could find out, we might be able to bring forward medications that can return this pathological state to a normal level. As we will see in the chapter on stress-induced relapse, science has identified some important brain systems that undergo dependence-induced neuroadaptations.