Learning
Learning is a relatively permanent or stable change in behavior as a result of experience. Such changes may be associated with changes in the connections within the nervous system. Learning occurs by various methods, including classical conditioning, operant conditioning, and social learning. Cognitive factors are also implicated in learning, particularly in humans.
Nonassociative learning occurs when an organism is repeatedly exposed to one type of stimulus. Two important types of nonassociative learning are habituation and sensitization. Habituation is a decrease in response to a stimulus after repeated exposure; just as a habit is an action performed so often that it becomes automatic, a habituated response fades because the stimulus has become routine. Essentially, a person learns to “tune out” the stimulus. For example, suppose you live near train tracks, and trains pass by your house on a regular basis. When you first move into the house, the sound of the trains passing by is loud and annoying, and it always makes you cover your ears. However, after living in the house for a few months, you become used to the sound and stop covering your ears every time a train passes. You may even become so accustomed to the sound that it fades into background noise, and you don’t notice it at all anymore.
After a person has been habituated to a given stimulus, removing that stimulus leads to dishabituation: the person is no longer accustomed to it. If the stimulus is then presented again, the person will react to it as if it were new and is likely to respond even more strongly than before. In the train example above, dishabituation could occur when you go away on vacation for a few weeks to a quiet beach resort. The train noise is no longer present, so you become dishabituated to that constant noise. Then, the first time you hear a train after returning home, you notice it again, and you may start covering your ears again or have an even stronger reaction.
Sensitization is, in many ways, the opposite of habituation. During sensitization, there is an increase in responsiveness due to either a repeated application of a stimulus or a particularly aversive or noxious stimulus. Instead of being able to “tune out” or ignore the stimulus so as to avoid reacting at all (as in habituation), the stimulus actually produces a more exaggerated response. Imagine that instead of trains passing by your house, you attend a rock concert and sit near the stage. The feedback noise from the amplifier may at first be merely irritating, but as the aversive noise continues, instead of getting used to it, it actually becomes much more painful, to the point at which you have to cover your ears and perhaps even move. Sensitization may also cause you to respond more vigorously to similar stimuli. For example, as you leave the rock concert, an ambulance passes. The siren, which usually doesn’t bother you, seems particularly loud and abrasive, as you’ve been sensitized to the noise of the rock concert. Thankfully, sensitization is usually temporary and unlikely to result in any long-term behavior change.
Desensitization refers to a decreased responsiveness to an aversive stimulus after repeated exposure. This phenomenon may occur on its own or in the context of desensitization therapy. For example, if you have a phobia of snakes, you might engage in systematic desensitization: you look at a picture of a snake until your reaction is normal; then you come into a room with a snake until your reaction is normal. In various therapy sessions, you get closer and closer to the snake, eventually even handling it. By being exposed to the stimulus but having no bad outcomes, you can become desensitized to the stimulus and thus overcome your phobia.
Classical conditioning was first described by Ivan Pavlov and is sometimes called Pavlovian conditioning. Classical conditioning occurs when a neutral stimulus, paired with a previously meaningful stimulus, eventually takes on some meaning itself. For example, if you shine a light in your fish tank, the fish will ignore it. If you put food in the tank, they will all typically swim to the top to get the food. If, however, each time you feed the fish, you shine the light in the tank before putting in the food, the fish will begin to learn about the light. Eventually, the light alone will cause the fish to swim to the top, as if food had been placed in the tank. The previously neutral light has now taken on some meaning. If you are having a difficult time understanding the different parts of classical conditioning, note that conditioning is another word for learning. For instance, unconditioned response is just another way of saying unlearned response.
Psychologists use specific terms for the various stimuli in classical conditioning. The conditioned stimulus (CS) is the initially neutral stimulus—in our example, the light. The unconditioned stimulus (US) is the initially meaningful stimulus. In our example, the US is food. The response to the US does not have to be learned; this naturally occurring response is the unconditioned response (UR). In our example, the UR is swimming to the top of the tank. The conditioned response (CR) is the response to the CS after conditioning. Again, in our example, the CR is swimming to the top.
What has just been presented is the simplest case of classical conditioning. The CS and the US can be paired in a number of ways. Forward conditioning, in which the CS is presented before the US, can be further divided into delay conditioning, in which the CS remains present until the US begins, and trace conditioning, in which the CS is removed some time before the US is presented. For the most part, the CS, or neutral stimulus, should come before the US. In the fish example above, if the food (US) were presented first, the fish might be too distracted to notice the light (CS) and would therefore fail to learn the association. Forward conditioning has been found to be the most effective at modifying behavior.
John Watson and his assistant Rosalie Rayner demonstrated classical conditioning with a child now known as Little Albert. Albert was first tested and found to have no fear of small animals, though he did show fear whenever a steel bar was banged loudly with a hammer. Watson then repeatedly presented Albert with a small, harmless white rat and, at the same time, banged the steel bar, making the child cry. Afterward, Albert cringed and cried any time he was presented with the rat, even if the noise wasn’t made. Furthermore, Albert showed that he was afraid of other white fluffy objects; the more closely they resembled the white rat, the more he cried and cringed. This is known as generalization. If Albert could distinguish among similar but distinct stimuli, he would be exhibiting discrimination.

We can use this example to demonstrate other terms related to classical conditioning. Acquisition takes place when the pairing of the natural and neutral stimuli (the loud noise and the rat) has occurred with enough frequency that the neutral stimulus alone will elicit the conditioned response (cringing and crying). Extinction, or the elimination of the conditioned response, can be achieved by repeatedly presenting the CS without the US (in other words, the white rat without the loud noise). Eventually, the white rat will not produce the unpleasant response. However, spontaneous recovery, in which an extinguished conditioned response reappears when the CS is presented again after some time has passed, is also possible under certain circumstances. Returning to the fish: if, after having taught them to associate a light with feeding, you shine the light and give no food a number of times, the fish will initially swim to the top looking for food but will eventually ignore the light. However, if you then wait a period of time without shining the light and shine it again, the fish will once more swim to the top. Notice that they do so spontaneously, without having been taught again. Of course, if there is still no food, the fish will stop responding to the light even more quickly than before. Spontaneous recovery demonstrates that even though the learning is not evident during the extinction period, the association between the CS and the CR is still stored in the brain.
In second-order conditioning, a previous CS is used as the US. In our example, the fish would now be trained with a new CS, such as a tone, which would be paired with the light, which would now serve as the US. If the conditioning were successful, the fish would learn to swim to the top in response to the tone. Second-order conditioning is a special case of higher-order conditioning, which, in theory, can go up to any order as new CSs are linked to old ones. In practice, higher-order conditioning is rarely effective beyond the second order.
There are two distinct theories as to why classical conditioning works. Pavlov and Watson believed that learning occurs simply because the neutral stimulus (the eventual CS) and the natural stimulus (the US) are presented close together in time. This is the contiguity approach. Robert Rescorla argued instead that the CS and US become associated because the CS comes to predict the US. The fish from the initial example come to expect food upon seeing the light. This is known as the contingency approach.
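Rescorla’s contingency idea was later formalized mathematically (in what is commonly called the Rescorla-Wagner model, which goes beyond the scope of this chapter). For readers who like to see the predictive logic concretely, here is a minimal Python sketch of that view, assuming a single CS; the learning rate, trial counts, and maximum strength are arbitrary values chosen for illustration, not parameters from any actual experiment.

```python
# Toy illustration of the contingency (predictive) view of classical
# conditioning. The update rule is loosely based on the Rescorla-Wagner
# model; all numeric values here are arbitrary, chosen for illustration.

def condition(paired_trials, cs_alone_trials, rate=0.3, max_strength=1.0):
    """Track how strongly the CS predicts the US across trials."""
    strength = 0.0          # 0 = the CS predicts nothing yet
    history = []

    # Acquisition: the CS is followed by the US, so the prediction
    # error (max_strength - strength) is positive and learning grows.
    for _ in range(paired_trials):
        strength += rate * (max_strength - strength)
        history.append(strength)

    # Extinction: the CS appears alone, so the target drops to 0 and
    # the association weakens on every unreinforced trial.
    for _ in range(cs_alone_trials):
        strength += rate * (0.0 - strength)
        history.append(strength)

    return history

if __name__ == "__main__":
    # Light paired with food 10 times, then light alone 10 times.
    for trial, s in enumerate(condition(10, 10), start=1):
        print(f"trial {trial:2d}: associative strength = {s:.3f}")
```

Run on the fish example, the paired trials drive the associative strength of the light toward its maximum, and the unreinforced trials drive it back toward zero, mirroring the acquisition and extinction curves described earlier.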
Operant conditioning (also called instrumental conditioning) involves an organism’s learning to make a response in order to obtain a reward or avoid punishment. The response is an action not typically associated with obtaining a particular reward. B.F. Skinner pioneered the study of operant conditioning, although the phenomenon was first discovered by Edward L. Thorndike, who proposed the law of effect, which states that a behavior is more likely to recur if it is reinforced. Skinner ran many operant conditioning experiments. He often used a specially designed testing apparatus known as an operant conditioning chamber, or a Skinner Box.
This box typically was empty except for a lever and a hole through which food pellets could be delivered. Skinner trained rats to press the lever (not a typical behavior for rats) in order to get food. To get the rats to learn to press a lever, the experimenter would use a procedure called shaping, in which a rat first receives a food reward for being near the lever, then for touching the lever, and finally for pressing the lever. In the end, the rat is rewarded only for pressing the lever. This process is also referred to as differential reinforcement of successive approximations.
In a typical operant conditioning experiment, pressing the bar is the response (also called an operant), and food is the reinforcer. Food is a form of natural reinforcement; you don’t have to learn to like it. Such natural reinforcers, including food, water, and sex, provide primary reinforcement. Secondary reinforcement is provided by learned reinforcers. Money is a good example of a secondary reinforcer. By itself, money is just paper or metal; it has no intrinsic value. We have learned, however, that money can be exchanged for primary reinforcers.
Reinforcement can be divided into positive and negative reinforcement. Positive reinforcement is a reward or event that increases the likelihood that a particular type of response will be repeated. For example, picture an experiment in which a rat is given a food pellet every time it presses a lever. The food provides positive reinforcement, increasing the likelihood that the rat will press the lever again. Negative reinforcement is the removal of an aversive event in order to encourage the behavior. An example of negative reinforcement occurs in an experiment in which a rat is sitting on a mildly electrified cage floor. Pressing a bar in the cage turns off the electrical current. The removal of the negative experience (shock) is rewarding. In contrast, omission training seeks to decrease the frequency of an undesired behavior by withholding the reward whenever that behavior occurs, delivering it only when the desired behavior is demonstrated. The table below summarizes the four combinations of adding or removing stimuli.
|  | reinforcement (think reward) | punishment |
| --- | --- | --- |
| positive (think addition!) | giving food, giving praise, giving money, giving a good grade, giving gold stars | giving pain, giving a chore, giving extra homework, giving a bad grade |
| negative (think subtraction!) | taking away a chore, ending a punishment, removing pain, cancelling homework | taking away food, taking away money (a fine), taking away freedom (being grounded, getting a time-out) |
Behaviorists use various schedules of reinforcement in their experiments. A schedule of reinforcement refers to the frequency with which an organism receives reinforcement for a given type of response. In a continuous reinforcement schedule, every correct response that is emitted results in a reward. This produces rapid learning, but it also results in rapid extinction, the decrease and eventual disappearance of a response once the behavior is no longer reinforced.
Schedules of reinforcement in which not all responses are reinforced are called partial (or intermittent) reinforcement schedules. A fixed-ratio schedule is one in which the reward always occurs after a fixed number of responses. For example, a rat might have to press a lever 10 times in order to receive a food pellet; this is called a 10:1 ratio schedule. Fixed-ratio schedules produce strong learning, but the learning extinguishes relatively quickly because the rat soon detects that the reinforcement schedule is no longer in effect. A variable-ratio schedule is one in which the ratio of responses to reinforcement is variable and unpredictable. A good example of this is a slot machine. The response, putting in money and pulling the lever, is reinforced with a payoff in a seemingly random manner; reinforcement can come at any time. This type of schedule takes longer to condition a response, but the learning that occurs is resistant to extinction, which helps explain why people can become addicted to gambling. A fixed-interval schedule is one in which reinforcement is presented after a fixed period of time, as long as there is at least one response. This schedule is similar to being a salaried employee: every two weeks, the paycheck arrives regardless of your work performance (as long as you show up at all). Finally, in a variable-interval schedule, reinforcement is presented at differing time intervals, as long as there is at least one response. Learning on variable-interval schedules, like learning on variable-ratio schedules, is more difficult to extinguish than learning on fixed schedules.
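Because each schedule is really just a rule deciding whether a given response earns a reward, it can help to see the four partial schedules written out as explicit rules. The following Python sketch is purely illustrative; the class names and the specific ratio and interval values are our own assumptions, not standard laboratory parameters.

```python
# Toy models of the four partial reinforcement schedules described above.
# Each object answers one question per response: is THIS response
# reinforced? All numeric values are arbitrary, chosen for illustration.
import random

class FixedRatio:
    """Reward every nth response (e.g., every 10th lever press)."""
    def __init__(self, n=10):
        self.n, self.presses = n, 0
    def respond(self, t):  # t (time step) is unused; ratios count responses
        self.presses += 1
        if self.presses == self.n:
            self.presses = 0
            return True
        return False

class VariableRatio:
    """Reward after an unpredictable number of responses, averaging one
    reward per n responses (like a slot machine payoff)."""
    def __init__(self, n=10):
        self.n = n
    def respond(self, t):
        return random.random() < 1.0 / self.n

class FixedInterval:
    """Reward the first response after a fixed amount of time has passed."""
    def __init__(self, interval=20):
        self.interval, self.ready_at = interval, interval
    def respond(self, t):
        if t >= self.ready_at:
            self.ready_at = t + self.interval
            return True
        return False

class VariableInterval:
    """Reward the first response after an unpredictable delay."""
    def __init__(self, mean_interval=20):
        self.mean = mean_interval
        self.ready_at = random.uniform(0, 2 * self.mean)
    def respond(self, t):
        if t >= self.ready_at:
            self.ready_at = t + random.uniform(0, 2 * self.mean)
            return True
        return False

if __name__ == "__main__":
    # One response per time step for 100 steps, on each schedule.
    for schedule in (FixedRatio(), VariableRatio(),
                     FixedInterval(), VariableInterval()):
        rewards = sum(schedule.respond(t) for t in range(100))
        print(f"{type(schedule).__name__}: {rewards} rewards in 100 responses")
```

Note how the two ratio schedules count responses while the two interval schedules watch the clock; the unpredictability of the variable schedules is what makes the resulting learning harder to extinguish.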
Like reinforcement, punishment is also an important element of operant conditioning, but the effect is the opposite: reinforcement increases behavior, while punishment decreases it. Punishment is the process by which a behavior is followed by a consequence that decreases the likelihood that the behavior will be repeated. Like reinforcement, punishment can be both positive AND negative. Positive punishment involves the application, or pairing, of a negative stimulus with the behavior. For example, if cadets speak out of turn in military boot camp, the drill sergeant makes them do 20 push-ups. In contrast, negative punishment involves the removal of a reinforcing stimulus after the behavior has occurred. For example, if a child breaks a window while throwing a baseball in the house, he loses TV privileges for a week. Positive punishment adds and negative punishment subtracts. Reinforcement and punishment are commonly used in conjunction when shaping behaviors; however, punishment rarely has as lasting an effect as reinforcement. Once the punishment has been removed, it is no longer effective. Furthermore, punishment instructs only what not to do, whereas reinforcement instructs what to do. Reinforcement is therefore a better alternative for encouraging behavioral changes and learning. Additionally, the processes described for classical conditioning (acquisition, extinction, spontaneous recovery, generalization, and discrimination) occur in operant conditioning as well. Note that the term “negative reinforcement” is often used incorrectly; colloquially, people use that term when they mean punishment.
Let’s further examine two specific types of operant learning: escape and avoidance. In escape, an individual learns how to get away from an aversive stimulus by engaging in a particular behavior. This reinforces the behavior, so he or she will be willing to engage in it again. For example, a child does not want to eat her vegetables (the aversive stimulus), so she throws a temper tantrum. If the parents respond by not making the child eat the vegetables, then she will learn that behaving in that specific way will help her escape that particular aversive stimulus. On the other hand, avoidance occurs when a person performs a behavior to ensure that an aversive stimulus is not presented in the first place. For example, a child notices Mom cooking vegetables for dinner and fakes an illness so Mom will send him to bed with ginger ale and crackers. The child has effectively avoided confronting the aversive stimulus (the offensive vegetables) altogether. As long as either of these techniques works (meaning the parents do not force the child to eat the vegetables), the child is reinforced to perform the escape and/or avoidance behaviors.
Operant conditioning techniques are used quite frequently in places that have controlled populations, such as in a prison or mental institution. These institutions set up a token economy—an artificial economy based on tokens. These tokens act as secondary reinforcers, in that the tokens can be used for purchasing primary reinforcers, such as food. The participants in a token economy are reinforced for desired behaviors (responses) with tokens; this reinforcement is designed to increase the number of positive behaviors that occur.
Learned helplessness occurs when repeated efforts fail to bring rewards or to end an aversive situation. If this situation persists, the subject will stop trying. Psychologist Martin Seligman’s original experiment placed dogs in a room with an electrified floor. At first, the dogs would try to escape the room or avoid the floor, but they ultimately learned that there was nothing they could do to prevent being shocked. Eventually, when the dogs’ leashes were removed, they still stayed on the electrified floor, even though they could have escaped. This shows that they had learned to be helpless. Seligman sees this condition as possibly precipitating depression in humans. If people try repeatedly to succeed at work, school, and/or relationships, and find their efforts are in vain no matter how hard they try, depression may result.
The biological basis of learning is of great interest to psychologists. Neuroscientists have tried to identify the neural correlates of learning. In other words, what physiological changes are brought about when we learn?
In the 1960s, psychologists noticed that neurons themselves could be affected by environmental stimulation. Experiments were conducted in which some rats were raised in an enriched environment, while others were raised in a deprived environment. The enriched environment included things to explore and lots of room in which to move, whereas the deprived environment was just a small, empty cage. At the end of the experiment, the rats were sacrificed, and their brains were examined. The experimenters found that the rats from the enriched environment had thicker cortexes, higher brain weight, and greater neural connectivity in their brains. This pattern of results suggests that neurons can change in response to environmental stimuli.
Donald Hebb proposed that human learning takes place when neurons form new connections with one another or strengthen connections that already exist. To study how learning affects specific neurons, scientists study the sea slug Aplysia. It is a good animal to study because it has only about 20,000 neurons, whereas humans have billions. Aplysia can be classically conditioned to withdraw its gill, a protective response. Eric Kandel, a neuroscientist, examined classical conditioning in Aplysia. Kandel paired a light touch (CS) with a shock (US); the shock causes the Aplysia to withdraw its gill (UR). After training, the light touch alone can elicit the gill withdrawal (now a CR). Kandel found that when a strong stimulus, such as a shock, happens repeatedly, special neurons called modulatory neurons release neuromodulators. Neuromodulators strengthen the synapses between the sensory neurons involved (the ones that sense the touch) and the motor neurons involved (the ones that withdraw the gill). Additionally, new synapses were created. In other words, the neurons sensing the shock and those withdrawing the gill became more connected than they were before. This experiment illustrated a neural basis for learning, namely, a physiological change that correlates with a relatively stable change in behavior as a result of experience. This strengthening is known as long-term potentiation (LTP). The same basic process has been shown to be the neural basis of learning in mammals. An easy way to remember this information is that “neurons that fire together, wire together.”
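The “fire together, wire together” principle can be captured in a one-line update rule. The toy Python sketch below illustrates that principle only; it is not a model of Kandel’s actual Aplysia data, and the starting weight, learning rate, and weight ceiling are arbitrary values chosen for illustration.

```python
# Toy sketch of the Hebbian principle: a synaptic weight grows when the
# presynaptic (touch-sensing) and postsynaptic (gill-withdrawal) neurons
# are active at the same time. All values are purely illustrative.

def hebbian_update(weight, pre_active, post_active, rate=0.2):
    """Strengthen the synapse only when both neurons fire together."""
    if pre_active and post_active:
        weight += rate * (1.0 - weight)  # grow toward a ceiling of 1.0
    return weight

if __name__ == "__main__":
    w = 0.05  # an initially weak touch -> gill-withdrawal synapse
    # Pairing trials: the touch (pre) occurs together with the
    # withdrawal reflex driven by the shock (post), so the synapse
    # is potentiated on every trial.
    for trial in range(1, 11):
        w = hebbian_update(w, pre_active=True, post_active=True)
        print(f"trial {trial:2d}: synaptic weight = {w:.3f}")
```

After enough paired trials the weight approaches its ceiling, which is the toy analogue of the strengthened touch-to-motor synapse that lets the light touch alone trigger gill withdrawal.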
At a given synapse, long-term potentiation involves changes in both the presynaptic and postsynaptic neurons. Dopamine, for example, is one of the neurotransmitters involved in pleasurable or rewarding actions. In operant conditioning, reinforcement activates the limbic circuits involved in memory, learning, and emotion. Because reinforcement of a good behavior is generally intrinsically pleasurable (like food or praise), the circuits are strengthened as dopamine floods the system, making it more likely that the behavior will be repeated.
After long-term potentiation has occurred, passing an electrical current through the brain doesn’t disrupt the memory associations between the neurons involved, although memories that have not yet been potentiated will be wiped out. For example, when people receive a blow to the head resulting in a concussion, they lose their memory for events shortly preceding the concussion. This is because long-term potentiation has not yet had a chance to occur for those recent events (and leave traces of memory connections), while older memories, which were already potentiated, remain.
Long-term memory storage involves more permanent changes to the brain, including structural and functional connections between neurons. For example, long-term memory storage includes new synaptic connections between neurons, permanent changes in pre- and postsynaptic membranes, and a permanent increase or decrease in neurotransmitter synthesis. Furthermore, visual imaging studies suggest that there is greater branching of dendrites in regions of the brain thought to be involved with memory storage. Other studies suggest that protein synthesis somehow influences memory formation; drugs that prevent protein synthesis appear to block long-term memory formation.
The neural processes described above occur when animals or people learn new behaviors or change their behaviors based on experience (that is, environmental feedback). However, not all behaviors are learned: some are innate. These are the things we know how to do instinctively (or that our bodies just do without our conscious thought), not because someone taught us to do them (for example, breathing or pulling away from a hot stove). Furthermore, innate behaviors are essentially identical across members of a species, even in individuals performing them for the first time.
Classical and operant conditioning obviously do not account for all forms of learning. A third kind of learning is social learning (also called observational learning), which is learning based on observing the behavior of others as well as the consequences of that behavior. Because this learning takes place by observing others, it is also referred to as vicarious learning.
Albert Bandura conducted some of the most important research on social learning. In a classic study, Bandura placed children in a waiting room with an adult confederate (someone who was “in” on the experiment). For one group of children, the adult would simply wait. For another group, the adult would punch and kick an inflatable doll (hence the experiment’s nickname, the Bobo Doll Experiment). The children in both groups were then brought into another room to play with interesting toys, but after a short time, the experimenters told them they had to stop playing with the toys, and the children were brought back to the initial waiting room. The idea was to frustrate the children and then see how they managed their frustration. Many of the children who had witnessed an adult abusing the doll proceeded to abuse the doll themselves, but most of the children who had witnessed an adult quietly waiting proceeded to quietly wait themselves. This experiment illustrated the power of modeling in effecting changes in behavior. The finding calls into question the behaviorist assertion that learning must occur through direct experience.
Bandura concluded that four conditions must be met for observational learning to occur. First, the learner must pay attention to the behavior in question. Second, there must be retention of the observed behavior, meaning that it must be remembered. Third, there must be a motivation for the learner to produce the behavior at a later time. Finally, the potential for reproduction must exist, that is, the learner must be able to reproduce the learned behavior.
Observational learning is a phenomenon frequently discussed in the debate over violence in the media. This issue is a particularly relevant one for television programs designed for children, as studies have shown that young children are particularly likely to engage in observational learning. However, even toddlers have shown unsolicited helping behaviors in experimental settings, suggesting that observational learning occurs in both positive and negative directions.
Building on recent views that there are multiple types of intelligence, including emotional intelligence, a number of schools have developed programs in social and emotional learning. These programs are designed to help develop empathy and conflict resolution in students.
The behaviorist view, championed by Skinner, is that behavior is a series of behavior-reward pairings and that cognition is not important to the learning process. In more recent years, many psychologists have abandoned this view. One more recent view of learning posits that organisms start the learning process by observing a stimulus, continue by evaluating that stimulus, move on to a consideration of possible responses, and finally make a response. Various lines of evidence indicate that cognitive factors play a role in both animal and human learning. For example, if humans are conditioned to salivate to the word style, they are also likely to salivate to the word fashion. These words are not acoustically similar but semantically similar: they have related meanings rather than similar sounds, so this pattern of behavior must result from cognitive evaluation.
Perhaps a more profound demonstration of a similar phenomenon comes from work with pigeons. Pigeons were shown pictures containing either trees or no trees. They were trained to peck a key for food, but only when a picture of a tree was shown. As you might expect, they would peck the key only when tree pictures were shown, even after reinforcement stopped. They even pecked at pictures of trees that they had never seen before. Therefore, the birds must have formed a concept of trees, where a concept is defined as a cognitive rule for categorizing stimuli into groups. Any new stimuli were categorized according to the concept.
An example of classical conditioning worthy of special mention is conditioned taste aversion (CTA), also known as the Garcia effect, after the psychologist who discovered it. John Garcia demonstrated that animals that eat a food and then experience nausea (induced by a drug or radiation) will not eat that food if they ever encounter it again. This effect is profound and can be demonstrated with forward or backward conditioning. It is also highly resistant to extinction. A notable feature of this phenomenon is that it works best with food; it is hard to condition an aversion to, say, a light paired with illness. Psychologists have used this finding as evidence that animals are biologically predisposed to associate illness with food, as opposed to other stimuli such as light. This predisposition is a useful feature for a creature that samples many types of food, such as a rat. Humans also experience CTA. If you have ever eaten a food and vomited afterward, you may never want to eat that food again, even if you know that the food itself did not cause you to be ill.
CTA demonstrates another learning phenomenon: stimulus generalization. Let’s say that you eat a peach and get sick. You may never want to eat a peach again, but you may also develop an aversion to other similar fruits, such as nectarines. The two fruits are similar, so you generalize from one stimulus (the peach) to the other (the nectarine).
Garcia’s research is profound for two reasons: (1) it shows that certain species are built to learn certain associations more easily than others, and (2) it shows that classical conditioning might occur through access to some concept. The fact that someone might get sick from eating a peach and then refuse to eat all fruit suggests that this person has attached the negative feeling to the concept of fruit. If this were simple classical conditioning, the effect would not occur, because the person did not have a direct experience of getting sick while eating other types of fruit. There must be some concept at work, which discredits the “black box” theory of the brain held by behaviorists. It also calls into question the assertion that direct experience is necessary for learning and association. Viewing this learning as cognitive explains why someone would develop a food aversion even though she doesn’t become sick until hours after eating the food: she must access the concept of what she ate earlier and attach the association with getting sick. Finally, this is a cognitive issue because people sometimes develop a food aversion to an item they think made them ill, such as sushi, even though something else could have brought on the illness, such as bacteria in the water at the sushi restaurant.
Other evidence for a cognitive component to learning derives from the work of Edward Tolman. Rats permitted to explore a maze without being reinforced would find the exit after following an indirect path; the time it took them to exit the maze without reinforcement decreased quite slowly. However, when reinforcers were applied after several trials without reinforcement, the rats’ time to exit the maze decreased dramatically, indicating that the rats knew how to navigate to a specific location within the maze and so had formed a cognitive map, or mental representation of the maze. This demonstrates latent learning, or learning that is not outwardly expressed until the situation calls for it.
Another form of learning is insight learning. This occurs when we puzzle over a solution to a problem, unsuccessfully, and then suddenly the complete solution appears to us. As discussed in the next chapter, this can be quite useful for solving problems.
learning
Nonassociative Learning
habituation
sensitization
dishabituation
desensitization
desensitization therapy
systematic desensitization
Classical Conditioning
Ivan Pavlov
Pavlovian conditioning
conditioned stimulus (CS)
unconditioned stimulus (US)
unconditioned response (UR)
conditioned response (CR)
forward conditioning
delay conditioning
trace conditioning
generalization
discrimination
acquisition
extinction
spontaneous recovery
second-order conditioning
contiguity approach
contingency approach
Operant Conditioning
operant conditioning (instrumental conditioning)
B.F. Skinner
Edward L. Thorndike
shaping (differential reinforcement of successive approximations)
natural reinforcement
primary reinforcement
secondary reinforcement
positive reinforcement
negative reinforcement
omission training
schedule of reinforcement
continuous reinforcement schedule
partial (intermittent) reinforcement schedule
fixed-ratio schedule
variable-ratio schedule
fixed-interval schedule
variable-interval schedule
punishment
escape
avoidance
token economy
learned helplessness
Biological Factors
Donald Hebb
Eric Kandel
neuromodulators
long-term potentiation
Social Learning
social learning (observational learning/vicarious learning)
Albert Bandura
confederate
Bobo Doll Experiment
modeling
social and emotional learning
Cognitive Processes in Learning
cognitive
conditioned taste aversion (CTA) (Garcia effect)
stimulus generalization
Edward Tolman
cognitive map
latent learning
insight learning
See Chapter 19 for answers and explanations.
1. After having been struck by a car, a dog now exhibits fear responses every time a car approaches. The dog also exhibits a fear response to the approach of a bus, a truck, a bicycle, and even a child’s wagon. The dog has undergone a process of
(A) stimulus discrimination
(B) stimulus generalization
(C) spontaneous recovery
(D) backward conditioning
(E) differential reinforcement
2. Which of the following would be an example of second-order conditioning?
(A) A cat tastes a sour plant that makes it feel nauseated and will not approach that plant again.
(B) A horse that is fed sugar cubes by a particular person salivates every time that person walks by.
(C) A pigeon that has received food every time a red light is presented exhibits food-seeking behavior when a yellow light is presented.
(D) A rabbit that has repeatedly seen a picture of a feared predator paired with a musical tone exhibits a fear response to the musical tone as well as to a flashed light alone that had been repeatedly paired with the tone.
(E) Wild rats instinctively avoid canine predators, but domesticated rats show little fear of the domesticated dogs they encounter, and may even join them in exploration or play.
3. The reinforcement schedule that generally provides the most resistance to response extinction is
(A) fixed-ratio
(B) fixed-interval
(C) variable-ratio
(D) variable-interval
(E) continuous
4. The importance of enrichment and stimulation of the brain during critical periods in development can be seen in all of the following EXCEPT
(A) an increase in the number of neurons
(B) an increase in the number of connections between neurons
(C) strengthening of already existing connections between neurons
(D) an increase in the size of neurons
(E) higher levels of neurotransmitters
5. According to Albert Bandura, observational learning can occur even in the absence of
(A) observed consequences of behavior
(B) direct attention to the behavior
(C) retention of the observed behavior over time
(D) ability to reproduce the behavior
(E) motivation to reproduce the behavior at a later time
6. Jay joins a social media website to lose weight. He receives points based on the intensity of his daily exercise and praise from fellow users for each workout he logs on the website. This increases his exercise frequency and intensity. Eventually he stops logging onto the website, but continues to exercise with increased frequency. This is an example of
(A) vicarious reinforcement
(B) operant conditioning
(C) innate behavior
(D) classical conditioning
(E) observational learning
7. Which of the following scenarios is an example of negative reinforcement?
(A) After staying out past her curfew, Stephanie is grounded the next weekend.
(B) When Toni finishes her homework, she does not have to take out the trash.
(C) When Ben received an A for his research project, his family treated him to dinner at his favorite restaurant.
(D) When Lola the dog jumps on her owner, the owner takes a step away from her.
(E) When the rat in a Skinner Box presses the lever, it delivers an electric shock.
8. Chemotherapy is well known to cause nausea and vomiting. A chemotherapy patient’s care team cautions the patient to eat only “novel” or new foods before treatment as opposed to food staples, like chicken, rice, or pasta. This is most likely due to
(A) operant conditioning
(B) taste aversion
(C) Yerkes-Dodson Law
(D) observational learning
(E) stimulus generalization
9. Leigha, who is expecting college acceptance letters, knows the mailman comes every day around 1:30 P.M. Hoping that the mail arrives early, she checks the mailbox at 12:45, 1:15, and 1:40. She does not check again until the next day around the same time. This is an example of which reinforcement schedule?
(A) Fixed-interval
(B) Variable-interval
(C) Fixed-ratio
(D) Variable-ratio
(E) Continuous
10. Kevin tries to teach his dog Muka to roll over. First, he teaches her to lie down. Then, he teaches her to lie on her side. Eventually, Kevin gets Muka to roll onto her back and, finally, all the way around. He gives her a treat with every step. This process is known as
(A) habituation
(B) discrimination
(C) generalization
(D) shaping
(E) sensitization
Respond to the following questions:
Which topics in this chapter do you hope to see on the multiple-choice section or essay?
Which topics in this chapter do you hope not to see on the multiple-choice section or essay?
Regarding any psychologists mentioned, can you pair the psychologists with their contributions to the field? Did they contribute significant experiments, theories, or both?
Regarding any theories mentioned, can you distinguish between differing theories well enough to recognize them on the multiple-choice section? Can you distinguish them well enough to write a fluent essay on them?
Regarding any figures given, if you were given a labeled figure from within this chapter, would you be able to give the significance of each part of the figure?
Can you define the key terms at the end of the chapter?
Which parts of the chapter will you review?
Will you seek further help, outside of this book (such as a teacher, Princeton Review tutor, or AP Students), on any of the content in this chapter—and, if so, on what content?