Operant Conditioning Paper



Psychologists like B. F. Skinner have studied how we can use operant conditioning to change the behavior of people and animals. Drawing on your personal experience, choose a person or animal whose behavior you want to change. (You may select your own behavior for this question if you wish.) How could you use operant conditioning to change the behavior of this person or animal?

In a multi-paragraph essay, describe your plan to change this behavior. Be sure to mention what type of reinforcer and reinforcement schedule you would use and explain why you made those particular choices. Include information from class materials, readings, and research on operant conditioning to support your discussion.

  • Attachment: WhatIsOperantConditioning.docx

What Is Operant Conditioning?

The discussion of behaviorism in Chapter 1 introduced you to Edward Thorndike and his law of effect. To recap, Thorndike had observed the learning that took place when a cat tried to escape one of his “puzzle boxes.” According to Thorndike, the cats learned to escape by repeating actions that produced desirable outcomes and by eliminating behaviors that produced what he called “annoying” outcomes, or outcomes featuring either no useful effects or negative effects (1913, p. 50). Consequently, the law of effect states that a behavior will be “stamped into” an organism’s repertoire depending on the consequences of the behavior (Thorndike, 1913, p. 129).

The association between a behavior and its consequences is called operant or instrumental conditioning. In this type of learning, organisms operate on their environment, and their behavior is often instrumental in producing an outcome. B. F. Skinner extended Thorndike’s findings using an apparatus that bears his name—the Skinner box, a modified cage containing levers or buttons that can be pressed or pecked by animals (see Figure 8.7).

The Skinner Box.

A specially adapted cage called a Skinner box, after behaviorist B. F. Skinner, allows researchers to investigate the effects of reinforcement and punishment on the likelihood that the rat will press the bar.



Operant conditioning differs from classical conditioning along several dimensions. By definition, classical conditioning is based on an association between two stimuli, whereas operant conditioning occurs when a behavior is associated with its consequences. Classical conditioning generally works best with relatively involuntary behaviors, such as fear or salivation, whereas operant conditioning involves voluntary behaviors, like walking to class or waving to a friend.

8-4a Types of Consequences

As we all know from experience, some types of consequences increase behaviors, while others decrease behaviors. Skinner divided consequences into four classes: positive reinforcement, negative reinforcement, positive punishment, and negative punishment. Both types of reinforcement increase their associated behaviors, whereas both types of punishment decrease associated behaviors (see Table 8.2).

Table 8.2

Types of Consequences

                      Add stimulus to environment    Remove stimulus from environment
Increase behavior     Positive reinforcement         Negative reinforcement
Decrease behavior     Positive punishment            Negative punishment

We all have unique sets of effective reinforcers and punishers. You might think that getting an A in a course is reinforcing, making all those extra hours spent studying worthwhile, but top grades may be less meaningful to the student sitting next to you, who came to college for the social life. A parent might spank a child, believing that spanking is an effective form of punishment, only to find that the child’s unwanted behavior is becoming more rather than less frequent. For some children, the reward of getting the parent’s attention overrides the discomfort of the spanking itself. In other words, the identity of a reinforcer or punisher is defined by its effects on behavior, not by some intrinsic quality of the consequence. The only accurate way to determine the impact of a consequence is to check your results. If you think you’re reinforcing or punishing a behavior but the frequency of the behavior is not changing in the direction you expect, try something else.

Positive Reinforcement

By definition, positive reinforcement increases the frequency of its associated behavior by providing a desired outcome. Again, each person has a unique menu of effective reinforcers. In a common application of operant conditioning, children with autism spectrum disorder are taught language, with candy serving as the positive reinforcement. Benjamin Lahey tells of his experience trying to teach a child with autism spectrum disorder to say the syllable “ba” to obtain an M&M candy (Lahey, 1995). After 4 hours without progress, Lahey turned to the child’s mother in frustration, asking her what she thought might be the problem. The mother calmly replied that her son didn’t like M&Ms. Lahey switched to the child’s preferred treat, chopped carrots, and the child quickly began saying “ba.” Chopped carrots are probably not the first reinforcer you would try with a 4-year-old boy, but in this case, they made all the difference.

Thinking Scientifically

Why Do People Deliberately Injure Themselves?

Edward Thorndike’s law of effect stipulates that behaviors followed by positive consequences are more likely to be repeated in the future, and that behaviors followed by negative consequences are less likely to be repeated. Why, then, do large numbers of people, particularly in adolescence, engage in self-injury, or deliberate physical damage to their own bodies without suicidal intent (Klonsky & Muehlenkamp, 2007)? Up to 25% of teens have tried self-injury at least once (Lovell & Clifford, 2016). Most initiate self-injury while in middle school (grades 6 through 8), and approximately 6% of college students continue to self-injure.

As this chapter has detailed, reward and punishment are in the eye of the beholder. The first challenge that we face in our analysis of self-injury is the assumption that pain is always a negative consequence. For most of us, it is. However, adolescents who engage in self-injury report feelings of relief or calm, despite the obvious pain that they inflict on themselves. Such feelings probably reinforce further bouts of self-injury. Self-injury often occurs in response to feelings of anger, anxiety, and frustration, and alleviation of these negative feelings might reward the injurious behavior (Klonsky, 2007; Klonsky & Muehlenkamp, 2007). Finally, injury is associated with the release of endorphins, our bodies’ natural opioids. The positive feelings associated with endorphin release also might reinforce the behavior.

Self-injury frequently occurs in people diagnosed with psychological disorders, such as depression, anxiety disorders, eating disorders, or substance abuse, which are discussed further in Chapters 7 and 14. Others engaging in the behavior have a history of sexual abuse. Observations that captive animals in zoos and laboratories are often prone to self-injury might provide additional insight into the causes of this behavior (Jones & Barraclough, 1978). Treatment usually consists of therapy for any underlying psychological disorders, along with avoidance, in which the person is encouraged to engage in behaviors that are incompatible with self-harm. To assist these individuals further, we need to be able to see reward and punishment from their perspective, not just our own.

If the consequences of a behavior influence how likely a person is to repeat the behavior in the future, how can we explain the prevalence of self-injury? Why don’t the painful consequences of the behavior make people stop? In situations like this, operant conditioning tells us that we need to look for possible reinforcers for the behavior that override the painful outcomes. In the case of self-injury, people report feeling calm and relief. To treat such behaviors effectively, psychologists need to understand what advantages they provide from the perspective of the person doing the behavior.


The Premack principle can help you maintain good time management. If you prefer socializing to studying, use the opportunity to socialize as a reward for meeting your evening’s study goals.

If everyone has a different set of effective reinforcers, how do we know which to use? A simple technique for predicting what a particular animal or person will find reinforcing is the Premack principle, which states that whatever behavior an organism spends the most time and energy doing is likely to be important to that organism (Premack, 1965). It is possible, therefore, to rank people’s free-time activities according to their priorities. If Lahey had been able to observe his young client’s eating habits before starting training, it is unlikely that he would have made the mistake of offering M&Ms as reinforcers. The opportunity to engage in a higher-priority activity is always capable of rewarding a lower-priority activity. Your grandmother may never have heard of Premack, but she knows that telling you to eat your broccoli to get an ice cream generally works.

Both Thorndike and Skinner agreed that positive reinforcement is a powerful tool for managing behavior. In our later discussion of punishment, we will argue that the effects of positive reinforcement are more powerful than the effects of punishment. Unfortunately, in Western culture, we tend to provide relatively little positive reinforcement. We are more likely to hear about our mistakes from our boss than all the things we’ve done correctly. It is possible that we feel entitled to good treatment from others, so we feel that we should not have to provide any reward for reasonably expected behaviors. The problem with this approach is that extinction occurs in operant, as well as in classical, conditioning. A behavior that is no longer reinforced drops in frequency. By ignoring other people’s desirable behaviors instead of reinforcing them, perhaps with a simple thank-you, we risk reducing their frequency.

According to the Premack principle, a preferred activity can be used to reinforce a less preferred activity. Most children prefer candy over carrots, so rewarding a child with candy for eating carrots often increases carrot consumption. One little boy with autism spectrum disorder, however, preferred carrots to M&Ms, and his training proceeded more smoothly when carrot rewards were substituted for candy rewards.


Some reinforcers, known as primary reinforcers, are effective because of their natural roles in survival, such as food. Others must be learned. We are not born valuing money, grades, or gold medals. These are examples of conditioned reinforcers, also called secondary reinforcers, that gain their value and ability to influence behavior from being associated with other things we value. Here, we see an intersection between classical and operant conditioning. If you always say “good dog” before you provide your pet with a treat, saying “good dog” becomes a CS for food (the UCS) that can now be used to reinforce compliance with commands to come, sit, or heel (operant behaviors). Classical conditioning establishes the value of “good dog,” and operant conditioning describes the use of “good dog” to reinforce the dog’s voluntary behavior.

Serena Williams “loves” her Wimbledon trophy not for its intrinsic value (you can’t eat it, etc.), but because trophies have become conditioned reinforcers.


Many superstitious behaviors, like wearing your “lucky socks,” can be learned through operant conditioning. Operant conditioning does not require a behavior to cause a positive outcome to be strengthened. All that is required is that a behavior be followed by a positive outcome. Unless you suddenly have a string of bad performances while wearing the lucky socks, you are unlikely to have an opportunity to unlearn your superstition.

Humans are capable of generating long chains of conditioned reinforcers extending far into the future. We might ask you why you are studying this textbook right now, at this moment. A psychologist might answer that you are studying now because studying will be reinforced by a good grade at the end of the term, which in turn will be reinforced by a diploma at the end of your college education, which in turn will be reinforced by a good job after graduation, which in turn will be reinforced by a good salary, which will allow you to live in a nice house, drive a nice car, wear nice clothes, eat good food, and provide the same for your family in the coming years.

Negative Reinforcement

Negative reinforcement, which sounds contradictory, involves the removal of unpleasant consequences from a situation to increase the frequency of an associated behavior. Negative reinforcement increases the frequency of behaviors that allow an organism to avoid, turn off, or postpone an unpleasant consequence; these are sometimes called escape and avoidance behaviors.

Let’s look at a laboratory example of negative reinforcement before tackling real-world examples. If a hungry rat in a Skinner box learns that pressing a bar produces food, a positive consequence, we would expect the frequency of bar pressing to increase. This would be an instance of positive reinforcement. However, if pressing the bar turns off or delays the administration of an electric shock, we would still expect the frequency of bar pressing to increase. This would be an instance of negative reinforcement.

Be careful to avoid confusing negative reinforcement with punishment, which is covered in the next section. By definition, a punishment decreases the frequency of the behaviors that it follows, whereas both positive and negative reinforcers increase the frequency of the behaviors that they follow. Returning to our Skinner box example, the rat’s bar pressing increases following both positive reinforcement (food) and negative reinforcement (turning off a shock). If we shocked the rat every time it pressed the bar (punishment), it would stop pressing the bar quickly.
