April 4, 2023 by ktangen

Skinner’s Theory


Skinner's Atheoretical Theory

Building on the behaviorism of Watson, B.F. Skinner (1904-1990) emphasized the importance of what happens after a response: not S-R, but S-R-C (stimulus-response-consequence). He expanded Thorndike’s law of effect into an entire system of reinforcement. He is best known for his schedules of reinforcement, token economies, programmed learning and teaching pigeons to play table tennis.

Skinner’s approach was both inductive and atheoretical. He rejected statistical analyses and built a body of knowledge on replication. Using single-subject designs (N=1), Skinner manipulated when a reward was received. Skinner believed that behavior is emitted, not elicited. Where classical conditioning held that a stimulus elicits a response, Skinner’s operant conditioning model maintained that behavior is emitted by the organism, a consequence occurs, and the organism adapts its behavior accordingly.

In fact, Skinner’s focus was broader than a single response. Rather than an individual behavior, an operant is a class of behavior. Consequently, in operant conditioning, rewards impact an entire class of behavior. You have an operant for the way you answer the phone. When you answer the phone, you might say your name, say “hi” or give a statement of greeting (e.g., good morning). Phone answering is not a specific behavior as much as it is a class of behavior: an operant.

Consequences can be classified on two dimensions: give-take and good-bad. Giving is better described by the verb “to posit,” which means to place, affirm or put forward. Consequently, Skinner referred to the process of giving as positive, indicating the direction of action. Similarly, to take something away is to negate (invalidate, deny), and its direction is therefore referred to as negative. The good-bad dimension corresponds to reinforcement and punishment, respectively. To reinforce is to strengthen or increase; to punish is to penalize.

From these two dimensions (positive-negative and reinforcement-punishment), Skinner identified four conditions: positive reinforcement, positive punishment, negative reinforcement and negative punishment. Rewards and punishments that are posited (given) are called positive reinforcement and positive punishment. Rewards and punishments that involve removing something are negative.

Notice that there is no suggestion that positive punishment is good. Positive punishment is a situation where you are given a punisher (stern look, electric shock, etc.). Negative punishment is also unpleasant; it is having something taken away (car keys, paycheck, etc.). Similarly, positive reinforcement is being given a reinforcer (food, praise, etc.) and negative reinforcement is having something unpleasant taken away (credit card debt, an angry frown, etc.).

According to Skinner, reinforcement (positive and negative) increases the likelihood of an operant reappearing, and punishment (positive and negative) decreases the likelihood of an operant reappearing. The entire operant is affected. If you were given money for answering the phone, you would have received a positive reinforcer (given is positive and money is a reinforcer), and the entire class of phone-answering behavior would be more likely to occur. Although it would not be possible to predict which way you would answer the phone, operant conditioning would predict that your phone-answering behaviors would likely increase.
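
Because the four conditions fall out mechanically from the two dimensions, a short sketch can make the grid concrete. Here is a minimal Python illustration; the function and examples are mine, not Skinner’s:

```python
# A minimal sketch of the give-take x good-bad grid of consequences.
# "give" = positive (something is posited), "take" = negative (removed).
def classify_consequence(action: str, effect: str) -> str:
    """action: 'give' or 'take'; effect: 'reinforce' or 'punish'."""
    direction = {"give": "positive", "take": "negative"}[action]
    kind = {"reinforce": "reinforcement", "punish": "punishment"}[effect]
    change = "more" if kind == "reinforcement" else "less"
    return f"{direction} {kind}: the operant becomes {change} likely"

print(classify_consequence("give", "reinforce"))  # money for answering the phone
print(classify_consequence("give", "punish"))     # shock when lifting the receiver
print(classify_consequence("take", "reinforce"))  # debt taken away
print(classify_consequence("take", "punish"))     # car keys taken away
```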

Similarly, if you were positively punished for answering the phone (given an electric shock when you lifted the receiver), your phone answering operant (the entire class of behavior) is less likely to occur.

Positive reinforcement and positive punishment are easy to understand. Negative punishment is also familiar to children who have had their car keys taken away or been “grounded” by their parents. Taking away a reward is a negative punishment.

Surprisingly, much of human behavior can be explained by negative reinforcement. People exhibit long chains of behavior in order to escape punishment. When a police car appears in your rearview mirror or your boss heads toward your cubicle, you might well adjust in your chair, cough several times, shuffle your feet and swallow frequently. And when the police officer or boss turns away without even an acknowledgement of you, the sense of overwhelming relief is very rewarding.

When an instructor calls on you in class and you don’t know the answer, you might well begin a long string of behaviors that have been negatively reinforced. These behaviors need not be tied to the situation; they might be purely superstitious. But they are likely to reoccur if you escape impending doom.

Notice that what counts as impending doom and what counts as rewarding is very personal. Some people might enjoy being called on in class and others may abhor it. Some children may hate being sent to their rooms and others may find it very rewarding. Although it is impossible to tell ahead of time what is individually rewarding, Skinner relied on a functional analysis of the situation. Rewards are not inherent in objects; any object that functions as a reinforcer is a reinforcer. Complimenting a child on cleaning their room can be positive punishment if, as a result, the child stops cleaning. Reinforcement is not in the intent but in the effect.

According to Skinner, rewards should be given differentially. Parents should reward behaviors they want and ignore (extinguish) behaviors they don’t want. Giving attention to a child (such as when giving a punishment) actually rewards the child with your presence and sends a mixed message. Behavior can be shaped by rewarding successive approximations, but practice without reinforcement doesn’t improve performance.

Skinner relied on operational definitions for his experiments. Instead of inferring internal states (such as hunger), he defined hunger in terms of the number of hours since the animal last ate. Skinner insisted on clear definitions that are not open to interpretation. He did not hypothesize drive, insight or any other internal process. Although he didn’t deny their existence, he thought them to be unknowable. For Skinner, like Watson, if it didn’t impact behavior, whatever went on in the black box of the mind was unimportant.

Basing his findings on animal research (mostly rats and pigeons), Skinner identified five schedules of reinforcement: continuous reinforcement, fixed interval (FI), fixed ratio (FR), variable interval (VI) and variable ratio (VR). Continuous reinforcement is used to shape (refine) a behavior. Every time the subject performs the desired behavior, it is rewarded. Continuous reinforcement leads to quick learning and (after the reinforcement is stopped) quick extinction.

FI describes the condition where a certain amount of time must pass before a correct response is rewarded (e.g., getting paid every two weeks). FI produces a “scalloped” pattern: the closer it gets to payday, the more often the proper response is given.

Fixed ratio requires a certain number of responses to be made before a behavior is rewarded (e.g., 10 widgets must be made before you are paid). In VI and VR schedules of reinforcement, the required amount of time or number of responses varies. These partial reinforcement schedules (you are never quite sure when you’ll be rewarded) are quite resistant to extinction.
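
Since the four partial schedules differ only in what triggers a reward, the contrast is easy to sketch. The following Python functions are a rough illustration; the parameter values are invented, not taken from Skinner’s experiments:

```python
import random

def fixed_ratio(response_count: int, ratio: int = 10) -> bool:
    # FR: a reward after every `ratio` responses (10 widgets per paycheck).
    return response_count % ratio == 0

def fixed_interval(elapsed: float, interval: float = 14.0) -> bool:
    # FI: the first correct response after `interval` time units is rewarded.
    return elapsed >= interval

def variable_ratio(mean_ratio: int = 10) -> bool:
    # VR: each response has a 1-in-`mean_ratio` chance (slot-machine style).
    return random.random() < 1.0 / mean_ratio

def variable_interval(elapsed: float, mean_interval: float = 14.0) -> bool:
    # VI: the required wait varies around `mean_interval` from trial to trial.
    return elapsed >= random.uniform(0.5 * mean_interval, 1.5 * mean_interval)
```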

In an attempt to apply his research to practical problems, Skinner adapted his operant conditioning chamber (he hated the popular title of “Skinner box”) to child rearing. His “Baby Tender” crib was an air-conditioned glass box which he used for his own daughter for two and a half years. Although commercially available, it was not a popular success. Another theoretically successful but practically unaccepted application of operant conditioning occurred during WWII. Skinner designed a missile guidance system using pigeons as “navigators.” Although his system was feasible, the Army rejected it out of hand. The PR problems of pigeon bombers must have been extensive.

Skinner also originated programmed instruction. Using a teaching machine (or books with small quizzes that lead to different material), small bits of information are presented in an ordered sequence. Each frame or bit of information must be learned before one is allowed to proceed to the next section. Proceeding to the next section is thought to be rewarding.
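
The logic of programmed instruction is simple enough to sketch: present a small frame, quiz on it, and advance only on mastery. Here is a toy Python loop; the frames, questions and answers are invented for illustration:

```python
# A toy programmed-instruction loop: master each frame before advancing.
frames = [
    ("An operant is a class of behavior.",
     "An operant is a class of ___?", "behavior"),
    ("Reinforcement strengthens an operant.",
     "Reinforcement does what to an operant?", "strengthens"),
]

for material, question, answer in frames:
    print(material)
    while input(question + " ").strip().lower() != answer:
        print("Not quite; reread the frame.")    # learner may not proceed yet
    print("Correct. Moving to the next frame.")  # advancing acts as the reward
```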

Born in Susquehanna, Pennsylvania, Skinner was an English major in college (Hamilton College) and then pursued psychology (at Harvard). He earned his PhD from Harvard in 1931 and then taught at the University of Minnesota (Minneapolis) and Indiana University (Bloomington). He returned to Harvard in 1948 and remained until his retirement in 1974.

Filed Under: Learning

April 4, 2023 by ktangen

Applied Classical Conditioning

Classical Conditioning

Can opener & cats

At its root, classical conditioning is about reflexes. Indeed, Pavlov called his approach reflexology. Reflexes are unlearned behaviors that are prewired to ensure your survival. These include your gag reflex (to prevent choking), eye blink (to protect and nourish the cornea), and rooting and sucking reflexes (to make sure newborns get food).

You have a reflex for kicking your leg when your knee is struck. It obviously is there so doctors can play with little hammers. Or perhaps it is there to maintain your posture and balance. It’s a stretch reflex that helps you adjust to changing terrains as you walk across a field.

You have digestive reflexes so you don’t have to think about processing the food you’ve eaten. It happens automatically.

And that’s the nature of reflexes. They are automatic. They occur without thinking. When your patella ligament is struck with a little hammer, the signal doesn’t travel to the brain for processing. You don’t have to ponder whether or not to kick your leg. No cognitive processing is required.

The simplest reflexes are monosynaptic. The signal travels to the spine on a sensory neuron and back again on a motor neuron. There is only one synaptic crossover. No thought is required.

Reflexes

A reflex is the combination of a stimulus and a response. Air puffed into your eye (stimulus) results in an eye blink (response). Neither is learned. No training is required. Both are unconditioned (unlearned).

An unconditioned stimulus elicits an unconditioned response. There are no response options. An eye puff will always elicit an eye blink. It doesn’t make you sneeze.  The connection is hard wired at the synaptic level, typically at the spine.

Process

Now let’s add some low-level processing. This is not processing in the cerebral cortex (what most people call the brain). This is processing which occurs under the cortex, in the region between the cortex and the spine. No conscious thought is involved; just low-level processing.

If the cerebral cortex is a computer, the low-level processing units underlying it are switchers, timers and interfaces. This is where low-level associations between stimuli occur. In classical conditioning, it is an association between a neutral stimulus and the stimulus portion of a reflex.

Once the connection between two stimuli is complete (it can take many trials or be formed in a single instance), the neutral stimulus (now called a conditioned stimulus) produces a response that is similar to the reflex. Some of Pavlov’s dogs took 50 or more trials to connect the bell to the food. Once the association was made, they would salivate to the bell, but not as much as they would to real food.

Notice that the conditioned stimulus doesn’t trigger the unconditioned response. It triggers a response that is similar but less intense.

Thanksgiving

If food is presented at Thanksgiving dinner, you begin to salivate. If your family tradition was to ring a bell or sing a song before the food arrived, the sound of that bell or song could make you start to salivate. But your brain, even in its subcortical regions, isn’t dumb enough to salivate as if the food had actually arrived. It can tell the difference between “about to come” and “here it is.” The conditioned response is a less intense version of the unconditioned response.

This subcortical region, called the limbic system, makes associations and tracks their context. If a bell is repeatedly rung without food arriving, the conditioned response will decrease in size and eventually disappear. This extinction of behavior allows you to move on and create new associations in the same environment.

Once extinguished, there is no response to the conditioned stimulus (bell). It becomes neutral again, sort of. The limbic system is tracking the context. It will occasionally trigger a conditioned response in reaction to a bell. It is as if the body is asking “do you want to run this subroutine again?” or “I’m ready if you are.” If the bell is followed by food, the conditioned response rapidly returns.
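
The gradual strengthening and weakening of the conditioned response can be illustrated with a simple simulation. The sketch below uses the Rescorla-Wagner update rule, a later formal model of conditioning rather than Pavlov’s own account; the learning rate and trial counts are arbitrary:

```python
# Rescorla-Wagner sketch: associative strength V rises when bell and food
# are paired and falls when the bell sounds alone.
def update(v: float, food: bool, rate: float = 0.3, v_max: float = 1.0) -> float:
    target = v_max if food else 0.0     # what the trial "teaches"
    return v + rate * (target - v)      # move part way toward the target

v = 0.0
for _ in range(15):                     # acquisition: bell followed by food
    v = update(v, food=True)
print(f"after pairing: {v:.2f}")        # approaches 1.0, but only gradually

for _ in range(15):                     # extinction: bell alone
    v = update(v, food=False)
print(f"after extinction: {v:.2f}")     # decays back toward 0.0
```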

Associations

The connection between the two stimuli (neutral and unconditioned) isn’t straightforward. The conditioned stimulus doesn’t trigger the unconditioned response. Consequently, it is unclear how the associations are formed. In the early days of behaviorism, the two stimuli were thought to be bonded together. Complex behaviors could then be explained by assembling a hierarchy of stimuli.

Pavlov believed that new stimuli were being attached to old behaviors. In other words, we don’t like change. We tend to respond to new situations with old responses. The new stimuli are substituted for the old ones.

The problem with this substitution theory is that conditioned responses, though similar, are not the same as unconditioned reflexive responses. They are not as large or as fast. And they occur in response to a wider variety of stimuli. Conditioned responses are triggered not only by the specific bell used in training but also by bells that sound similar. Similar bells, either higher or lower in frequency, can be used. The closer they resemble the original bell, the larger the response. The less similar they are, the less the response.

This means classically conditioned behaviors cannot explain all of human behavior. Thinking, once described as subvocalized speech, is more than a chain of S-R responses. Claims of understanding or controlling all human behavior were wildly overstated.

What We Know

We know that timing is important.

The optimal time between conditioned stimulus and unconditioned stimulus is one half-second. The bell should be followed by food within half a second. Shorter or longer intervals can work, but one half-second gives maximal impact.

We know that the order of presentation is important.

Forward conditioning, where the bell precedes the food, produces the greatest effect. The bell is still ringing when the food is presented.

Trace conditioning, a variation of this method, presents the bell and then, after the bell stops ringing, the food is presented. It explains the excitement of a game’s beginning just after you hear “play ball.”

In practice, this is what happens when you classically condition a clicker as a reward marker. Horse, dog and human trainers often use toy noisemakers as reward markers. Pressing your finger on the bent metal bar and releasing it produces a distinctive clicking sound. The sound indicates “good job” and is followed as soon as physically possible with a food reward or praise.

Before you can reward with a clicker, you must establish an association between its sound and the reward. With humans you can say “the click means good job” but for other animals, the sound is classically conditioned with a bit of food. Over and over again, the click-food link is established.

In simultaneous conditioning, the sound and food are presented together. Less effective than forward conditioning and about the same as trace conditioning, simultaneous conditioning is common in everyday life. Songs that were playing when you fell in love produce emotions elicited by simultaneous conditioning. The lighting and ambiance of a great dinner is simultaneous conditioning. The environmental cues associated with drug abuse are also examples of simultaneous conditioning.

Backward conditioning (food before bell) is less successful but still useful. This is what trainers do with racehorses. It is important to collect a urine sample after a race. Not wanting to wait a long time for this to occur, horse trainers will walk past a stall, observe a horse urinating and give a small whistle or mouth click. Over a period of time, the horse will make the association and, after the race, will urinate on cue.

We know that unfamiliarity is better.

The technical term for this principle is latent inhibition. In general, it takes longer to form an association when using a familiar neutral stimulus. Bell ringing worked well for Pavlov’s dogs because it was a stimulus that was uncommon in their experience. If the dogs had lived surrounded by ringing bells, a bell would not have worked well as a conditioned stimulus.

In taste aversion, if you get sick on a food you rarely eat, you might not want to ever eat it again. But if you get food poisoning from a pizza, it’s just one of a hundred pizzas you’ve eaten. You might even attribute your sickness to something other than your favorite pizza. You may think it was just the flu.

We know the connections are strong.

One reason it is difficult to kick a drug habit is all of the environmental cues associated with the addiction. Revisiting places you got high, or just thinking about them, can trigger strong cravings for drugs.

Taste-aversion conditioning can turn one experience of food poisoning into a lifetime of avoiding the associated food. I had a horrible reaction to a burger at a chain restaurant and still can’t return to their stores. I am also very picky about the taste of my burgers, even if I make them myself.

PTSD is a complicated condition that includes reactions that are classically conditioned. Having seen a child or dog get hurt can make you react strongly to the stretch of freeway or the environment where the incident occurred. The sound of professional fireworks can trigger immediate emotional reactions from people familiar with the sound of mortar shells.

Fear, in all of its forms, is often a classically conditioned reaction to a variety of environmental stimuli. One bad exposure to a clown can last a long time.

Filed Under: Learning

April 4, 2023 by ktangen

All Or None Learning


You will hear that habits are loops. The idea is that a cue triggers a response which results in a reward. The circle is complete when the reward links back to the cue. This easy-to-understand concept is quite popular. But there is no evidence that it is true. Or at least, it is not that simple.

We tend to automatically jump to Skinner’s reinforcement theory. We assume rewards or dopamine stimulation account for all of our behaviors. But let’s not rush to that conclusion. There may be another explanation.

Aristotle

Since habits are thought to be learned, explanations of what causes behavior go back a long way. Aristotle, about 300 years before our calendar begins, proposed three principles of learning. These laws of association are similarity, opposites and contiguity.

Aristotle noted that we are good at making connections between similar things. We are also able to easily identify contrasts. Remember Sesame Street’s “One of these things is not like the others”? Essentially, we quickly identify positive and negative correlations. We track things that move together or things that go in opposite directions. We are gifted at pattern recognition.

Aristotle’s third law of association, contiguity, says that we notice things that are close to each other physically or in time. We notice when objects are physically placed close to each other. We associate pies with the window ledge they are set on to cool. Planets close to Earth are thought of as a single unit (a solar system) because they are close together. We think of countries bordered by their neighbors as being similar.

Contiguity of time helps us associate events we believe go together. We find a coin and meet an old friend on the same day. We say “now” and the traffic light changes. We run the can opener and the cat appears. Two events that occur at about the same time are associated together.

Guthrie

More than two thousand years after Aristotle, Edwin Guthrie (1886-1959) made contiguity the center of his theory of learning. Guthrie maintained that when a stimulus and response occur close together in time or space, bonds are formed. These bonds (associations) are all that is needed to explain learning. No other conditions are required. Once the association between a stimulus and a response has been established, the same sequence of movements is repeated.

Guthrie modified and extended Thorndike’s work with cats. Guthrie used the latest technology of the day to study cats in puzzle boxes. In addition to observing the cats, Guthrie filmed the entire process. He photographed the exact movements of the cats, using a glass-paneled box.

What Guthrie discovered is that while the cats learned to release themselves faster, they repeated the same sequence of movements on every trial. They repeated the entire chain of responses, including all of the unsuccessful attempts. Some components were executed faster than others but all of them were still present. Guthrie called this process stereotyping.

Stereotyping

Stereotyping, says Guthrie, proves that a chain of movement was learned, not simply a single response. Cats repeated all of the behaviors. They didn’t eliminate any. Skinner called them superstitious behaviors and said they were caused by reinforcement. Guthrie called them stereotyping and said they were caused by association alone. There was no need for reinforcement.

For Guthrie, learning occurs without reinforcement. It occurs at full strength or not at all. This all or nothing formation of bonds initially sounds contrary to our experience of getting better incrementally. If it is one-shot learning (all or none), how is it that we get better and faster at performing behaviors?

Guthrie’s solution has to do with the size of bonds. We tend to think of learning bonds as being large. Guthrie thinks they are extremely small. Do you remember Gulliver’s Travels? He was tied down by the Lilliputians with many tiny strings. Each was small and easily broken, but together they formed a strong chain.

For Guthrie, we learn by forming millions of bonds, each very small. We repeatedly break some and make others. Improvements in our behavior come from breaking some of the irrelevant bonds and strengthening more of the relevant bonds. These tiny bonds stay in place until they are replaced. Forgetting is due to interference. Some of the bonds are broken.
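
This claim, that all-or-none bonds can still yield gradual improvement, is easy to demonstrate in a toy simulation. The numbers below are arbitrary illustrations, not Guthrie’s data:

```python
import random

# Each micro-bond forms or breaks all-or-none, yet the proportion of useful
# bonds -- and so observable performance -- improves gradually with practice.
random.seed(1)
relevant: set[int] = set()             # bonds that support the act
irrelevant = set(range(500))           # bonds left over from failed attempts

for trial in range(1, 51):
    relevant.update(random.sample(range(10_000), k=200))        # formed at full strength
    broken = random.sample(sorted(irrelevant), k=min(10, len(irrelevant)))
    irrelevant.difference_update(broken)                        # broken all at once
    if trial % 10 == 0:
        share = len(relevant) / (len(relevant) + len(irrelevant))
        print(f"trial {trial:2d}: useful-bond share = {share:.2f}")
```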

HAM

Guthrie proposed that there are three components of learning. We can use the acronym HAM to remember them: habits, acts and movements. Taking them in reverse order, movements (M) are the smallest units. They are the tiny strings. Each muscle and tendon movement produces proprioceptive stimuli, which, in turn, help produce the next movement. Movements are small S-R combinations that form a chain of associations. Learning occurs in movements.

The A in HAM is for acts. Acts are collections of movements. They are the observable behavior that we see. Any act can be composed of thousands or millions of movements.

The H of HAM is for habits. Habits are well-established acts. The strength of the habit (habit strength) is determined by the number of stimuli which can produce a response. The more stimuli involved, the stronger the habit.

Clearly, the way to change a habit is to change the movements. More precisely, the way to change a behavior is to change the stimuli that cause the movement chains. These stimuli are called movement-produced stimuli (MPS). They are the key to change. You replace old behaviors with new behaviors by doing them. By responding differently to the same stimuli, you form new associations in the chain.

Guthrie was an advocate of learning by doing. His theory is pre-cognitive. You don’t need to change your thinking. You need to change your behavior. You need to practice your behavior where the triggering stimuli are present. Consequently, a theater director should add more dress rehearsals. A coach should add more games and exhibitions.

The emphasis is on doing the desired behaviors. If you want to teach children, spouses or yourself a behavior, do it over. If you enter your house and dump your stuff on the floor, Guthrie recommends you practice the whole sequence. Go back outside, come in again, and hang up your things.

This is not punishment, says Guthrie, though it might feel like it. It is practice. The only way to learn is to form the tiny bonds between muscle movements. Unlearning bad habits is learning to do something else in the same situation (stimulus conditions). Practice the new behavior when the old cues are present.

Filed Under: Learning

April 4, 2023 by ktangen

Classifying Skills


Yo Yo Ma

In general, there are six ways to classify skills: environments, muscles, skill targets, movements, simplicity and pacing. Each is a dimension, not a categorical dichotomy. It is not either-or but how much of this factor is involved.

Environments can be open or closed. Closed environments are predictable. Movements can be planned in advance. Examples include chess, gymnastics, and figure skating. A pianist playing classical music is a closed environment.

In contrast, a pianist playing jazz is an open environment. Open environments require dynamic adjustments to changing conditions. They are unpredictable. Examples include debating, boxing and horse trading.

Obviously, many tasks combine both open and closed environmental factors. Standup comics plan their joke sequence ahead of time but have to handle current events, theater conditions and hecklers. Creating a well-planned set is a closed skill but improvisation is an open skill.

Muscles. Skills can also be classified on basis of which muscles are employed. Gross motor skills use large muscles. These are the muscles you use to sit, kick a ball or maintain balance. In contrast, fine motor skills are required for writing with a pencil or picking lint off the carpet.

Gross motor skills are the first to develop in children, with fine motor skills coming somewhat later. Boys tend to develop fine motor skills later than girls but the sexes are equal by about age 5 or 6.

Targets. Skills can be classified by targets. Target skills are the actual tasks needed to accomplish a goal, such as keeping a tennis ball in play. Target behaviors are the component actions needed to perform target skills, such as watching the ball, and keeping the wrist firm. Target contexts are the environments in which you perform a skill. A friendly game of golf is quite different from a competitive tournament.

Movements. Skills can also be classified by type of movement: continuous, discrete, serial or mixed. Continuous movements are cyclical. There is no clear beginning or end. They include swimming, cycling and steering a car. Discrete skills have well-defined actions. There is a clear beginning and a clear ending. Discrete movements are independent actions, such as typing, hitting baseballs and pounding nails.

Serial skills are sequences of discrete movements. Serving in tennis requires tossing the ball up, swinging the racket and following through. It is not a continuous motion like a figure-8 would be. Although practiced together, the sequence has discrete component parts.

Mixed skills are a combination of discrete and continuous movements. Clicking the shutter (discrete) while tracking a moving photographic subject (continuous) is a mixed skill. Shooting space aliens in a video game requires both flying the ship (continuous) and triggering the blasters (discrete). CPR requires giving chest compressions (discrete) and mouth-to-mouth artificial ventilation (continuous).

Simplicity. Simple skills can be discrete or continuous. They are tasks which require little thought or physical energy. Examples include flipping a light switch, a flick serve in badminton, and pushing a doorbell button. In contrast, complex skills require timing. Changing gears in a stick-shift car involves pushing in the clutch (discrete) and shifting gears (discrete), but the timing is critical. A lay-up in basketball is more complex than a free throw. Twirling a Hula-Hoop on each arm is a complex skill composed of two simple continuous movements.

Pacing. Skills can be either self-paced or externally-paced. Self-paced skills are performed when you want. You can let the chess clock wind all the way down before you move or make the move quickly. You decide when you are ready to start your speech. You can do all of your planning ahead of time and then initiate your long jump or ring routine. In contrast, when environmental factors trigger a response, you are engaged in an externally-paced activity. Examples include driving when the light turns green, countering an attack or being forced to pass the ball before a linebacker sacks you. Externally-paced skills are time sensitive. They require immediate attention.
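
Because each classification is a dimension rather than a category, a skill can be represented as a profile of degrees. Here is a minimal sketch; the field names and example values are assumptions made for illustration:

```python
from dataclasses import dataclass

# Each field is a degree from 0.0 to 1.0, matching the point that these
# are dimensions, not dichotomies.
@dataclass
class SkillProfile:
    openness: float         # 0 = closed environment, 1 = fully open
    fine_motor: float       # 0 = gross muscles only, 1 = fine motor only
    target_demand: float    # how tightly the skill is tied to a target context
    continuity: float       # 0 = discrete movements, 1 = continuous
    complexity: float       # 0 = simple, 1 = timing-critical
    external_pacing: float  # 0 = self-paced, 1 = externally paced

jazz_piano = SkillProfile(0.8, 0.9, 0.5, 0.7, 0.9, 0.6)
free_throw = SkillProfile(0.1, 0.4, 0.8, 0.2, 0.4, 0.0)
print(jazz_piano.openness > free_throw.openness)  # jazz is the more open skill
```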

Summary

All six dimensions help us to understand the characteristics of a given skill. Most skills are a mixture of factors. And that mixture changes. Tasks which are complex and externally-paced might be less so the next day. You might have an easy comeback to an insult one day and find it terribly difficult to respond the next day. Thinking about the nature of the skills we use can help us focus our practice and improve our game.

Filed Under: Learning

April 4, 2023 by ktangen

Fox & Cat

Expectations & Heuristics

The Fox and the Cat

I love stories of talking animals. These are the kinds of stories the old and wise tell to the young and foolish. The elders don’t hit you in the head with commands but guide you to discover the story’s application. You learn the concept and apply it to your own situation.

The Fox and the Cat is an ancient fable about expectations. The cat and fox are talking, as animals are wont to do, about what they would do in case of an emergency. In fables, the emergencies typically are humans: hunters in particular.

The fox, being very smart, has many ways of escape. He is very clever, which he is quick to point out. The cat is impressed by the number and variety of options the fox has. Sadly, the cat has only one method: climb a tree. The fox scoffs but is interrupted by the arrival of the hunters. The cat climbs a tree. The fox is still pondering his options when he is captured.

The practical conclusion is that it’s better to have one safe exit than a hundred that don’t work. The original moral was probably to not boast about being clever. For our purposes let me suggest that the moral is about how expectations impact decision making.

Expectations are predictions. The cat expected (predicted) it would get caught but knew exactly what to do in case of emergency (run up a tree). The fox expected he’d get away, easily. He had many routes to choose from. As it turns out, thinking takes time and energy. We think fast but deciding between multiple alternatives is slower. Every channel you surf takes a little bit more time. In cognitive science terms, the fox got caught because it takes time to process options.

Deciding also takes energy. Making decisions is tiring. Making many decisions is very tiring. It depletes your mental energy. One reason it is hard to decide what to eat for dinner is depletion. After a long day of making decisions, your decider circuit is tired. Decision fatigue shows up as physical fatigue.

Decision fatigue can lead us to not making decisions. Rather than make another decision, you want to just avoid deciding altogether. Iyengar & Lepper found that the more choices you have, the less you want to decide anything. Facing six options is better than facing 26. You feel outnumbered when there are too many options. It’s a choice-war.

Additionally, your satisfaction decreases when you have too many options. You calculate the odds of happiness based on the number of options. The more options, the less likely you are to have selected the correct one. Then, after making a choice, all you can think is that the correct choice is probably one of the options you passed up.

Since having too many options also decreases our satisfaction with whatever we choose, how many is the right number? No one really knows. But here are some of the ways we handle decision making when we are fatigued.

First, we select the status quo. We go with whatever the current setting is. If the TV is on, we leave it on. If it is off, we leave it off. When we are wiped out, we try hard not to decide. We expend no effort.

Second, we choose the default setting. We may turn the TV on but leave it on whatever channel pops up. We expend limited effort.

Third, we use event substitution. This happens a lot in companies. We get everyone in a conference room to handle an issue. Then, instead of solving the marketing-manufacturing problem, we fight about what to have for lunch. We substitute a small fight in place of the larger war.

Fourth, on a smaller scale, we substitute components. This is also called reasoning by simplification. Instead of wrestling with a complex problem with many variables, we work on a simpler and less complex problem. Cognitive psychologists Kahneman & Frederick call this attribute substitution. We make analogies. “The parking problem is a lot like pizza delivery.” “Hunting for a new CEO is a lot like finding a new dry cleaner.” Some analogies are better than others but all serve the purpose of simplification.

Fifth, we substitute real issues with simple rules: heuristics. In contrast to a formula or algorithm which will always work (even if it is not the fastest method), a heuristic is a mental shortcut. It is fast and usually works.

The underlying meaning of heuristic is to find or discover a solution. They ease the cognitive burden of decision making. These “rules of thumb” are practical methods. They don’t guarantee success but they are derived from experience with similar problems. And they are readily accessible.

Obviously, the most fundamental heuristic is trial and error. But it is the least satisfying. Trial and error is what we do after we’ve tried everything else.

George Pólya proposed several heuristics in his book “How To Solve It.” Written in 1945, the book has been around a long time. Pólya suggests drawing a picture of the problem situation and working the problem backwards. These ideas are related to the general suggestion of switching from abstract to concrete or concrete to abstract. They are suggestions about perspective.

You may be less familiar with the Inventor’s Paradox. It says that the more ambitious your plan, the more chances you have of succeeding. This might be a good spot to point out that heuristics aren’t always true. Your mileage may vary.

Herbert Simon offers two thoughts on heuristics in problem solving. He uses the terms “bounded rationality” and “satisficing.” Bounded rationality means that there are always limits to our decision making process. We often lack information (what our competitor will do), can’t track the entire process (we tend to think and think but then jump), and don’t have enough time to think things through (the traffic light is changing; right, left or straight?). We also are limited by our cognitive and computational abilities. There are limits to what we can do.

Simon’s second term, satisficing, is what we do when we run out of time and resources. We find a satisfactory solution which requires sacrificing perfection. When we can’t get optimal, we go for good enough.

German psychologist Gerd Gigerenzer says we should embrace satisficing. He calls this approach “fast and frugal.” He maintains that we get more accurate decisions if we don’t weigh all the options. We should ignore part of the information. It helps focus our attention and clarify our priorities.
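
One concrete rule to come out of Gigerenzer’s fast-and-frugal program is “take-the-best”: consult cues in order of validity and decide on the first cue that discriminates, ignoring the rest. A minimal sketch, with invented cities and cue values:

```python
# Take-the-best: compare options cue by cue, best cue first, and stop at
# the first cue that tells them apart.
def take_the_best(a: dict, b: dict, cues_by_validity: list[str]) -> str:
    for cue in cues_by_validity:          # most valid cue first
        if a[cue] != b[cue]:              # first discriminating cue decides
            return "A" if a[cue] > b[cue] else "B"
    return "guess"                        # nothing discriminates: flip a coin

city_a = {"has_airport": 1, "is_capital": 0, "has_team": 1}
city_b = {"has_airport": 1, "is_capital": 1, "has_team": 0}
print(take_the_best(city_a, city_b, ["has_airport", "is_capital", "has_team"]))
# -> "B": settled by the second cue; the third is never even consulted
```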

The general principle is called the focusing effect. It is our tendency to put too much importance on one aspect of an event. We focus on the past experience of others (all these people became zillionaires) and not on our likelihood for success. If you ask “how much happier are tall people than short people,” the wording assumes a difference that doesn’t exist. But we go with it.

In Tversky & Kahneman’s study of human decision making, they note that we tend to rely too heavily on the first bit of information we receive. We make it an anchor and base all of our calculations on it. Once the anchor is set, our mental boat doesn’t drift very far. Manufacturer-suggested or list prices are there to make subsequent prices seem more reasonable. Saving 50% off an inflated price sounds better to us than paying full price for a lower initial offer.

Dan Ariely likes to ask an audience to recall the last two digits of their social security number. These are chance numbers with no actual power or significance. He asks if they would pay that amount for several items (wine, computer, chocolates, etc.). Then he has them bid for the items. People with higher ending digits bid higher. Once we have an anchor, even if it is by chance, we tend to stay with it.

Anchoring is hard to avoid. It seems quite built in. If you were asked whether Thomas Jefferson died before or after age 9 (or before or after age 120), it seems unlikely that this anchor would affect you, but it often does. Students given similar options guessed closer to their anchor, whichever one they received.

Julian Rotter proposes a more general expectation theory that has two major components: the size of the reward and the likelihood of receiving it. He says we balance both parts of the equation. We go for risky propositions if the reward is high but stay with predictability when the rewards are low.
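
Rotter’s two components suggest a simple expected-value calculation. A minimal sketch follows; multiplying expectancy by reward value is a simplification of mine, since Rotter’s formulation is more general:

```python
# Behavior potential rises with both the expectancy of the reward and its value.
def behavior_potential(expectancy: float, reward_value: float) -> float:
    """expectancy: 0-1 chance of the reward; reward_value: how much it's worth."""
    return expectancy * reward_value

safe = behavior_potential(0.9, 10)     # likely, small payoff -> 9.0
risky = behavior_potential(0.1, 200)   # long shot, big payoff -> 20.0
print("go risky" if risky > safe else "stay safe")  # high rewards justify long odds
```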

Filed Under: Cognition, Learning

March 28, 2023 by ktangen

Decoding

Retrieval

Notes

 

Memory Principles

 

  1. some things are easier to remember than others
  • length
  • content
  • familiarity
  • similarity
  • personal experience
  • self-referent
  • within-list associations
  • adjacent associations
  2. memories can be stored in different media
  • words can be stored as sight, sound or meaning
  • sounds can be stored as words or music
  • we aren’t just visual processors or auditory learners
  • use a wide range of sensory media
  • switch to another medium when you need more cues
  3. two separate phases of memory
  • available
  • accessible
  • tip-of-the-tongue phenomenon
  4. memory is generative
  • don’t store exact copies
  • we are great at meaning extraction
  • MIDI file
  • blueprint
  • recipe
  • sparse encoding
  5. memories are stable but changeable
  • misinformation effect
  6. we make up memories
  • photographs
  • cognitive distortions
  • vividness of a memory isn’t a good indicator of its truth
  • eyewitness testimony
    • corroborating evidence must also be present
  • memory cooks who store the recipe
  • Elizabeth Loftus
    • small changes in questions can produce different results
    • smashed or bumped
    • Did you see the stop sign? or Did you see a stop sign?
    • How tall a basketball player or how short
    • What shade of blue was the wallet
  • exact process of how false memories are generated isn’t clear
    • brain pulls bits for unrelated experiences
    • combines them into a new “authentic” memory
  • source monitoring
    • focus is on meaning extraction, not making a mental bibliography

 

Retrieval Tips

GENERAL

  1. Attention
  2. Bits (chunk)
  3. Chain the parts together
  4. “Don’t Forget” strategy
  5. Distributed practice
  6. Encoding Specificity Principle
  7. Switch tasks
  8. Higher criterion
  9. Overlearn
  10. Warm up

 

FACTS

  Good Items

  1. Positive
  2. Distinctive
  3. Meaningful
  4. Related to you & your experience

  Organizing

  1. Most important first
  2. Most important last
  3. Put it in context
  4. Blocking
  5. Categorize

  Encoding

  1. Reduction mnemonics
  2. Elaboration mnemonics
  3. Rehearse
  4. Visualize
  5. Associate items with places & things
  6. Teach yourself
  7. Teach others

  Retrieval

  1. Retrieve often
  2. Retrieve in the same order every time
  3. Cluster
  • Even if items aren’t clustered
  • Try to remember them by clusters

When you don’t remember

  1. Recall from different perspectives
  • realtor
  • burglar
  • buyer
  2. “Starts with the letter”
  • Lexical retrieval = search for a desired word
    • Can’t find it by meaning
    • Try it alphabetically
    • Or…by sound
  • Tip of Tongue Phenomenon
  • “Nothing will come”
  • “Empty gap”
  • Often retrieve partial info
  • About once a week
  3. “Sounds like”
  • Tip of the Ear?
  • What did you say?
  • Remember before the answer comes
  • I know I’ve heard that somewhere
  4. Follow a script
  • Cognitive maps
  • Cultural rules
  5. Ask for clues
  6. Rest
  • Rest: Incubation = allowing a problem to “perk”
  • Why it might work:
    • Changes focus from details to more abstract representations
    • Memories consolidate over time
    • New stimuli may come along
    • Get more sleep
    • Practice effects
    • Problem solving as skill

Types of Retrieval

  • Free recall: first and last items are remembered best
  • if you get distracted, the recency effect goes away
  • Cued recall
  • Recognition

 

Filed Under: Learning
