Encoding
by ktangen
We know a lot of information. The older you get, the more information you encounter. Learning facts, concepts and behaviors gives you knowledge. Knowing that you have facts, concepts and behaviors is metacognition.
Meta (larger than) plus cognition (thinking) is what you know about the process of thinking. It is your self-awareness of how your mind works. It is thinking about thinking.
We have to know how our minds work in order to adjust their activity. Fortunately, we have processes that track our thinking. We have an executive process which keeps track of our systems and our general progress.
Our minds gather information, structure it and use it. All three parts are important: gathering, structuring and using.
The technical name for gathering is encoding. It is a critical first step. If we don’t gather information–if we don’t put it in–it is not available to use.
As it turns out, we are both exceptionally good and exceptionally bad at gathering information. On the good side, we can listen to a band and focus more on one instrument than another. It’s like zoom-listening. We can switch back and forth between vocals, guitar, piano and bass. We are great at taking a complex audio input and dividing it into separate threads.
We are also very good at hearing our name in a crowded room. This is called the Cocktail Party Effect. Surrounded by lots of conversations, we can “tune out” all the ones that don’t interest us and have a conversation with one person. This is remarkable.
Even more remarkable is that in the middle of this intense conversation surrounded by noise, we can detect someone in another conversation saying our name. This is a great skill, and one that is not well understood.
English psychologist Donald Broadbent proposed a filter theory to explain the cocktail party effect. He found that two messages delivered at the same time to both ears (like two people talking to you at the same time) made it hard to recall either of them. But when two messages were delivered separately (one to each ear at the same time), people were able to listen to one and ignore the other.
Broadbent’s filter theory now has a lot of exceptions. We process more of what we’re ignoring than he thought. The main premise remains: there is no identification without attention. Once our priority is set, our attention is focused. We give our main attention to one stream but process the rest in the background. The background processing isn’t listening to everything but more a matter of looking for exceptions. We switch our attention if we detect key words (our name, “fire”) or sounds (a baby’s cry, a loud bang).
On the bad side, once our focus is set, we miss things we assume we would notice. We overlook unexpected objects, including a gorilla walking through the middle of a group of basketball players.
In a now-famous study, Christopher Chabris and Daniel Simons filmed two groups of basketball players tossing a ball to each other. Subjects were told to watch one group and ignore the other. The task was to count how many times the ball was thrown by one team. In the middle of the film, while all the moving and throwing is going on, a man dressed in a gorilla suit enters, thumps his chest and walks off screen.
About half of the viewers notice the gorilla; half do not.
We usually expect we would notice the unusual. As it turns out, it may just be chance. We either see it or we are inattentionally blind: our attention system doesn’t pick it up on our mental radar.
In addition to not noticing unexpected objects (inattentional blindness), we often don’t notice changes in things we are attending to. This is change blindness. When images flash on and off, we assume the new image is the same as the old one. Film editors take advantage of this after noticing that most of the audience doesn’t detect even major changes in background images.
Our eyes don’t stay still. We are not like birds. We are always scanning. These rapid movements (saccades) make our eyes jump from point to point. We don’t notice an image has changed if the change occurs during a saccade.
If we are talking to a person and are interrupted by someone carrying a blackboard between us, we assume everything is the same when the person reappears. We don’t notice they are wearing a new shirt or that it is a completely different person.
Both inattentional blindness and change blindness are part of a larger cognitive rule. Our brains ignore steady-state information. You don’t know where your left elbow is until I mention you have one. When your attention is drawn to it, your body reports in. The brain says “Tell me if something changes but otherwise be quiet.”
This dismissal of steady-state information allows you to wear clothes without feeling them on your skin. It allows you to get used to a busy or noisy environment. It allows you to ignore that you have blood vessels in front of your visual receptors.
Attention is important to learning because it is a minimal requirement. You can’t learn it if you can’t see it. You can’t see it if you don’t notice it. Attention doesn’t guarantee learning but there is no learning without attention. Attention is necessary but not sufficient for learning to occur.
We use our attention to make use of the information coming in. We tend not to respond to individual elements; we form them into structures we can use. It is not structure making for the love of structures. It is structuring information to do something with it.
Derek Cabrera suggests there are four universal structuring factors: distinctions, systems, relationships and perspectives. He sees them as skills we need to develop.
We need the skill of making more and more refined distinctions between ideas or objects. At first, every animal we meet is a dog. Then we learn the difference between dogs and cats and cows. Then we learn to distinguish between different breeds of dogs. I call this skill splitting.
We need the skill of seeing things as a system. We start with our family, then learn there are other families in the neighborhood. Then we learn we are part of a city, region, country and continent. Cabrera calls this skill systems; some call it lumping. I call this skill organizing.
We need the skill of seeing relationships between ideas. We take one class and discover a whole new area of knowledge. As we take other classes, we learn that some segments are the same. Some ideas in chemistry are present in other chemistry classes, in geology, in history and in public affairs. I call this connection skill relating.
We need the skill of seeing things from different perspectives. If I ask you to remember your house, you’ll recall certain items. But if I ask you to pretend you are a realtor, a buyer or a burglar, you will probably recall different items. Learning to see things from different perspectives gives you new insights. Since you’re always looking for something new, I call this skill prospecting.
I have a fifth skill to add. We need the skill of editing. Mental structures need modification and refinement. In addition to making distinctions, focusing on systems, tracking relationships and taking perspectives, we need to learn how to modify our cognitive structures. Sometimes we get stuck with a set view that no longer serves us well. I call this skill editing.
I convert Cabrera’s DSRP into ROPES: relating, organizing, prospecting, editing and splitting. Both models recognize that we incorporate new information into what we already know. Learning doesn’t occur in isolation. We add new parts to existing cognitive structures.
We collect information, add it to what we already know, and use it. Learning has a practical aspect. Unlike our closets, the brain doesn’t store things it doesn’t use. Consequently, the type of encoding depends on its use.
Visual encoding. This is the process of converting information into mental pictures. An obvious example is seeing an object or looking at a photograph. We use the same system regardless of whether we are looking at the Mona Lisa, a flower or a child’s drawing.
We capture a mental image of it plus an emotional reaction. The visual cortex processes the scene and the amygdala processes the emotion. A picture of Mom evokes an emotional reaction as well as a recognition of facial features.
Acoustic encoding. Everything we look at is processed as an image, unless it is made of words. Reading is translating symbols into sounds and sounds into images. Listening to an audio book produces the same mental images as reading a book because both are processed acoustically.
Tactile encoding. We process how something feels (smooth, soft, dense) with our tactile system. We are interpreting the vibrations on the skin and the pressure on touch receptors. We can both feel the texture of a keyboard and ignore it while we type.
Semantic encoding. When we need to use words, we encode items into our semantic system. We are quite gifted at extracting meaning from our inputs and storing that information to be used in the future.
Elaboration encoding. Elaboration is the process of associating information with other information. It combines new inputs with old information. We constantly update our structures. How you think about something depends on the day. It depends on what went on before and on what you expect to happen tomorrow.