Cognitive [Learning, Language, Memory, Thinking] (Psychology Subject) Flashcards
Learning
*the relatively permanent or stable change in behavior as the result of experience
Classical conditioning (associative learning)
- Ivan Pavlov; Pavlovian conditioning
- pairing a neutral stimulus with a not-so-neutral stimulus; this creates a relationship between the two
Unconditioned Stimulus (UCS)
*not-so-neutral stimulus
- in Pavlov’s dog experiments, the UCS is the food
— without conditioning, the stimulus elicits the response of salivating
- unconditioned because they don’t have to be learned
— reflexive or instinctual behaviors
Unconditioned Response (UCR)
*naturally occurring response to the UCS
- in Pavlov’s dog experiment, it was salivation in response to the food
- unconditioned because they don’t have to be learned
— reflexive or instinctual behaviors
Neutral Stimulus (NS)
*a stimulus that doesn’t produce a specific response on its own
- In Pavlov’s dog experiment, this was the light/bell before he conditioned a response to it
Conditioned Stimulus (CS)
*the neutral stimulus once it’s been paired with the UCS
- has no naturally occurring response, but it’s conditioned through pairings with a UCS
- in Pavlov’s dog experiment, the CS (the light) is paired with the UCS (food), so that the CS alone will produce a response
Conditioned Response (CR)
*the response that the CS elicits after conditioning
- the UCR and the CR are the same (i.e., salivating to food or a light)
Simultaneous Conditioning
*the UCS and NS are presented at the same time
Higher-order conditioning/second-order conditioning
*a conditioning technique in which a previous CS now acts as a UCS
- in Pavlov’s dog experiment, once the light reliably elicited salivation in the dogs, the experimenter could use the light as the UCS; food is no longer used; the light could then be paired with a bell (the new CS) until the bell alone elicited salivation
Forward conditioning
*pairing of the NS and the UCS in which the NS is presented before the UCS
- two types:
— delayed conditioning
— trace conditioning
Delayed conditioning
*the presentation of the NS begins before that of the UCS and lasts until the UCS is presented
Trace conditioning
*the NS is presented and terminated before the UCS is presented
Backward conditioning
*the NS is presented after the UCS is presented
- in Pavlov’s dog experiment, they would have been presented with the food and then with the light
- proven ineffective
- only accomplishes inhibitory conditioning (later the dogs would have a harder time pairing the light and food even if they were presented in a forward fashion)
Taste aversion learning
*occurs when food or drink becomes associated with an aversive stimulus (nausea, vomiting), even if the food or drink itself didn’t actually cause the nausea/vomiting
- type of classical conditioning but differs in:
— acquisition usually takes only one pairing (vs. the many pairings typically needed)
— the response takes a long time to extinguish (vs. extinction beginning as soon as the UCS is removed)
- evolutionarily adaptive so human/animal doesn’t eat poisonous food and die
Law of effect
*a cause-and-effect chain of behavior revolving around reinforcement
- E. L. Thorndike
- precursor of operant conditioning
- “connectionism” because learning occurs through formation of connections between stimuli and responses
- “Puzzle Box” experiment (cats learning complex tasks through trial and error)
Theory of association
*grouping things together based on the fact that they occur together in time and space
— organisms associate certain behaviors with certain rewards and certain cues with certain situations
- Kurt Lewin
- forerunner of behaviorism
School of behaviorism
- John B. Watson
- everything could be explained by stimulus-response chains
— conditioning was the key factor in developing these chains
- only objective and observable elements were of importance to psychology
Hypothetico-deductive model
*designed to try to deduce logically all the rules that govern behavior
- Clark Hull
— created equation involving input variables leading to output variables; included intervening variables in between that’d change the outcomes
Radical behaviorism
*school of thought in which it’s believed that all behavior, animal and human, can be explained in terms of stimuli and responses, or reinforcements and punishments
- no allowance for how thoughts/feelings might factor into the equation
Operant conditioning (associative learning)
*aims to influence a response through various reinforcement strategies
- B. F. Skinner
- Skinner Box (rats repeated behaviors that won rewards and gave up on behaviors that didn’t)
— shaping (differential reinforcement of successive approximations): the process rewarded rats with food pellets first for being near the lever, then for touching it
- also known as instrumental conditioning
Primary reinforcement
*a natural reinforcement; reinforcing on its own without the requirement of learning
- i.e., food
Secondary reinforcement
*a learned reinforcer
- i.e., money, prestige, verbal praise, awards
- often learned through society
- instrumental in token economies
Positive reinforcement
*adding something desirable to increase likelihood of a particular response
- some subjects are not motivated by rewards because they don’t believe/understand that the rewards will be given
Negative reinforcement
*reinforcement through the removal of a negative event
- i.e., taking away something undesirable to increase the likelihood of a particular behavior
- NOT punishment/delivery of a negative consequence
Continuous reinforcement schedule
*every correct response is met with some form of reinforcement
- facilitates the quickest learning, but also the most fragile learning; as soon as rewards halt, the animal stops performing
Partial reinforcement schedules
*not all correct responses are reinforced
- may require longer learning time, but once learned, behaviors are more resistant to extinction
- types:
— fixed ratio schedule
— variable ratio schedule
— fixed interval schedule
— variable interval schedule
Fixed ratio schedule
*reinforcement delivered after a consistent # of responses
- ratio of 6:1 -> every 6 correct responses = 1 reward
Variable ratio schedule
*reinforcements delivered after different #s of correct responses
- ratio can’t be predicted
- learning less likely to become extinguished
- one performs a behavior not because it’s been rewarded but because it COULD be rewarded on the next try
Fixed interval schedule
*rewards come after the passage of a certain period of time rather than the # of behaviors
- i.e., if fixed interval is 5 minutes, then rat will be rewarded the first time it presses the lever after a 5-minute period has elapsed, regardless of what it did during the preceding 5 minutes
Variable interval schedule
*rewards are delivered after differing time periods
- second most effective strategy in maintaining behavior; length of time varies, so one never knows when the reinforcement is just around the corner
- slow and steady learning
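As an illustrative aside (not part of the original cards), the four partial schedules can be sketched as simple decision rules; the function names and the specific numbers (a 6:1 ratio, a 5-minute interval) are assumptions chosen to match the examples above.

```python
import random

# Illustrative sketch: each function answers "does this correct response
# earn a reward?" under one partial reinforcement schedule.

def fixed_ratio(responses_since_reward, ratio=6):
    # fixed ratio: reward after a consistent number of responses (a 6:1 ratio)
    return responses_since_reward >= ratio

def variable_ratio(responses_since_reward, required_responses):
    # variable ratio: required_responses is redrawn after every reward,
    # so the organism can't predict which response pays off
    return responses_since_reward >= required_responses

def fixed_interval(seconds_since_reward, interval=300):
    # fixed interval: reward the first response after 5 minutes have elapsed,
    # regardless of how many responses occurred during the wait
    return seconds_since_reward >= interval

def variable_interval(seconds_since_reward, required_wait):
    # variable interval: required_wait is redrawn after every reward,
    # so the length of the wait can't be predicted
    return seconds_since_reward >= required_wait

# Example: redraw the unpredictable requirements after each reward
next_ratio = random.randint(1, 11)    # averages around 6 responses
next_wait = random.uniform(60, 540)   # averages around 5 minutes
```

Because the variable schedules redraw their requirement after every reward, the next reward can never be predicted, which is why behavior learned under them is the most resistant to extinction.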
Token economy
*artificial mini-economy usually found in prisons, rehabilitation centers, or mental hospitals
- individuals in the environment are motivated by secondary reinforcers (tokens)
- desirable behaviors reinforced with tokens, which can be cashed in for more desirable reinforcers (i.e., candy, books, privileges, cigarettes)
Stimulus
*any event that an organism reacts to
- first link in a stimulus-response chain
Stimulus discrimination
*the ability to discriminate between different but similar stimuli
- i.e., doorbell ringing vs. phone ringing
Stimulus generalization
*making the same response to a group of similar stimuli; the opposite of stimulus discrimination
- i.e., not all fire alarms sound alike, but they all require the same response
Undergeneralization
*failure to generalize a stimulus
Response learning
*form of learning in which one links together chains of stimuli and responses
- one learns what to do in response to particular triggers
- i.e., leaving a building in response to a fire alarm
Aversive conditioning
*uses punishment to decrease the likelihood of a behavior
- i.e., drug antabuse used to treat alcoholism
Avoidance conditioning
*occurs when you avoid a predictable, unpleasant stimulus
- teaches an animal how to avoid something the animal doesn’t want
Escape conditioning
*occurs when you have to escape an unpredictable, unpleasant stimulus
- teaches an animal to perform a desired behavior to get away from a negative stimulus
Punishment
- promotes extinction of an undesirable behavior; after unwanted behavior is performed, punishment is presented
- acts as a negative stimulus, which should decrease the likelihood that the earlier behavior will be repeated
- positive punishment: addition of something undesirable to the situation to discourage a particular behavior
- negative punishment: taking away something desirable to discourage a particular behavior
- primary punishment: inherently unpleasant; most species don’t have to learn about its consequences
- secondary punishment: one must come to understand it as a negative consequence
- Skinner preferred to extinguish behavior by stopping reinforcement as opposed to applying a punishment
Autonomic conditioning
*evoking responses of the autonomic nervous system through training
Extinction
*reversal of conditioning; goal to encourage an organism to stop doing a particular behavior
- accomplished by repeatedly withholding reinforcement for a behavior or by disassociating the behavior from a particular cue
- in classical conditioning, extinction begins the moment the UCS and NS are no longer paired
- in operant conditioning, one might see an extinction burst (behavior initially increases before it begins to diminish)
Spontaneous recovery
*reappearance of an extinguished response, even in the absence of further conditioning or training
Superstitious behavior
*occurs when someone “learns” that a specific action causes an event, when in reality the two are unrelated
- i.e., a football fan wearing the same shirt to every game because their team happened to win each time they wore it
Chaining
*act of linking together a series of behaviors that ultimately result in reinforcement
- one behavior triggers the next, and so on
- i.e., learning the alphabet
Autoshaping
*an apparatus that allows an animal to control its own reinforcement through its behavior
- i.e., bar pressing or key pecking
- animal is shaping its own behavior
Overshadowing
*an animal’s inability to infer a relationship between a particular stimulus and response due to the presence of a more prominent stimulus
- a classical conditioning concept
John Garcia
- discovered that animals are programmed through evolution to make certain connections
- preparedness: certain associations are learned more easily than others
- studied “conditioned nausea” with rats and found that invariably nausea was perceived to be connected with food or drink
— unable to condition a relationship between nausea and an NS (i.e., a light)
- Garcia effect
— explains why humans who become sick even once from eating a particular food may never be able to eat that food again
— connection is automatic, needs little conditioning
Habituation (nonassociative learning)
*decreased responsiveness to a stimulus as a result of increasing familiarity with the stimulus
- i.e., entering a room with a buzzing light; at first you’re constantly aware of the noise, but after a while you stop noticing it
Dishabituation (nonassociative learning)
*when you remove the stimulus to which the organism had become habituated
- if you reintroduce the stimulus, the organism will start noticing it again
Sensitization (nonassociative learning)
*increasing sensitivity to the environment following the presentation of a strong stimulus
Desensitization (nonassociative learning)
*decreasing sensitivity to the environment following the presentation of a strong stimulus
- often used as a behavioral treatment to counter phobias
Social learning theory; social cognitive theory (observational learning)
*individuals learn through their culture what’s acceptable and what’s unacceptable
- Albert Bandura
- developed to explain how we learn by modeling; we don’t need reinforcements/associations/practice to learn
- Bobo doll study (children mirroring adults taking out their frustrations on a clown doll)
Vicarious reinforcement (observational learning)
*a person witnesses someone else being rewarded for a particular behavior, which encourages them to do the same
Vicarious punishment (observational learning)
*a person witnesses someone being punished for a behavior, which makes the witness less likely to engage in that behavior
Insight learning
*when the solution to a problem appears all at once rather than building up to a solution
- Wolfgang Kohler’s chimpanzee and banana experiment (used boxes to reach banana)
- a key element in Gestalt psychology because a person can perceive the relationships between all the important elements in a situation and find a solution greater than the sum of its parts
— Gestalt psychology describes how people organize elements in a situation and think about them in relation to one another
Latent learning
*learning that happens but does not demonstrate itself until it’s needed later on
- i.e., watching someone play chess many times and playing chess later, realizing you’ve learned some new tricks
- Edward C. Tolman and the three rat groups experiment (once food was placed at the end of the maze, previously unrewarded rats quickly learned to run to it)
Incidental learning
*unrelated items are grouped together
- like accidental learning
- i.e., pets associating cars with vets
- opposite of intentional learning
Donald Hebb
- created an early model of how learning happens in the brain, through formation of sets of neurons that learn to fire together
Perceptual/concept learning
*learning about something in general rather than learning specific stimulus-response chains
- individual learns about something (i.e., history) rather than any particular response
- Tolman’s experiments with animals forming cognitive maps of mazes rather than simple escape routes; when routes were blocked, the animals had an internal sense of where the end was (purposive behavior)
Harry Harlow
- demonstrated that monkeys became better at learning tasks as they acquired different learning experiences
- eventually, monkeys could learn after only one trial
- “learning to learn”
Motivation and performance
*an animal must be motivated in order to learn and to act
- individuals are at times motivated by primary or instinctual drives (hunger or thirst); other times motivated by secondary or acquired drives (money, other learned reinforcers)
- exploratory drive may exist
Fritz Heider’s balance theory, Charles Osgood & Percy Tannenbaum’s congruity theory, Leon Festinger’s cognitive dissonance theory
- all agree that what drives people is a desire to be balanced with respect to their feelings, ideas, or behaviors
- along with Clark Hull’s drive-reduction theory, these theories are called into question by the fact that individuals often seek out stimulation, novel experience, or self-destruction
Hull’s performance = drive x habit
*individuals are first motivated by drive, and then act according to old successful habits
- they’ll do what has worked previously to satisfy the drive
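A minimal formal sketch of the card above (notation mine, not from the card):

```latex
% Notation mine; the card gives the relationship in words.
\text{Performance} = \text{Drive} \times \text{Habit strength}
```

Because the relationship is a product, zero drive or zero habit strength means no performance: the organism must both want something and already have a habit for getting it.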
Edward Tolman’s performance = expectation x value
*people are motivated by goals that they think they might actually meet
- another factor is how important the goal is
- also known as expectancy-value theory
- Victor Vroom applied this theory to individual behavior in large organizations; those lowest on the totem pole don’t expect to receive company incentives, so the incentives do little to motivate them
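A minimal formal sketch of expectancy-value (notation mine, not from the card):

```latex
% Notation mine; the card gives the relationship in words.
\text{Motivation} = \text{Expectancy of success} \times \text{Value of the goal}
```

The multiplicative form captures the Vroom point above: if expectancy is near zero (you don’t believe you’ll ever receive the incentive), the product is near zero no matter how valuable the incentive is.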
Henry Murray and later David McClelland’s Need for achievement (nAch)
- may be manifested through a need to pursue success or a need to avoid failure; either way, the goal is to feel successful
- John Atkinson suggested a theory of motivation in which people set realistic goals with intermediate risk, feel pride in accomplishment, and want to succeed more than they fear failure
- because success is so important, people are unlikely to set unrealistic/risky goals or to persist when success is unlikely
Neal Miller’s approach-avoidance conflict
*the state one feels when a certain goal has both pros and cons
- the further one is from the goal, the more one focuses on the pros or the reasons to approach the goal
- the closer one is to the goal, the more one focuses on the cons or the reasons to avoid the goal
Hedonism
*theory that individuals are motivated solely by what brings the most pleasure and the least pain
The Premack principle
*idea that people are motivated to do what they do not want to do by rewarding themselves afterward with something they like to do
- i.e., child rewarded with dessert after they eat spinach
Abraham Maslow’s Hierarchy of Needs
*demonstrates that physiological needs take precedence
- once those are satisfied, a person will work to satisfy safety needs, followed by love and belonging needs, self-esteem needs, and finally the need to self-actualize
M. E. Olds
- performed experiments in which electrical stimulation of pleasure centers in the brain was used as positive reinforcement
— animals would perform behaviors to receive the stimulation
- viewed as evidence against the drive-reduction theory
Arousal
*part of motivation; an individual must be adequately aroused to learn/perform
- Donald Hebb postulated that a medium amount of arousal is best for performance
— too little/too much could hamper performance of tasks
- for simple tasks, optimal level of arousal is towards high end
- for complex tasks, optimal level of arousal is towards low end
- optimal arousal for any type of task is never at the extremes
- Yerkes-Dodson effect
- on a graph, optimal arousal appears as an inverted U-curve, with lowest performance at the extremes of arousal
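A minimal formal sketch of the inverted U (the quadratic form and the symbols a, a*, and k are my assumptions; the card specifies only the qualitative shape):

```latex
% Sketch only: the quadratic form is an assumption; the card specifies
% only the qualitative inverted-U shape.
\text{Performance}(a) \approx P_{\max} - k\,(a - a^{*})^{2},
\qquad a^{*}_{\text{simple}} > a^{*}_{\text{complex}}
```

Here a is the arousal level, a* the task-dependent optimum, and k > 0 controls how quickly performance falls off toward the extremes; plotting this gives the inverted U-curve described above, with the optimum sitting higher for simple tasks than for complex ones.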
State dependent learning
*concept that what a person learns in one state is best recalled in that state
Continuous and discrete motor tasks
- continuous is easier to learn than discrete
- continuous task—riding a bicycle; one continuous motion that, once started, continues naturally
- discrete task—setting up a chessboard; one that’s divided into different parts that don’t facilitate the recall of each other; placing pieces in proper positions involves different bits of information; not one unbroken task
Positive transfer
*previous learning that makes it easier to learn another task later
- negative transfer: previous learning that makes it more difficult to learn a new task
Age
- affects learning
- humans primed to learn between ages 3 and 20
- 20 to 50, ability to learn remains fairly constant
- 50+, ability to learn drops
Learning curve
*when learning something new, the rate of learning changes over time
- i.e., when learning a language someone may learn a bunch of vocabulary and basic sentence structure, but as they try to learn more complex grammatical constructions, rate of learning may decrease
- Hermann Ebbinghaus
- positively accelerated curve: rate of learning is increasing
- negatively accelerated curve: rate of learning is decreasing
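A minimal formal sketch of a negatively accelerated curve (the exponential form and the symbols P_max, k, and n are my assumptions for illustration):

```latex
% Sketch only: the exponential form is an assumption used to illustrate
% a negatively accelerated learning curve.
P(n) = P_{\max}\,\bigl(1 - e^{-kn}\bigr)
```

Performance P approaches a ceiling P_max as trials n accumulate, so the gain per trial shrinks, which is what "negatively accelerated" means; a positively accelerated curve is the mirror image, with slow early gains and a slope that increases with practice.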
Educational psychology
*concerned with how people learn in educational settings
- examine things like student and teacher attributes and instructional processes in the classroom
- educational psychologists are frequently employed by schools and help when students have academic/behavioral problems
- Thorndike wrote first educational psychology textbook in 1903; developed methods to assess students’ skills and teaching effectiveness
Aptitude
*a set of characteristics that are indicative of a person’s ability to learn
Cooperative learning
*involves students working on a project together in small groups
Lev Vygotsky
- described learning through the zone of proximal development; a lower-achieving student in a particular subject is placed with someone who is just a bit more advanced; the lower-achieving student thus raises their game through the interaction
- scaffolding learning: occurs when a teacher encourages the student to learn independently and provides assistance only with topics/concepts that are beyond the student’s capability
— as the student continues to learn, the teacher provides less assistance to encourage independence
- Vygotsky’s theories on education are used in classrooms worldwide
Language
*the meaningful arrangement of sounds
Psycholinguistics
*the study of the psychology of language
Phonemes
*discrete sounds that make up words but carry no meaning
- i.e., ee, p, or sh
- infants first make these sounds when learning language
- phonics is learning to read by sounding out the phonemes
- all words in a language are created from basic phonological rules of sound combinations
Morphemes
*made up of phonemes; smallest units of meaning in language
- words/parts of words that have meaning are morphemes
- i.e., the word boy and the suffix -ing
Phrase
*a group of words that when put together function as a single syntactic part of a sentence
- i.e., “walking the dog” is a noun phrase that could function as the subject of a sentence if it were followed by a verb
Syntax
*the arrangement of words into sentences as prescribed by a particular language
Grammar
*the overall rules of the interrelationship between morphemes and syntax that make up a certain language
Morphology or morphological rules
*grammar rules; how to group morphemes
Prosody
*tone inflections, accents, and other aspects of pronunciation that carry meaning
- prosody is the icing on the cake of grammar and meaning
- infants can more easily differentiate between completely different sounds than between different expressions of the same sound
Phonology
*the study of sound patterns in languages
Semantics
*the study of how signs and symbols are interpreted to make meaning