Chapter 2 and 3 Flashcards
Sources of ideas
personal interest, observation, reading, problem solving
Research that is directed toward solving practical problems is often classified as ____ research
Research that is directed toward solving practical problems is often classified as APPLIED research
Studies that are intended to solve theoretical issues are classified as ___ research
Studies that are intended to solve theoretical issues are classified as BASIC research
Primary vs Secondary Source
Primary: firsthand report of observations or research results written by the individual who actually conducted the research and made the observation
Secondary: description or summary of another person’s work. Written by someone who did NOT participate in the research or observations being discussed.
Issue with secondary sources
- they could be biased or inaccurate
- not directly from the researcher/observer
- only pieces of the original study were taken and perhaps reshaped to fit their writing
- only share part of the truth. Sometimes can be distorted and false
What is an abstract?
a brief summary of the publication, usually fewer than 250 words
Do full-text databases usually contain more or less info on a subject?
less
What is in an introduction?
- the intro discusses previous research that forms the foundation for the current research study
- clear statement of the problem being investigated
- hypothesis and prediction
What is in the methods section?
details concerning the participants and the procedures used in the study
Results section
presents details of the statistical analysis.
Not usually important for generating new research ideas
Discussion section
summarizes the results of the study, stating the conclusions and noting potential applications.
- hypothesis supported?
- alternate explanations/limitations
What qualities must a hypothesis possess?
- must be LOGICAL: based on observation, previous research, etc.
- must be POSITIVE: indicates that a relationship does exist.
- must be TESTABLE: testable prediction for which data can be collected to support
- must be SIMPLE
- must be FALSIFIABLE: have a way to prove it wrong or support alternate hypotheses
hypothesis vs prediction
hypothesis is a more general statement, while a prediction is MEASURABLE and SPECIFIC
what is a testable hypothesis?
one for which all of the variables, events, and individuals can be defined and observed
What is a refutable hypothesis?
one that can be demonstrated to be false. It is possible for the outcome to be different from the prediction
What is the problem with this hypothesis?:
“For adults, there is no relationship between age and memory ability.”
It is not positive, ie. it does not indicate that a relationship exists between variables.
A PREDICTION THAT DENIES EXISTENCE IS UNTESTABLE.
A researcher designs a study to determine whether the number of syllables per word influences people’s ability to recall a list of 20 words. This study can be classified as ___ research
basic
features of pseudoscience
- hypotheses are not falsifiable
- scientific-sounding terminology
- supportive evidence is anecdotal or relies on “expert” testimony
- claims are vague and appeal to preconceived ideas
- claims are never revised to account for new data; conflicting data are ignored
- if tests are reported, the methodology is unscientific and the data questionable
4 GOALS OF PSYCHOLOGICAL SCIENCE RESEARCH
- DESCRIBING BEHAVIOUR: careful observation and measurement
- PREDICTING BEHAVIOUR: identifying systematically related variables/events
- DETERMINING CAUSE OF BEHAVIOUR: cause and effect
- EXPLAINING BEHAVIOUR
CRITERIA FOR CAUSAL CLAIMS
- covariation of cause and effect
- temporal precedence
- no plausible alternative explanations
What does “covariation of cause and effect” mean?
do the two events or variables happen together?
temporal precedence?
does one event happen before the other
in the claim: “violent crime and ice cream sales increase at the same time,” what is an alternate explanation/third variable?
warm weather causes an increase in both violent crime and ice cream sales
Applied vs basic research
applied: solving practical problem
basic: gaining theoretical understanding
What is a variable?
anything that can have more than one level (e.g., intelligence, GSR, happiness, etc.)
Types of variable
- situational
- response
- participant
- mediating
Situational variable
e.g., time of day, lighting in room
Response variable
- what is measured
- e.g., behaviour, pressing a button, answering a test, etc.
Participant variable
- what makes an individual unique
- e.g., English as a second language, age, gender, etc
Mediating variable
- connects one variable to another
- alternative explanation
- e.g., poverty = lower lifespan. Mediating variable would be access to healthcare
What is an operational definition?
- how am I going to measure/change a variable?
- procedure for indirectly measuring and defining a variable that cannot be observed or measured directly.
- specifies a measurement procedure (a set of operations) for measuring an external, observable behaviour, and uses the resulting measurements as a definition and a measurement of the hypothetical construct.
What is a theory in behavioural science?
A theory is a set of statements about the mechanisms underlying a particular behaviour.
A good theory generates predictions about behaviour.
What are constructs?
hypothetical variables or mechanisms that help explain and predict behavior in a theory.
e.g., anxiety or self-esteem
Give an example of how a construct can be influenced by external stimuli and, in turn, can influence external behaviour
external factors such as rewards can affect motivation (a construct), and motivation can then affect performance (a behaviour)
how can hunger be operationally defined?
as the number of hours of food deprivation.
e.g., in an experiment, measure how much food a rat eats; that amount defines how hungry it is.
Limitations of operational definitions
- leaves out characteristics
- add extra characteristics
- definitions can impact conclusions
most common way intelligence is operationally defined?
IQ test
How can you reduce the problem of leaving out important components of a construct when operationally defining it?
One way to reduce this problem is to INCLUDE TWO OR MORE DIFFERENT PROCEDURES TO MEASURE THE SAME VARIABLE.
Whenever the variables in a research study are hypothetical constructs, you must use ___ ___ to define and measure the variables
operational definitions
Positive relationship
two measurements change together in the same direction
Negative relationship
two measures change in opposite directions so that people who score high on one measure tend to score low on the other
a consistent positive relationship produces a correlation near ___, a consistent negative relationship produces a correlation near ___, and an inconsistent relationship produces a correlation near ___
a consistent positive relationship produces a correlation near +1, a consistent negative relationship produces a correlation near -1, and an inconsistent relationship produces a correlation near 0.
an inconsistent relationship, or a correlation near zero, would indicate that a test is not a ___ measure of a variable
valid
what is validity?
Does the measurement procedure actually measure what it claims to be measuring?
validity of research design
in an experiment where x causes y, how valid is that claim?
6 kinds of validity of measurement:
- face validity
- concurrent validity
- predictive validity
- construct validity
- convergent and divergent validity
Face validity
an unscientific form of validity demonstrated when a measurement procedure superficially appears to measure what it claims to measure.
ie. does the measurement technique look like it measures the variable that it claims to measure?
Concurrent validity
demonstrated when scores from a new measure are directly related to scores obtained from an established measure of the same variable
Predictive validity
does the measure predict later behaviour?
When the measurements of a construct accurately predict behaviour, the measurement procedure is said to have PREDICTIVE VALIDITY
Construct validity
Requires that the score obtained from a measurement procedure behave exactly the same as the variable itself.
Construct validity is based on many research studies that use the same measurement procedure and grows gradually as each new study contributes more evidence.
ie. does the measure match the theoretical construct?
example of construct validity
a therapist gives the BDI to clients and finds that those he deemed to be depressed scored higher; this expert judgment supports the BDI as a valid measure of depression
Convergent validity
demonstrated by a strong relationship between the scores obtained from two or more different methods of measuring the SAME construct.
-ie. are scores on the measure similar to other measures of the same construct?
Divergent validity
demonstrated by showing little or no relationship between the measurements of two DIFFERENT constructs
When is a measurement procedure said to have reliability?
if it produces identical (or nearly identical) results when it is used repeatedly to measure the same individual under the same conditions.
RELIABILITY = IS THE MEASURE STABLE/CONSISTENT?
The concept of reliability is based on the assumption that the variable being measured is ____ or ___
stable or constant.
ie. it won’t change dramatically from one day to the next
equation to measure error in a measurement procedure
Measurement score = true score + error
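The equation can be illustrated with a quick simulation (a hypothetical sketch; the true score and error size are made up). The point is that with many repeated measurements, random error tends to average out and the mean approaches the true score:

```python
# Sketch of "measured score = true score + error" with simulated noise.
# TRUE_SCORE and the error spread (sigma = 5) are hypothetical values.
import random

random.seed(0)               # fixed seed so the sketch is reproducible
TRUE_SCORE = 100.0

def measure():
    # Observer, environmental, and participant error modelled as random noise.
    return TRUE_SCORE + random.gauss(0, 5)

scores = [measure() for _ in range(1000)]
average = sum(scores) / len(scores)
print(round(average, 1))     # close to TRUE_SCORE: random error averages out
```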
most common sources of error and their definitions
- OBSERVER ERROR: individual who makes the measurements can introduce simple human error into the measurement process (e.g., millisecond errors from pressing handheld stopwatch)
- ENVIRONMENTAL CHANGES: small changes in environment from one trial to another may influence measurements. It is hard to replicate exact environment multiple times.
- PARTICIPANT CHANGES: participant can change between measurements. e.g., degree of focus and attention or feeling hunger during a test
3 reliability measures:
- test-retest reliability: same individual should score similarly on 2 successive tests
- inter-rater reliability: degree of agreement between 2 observers who simultaneously record measurements of the behaviour.
- split-half reliability: splitting the items on a questionnaire or test in half, computing a separate score for each half, and then calculating the degree of consistency between the two scores for a group of participants. The two halves should score similarly.
Reliability vs validity
reliability: consistent scores
validity: measuring what is intended to be measured
Is it possible to have a valid measure that is not reliable?
NO
Is it possible to have a reliable measure that is not valid?
YES
What is the Barnum effect?
people tend to agree with vague personality descriptions that appear to apply to them.
How can the Barnum effect be cancelled?
by negative attributes
What is the accuracy of a measurement?
the degree to which the measurement conforms to the established standard
can a measurement process be valid and reliable, but not accurate?
yes.
e.g., a car is travelling at 40mph and the speedometer consistently (reliably) reads 50mph. When the car is going 30 mph, the speedometer consistently reads 40mph.
The speedometer correctly differentiates different speeds, which means it is producing valid measurements of speed. It is consistent, which means it is reliable. But its readings do not match the true speed, so it is inaccurate.
The set of categories used for classification is called the ___ ___ ____
scale of measurement
4 different types of measurement scales:
- nominal
- ordinal
- interval
- ratio
Nominal scale
- qualitative data arranged in discrete, not ordered categories.
- categories have names but are not related to each other in any systematic way.
Ordinal scale
- data arranged in order of magnitude
- e.g., small, medium and large; letter grades; junior, sophomore, and senior year; etc.
- with measurements from an ordinal scale, we can determine whether 2 individuals are different and the direction of that difference, but not the magnitude of the difference between the two individuals.
Interval scale
- series of equal intervals
- ZERO IS ARBITRARY
- e.g., temperature in degrees Celsius, time of day, etc.
Ratio scale
- series of equal intervals
- ZERO MEANS SOMETHING
- e.g., weight
Why can’t you multiply or divide values in an interval scale?
because zero is arbitrary, ratios are not meaningful (e.g., 20°C is not twice as hot as 10°C)
What is the issue with rating scales?
They are often treated like interval scales, when they should be ordinal.
Researchers assume that differences between labels are equal.
e.g., rating happiness from 1 to 5
3 categories of modalities of measurement
- self-report measures
- physiological measures
- behavioural measures
What category of external expression to define fear is used in each scenario:
- individual claims to be afraid
- individual refuses to travel
- individual has an increased heart rate
- self-report
- behavioural
- physiological
pros and cons of self-report measures
pros: -can ask people (almost) everything
- useful when behaviour cannot be observed (e.g., past)
- inexpensive and easy
cons: -unreliable memories
- untruthfulness
- unwilling to answer
- don’t know how to answer
pros and cons of physiological measures:
pros: -objective
- measures what people can’t or don’t know how to measure
cons: -expensive
- there could be alt. explanations to physiological changes
- can sometimes be controlled (e.g., control HR in lie detectors)
pros and cons of behavioural measures
pro: -more control (e.g., in lab)
- more generalizable/realistic (in naturalistic observations)
con: -behaviour may be temporary or situational
applications of barnum effect
personality tests, horoscopes, fortune telling
can the mode be determined for nominal, ordinal, interval, and ratio scales?
yes
Can the median be determined for nominal, ordinal, interval, and ratio scales?
Only for ordinal, interval and ratio. Not for nominal scale.
*in ordinal, you can determine the “center” category, but not the difference between them
Can the mean determined for nominal, ordinal, interval, and ratio scales?
only for interval and ratio scales
Range effect
sensitivity problem when scores obtained in a research study tend to cluster at one end of the measurement scale. Includes ceiling and floor effects
Ceiling effect
when the range of scores is restricted at the high end.
Usually task is too easy
floor effect
clustering of scores at the low end of the scale.
Usually task is too hard
What is an artifact in research?
a nonnatural feature accidentally introduced into something being observed.
Examples: experimenter bias, demand characteristics, and participant reactivity.
Experimenter bias
when measurements obtained in a study are influenced by the experimenter’s expectations or personal beliefs regarding the outcome of the study
strategies for limiting experimenter bias
- standardize or automate the experiment
- use a blind experiment (single or double)
single-blind research study
if the researcher does not know the predicted outcome
double-blind study
if both the researcher and the participants are unaware of the predicted outcome
demand characteristics
any of the potential cues or features of a study that (1) suggest to the participants what the purpose and hypothesis is, and (2) influence the participant’s response or behaviour in a certain way
Reactivity
occurs when participants modify their natural behaviour in response to the fact that they are participating in a research study or the knowledge that they are being measured
the good subject role
participants have identified the hypothesis and are trying to produce responses that support the investigator’s hypothesis
the negativistic subject role
The participants have identified the hypothesis of the study and are trying to act contrary to the investigator’s hypothesis
The apprehensive subject role
participants are overly concerned that their performance will be used to evaluate their abilities or personal characteristics. They give socially desirable responses rather than truthful ones
The faithful subject role
participants attempt to follow instructions to the letter and avoid acting on any suspicions they have about the purpose of the study
field study
a study conducted in a place that the participant or subject perceives as a natural environment
ways to limit effects of demand characteristics
- use deception on participants
- naturalistic strategy
- add filler items to questionnaire
- post-experiment interview to see if participants figured out the hypothesis