Research Methods Flashcards
Study revision
Communality
Science and its methods are nobody’s property, but are held in common by all humanity – methods are available to everybody.
Universalism
scientific validity doesn’t depend on the status or identity of the researchers – science is its own thing about trying to find out about the world
Organized scepticism
scientific claims must be exposed to critical scrutiny
Merton’s norms for science, simplified – scientific truth is provisional truth
ethical obligation
Research subjects are usually people, and we have a duty not to waste their time
Results of psychological research are often applied to how people are treated, so we’d better get it right
The Stroop Effect
reading has become so automatic that it’s hard to inhibit it
So when the colour of the print conflicts with the colour named, reaction time is slower
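As a rough sketch of how Stroop data are summarised, the reaction times below are made-up numbers (not real data); the point is simply that the incongruent condition should produce a slower mean:

```python
# Hypothetical reaction times (ms) for naming ink colours.
# Congruent: the word matches the ink colour; incongruent: it conflicts.
congruent = [620, 590, 640, 610, 600]
incongruent = [750, 720, 790, 760, 740]

mean_congruent = sum(congruent) / len(congruent)
mean_incongruent = sum(incongruent) / len(incongruent)

# The Stroop effect predicts slower responses when print colour
# and colour name conflict.
print(f"Congruent mean:   {mean_congruent:.0f} ms")
print(f"Incongruent mean: {mean_incongruent:.0f} ms")
print(f"Stroop cost:      {mean_incongruent - mean_congruent:.0f} ms")
```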
Dependent variable
The measure of behaviour we’re interested in, e.g., how many colours are named in the Stroop effect. Goes on the y-axis of a graph of our results (needs to be reliable and valid)
Independent variable
Manipulated by the experimenter to see if it affects the DV, e.g., whether the colour matches the name in the Stroop effect. Goes on the x-axis
Operational definition
Description of operations carried out by researcher to measure DV or to manipulate IV. Helps others to replicate study, and helps us to remain objective and avoid biasing our results.
Reliability
Whether we get the same results if we measure the same variable again under the same conditions.
Validity
Whether our variable really measures what we meant it to.
Population
All the events, scores etc. we are interested in. * e.g., heights of the entire PSYCH 109 class
Sample
Representative subgroup drawn from the population, preferably randomly. Used to draw conclusions about the whole population. E.g., heights of 10 randomly selected people in the class
Sampling error
Random samples drawn from the same population will give different results. Chance variation due to natural variability. Unavoidable, but minimised by using large samples – increasing the sample size reduces it.
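A minimal simulation of this idea, using a made-up population of heights: sample means from larger samples scatter less around the true mean than means from smaller samples.

```python
import random
import statistics

random.seed(1)
# Hypothetical population of 10,000 heights (cm), centred on 170.
population = [random.gauss(170, 10) for _ in range(10_000)]

def spread_of_sample_means(n, trials=500):
    """SD of the sample means over many random samples of size n."""
    means = [statistics.mean(random.sample(population, n))
             for _ in range(trials)]
    return statistics.stdev(means)

small = spread_of_sample_means(10)
large = spread_of_sample_means(100)

# Larger samples -> sample means cluster more tightly, i.e. less
# sampling error.
print(f"SD of sample means, n=10:  {small:.2f}")
print(f"SD of sample means, n=100: {large:.2f}")
```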
Sampling bias
When a sample does not truly represent its parent population, usually because it was not drawn randomly. E.g., a minority ethnic group may be underrepresented. Avoidable by random sampling.
Sample misrepresents population in a systematic way
* Serious sampling bias invalidates the research
Due to systematic errors in the sampling process; can invalidate research findings if not properly addressed.
Observational designs:
Look for a correlation between two DVs. Strictly, there is no IV (some sources, like your lab manual, use IV more loosely to mean the variable that may cause changes in the DV; I prefer the stricter definition: the variable the experimenter manipulates). Note that correlation doesn’t always imply causation, so observational designs are less powerful than experimental designs, but sometimes they are the only choice for ethical or practical reasons.
Measure two DVs and look for a relationship between them
Sometimes called a correlational design, because a relationship between variables is a correlation
There is no IV, because nothing is manipulated
e.g., is self-esteem related to intelligence?
Experimental designs
Manipulate IV and observe (look for) effect on DV. Can imply causation, if effect is replicable. More powerful, but not always possible
Is self-esteem related to the results of a fake IQ test? (Turned it into an IV by manipulating it.)
* e.g., Flourens and experimental ablation – gave a dog brain damage, turning the brain damage into an independent variable (not a dependent one), and observed what it now couldn’t do.
Experimental designs are more powerful than observational ones (they let us infer causation), so we should use them when it’s ethical and practical to do so
When you’re evaluating research that you read about, always consider this issue – has causation really been proved?
Confounding variable
A variable other than our IV which might have been responsible for any change in the DV that we observed. An alternative explanation for our results. Invalidates our experiment. Also just called confound.
Controlling for potential confounding variables:
Hold them constant (esp. with external confounds, such as time of day, or stimulus lists in a memory task). Randomize them (esp. with subject confounds, such as individual differences in ability on a task).
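Randomising subject confounds can be sketched in a few lines; the subject IDs below are invented, and the idea is just that shuffling before splitting spreads individual differences evenly across conditions on average:

```python
import random

random.seed(42)
subjects = [f"S{i:02d}" for i in range(1, 21)]  # 20 hypothetical subjects

# Shuffle, then split: subject confounds (ability, motivation, age, ...)
# are randomised across the two conditions rather than held constant.
random.shuffle(subjects)
experimental = subjects[:10]
control = subjects[10:]

print("Experimental group:", experimental)
print("Control group:     ", control)
```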
Within-subject design
Each subject is exposed to all levels of IV (all conditions). Comparison is between each subject’s performance in several conditions. Internal (subject) confounds controlled, but could be external (environmental) confounds.
Between-subjects design
Each subject only encounters one level of IV (one condition). Comparison is between average performance of groups of different subjects in each condition. External variables can be controlled, but could be subject confounds.
Matched-pairs design:
Each subject is only in one experimental condition, but his/her behaviour is compared with a matched partner (according to subject confounds that might be important, like pre-existing ability at the task) in the other experimental condition. Controls both external and internal confounds. Good idea but not widely used.
Experimental group:
Group that receives the intervention (e.g., a new drug)
Control group
Group that doesn’t receive the intervention, but is otherwise treated identically to experimental group. Assess effect of intervention by comparing improvement of control with experimental group.
Placebo
A sham drug (e.g., a sugar pill which looks like the real pill). Given to control group in drug evaluations.
Single-blind design:
Subjects don’t know whether they are in the experimental or control group. The experimental group receives the treatment; the control group doesn’t. Possible confounds are subject expectation and experimenter expectation (the experimenter needs to remain objective, because expectations might affect how subjects behave)
Double-blind design:
Experimenter who regularly interacts with subjects doesn’t know which group they are in either.
Correlation does not imply…
causation
what are the two ways sampling can go wrong?
Sampling bias, sampling error
1948 US election sampling bias-
Aimed at people who had phones (most likely richer people, and richer people were more likely to vote Republican). The sample wasn’t random and misrepresented the population of voters
Misrepresentation must be systematic, and sample must be biased to behave or respond in a certain way
College graduates were overrepresented AND they were more likely to favour Clinton
Depends on your dependent variable (what you are interested in measuring)
e.g., Recruiting undergraduate Psychology students may not matter if you are interested in memory, but it may matter if you are interested in academic achievement
basic weakness of observational designs?
Just because two DVs are correlated, we can’t conclude that one affects the other
* e.g., suppose we find that watching violent films and aggressive behaviour are correlated
* Maybe watching violent films makes you aggressive
A causes B
- But maybe naturally aggressive people like watching violent films
B causes A
- Or maybe they’re both caused by another variable
C causes both A and B
Called the third variable problem (we are looking for a relationship between two DVs, not manipulating anything)
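The third-variable case can be simulated with invented data: a hidden variable C drives both A and B, A and B never influence each other, yet they end up strongly correlated.

```python
import random

random.seed(0)

# Hypothetical third variable C (say, a personality trait) drives both
# A (violent-film watching) and B (aggression). Neither A nor B affects
# the other directly.
c = [random.gauss(0, 1) for _ in range(1_000)]
a = [ci + random.gauss(0, 0.5) for ci in c]
b = [ci + random.gauss(0, 0.5) for ci in c]

def pearson_r(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sx = sum((xi - mx) ** 2 for xi in x) ** 0.5
    sy = sum((yi - my) ** 2 for yi in y) ** 0.5
    return cov / (sx * sy)

# A substantial A-B correlation appears despite no causal link A<->B.
print(f"r(A, B) = {pearson_r(a, b):.2f}")
```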
fundamental principle of research design…
We want to eliminate all explanations of our results except one
That is, if we conduct an experiment, and see a change in behaviour (our DV), we want to be sure that it was caused by our IV
Alternative explanations of the results are called confounds or confounding variables
A confounding variable is anything, other than our IV, that might have produced the change in DV that we saw
They are bad, and we need to eliminate them if we want our research to be worthwhile
Minimising confounds in research
use a within-subjects or between-groups design
Which design you use will depend on your research question
alternative explanations of results
confounds or confounding variables
approaches to eliminating confounds
Hold the confounding variable constant
* especially good for environmental or external confounds
* called “standardization” in your textbook
* Randomize the confound
between-groups design also called
Between subjects
* Independent samples
* Independent groups
Within-subjects design also called
Repeated measures
Within-subject…
Each subject is exposed to all levels of the IV (more than one condition)
* Compare scores between conditions for each subject
* There is an effect if individual subjects behave differently in the different conditions
* Good for controlling subject confounds
* Susceptible to environmental confounds, and risks of order effects
Between Groups
Each subject is only exposed to one level of the IV (one condition)
* Compare average scores between groups
* There is an effect if the average behaviour of the two groups is different
* Good for controlling environmental confounds
* Susceptible to subject confounds, but can minimise these by randomly allocating subjects to groups
Matched-pairs design
Keeps environmental confounds constant, and (nearly) keeps subject confounds constant too. Run a pre-test, find pairs of subjects, randomly allocate one member of each pair to the sober and drunk conditions (for example), then run the experiment and compare each subject’s score with their partner’s.
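The pairing procedure can be sketched as follows; the pre-test scores and the sober/drunk conditions are illustrative (taken from the alcohol example above), not a real study:

```python
import random

random.seed(7)
# Hypothetical pre-test scores on the task (higher = better).
pretest = {"S1": 42, "S2": 44, "S3": 60, "S4": 58, "S5": 71, "S6": 73}

# 1. Rank subjects by pre-test ability.
# 2. Pair adjacent subjects (similar ability).
# 3. Randomly allocate one member of each pair to each condition.
ranked = sorted(pretest, key=pretest.get)
pairs = [ranked[i:i + 2] for i in range(0, len(ranked), 2)]

sober, drunk = [], []
for pair in pairs:
    random.shuffle(pair)
    sober.append(pair[0])
    drunk.append(pair[1])

print("Matched pairs:  ", pairs)
print("Sober condition:", sober)
print("Drunk condition:", drunk)
```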
Gold standard in research?
The double-blind design – the experimenter doesn’t know which group is the control/experimental group either, so they remain objective
what is the significance level
.05, or 5%, or 1/20
H0
Null hypothesis - no real effect - results obtained by chance or sampling error
H1
experimental or alternative hypothesis, real effect - results not obtained by chance or sampling error
what if the probability of the results under H0 is less than the significance level?
Reject the null hypothesis and conclude that there is a real effect
what are the two possible versions of the alternative hypothesis
two tailed, one tailed
two tailed
Does not specify the direction of the effect
“There is a difference between Groups A and B”
One-tailed:
Specifies the predicted direction of the effect: “Group A will be higher/lower than Group B”
Must have a good reason for predicting the direction
Type I error
Rejecting the null hypothesis when it is true
e.g., “Alcohol impairs memory ability” when there was no real effect of alcohol on memory
Type II error
Accepting the null hypothesis when it is false
e.g., “Alcohol does not impair memory ability” when it actually does
Reducing the chance of making one kind of error inevitably increases the chances of making the other
The .05 significance level is a cautious compromise
frequency
How many subjects obtained each score; e.g., the score would be the number of words remembered
Variability
tightly clumped around the average or spread out
what two things determine whether samples differ significantly from each other?
How far apart are their means? (location)
How variable are the samples?
Bigger difference between means is more likely to be significant
But more variability implies less likely to be significant
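Both points can be seen in a hand-computed independent-samples t statistic (a standard formula, sketched here with made-up scores): the same difference between means gives a smaller t when the samples are more variable.

```python
import statistics

def t_stat(x, y):
    """Independent-samples t: mean difference scaled by pooled variability."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * statistics.variance(x)
                  + (ny - 1) * statistics.variance(y)) / (nx + ny - 2)
    se = (pooled_var * (1 / nx + 1 / ny)) ** 0.5
    return (statistics.mean(x) - statistics.mean(y)) / se

# Two pairs of groups with the SAME difference between means (4 points),
# but very different spread within groups.
tight_a, tight_b = [10, 11, 12, 11], [14, 15, 16, 15]  # low variability
loose_a, loose_b = [5, 11, 17, 11], [9, 15, 21, 15]    # high variability

# More variability -> smaller t -> less likely to be significant.
print(f"low-spread t:  {t_stat(tight_b, tight_a):.2f}")
print(f"high-spread t: {t_stat(loose_b, loose_a):.2f}")
```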
observational designs – inverse (negative) correlation:
as one variable increases, the other variable decreases and vice versa
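A toy example of an inverse correlation, using two invented DVs (hours of TV watched and exam score); because the made-up relationship is perfectly linear and decreasing, r comes out at exactly -1:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical DVs: as one increases, the other decreases.
tv_hours = [1, 2, 3, 4, 5]
exam_score = [90, 80, 70, 60, 50]

print(f"r = {pearson_r(tv_hours, exam_score):.1f}")
```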
Inferential stats?
Measurements from the sample of subjects in the experiment are used to compare the treatment groups and make generalisations about the larger population of subjects.
Answers questions about whether two samples are likely to come from different populations
what is the third variable problem?
the fact that an observed correlation between two variables may be due to the common correlation between each of the variables and a third variable rather than any underlying relationship (in a causal sense) of the two variables with each other.