Research Methods Flashcards
lab experiment
controlled conditions and ps know that they are taking part in an experiment
manipulates IV and measures DV
can control extraneous variables
field experiment
occurs in natural conditions
manipulates IV and measures DV
ps act as they normally would
quasi experiment
can be controlled or natural
takes advantage of a naturally occurring variable
IV may be a difference between people eg gender/depression
(IV not manipulated)
measures DV
no random allocation
tests: t-test, Mann-Whitney, Wilcoxon, ANOVA
natural experiment
takes advantage of variable manipulated by another individual/ organisation
something has happened and experimenter measures the effect on a person eg a flood
measures DV
covert observation
observing people without their knowledge
fewer investigator effects as ps don't know they're being watched; fewer demand characteristics
ethical issues and ps should be debriefed
overt observation
ps are aware they are being observed
more ethical as ps can give informed consent
but more risk of investigator effects / demand characteristics
participant observation
person conducting the observation takes part in the activity- can be covert or overt
can get lots of in depth data in close proximity to the ps
investigator effects as they can impact the ps behaviour
non participant observation
person conducting doesn’t take part, just observes
less investigator effects as they can’t impact the behaviour of the people
researcher may miss some behaviours of interest as they are far away
naturalistic observation
carried out in an everyday setting and the researcher does not interfere, observes behaviour as it would usually happen
high ecological validity
low reliability as it is hard to replicate because events happen by chance
controlled observation
under strict conditions eg in a lab where extraneous variables are controlled
can be replicated to check for reliability as they are standardised
low external validity due to high controls
ps behaviour may be altered due to controlled nature
time sampling
observer records events at agreed time increments eg every 10 seconds
makes better use of time
may miss important behaviours which are relevant to the observation
event sampling
observes the number of times a specific behaviour occurs
every target behaviour should be accounted for
but some may be missed if there is too much happening at one time
questionnaires- open qs
allow ps to answer how they wish, no fixed answers
qualitative data collected
less researcher bias as the ps answer in their own words and their response isn't affected by options given by the researcher
social desirability bias- ps may present themselves in a certain way
questionnaires- closed qs
restrict p’s answers to predetermined set of responses
quantitative data
eg checklist, rating scale, Likert response table
quantitative is easy to statistically analyse and compare to other groups
answers are limited so ps may choose an option that doesn’t actually reflect them but they have to pick one
structured interviews
questions are decided in advance and every p is asked the same questions
gains quantitative data which is easy to statistically analyse
standardised so can be tested for reliability
investigator effects as they are asking same qs over and over and body language may change in response to some answers
- have to train interviewers which takes time and money (applies to both structured and unstructured)
unstructured interviews
conducted more like a conversation where lots of rich, in depth qualitative data is collected
higher validity due to decreased investigator effects
investigator is not determining where the interview will go so they will not affect the ps answers
time consuming and hard to analyse and compare data
aim
research question that they are trying to answer
eg
to investigate whether (IV) affects/improves/hinders (DV)
directional hypothesis
predicts the direction of difference of the variables
eg the results will be higher when…
allocate 5% risk of error to one side of the distribution
based on past research
one tailed
will have 1 critical region on a graph
non directional
predicts that a difference will exist but doesn’t say the direction of the difference
eg there will be a difference…
normal way of testing H0
we reject H0 if the sample statistic reaches the CV in either tail- 2 critical regions on a graph
no past research
two tailed
sampling
involves selecting ps from a target population
sample should be representative so that it can be generalised to whole population
bias occurs when one or more group is over represented in a sample
population- large group, whole or entire group
sample- small group selected from population, representative sample allows generalisation
opportunity sampling
sample of people who are available at the time the study is carried out
convenient as is quick and easy
may be researcher bias as they may choose people with certain characteristics
bias as doesn’t represent whole population
ps may not want to complete the study and drop out
volunteer sampling
self selecting as ps have volunteered or responded to an advert to be part of the study
ps want to be in the study so will be engaged and won’t drop out
ps have given full consent to take part
may be bias as some people are more likely to volunteer than others so will have similar characteristics
random sampling
every member of the target population has an equal chance of taking part
eg pulling names from hat/random number generator
sample is representative
eliminates researcher/ participant bias
not everyone who is chosen to take part will participate so sample may still not be representative
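The names-from-a-hat / random number generator idea can be sketched in Python (the population names and sample size here are made up for illustration):

```python
import random

# Hypothetical target population of 20 named people
population = [f"person_{i}" for i in range(1, 21)]

# Random sampling: every member has an equal chance of selection,
# like drawing names from a hat (random.sample picks without replacement)
random.seed(42)  # fixed seed only so the example is repeatable
sample = random.sample(population, k=5)
print(sample)
```

Because selection is purely by chance, no researcher or participant bias enters the choice, though the chosen people can still refuse to take part.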
stratified sampling
population is divided into strata (subgroups) eg by age or gender
ps are selected at random from each stratum
number selected from each stratum is in proportion to that stratum's size in the whole population
sample reflects the make up of the population so is more representative
time consuming to identify strata and sample from each
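Proportional selection from each stratum can be sketched in Python (the strata, names, and percentages are invented for illustration):

```python
import random

random.seed(1)  # fixed seed only so the example is repeatable

# Hypothetical population of 100 people grouped into age-band strata
strata = {
    "18-25": [f"young_{i}" for i in range(60)],  # 60% of population
    "26-40": [f"mid_{i}" for i in range(30)],    # 30%
    "41-65": [f"older_{i}" for i in range(10)],  # 10%
}
total = sum(len(members) for members in strata.values())
sample_size = 10

# Stratified sampling: randomly select from each stratum
# in proportion to its share of the whole population
sample = []
for band, members in strata.items():
    n = round(sample_size * len(members) / total)
    sample.extend(random.sample(members, n))
print(sample)  # 6 from 18-25, 3 from 26-40, 1 from 41-65
```

The sample mirrors the population's composition, which is why stratified samples are considered representative.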
systematic sampling
when every nth person in the pop. is selected
removes participant and researcher bias
not all ps will want to participate
can be time consuming with large groups
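The "every nth person" rule is simple to sketch in Python (list size and interval are made up; in practice the starting point is often chosen at random between 1 and n):

```python
# Systematic sampling: pick every nth person from an ordered list
population = [f"person_{i}" for i in range(1, 101)]  # hypothetical list of 100

n = 10  # sampling interval
sample = population[n - 1::n]  # person_10, person_20, ..., person_100
print(sample)
```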
pilot studies
small scale prototypes of a study carried out in advance to see if there are any problems with:
experimental design, instructions for ps and instruments for measurement
ensure time, effort and money aren’t wasted on a study with flawed methodology
other peers/ scientists can comment on study/ questionnaire
repeated measures (related)
same ps take part in each condition of the exp
data is compared for each p to see if there is a difference
fewer participant variables so the only thing affecting the DV is the IV (improves internal validity)
may be order effects as the ps learn what the aim is or get tired of the experiment so perform worse in second condition
counterbalancing
used to counteract/ reduce order effects in repeated measures design (within participants)
half of the sample complete condition 1 then 2
half of the sample complete condition 2 then 1
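The split-half allocation above can be sketched in Python (participant names and group size are invented for illustration):

```python
# Counterbalancing: half the ps do condition A then B,
# the other half do B then A, so order effects cancel out
participants = [f"p{i}" for i in range(1, 9)]  # hypothetical 8 ps
half = len(participants) // 2

orders = {}
for i, p in enumerate(participants):
    orders[p] = ("A", "B") if i < half else ("B", "A")
print(orders)
```

Any practice or fatigue effect now appears equally in both conditions rather than always in the second one.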
independent groups (unrelated)
two separate groups take part in each condition of the experiment (randomly allocated)
decreases the likelihood of order effects
reduces demand characteristics as ps won’t be able to guess the aim of the study
easier for investigator as they can use same material in both conditions- one isn’t easier
increases the effects of participant variables
matched pairs
participants are matched on a characteristic eg age, personality type, IQ
may be matched using a test, highest scores matched then next highest etc
one p from each pair is put into each condition randomly (similar to independent groups once matched)
reduces participant variables as they are matched
impossible to match on all characteristics
need more ps
better than repeated measures as there are fewer demand characteristics because ps only do one condition
extraneous variables
any variable other than the IV that may affect the DV
may have failed to take these into account when designing study
eg time of day, age, gender
confounding variable
an extraneous variable that varies with the IV and causes a change in the DV
relates to both of the main variables we are interested in
randomisation
when trials are presented in a random order to avoid any bias
standardisation
situational variables are kept identical so that any changes to the DV can be attributed to the IV
demand characteristics
occur when ps try to guess the aims of the study and change behaviour in order to support it
investigator effects
when a researcher acts in a way to support their prediction (can be conscious or unconscious)
influences the behav of the ps
they know the aims of the study
can be reduced by using double blind
ensure method/investigation is standardised
use open ended qs
ethical issues
take into consideration the welfare of the ps, integrity of research and use of data
deception, right to withdraw, informed consent, confidentiality, protection from harm
peer review
assessment process by psychologists working in a similar field which takes place before research is published
check validity of research, assess work for originality, allocate research funding, allows errors to be identified, looks at significance of research in a wider context
VOFES
implications on economy
eg development of treatments for depression/ OCD mean that people are able to work more, take less time off work, doesn’t cost the company, NHS saves money if treatments are successful
quantitative data
numerical data that can be statistically analysed and converted to graphical format
easy to analyse statistically to check for significance etc
narrower in meaning and detail as it usually comes from closed questions
qualitative data
non numerical, language based data expressed in words
data collected is rich in detail (usually from unstructured interviews)
data is subjective and can be interpreted differently between people so may be subject to bias
primary data
data collected for a specific reason and reported by the original researcher
it has good authenticity as it has been collected specifically for the research- data will fit the aims of the research
primary data can take a long time to collect
secondary data
data which already exists, having been collected for another purpose/study
less time consuming than primary to collect
concerns with accuracy as the data wasn’t collected to meet the aims of the research
meta analysis
process where investigators combine findings from multiple studies and make an overall analysis of trends and patterns across the research
- based on a large sample so more likely to be generalisable
- may be bias as the researcher may only choose studies which show significant results
descriptive statistics
summarising data numerically, allowing researchers to view the data as a whole
measures of central tendency + dispersion
mean (interval)
mathematical average (includes anomalies), most sensitive measure
easy to calculate, uses all values, gives central point of distribution
affected by extreme scores, can’t be used with nominal data
median (interval ordinal)
central score
not affected by extreme scores
difficult if there is lots of data
doesn’t reflect all values
can’t be used with nominal data
mode (nominal)
most frequent value
not affected by extreme scores
useful when scores cluster around non central value
may not be representative
not useful in small sets of data
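The three measures and the effect of an extreme score can be shown with Python's built-in statistics module (the scores are invented for illustration):

```python
import statistics

scores = [2, 3, 3, 5, 6, 7, 30]  # hypothetical data with one extreme score (30)

print(statistics.mean(scores))    # 8 - pulled up by the anomaly
print(statistics.median(scores))  # 5 - the central score, unaffected
print(statistics.mode(scores))    # 3 - the most frequent value
```

Note how the single extreme score drags the mean above every other value while the median and mode stay put.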
measures of dispersion
define the spread of data around the mean
range
subtract lowest from highest and add 1
easy to calculate
doesn’t show distribution pattern
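The "subtract lowest from highest and add 1" convention in one line of Python (scores are invented for illustration):

```python
scores = [3, 5, 7, 7, 12]  # hypothetical data set

# Range with the +1 correction used in psychology,
# which allows for scores being rounded to whole numbers
value_range = (max(scores) - min(scores)) + 1
print(value_range)  # (12 - 3) + 1 = 10
```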
standard deviation
large SD shows data is very dispersed around mean
small SD shows values are concentrated around the mean
precise as all values are included
extreme values can distort measurement
used when mean is a good measure of the average
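The large-SD / small-SD contrast can be checked with the statistics module (both invented data sets share the same mean of 10, so only the spread differs):

```python
import statistics

concentrated = [9, 10, 10, 11, 10]  # hypothetical scores close to the mean
dispersed = [2, 18, 5, 15, 10]      # same mean, spread widely

# statistics.stdev gives the sample standard deviation (n - 1 divisor)
print(statistics.stdev(concentrated))  # small SD - values hug the mean
print(statistics.stdev(dispersed))     # large SD - values are very dispersed
```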
normal distribution
bell shaped curve
mean median and mode all in the middle, same line
negatively skewed
peak shifted to the right, tail points to the left (test too easy)
mode at top, median in middle, mean on end (less than median)
positively skewed
peak shifted to the left, tail points to the right (test too hard)
mode at top, median in the middle, mean on the end
mode remains at the highest point as it isn’t affected by extreme values
mean greater than median
content analysis
a type of observational technique that involves studying people through qualitative data
data can be placed into categories and counted (quantitative) or can be analysed in themes (qualitative)
coding system is used- categories for the data to be classified
high ecological validity
findings may be subjective as they may be interpreted wrongly
thematic analysis
helps to identify themes throughout qualitative data
will produce more refined qualitative data
high ecological validity as observations come from real life behaviour
features of a science
paradigm
concepts are falsifiable
use of empirical methods
theory constructed from which hypotheses are derived + tested
paradigm
an agreed upon set of theoretical assumptions about a subject and its method of enquiry
Kuhn- psychology is a pre-science as no single paradigm exists due to disagreements between various approaches
paradigm shift- when scientists challenge an existing paradigm and many opinions change
theory
set of general laws that have the ability to explain a particular behaviour
in order to test a theory, an experiment must be devised, hypothesis is made about what they think will happen and this must be objective and measurable
falsifiability
scientific theories must always be stated in a way that predictions derived from them could be shown to be false
even if you consistently find the same results (replicability) you must be able to prove it wrong
empiricism
knowledge should be gained from direct experiences in an objective, systematic and controlled manner
must be objective- free from bias and subjectivity
replicability
repeating and gaining the same results (not due to chance)
use of standardised procedure, control of variables
replication is harder in humans due to confounding variables eg mood
reliability
measure of consistency
eg a person gets the same result each time on an introvert test
test retest reliability
same person/ group is asked to undertake the research measure on different occasions
results are correlated with the original results
use spearman’s to see if correlation is significant
inter observer reliability
extent to which two or more observers are observing + recording in a consistent way
observers should discuss and agree behavioural categories, ensuring they are trained
should observe the same people at the same time, recording observations independently
results should be correlated using stats test ( spearman’s or pearson’s)
validity
whether the test measures what it is set out to measure
internal validity
whether the results (DV) are solely affected by changes in the IV
external validity
whether the data can be generalised to other situations beyond the context of the research situation
face validity
on the surface, does the test appear to measure what it should be measuring/set out to measure