Flashcards
What is a confounding variable?
A type of extraneous variable that varies systematically with the different levels of the independent variable and affects the dependent variable
What is an independent variable?
The variable that the researcher manipulates and which is assumed to have a direct effect on the dependent variable (DV).
What is a dependent variable?
The variable being measured
What is a directional hypothesis?
A clear and precise prediction about the difference or relationship between the variables in the study. This prediction is typically based on past research, accepted theory or literature on the topic.
E.g. there will be an increase…
What is a non-directional hypothesis?
Predicts that a difference will exist between two or more variables without predicting the exact direction of the difference. This is usually because previous research has been inconclusive, and the specific nature (direction) of the effect of the IV on the DV cannot be predicted confidently.
E.g. there will be a difference…
What is operationalisation?
Operationalisation is clearly defining a variable and making it measurable. This enables the behaviour under review to be measured objectively.
Operationalise stress
Cortisol levels, score on a Likert scale
What do we standardise?
Investigator effects and situational variables
2 limitations of matched pairs
- time consuming
- can’t control for every ppt variable
what is matching?
making sure a particular characteristic of the participants is divided equally across groups
what does random allocation do?
ensures that participant variables are distributed evenly using chance
how can we use random allocation
number hat, computer program
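As a minimal sketch of the "computer program" option, random allocation can be done by shuffling the participant list with chance and dealing participants out into groups (the function and participant names here are illustrative, not from the source):

```python
import random

def random_allocation(participants, groups=("A", "B")):
    """Randomly allocate participants to conditions using chance,
    so participant variables are spread evenly across groups."""
    shuffled = list(participants)
    random.shuffle(shuffled)
    # Deal the shuffled participants out in turn,
    # like drawing numbers from a hat
    return {g: shuffled[i::len(groups)] for i, g in enumerate(groups)}

allocation = random_allocation(["P1", "P2", "P3", "P4", "P5", "P6"])
```

Each run gives a different split, but every participant ends up in exactly one group, which is the point of allocation by chance.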
What does it mean if participants are blind to the experimental group?
They don’t know which experimental group they’re in.
how do we control for ppt variables? (2 ways)
matching, random allocation
how do we control for situational variables? (1 way)
standardisation
how do we control for investigator effects? (1 way)
standardisation
what does blinding do?
it makes ppts unaware of which experimental group they're in, so it reduces demand characteristics
What is meant by external reliability?
External reliability refers to how consistent the results of a study or specific test are.
3 ways to increase reliability in a questionnaire
- test-retest method
- adapt/ remove anything unreliable
- use closed questions instead of open
2 ways to increase reliability in interviews
- same interviewer for each participant
- structured interview
What is meant by internal reliability?
Internal reliability refers to how consistent the results are of individual items on a test or questionnaire.
What is meant by face validity?
Face validity refers to whether a test appears to be measuring what it claims to measure.
What is meant by concurrent validity?
If the results of a test are similar to those of a previously validated test that appears to measure the same thing.
pros of covert
- fewer investigator effects, so higher external validity
- fewer demand characteristics, so higher internal validity
cons of covert
ethical issues
pros of overt
ethical
cons of overt
investigator effects and demand characteristics mean low internal and external validity
pros of naturalistic
high ecological external validity
cons of naturalistic
ppt and situational variables hard to control
pros of controlled
high internal validity
cons of controlled
lacks ecological external validity
pros of participant observation
in-depth data
cons of participant observation
investigator effects = less internal validity
pros of non-participant observation
less investigator effects
cons of non-participant observation
lack of closeness means behaviours of interest may be missed
3 things that behavioural categories need to be
observable, measurable and clear
what is event sampling?
tally the number of times a certain event occurs over an entire time period
what’s time sampling?
observing behaviour at certain time intervals
what’s inter-observer reliability?
the extent to which two or more observers are observing and recording behaviour in the same way
4 ways to ensure inter-observer reliability
- training
- operationalise behaviour categories
- more observers
- same perspective
define internal validity
when a study measures what it intends to measure by controlling for extraneous variables to ensure only the IV affects the DV in order to establish a clear cause and effect
define external validity
whether or not the study can be extrapolated beyond the scope of the study
3 types of external validity
temporal (time period) , population (representative) , ecological (setting)
mundane realism?
whether or not the study reflects what happens in day-to-day life
randomisation
using chance to control for the effects of bias
standardisation
where all participants are subject to the same environment, information and experience
null hypothesis?
A statement predicting no significant difference between variables
experimental hypothesis?
Predicts a significant difference between variables
what is a pilot study
A small-scale trial run of a study, to make improvements before we commit to a large-scale investigation.
what do pilot studies test for?
internal validity, efficiency, reliability, ethics, easily interpreted by participants
3 question types to avoid in questionnaires
double-barrelled, double-negative, expert terminology
what is self- report?
when participants are asked to provide information about their own thoughts, feelings and behaviour
pros and cons of open questions
pros - more detailed, less researcher bias
cons - less reliable (harder to analyse)
pros and cons of closed questions
pros - easier to analyse
cons - less internally valid due to guessing
pro and con of structured interview
Pro - easier to analyse quantitative data; discover trends
- replicable
Con- Increased risk of investigator effects
pro and con of unstructured interview
Pro - less chance of demand characteristics and investigator effects.
Con - Difficult to analyse qualitative data
- time consuming and expensive.
limitations of self report
- participant bias (untruthful answers)
- social desirability bias
4 types of extraneous variables
demand characteristics, investigator effects, social desirability bias, situational variables
how to reduce demand characteristics?
- deception
- double blind trial
how to reduce investigator effects?
double blind trial
how to reduce social desirability bias?
- covert observation
- anonymous questionnaire
empirical method
Empirical method refers to the idea that knowledge is gained from direct experiences in an objective, systematic and controlled manner to produce quantitative data.
It suggests that we cannot create knowledge based on belief alone, and therefore any theory will need to be empirically tested and verified in order to be considered scientific.
objectivity
A key feature of science is the ability for researchers to remain objective, meaning that they must not let their personal opinions, judgements or biases interfere with the data.
replicability
Replicability is a key feature of a science, and refers to the ability to conduct research again and achieve consistent results.
falsifiability
Falsifiability refers to the idea that a research hypothesis could be proved wrong.
theory
A theory is a set of principles that intends to explain certain behaviours or events.
paradigm
A paradigm is a set of shared assumptions and methods within a particular discipline.
paradigm shift
The way in which a field of study moves forward is through a scientific revolution. It can start with a handful of scientists challenging an existing, accepted paradigm, which gains popularity over time
ethics
Deception
Right to withdraw
Informed consent
Privacy and confidentiality
Protection from harm
What is BPS code of ethics?
The British Psychological Society (BPS) code of ethics sets out a series of guidelines that researchers need to consider when undertaking psychological research.
deal with deception
- debrief
- right to withdraw
deal with privacy
- informed consent
- right to withdraw
- confidentiality
deal with confidentiality
-anonymity
deal with protection from harm
right to withdraw
informed consent
4 reasons for peer- review
- validate quality and relevance
- suggest amendments
- allocate research funding
- reduce misinformation
3 cons of peer- review
- criticism of rival researchers
- publication bias (favouring headline-grabbing results)
- time- consuming
cost- benefit analysis
Psychologists must ‘weigh up’ the benefits to society that may be gained by testing new theories and the costs to the participants within the research.
test- retest check
administering the same test to the same participants at different times to check for reliability
correlation coefficient?
+0.8 or above indicates good reliability
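As an illustrative sketch of how a test-retest check is quantified (the scores below are invented for the example), Pearson's correlation coefficient can be computed and compared against the +0.8 benchmark:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two sets of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for five participants taking the same test twice
test1 = [10, 12, 15, 18, 20]
test2 = [11, 12, 16, 17, 21]
r = pearson_r(test1, test2)
# A coefficient of +0.8 or above suggests the measure is reliable
reliable = r >= 0.8
```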
types of order effects and prevention
practice
fatigue
mitigate with counterbalancing
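Counterbalancing can be sketched as alternating the order of conditions across participants (an ABBA-style arrangement; condition and participant names here are assumed for illustration):

```python
def counterbalance(participants, conditions=("A", "B")):
    """Alternate the order of two conditions across participants
    so practice and fatigue effects cancel out across the sample."""
    a, b = conditions
    orders = {}
    for i, p in enumerate(participants):
        # Half the participants do A then B; the other half do B then A
        orders[p] = [a, b] if i % 2 == 0 else [b, a]
    return orders

orders = counterbalance(["P1", "P2", "P3", "P4"])
```

With four participants, two receive the order A-B and two receive B-A, so any order effect appears equally often in each condition.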
define experiment
An experiment is a study in which the researcher investigates a cause and effect relationship by comparing the effect of different levels of an independent variable on the dependent variable.
What do we do with audio recordings?
We transcribe them
What is a schema?
A mental framework developed through experience.
Why do we use content analysis?
Investigating trends and patterns over time in communication (qualitative information), which is converted into quantitative data
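A minimal sketch of how content analysis turns qualitative communication into quantitative data: tally how often each coding category appears in a set of transcripts (the transcripts and categories below are invented for the example):

```python
def content_analysis(transcripts, categories):
    """Tally how often each coding category appears across
    transcripts, turning qualitative text into quantitative counts."""
    counts = {c: 0 for c in categories}
    for text in transcripts:
        lowered = text.lower()
        for c in categories:
            counts[c] += lowered.count(c.lower())
    return counts

# Hypothetical interview transcripts and coding categories
tallies = content_analysis(
    ["I felt stress at work", "Work stress again, plus stress at home"],
    ["stress", "work"],
)
# tallies maps each category to its total count across transcripts
```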