research methods Flashcards
extraneous variable
a variable other than the IV that may affect the DV. these don’t vary systematically with the IV.
confounding variable
a type of EV that varies systematically with the IV.
operationalise
to make your variables measurable (specific)
random allocation, important because…
using chance to separate participants into different conditions
…to avoid bias, and limits effect of participant variables (by spreading out each type of person)
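The idea of random allocation can be sketched in code — shuffle the participant pool by chance, then deal it out into condition groups so participant variables are spread across conditions. This is an illustrative sketch; the function name and the round-robin dealing are my own choices, not part of any set procedure.

```python
import random

def random_allocation(participants, n_conditions=2, seed=None):
    # Hypothetical helper: shuffle by chance, then deal the pool
    # round-robin into equal-as-possible condition groups.
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)                       # chance decides the order
    return [pool[i::n_conditions] for i in range(n_conditions)]

# 20 participants split into two conditions purely by chance
groups = random_allocation(range(20), n_conditions=2, seed=1)
```

Because only chance decides who lands in which group, no one "type" of person can systematically end up in one condition.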
standardisation, important because…
using exactly the same formalised procedures + instructions for all participants in a research study
…limits effect of extraneous variables
randomisation, important because…
use of chance methods to control for the effects of bias when designing materials and deciding order of experimental conditions
…limits effect of EVs
demand characteristics
cues from the researcher or the situation that may be interpreted by the participants as revealing the purpose of the investigation, which could lead to a change in their behaviour
investigator effects
effects of investigator’s behaviour on the research. can involve design of study, selection of participants and interaction with participants
single-blind
single blind is where participants don’t know what condition they’re in
…reduces demand characteristics, as participants don’t know what you want them to do so can’t change their behaviour accordingly
double-blind
double blind is where neither the participants nor the investigator knows which condition participants are in
…reduces demand characteristics and investigator effects. if the investigator doesn’t know the condition, they can’t expect anything or give anything away
lab experiment strength and weakness
s: high control over CVs and EVs so we can be sure DV changes are due to the IV. study has high internal validity
w: may lack generalisability to real life, because settings are artificial and participants are aware of being studied, so may show unnatural behaviour due to demand characteristics
s: high control = replication more possible, so can check reliability
w: lacks mundane realism - dissimilar to what we do in everyday life
experimental designs:
independent groups, good bad
participants randomly allocated to different groups where each group represents one experimental condition
good: order effects not a problem, less time to set up groups than matched pairs, participants less likely to guess aims
bad: participant variables could affect results, more time taken finding participants
experimental designs:
repeated measures, good bad
all participants take part in all conditions of experiment
good: participant variables controlled(higher validity), fewer participants needed
bad: order effects - boredom/fatigue may cause performance to deteriorate, or practice may improve it; also demand characteristics. use counterbalancing
experimental designs:
matched pairs, good bad
pairs of participants are first matched on some variable(s) that may affect the DV. Then one member of the pair is assigned to condition A and the other to condition B
good: order effects and demand characteristics less problematic, reduces effect of participant variables
bad: time consuming and expensive
counterbalancing
ABBA - half the participants take part in Condition A then Condition B, the other half do the conditions in the opposite order
it attempts to control order effects
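The ABBA split above can be sketched as a small function — the first half of the participant list runs A then B, the second half B then A, so order effects are balanced across conditions. Function and variable names here are hypothetical, chosen just for the sketch.

```python
def counterbalance(participants):
    # ABBA counterbalancing: first half run A then B,
    # second half run B then A.
    half = len(participants) // 2
    return {
        p: ("A", "B") if i < half else ("B", "A")
        for i, p in enumerate(participants)
    }

orders = counterbalance([f"P{i}" for i in range(20)])
```

Any practice or fatigue effect now hits Condition A first for half the sample and Condition B first for the other half, so it cancels out across the group.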
sampling methods:
random
potentially unbiased, confounding/extraneous variables should be equally divided between different groups enhancing internal validity
- difficult and time consuming to conduct
sampling methods:
systematic
objective, once the system for selection has been established the researcher has no influence over who is chosen
- time consuming, participants may refuse to take part
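Systematic sampling's "no influence over who is chosen" point can be shown with a minimal sketch: fix the sampling interval k from the population and sample sizes, then take every kth member. The helper name is my own; this assumes the population list is already in some order.

```python
def systematic_sample(population, n):
    # Every kth member is selected, where k = population size // n.
    # Once k is fixed, the researcher has no say in who is chosen.
    k = len(population) // n
    return [population[i] for i in range(0, k * n, k)]

sample = systematic_sample(list(range(100)), 10)
```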
sampling methods:
stratified
produces representative sample as it’s designed to accurately reflect composition of population, so generalisation of findings becomes possible
- not perfect, identified strata can’t reflect all the ways people are different
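A rough sketch of how stratified sampling mirrors the population: each stratum contributes members in proportion to its share of the population, with the members within each stratum drawn at random. The strata chosen here (age bands) and the function name are illustrative assumptions only.

```python
import random

def stratified_sample(strata, n, seed=None):
    # strata: {stratum name: list of members}. Each stratum
    # contributes in proportion to its share of the population.
    rng = random.Random(seed)
    total = sum(len(m) for m in strata.values())
    sample = []
    for members in strata.values():
        k = round(n * len(members) / total)   # proportional quota
        sample.extend(rng.sample(members, k)) # random within stratum
    return sample

# 60% of this population is under 30, so 6 of the 10 sampled are too
strata = {"under 30": list(range(60)), "30 and over": list(range(60, 100))}
sample = stratified_sample(strata, 10, seed=0)
```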
sampling methods:
opportunity
convenient, less costly in time and money than random sampling as list of members of target pop isn’t required
- sample unrepresentative of target pop as it’s drawn from a very specific area
sampling methods:
volunteer
easy, requires minimal input from researcher so less time consuming, the researcher ends up with more engaged participants
- volunteer bias, may attract certain types of people who are more curious or trying to please the researcher (affecting how far findings can be generalised)
aim
statement outlining why research is being done, usually begins with “the aim is to discover/find out/see…if/whether”
hypothesis
PRECISE and TESTABLE statement of the relationship between two variables
field experiment strengths and weaknesses
s: real-life setting gives high ecological validity
w: less control over CV and EV
s: fewer demand characteristics so participants may behave more naturally
w: potential ethical issues if they don’t know they are being studied
natural experiments strengths and weaknesses
s: high external validity
w: lack of control over CVs - cannot randomly allocate groups so may be participant variables
s: allows us to study things that would otherwise be unethical
w: less control of EVs
quasi experiment strengths and weaknesses
participants can’t be randomly allocated because their condition is already set eg age
s: controlled conditions improve replicability
w: cannot randomly allocate conditions so may be confounding variables
ethical guidelines
(can do can’t do with participants)
C - Confidentiality (& privacy)
D - Deception
C - Consent (informed)
D - Debrief
W - right to Withdraw
P - Protection of participants
ethics - informed consent
Ps must be told about anything ‘that might reasonably affect their willingness to participate’
- aims of the research, procedure, their rights, what data will be used for
presumptive consent: ask other similar people
prior general consent: permission given beforehand to take part in a number of studies
retrospective consent: consent given afterwards
ethics - deception
withholding information from Ps - can affect informed consent
should be avoided if possible
ethics - protection from harm
should be no physical or psychological harm
- Ps should leave unchanged from how they entered
ethics - confidentiality
all data should be confidential, all Ps should be anonymous and unidentifiable unless prior informed consent given
ethical issues - briefing
before study, researcher must obtain consent and ensure Ps understand tasks
- explain nature of study
- instruct P about what’s expected of them and what will happen
- confirm they fully consent
- explain they can withdraw at any time
ethical issues - debriefing
after the study researcher should ensure Ps are returned to their initial state and informed about the research
- explain aims and nature of the study
- explain deceptions used
- reassure Ps about their performance
- offer retrospective withdrawal
- get feedback
- invite and answer questions
naturalistic vs controlled
naturalistic - setting where behaviour would usually occur
controlled - structured environment where some variables can be controlled
covert vs overt
covert - Ps don’t know they are being observed
overt - Ps know they are being observed
ecological validity
is study true to real life
reliability
consistency - want study to be replicable
internal validity
is your study actually measuring what you say you are?
are there demand characteristics? if so, it probably lacks internal validity
covert PEEL
p - covert observations have high internal validity
e - as Ps don’t know they are being observed
e - so are less likely to respond to demand characteristics and so their behaviour will be true to real life
L - so covert observations may give us better insight into behaviour than overt observations
observations - participant vs non-participant
P - researcher becomes a part of the group being observed
NP - researcher remains separate from the group being observed
unstructured vs structured observation
u: writing down everything you see - more suitable for small scale observations
s: simplify behaviours being observed using behavioural categories
quantitative vs qualitative data
unstructured gives you qualitative data (lots of words, richer in detail, but harder to analyse and draw clear conclusion)
structured gives you quantitative data (usually numbers, more concise easier to analyse, but might miss detail)
behavioural categories
- operationalising target behaviours
- precisely define target behaviours so they are specific and measurable
- should include all forms of the target behaviour in checklist
- behaviours should be exclusive and not overlap
(very difficult to do but very precise)
sampling in structured observations
- event sampling - counting no of times a behavioural category occurs
- time sampling - recording behavioural categories only within pre-set time frames (only look at set time frames; useful when behaviour occurs frequently)
self report methods
interviews and questionnaires
…asking Ps to explain their thoughts etc
self report: interviews
structured - pre-determined questions asked in a set order (easy to analyse and compare, but missing detail/elaboration) (better replication/science)
unstructured - there’s an aim to the discussion but there are no set questions (richer data, but hard to compare) (interviewer bias due to more flexibility so their opinions/expectations may direct the interview/data)
semi-structured - a list of pre-determined qs but interviewers can ask follow up qs (certain qs but more depth)
social desirability is a problem, Ps change answers to be seen differently
self report: questionnaires questions
open q’s = no fixed answers, produce qualitative data (harder to analyse, but truer data)
closed q’s = fixed responses, produces quantitative data (easier to analyse, but lacks validity)
self report: questionnaire good bad
cost effective - large amounts of data gathered quickly
closed q’s easier to analyse
but closed q’s may not offer the answer a participant wants to give, leading to a lack of validity
social desirability - Ps may lie
response bias - always answering in a similar way eg always saying yes
designing interviews
interview schedule (list of questions)
standardised to reduce bias
reassure confidentiality
conduct in quiet room
designing questionnaire
closed or open q’s (likert scale, rating scale, fixed choice option)
don’t use too complex language
avoid leading questions
avoid double negatives
primary vs secondary analysis
P: collected specifically for the purpose of the current research, first hand from Ps
S: data collected before current research by someone else. used in meta-analysis (when a no of studies collected for a similar purpose are pooled together and a conclusion is drawn)
correlational analysis
designed to investigate strength and direction of relationship between 2 variables
e.g. +0.91 (the sign shows direction of the correlation, the number shows its strength)
shown on scattergrams
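The correlation coefficient itself can be computed from raw scores. A minimal sketch of Pearson's r (covariance divided by the product of the spreads) — the function name is my own, and this assumes equal-length lists of paired scores:

```python
import math

def pearson_r(xs, ys):
    # Correlation coefficient between two co-variables:
    # sign gives direction, magnitude (0 to 1) gives strength.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Perfectly rising paired scores give +1.0, perfectly falling ones give -1.0, and unrelated scores land near 0.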
sign test (three Ds)
difference - test of difference not correlation
data - looking at nominal data (mutually exclusive categories)
design - used repeated measures/matched pairs (related design)
sign test calculated value of S
has to be equal to / lower than the critical value to be significant
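The calculated value of S can be sketched in a few lines: for each participant's pair of scores, record the sign of the difference, drop ties, and S is the less frequent sign's count. The function name is illustrative; this assumes a related design (same participants in both conditions, so scores are paired).

```python
def sign_test_s(condition_a, condition_b):
    # Count + and - differences across the paired scores; ties
    # (no difference) are dropped. S = the smaller of the two counts.
    plus = minus = 0
    for a, b in zip(condition_a, condition_b):
        if a > b:
            plus += 1
        elif a < b:
            minus += 1
    return min(plus, minus)
```

The result is significant if this S is equal to or lower than the critical value from the sign-test table.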
peer review
the assessment of scientific work by others who specialise in the same field.
PROBLEMS:
ANONYMITY - rivals may use reviewer anonymity to criticise others (if in direct competition)
PUBLICATION BIAS - editors can be selective with what they publish (due to opinions/want for circulation of their publication) so some research may be disregarded
BURYING GROUNDBREAKING RESEARCH - established scientists are more likely to be chosen as reviewers so findings that chime with current opinion are more likely to be passed, instead of new innovative research that challenges established order
characteristics of correlational research
always non-experimental: observes relationships between variables without altering them
dynamic: patterns between variables are never constant/always changing. (-ve can change to +ve in future due to various conditions)
inferential statistics: correlation coefficient is used to measure
strength and nature of the relationship between 2 co-variables
between -1.0 and +1.0
pros of correlational design
ideal place to begin preliminary research investigations
can be used when lab experiment would be unethical
secondary data can be used
cons of correlational design
can’t establish cause-and-effect relationship
only identifies linear relationships, not curvilinear ones
what is content analysis
research method used to measure no of times a behaviour or event occurs within a form of media
=> indirect observation
5 steps of content analysis
collect data
researcher reads/examines
researcher identifies coding units
data analysed by applying coding units
tally made of no of times a coding unit appears
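The final tally step can be sketched as a word count over the material against the pre-defined coding units. The helper name and the whitespace-split approach are simplifying assumptions for illustration; real coding units may be phrases or themes rather than single words.

```python
def tally_coding_units(text, coding_units):
    # Step 5 of content analysis: tally how often each
    # pre-defined coding unit appears in the material.
    words = text.lower().split()
    return {unit: words.count(unit) for unit in coding_units}

counts = tally_coding_units("happy sad happy calm happy", ["happy", "sad"])
```

The tallies are the quantitative data that content analysis produces from qualitative material.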
strengths of content analysis
reliable way to analyse qualitative data as the coding units are not open to interpretation
easy, not too time consuming, avoids ethical issues
high external validity
allows statistical analysis if required, as quantitative data collected
weakness of content analysis
causality cannot be established as it just describes data
identification of suitable themes and codes is subjective and decided by researcher alone so conclusions may lack objectivity
content vs thematic analysis
content produces quantitative data
thematic produces qualitative (ideas and themes with evidence)
=> good because it doesn’t ignore context
how is a thematic analysis good
flexible approach - themes do not have to be pre defined
provides detail that quantitative analysis can miss
may identify unexpected themes so guidelines for future research
weaknesses of thematic analysis
time consuming
investigator bias
what is a case study
in-depth investigation of an individual / group / event
strengths of case studies
rich yield of data, and the depth of analysis means high validity
studying abnormal psychology gives insight into how something works when functioning correctly eg KF
detail collected on a single case may lead to interesting findings that stimulate new research paths
weaknesses of case studies
little control over EVs so difficult to establish causal relationships between variables
poor reliability as replication is unlikely
small sample size, hard to generalise
researcher may become involved, so researcher bias
reliability def
consistency of study / measuring test
internal: internal consistency of measure (are diff items all measuring the same construct)
external: consistency of a measure from one use to another (eg taking test a year apart, getting same results)
2 ways to measure external reliability
test-retest: assess the same person on 2 different occasions, scores then correlated.
if strong positive relationship (0.8 and above) test is reliable
inter-observer reliability: looks at agreement between 2+ observers. small pilot study to see if behavioural categories are applied the same way. observers watch same event and record it individually, then share.
again reliable if correlation coefficient is 0.8+
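The inter-observer check can be sketched by correlating the two observers' tallies per behavioural category and comparing the coefficient against the 0.8 threshold. The function name and threshold parameter are my own framing of the rule above.

```python
import math

def inter_observer_reliable(obs1, obs2, threshold=0.8):
    # Correlate two observers' tallies for the same categories;
    # the measure counts as reliable if r >= threshold (0.8 here).
    n = len(obs1)
    m1, m2 = sum(obs1) / n, sum(obs2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(obs1, obs2))
    s1 = math.sqrt(sum((a - m1) ** 2 for a in obs1))
    s2 = math.sqrt(sum((b - m2) ** 2 for b in obs2))
    r = cov / (s1 * s2)
    return r, r >= threshold

# Two observers' tallies across four behavioural categories
r, reliable = inter_observer_reliable([10, 5, 8, 2], [9, 6, 8, 1])
```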
2 ways to measure validity
face validity: does the test appear at face value to measure what it claims to (look at measuring instrument)
concurrent validity: results match results from a recognised established test (+0.8 correlation)
6 key features of science
paradigm - shared assumption within a scientific discipline
paradigm shift - significant change in dominating unifying theory
objectivity - all sources of personal bias are minimised (so research process not distorted)
empirical - gathering evidence through direct observation and experience
replicability - extent to which scientific procedures and findings can be repeated by other researchers
falsifiability - cannot be considered a science unless it can possibly be proved untrue
type 1 error
false positive
=> rejecting the null hypothesis when it should have been accepted (there is really no effect)
type 2 error
false negative
=> accepting the null hypothesis when it should have been rejected (there really is an effect)