research methods Flashcards
what are the types of data?
- quantitative
- qualitative
- primary
- secondary
quantitative data what is it?
- data in the form of numbers
- can be transformed into tables, graphs, charts, fractions etc
- can be statistically analysed e.g. mean, mode etc
quantitative data strengths?
- reliable as easy to compare + analyse as techniques used to collect it are normally replicable
- highlights trends + patterns= useful to apply general laws
- objective, less open to bias
quantitative data limitations?
- reveals what not why behind a behaviour (lacks explanatory power)
- may oversimplify complex phenomena e.g. human behaviour
qualitative data what is it?
- data in the form of words/images e.g. thoughts, feelings etc
- can be analysed using content analysis/thematic analysis
qualitative data strengths?
- gain insights into nature of individual experience + meaning
- can expand + deepen knowledge of complex behaviours
qualitative data limitations?
- tends to use small sample sizes, difficult to generalise
- subjective= lacks control, hard to analyse + is left to interpretation
primary data what is it?
- collected at the source + has not been previously published
- refers specifically to research aim
- obtained first-hand from the researcher
primary data strengths?
- may be more reliable + valid as researcher has full control over data collected
- more trustworthy than secondary data as researcher knows research will be subjected to peer review which if negative could harm reputation
- more specific to research
primary data limitations?
- derived from a single study only, unlike secondary data which can draw on multiple studies
- expensive, time-consuming
secondary data what is it?
- consists of any research findings/results which are pre-existing –> not collected at source/original data collected by other researchers
- has been previously published
- derived from multiple sources e.g. meta-analysis consists of quantitative findings from a range of research studies on same topic
secondary data strengths?
- research studies have already been peer-reviewed –> time + money isn’t wasted + researcher can have confidence in data
- provides new insight into existing theories
secondary data limitations?
- secondary data may not directly address the aim/topic of the research –> risk of misinterpretation
- researcher is unaware of the controls used in the original research
meta analysis what is it?
- quantitative research method that takes data from published studies (secondary data)
- data from lots of studies that use same technique + research questions are combined
- statistical analysis is performed on the results of these studies to produce an effect size (acts as the dependent variable) to assess overall trends
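- worked example (illustrative only, not from these notes)- a minimal Python sketch of how effect sizes from several studies might be combined; the study values + the simple sample-size weighting are made up (real meta-analyses typically weight by inverse variance):

```python
# Hypothetical effect sizes (Cohen's d) + sample sizes from three studies
studies = [
    {"d": 0.42, "n": 50},
    {"d": 0.30, "n": 120},
    {"d": 0.55, "n": 80},
]

# Weight each study's effect size by its sample size, then average
# (a simplification of how meta-analyses pool results)
total_n = sum(s["n"] for s in studies)
pooled_d = sum(s["d"] * s["n"] for s in studies) / total_n
print(f"pooled effect size = {pooled_d:.2f}")  # overall trend across studies
```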
meta analysis strengths?
- less chance of biased results as it uses secondary data –> researchers can't influence results; reliability increases as it involves lots of studies
- can generalise findings to population due to large amounts of studies included
meta analysis limitations?
- secondary data= may not be precise etc
- may be difficult + time-consuming to access relevant studies
case studies what is it?
- detailed, in-depth investigations of small group/individual
- allow researchers to examine individuals who have undergone unique/rare experience/are unusual etc
- e.g. someone in a cult/the Wild Boy of Aveyron
- collects qualitative data (interviews, open questions, questionnaires etc) on subjective, personal experience + quantitative data (memory tests, closed questions etc)
- uses triangulation (sometimes involves more than one researcher collecting/analysing data in the same study)
- tend to be longitudinal (person's experience tracked + measured over time)
case studies strengths?
- provide rich, in-depth data= high in explanatory power –> whole individual is considered
- conducting case study on unusual person with rare condition= researcher can form conclusions as to how majority of population function
- gains unique insights which would normally be overlooked with manipulation of only one variable
- can be used in circumstances that wouldn’t be ethical
case studies limitations?
- findings only represent small group/individual= hard to generalise
- if researcher becomes close to person they're studying= they lose objectivity + may become biased in reporting
- subjective + sometimes unscientific= less validity
correlations what are they?
- analysis of relationship between co-variables
- correlation research- variables aren’t manipulated (no IV), instead 2 co-variables are measured + compared to look for a relationship
- correlation uses 2 scores
- case of self-reported data= there are 2 scores per participant
- case of pre-existing data, researcher would go by records
- each ppt. has 2 scores + researcher then calculates to look for a relationship
- scores for correlations= plotted on scattergraphs/scattergrams
- analyse relationship between co-variables –> eyeball scattergraph to see direction of correlation
- calculate correlation coefficient which represents strength of relationship between co-variables, expressed as value between -1 and +1
- perfect positive correlation= +1
- perfect negative correlation= -1
- no relationship= 0
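- worked example (hypothetical scores)- a minimal Python sketch of calculating a correlation coefficient from two scores per ppt, using Pearson's r from the standard library (Python 3.10+):

```python
from statistics import correlation  # Pearson's r, Python 3.10+

# Two scores per participant (the co-variables); values are made up
hours_revised = [2, 4, 6, 8, 10]
test_score = [35, 50, 62, 70, 88]

r = correlation(hours_revised, test_score)
print(f"r = {r:.2f}")  # between -1 and +1; close to +1 = strong positive correlation
```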
types of correlations?
positive correlation (as one co-variable increases the other one increases)
negative correlation (as one co-variable increases the other decreases)
no correlation (no relationship)
correlations strengths?
- data may be easily available for researcher to quickly analyse –> enables researcher to access large amounts of data which would otherwise be impossible to gather –> increases reliability
- correlations allow researchers to make predictions as to relationship between 2 co-variables
correlations limitations?
- extraneous factors connected to co-variables may affect results -> invalid conclusions
- only work well for linear relationships (height + shoe size), not non-linear (hours worked + level of happiness)= limits type of data that can be analysed
nominal data
- used when data put into categories, provides little info or insight
ordinal data
- data placed in some kind of order or scale
interval data
- data measured in fixed units with equal distance between points on the scale –> equal intervals between each value
presentation of data- graphs/tables
- don’t contain raw scores (e.g. individual scores on a test) of the data –> instead they’re converted to descriptive statistics to present overview of results e.g. mean + standard deviation
- clear, straightforward way of summing up results per condition
presentation of data- bar charts
- used when data is divided into categories (discrete data)
- values in set are distinct + separate –> bar charts have gaps between each category
–> x-axis shows category/condition
–> y-axis shows score/percentage
presentation of data- histogram
- display continuous data (values can fall anywhere on a scale e.g. 3.265, vs discrete data which would be 3)
- don't have gaps between bars
- x-axis represents intervals of the continuous variable
- y-axis represents frequency of scores in each interval
presentation of data- scattergrams
used for correlations
presentation of data- line graph
represents continuous data
presentation of data- distributions (normal distribution)
- spread of data around the mean
- symmetrical around the mean
- tails never touch x-axis
- bell curve
- mean, median, mode= all occupy the midpoint of the curve, so they all have the same value
- left of peak= ppl that score less than mean + right= ppl that score more
presentation of data- distributions (skewed distributions)
- behaviours/test scores don't always fit a normal distribution, so the data forms a skewed distribution instead
- one tail is longer than the other
- data is not distributed evenly
positive skew
- most values= found on left so long tail on the right
- e.g. a hard maths test –> most ppl score low marks + only a few score high (right side of tail)
- order from left to right: mode, then median, then mean (mean is pulled towards the long tail)
negative skew
- most values= found to the right= long tail on the left
- e.g. easy maths test –> most ppl score high, few score low
- order from left to right: mean, then median, then mode (mean is pulled towards the long tail)
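- worked example (made-up scores)- a minimal Python sketch showing the mode < median < mean ordering in a positively skewed data set (e.g. a hard maths test):

```python
from statistics import mean, median, mode

# Hypothetical test scores: most ppl score low, a few score high (long right tail)
scores = [2, 3, 3, 3, 4, 4, 5, 6, 9, 15]

print(mode(scores), median(scores), round(mean(scores), 1))
# 3, 4, 5.4 -> mode < median < mean: the mean is pulled towards the long tail
```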
mathematical content
- percentages
- decimals + decimal places
- fractions
- ratios
- significant figures
- standard form
- mathematical symbols
measures of central tendency + dispersion
- central tendency- any measure of avg value in a set of data
- dispersion- calculate spread of scores
- mean, median, mode= measures of central tendency; range, standard deviation= measures of dispersion
standard deviation
–> calculates how much a set of scores deviates from the mean
- provides insight on how clustered/spread out scores are
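- worked example (hypothetical recall scores)- a minimal Python sketch of calculating the mean + standard deviation for one condition:

```python
from statistics import mean, stdev

recall_scores = [12, 15, 14, 10, 19, 13, 16]  # made-up scores for one condition

print(f"mean = {mean(recall_scores):.1f}")  # measure of central tendency
print(f"sd = {stdev(recall_scores):.1f}")   # how spread out scores are around the mean
```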
statistical testing
- used to determine whether hypotheses should be accepted or rejected
–> find out if differences/relationships between variables are significant or just occurred by chance
- critical value –> in the sign test the calculated value must be equal to/lower than the critical value in order to be significant
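- worked example (illustrative)- a minimal Python sketch of the sign test logic described above; the scores are made up + the critical value shown would normally be looked up in a sign test table for the given N and significance level:

```python
# Each ppt has a score in two conditions; count the signs of the differences
condition_a = [12, 15, 9, 14, 10, 11, 16, 13]
condition_b = [10, 13, 9, 11, 12, 8, 14, 9]

diffs = [a - b for a, b in zip(condition_a, condition_b) if a != b]  # ties dropped
pluses = sum(1 for d in diffs if d > 0)
minuses = sum(1 for d in diffs if d < 0)

s = min(pluses, minuses)   # calculated value = count of the less frequent sign
critical_value = 0         # illustrative only; look up for N = len(diffs)
print("significant" if s <= critical_value else "not significant")
```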
peer review- what is it?
- process of assessing scientific work to decide whether it is worthy of publication in an academic journal (journals= collections of studies on similar topics) –> journals are how science gets communicated, so it's important that only good science enters them –> to decide this, the work goes through a peer review
peer review- how it’s done
- once scientist writes up their study it’s sent to 2/3 ppl in same field –> these peers review quality + decide whether it’s good enough to be seen by scientific community e.g. was it valid, were IV + DV operationalised, were there flaws in design, was analysis appropriate, ensure no plagiarism etc –> peers then comment on work + return it, corrections by original scientist must then be made if needed –> reviewers are normally anonymous
peer review- why peer review?
- ensure quality of research is published –> validity of current scientific knowledge is maintained
- universities= rated according to quality of research they produce. better quality= more funding for future projects
- guards research/data from being fraudulent
peer review- evaluation
- anonymity- allows reviewers to be honest BUT in small fields some may just use it to critique rivals
- publication bias- journal editors feel pressure to publish findings that show positive results –> means negative results, which are just as important, aren't published as much
- reviewer may prevent publication of a rival then repeat study + claim it as theirs –> may only publish research that holds different view to their own –> limiting this publishing= slows scientific progression
implications of psychological research and the economy
- economy= system that enables scarce resources to be distributed according to needs + wants –> economic implication= effect that something eg. research finding may have on this
- research in psychology can have ripple effects in society e.g. cause social change, adopt new ideas
How governments spend money= has implications for the economy
- health, education, leisure, law + order, treatment of mental illness (e.g. depression)
- economic implications on small scale= how individual is impacted –> women who take maternity leave= perceived as less reliable by employers= overlooked for promotions
- research shows a happy workforce= more productive –> means schemes to boost staff well-beings may be introduced
- ppl that work= contribute more to economy through taxation
what are components of psychological research?
- a psychological research report is typically between 2000-9000 words
- abstract, introduction, methods, results, discussion, conclusion, limitations
components of psychological research- abstract + introduction
- abstract= approx. 200 words, brief overview of the paper
- introduction= sets the scene, lays out the aims, reviews current literature
components of psychological research- methods + results
- methods- how the research was conducted, who the ppts were, how the data was collected (interviews, case studies etc), how the data was analysed (correlations, standard deviation)
- results- (can be together or apart from discussion), presents results collected in format that’s accessible, identifies patterns/trends/relationships
components of psychological research- discussion + conclusion + limitations
- discussion- critically evaluate/analyse data, discuss reasons/impacts of results with ref to earlier research
- conclusion- summarise findings + propose anything that might happen in the future (further development)
- limitations- outline limitations of research, generalisability, validity, reliability etc
what makes a subject scientific?
- paradigm + paradigm shifts
- role of theory
- falsification
- role of peer review
- role of hypotheses testing
- use of empirical methods
- replication
- generalisation
what makes a subject scientific- paradigm + paradigm shifts
- brings together all assumptions that scientists are prepared to accept about:
1. what they’re studying
2. how they’ll think about it
3. how they will study it
- majority of researchers within the subject must agree with + work within this common paradigm (like a set of universal laws from which theories are constructed)
- paradigm shifts occur when there's too much contradictory evidence to ignore –> many researchers question the accepted paradigm e.g. the shift from Newton's laws to Einstein's relativity, or the shift in the scientific view of smoking (from harmless to harmful)
what makes a subject scientific- role of theory
- theory= explains observable behaviours + events using a set of general principles. It can be used to predict observations
theories role:
1- give purpose + direction to research by organising facts + patterns into a set of general principles
2- theories therefore generate testable hypotheses which offer testable predictions of the facts organised by the theory
what makes a subject scientific- falsification
- Karl Popper- suggested psychologists should hold their theories up to hypothesis testing + the possibility of being proven false –> even if a scientific principle has been supported repeatedly, that doesn't prove it's true
- theories that survive the most attempts to falsify them= the strongest (not because they're definitely true but because they haven't been proved wrong)
- for theory to be scientific it must be open to falsification
what makes a subject scientific- role of peer review
- essential to check quality, relevance, honesty, validity of research
what makes a subject scientific- role of hypothesis testing
- allows researcher to refute or support theory –> done in a controlled way altering only one variable at a time –> degree of support for hypothesis determines degree of confidence in a theory
what makes a subject scientific- the use of empirical methods
- use careful observations + experiments to gather facts + evidence
- variables highly controlled + objectively measured= cause-effect relationships can be found
- e.g. lab experiments; empirical methods use standardised procedures= more replicable
what makes a subject scientific- replication
- repeating experiment (same method) to see if same results can be achieved –> increases confidence in validity of results so they can be built upon, strengthens theory through attempts of falsification –> something discovered that can’t be replicated= not accepted
what makes a subject scientific- generalisation
- sample should be large enough + representative to apply to other situations/wider population
- should be possible if findings are objective + appropriate research methods have been used
sampling methods
- at the start of research, the researcher must identify the target population –> the sample used in research= taken from the target population + presumed to be representative of it (higher generalisability) –> target population= large group of ppl who the researcher is interested in studying, a sub-group of the general population (see diagram on mind map)
opportunity
volunteer
stratified
systematic
random
sampling methods- opportunity
opportunity- researcher obtaining sample from those who are present + available at the time + willing to take part in research
S:
- convenient, quick, easy way of obtaining ppts (who will all be willing)
L:
- can’t generalise findings as only represent a small group of ppl
- researcher bias when choosing ppl to approach
sampling methods- volunteer
- people actively selecting themselves to participate in study (self-selecting)
- may see posters, media, newspapers etc + choose to take part
–> advertising research, advert may ask for specific characteristics e.g. ADHD
S: - quick, easy, cost-effective
- ppl more willing + enthusiastic= better results
L: - volunteer bias –> results hard to generalise (often volunteers have similar personality traits e.g. outgoing)
- volunteers= eager to please= demand characteristics
sampling methods- stratified
- generates a small-scale reproduction of the target population (population is divided + categorised into sub-groups according to key characteristics required by the research, so the sample is representative)
S: - representative of target pop as based on exact proportions= easy to generalise data
- researcher has control over categories, which can be selected according to how relevant they are to the aim
L: - can’t confidently classify every member of public to sub-group (not always perfect)
- can be time-consuming
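- worked example (made-up strata + names)- a minimal Python sketch of selecting a stratified sample in proportion to each sub-group's share of the target population:

```python
import random

# Hypothetical proportions of the target population in each sub-group (stratum)
strata = {"16-25": 40, "26-40": 35, "41+": 25}  # percentages
population = {name: [f"{name}_ppt_{i}" for i in range(100)] for name in strata}

sample_size = 20
sample = []
for name, percent in strata.items():
    k = round(sample_size * percent / 100)        # keep the same proportions
    sample += random.sample(population[name], k)  # randomly select within each stratum

print(len(sample))  # 20 ppts, in proportion to the strata
```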
sampling methods- systematic
- selecting every nth person from a list to make a sample e.g. every 10th person etc
S: - unbiased as researcher has no control= more representative + generalisable, quick, easy, cost-effective
L: - technique= not completely bias free
- no guarantee it’ll be representative
sampling methods- random
- least bias
- all ppts have equal chance of being selected
- uses names out of hat, random number generator etc
S: - eliminates researcher bias as they have no control over who's selected
- findings should be fairly representative + generalisable
L: - time-consuming + impractical
- sample can be non-representative (no guarantee it’s always going to be representative)
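- worked example (hypothetical list of names)- a minimal Python sketch of random sampling (every person has an equal chance) + systematic sampling (every nth person) from the same target population:

```python
import random

target_population = [f"person_{i}" for i in range(1, 101)]  # 100 hypothetical ppl

random_sample = random.sample(target_population, 10)  # names-out-of-a-hat equivalent
systematic_sample = target_population[::10]           # every 10th person on the list

print(random_sample)
print(systematic_sample)
```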
aim
- general statement covering the topic that will be investigated –> a straightforward statement of what the researcher will attempt to find out by conducting the investigation
- identifies purpose of the research
hypotheses
- a testable statement written as a prediction of what the researcher expects to find as a result of experiment
- precise + unambiguous
experimental hypotheses/alternative hypotheses
- includes IV + DV –> both should be operationalised which involves specifics on how each variable is manipulated + measured e.g. will state students recall more info on a Monday morning than a Friday afternoon
what are the types of experimental hypotheses
- Directional/one tailed
predicts direction of the difference in conditions i.e. one condition will out-perform the other
‘women are significantly better drivers than men’
–> one tailed, it states the direction the results are expected to go (one group will do better than the other)
- non-directional/two-tailed
doesn’t predict direction of difference in conditions i.e. it just predicts a difference that will be shown
e.g. ‘anxiety influences performance’
null hypotheses
- begins with an idea that IV will not affect the DV e.g. no difference on amount recalled on a monday morning vs friday afternoon
–> only difference due to extraneous/confounding variables
- hypotheses for correlation investigations are written in the same way as experimental ones BUT instead of using the term 'difference' they use 'relationship/correlation' e.g. 'there will be a relationship between the two co-variables'
variables
independent
dependent
extraneous variables
confounding variables
independent + dependent variables
independent
- only variable changed/manipulated in an experiment
- required to observe the effect it has on DV which is being measured
dependent
- variable that is being measured to determine the outcome of the experiment/assess the effect of the IV
operationalising IV + DV
operationalising IV + DV- refers to how both the IV + DV are defined + measured by the researcher (need to be CLEAR)
- operationalising IV- researcher needs to set up + define each condition so it's clear what difference between conditions is being investigated
- operationalising DV- researcher needs to design a procedure which enables relevant + appropriate data to be collected with no ambiguity involved
extraneous + confounding variables
extraneous
- factors that may affect the DV (e.g. time of day, mood, temp etc)
- usually controlled so they have same effect across all conditions
- removing extraneous variables= research is objective + unbiased
- if extraneous variables aren’t controlled they become confounding variables
- confounding variables can affect DV + negatively impact research findings, so if they occur they need to be acknowledged in ‘discussion’ section in psychological report
single blind + double blind procedures
single blind procedure- ppts aren't told the full aims/details of the procedure until the end of the study, to control for demand characteristics
double blind procedure- neither ppts nor researcher are aware of the aims of the investigation= reduces investigator effects
experimental design
- how participants are allocated to different conditions in an experiment
- random allocation used to decide which condition each ppt is in –> spreads ppt variables across conditions= unbiased results
independent groups
repeated measures
matched pairs
experimental design- independent groups
- participants only experience one condition of the IV
- generates unrelated data (each group generates its own data)
- participants= randomly allocated to each condition of the IV (avoid bias)
S: - demand characteristics unlikely to be a confounding variable
- order effects= less of a problem as only involves one condition –> less likely to become tired= increase validity
L: - more ppts needed for design
- ppt variables (characteristics etc) = affect validity
experimental design- repeated measures
- ppts experience all conditions of the IV
- generates related data (scores between conditions for ppts are compared)
- ppts= own control group
S: - participant variables (sex, culture, mood etc)= not an issue
- fewer ppts needed
L: - demand characteristics can become confounding as ppt more likely to guess the aim of study + act accordingly
- order effects can lower validity due to boredom of tasks
experimental design- matched pairs
- where ppts are matched based on specific characteristics e.g. age, IQ etc
- matching ppts across conditions means no condition is compromised by over-representation of a particular characteristic
- matched pairs= randomly allocated to one condition each
S: - limits individual differences as confounding variable as each ppt performance= compared to someone similar to them –> ppt variables controlled
- demand characteristics reduced (ppt only takes part in 1 condition of the IV)
L: - matching=difficult + time-consuming
- impossible to match ppts across all criteria (lowers reliability)
- if one ppt drops out of research= need to find a replacement
demand characteristics
- when a ppt acts in a way to meet the requirements they assume the researcher wants –> controlled with single-blind procedure
order effects + how are they reduced
- difference in responses of a ppt due to order of presentation of a task
- ppt may become bored, tired etc
- reduce order effects with COUNTERBALANCING- where the order of the different conditions is varied across ppts e.g. 20 ppl do condition A then B and 20 ppl do condition B then A
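- worked example (hypothetical ppts)- a minimal Python sketch of counterbalancing: half the ppts do condition A then B, the other half do B then A:

```python
import random

participants = [f"ppt_{i}" for i in range(1, 41)]  # 40 hypothetical ppts
random.shuffle(participants)                       # random allocation to an order

group_ab = participants[:20]  # condition A first, then B
group_ba = participants[20:]  # condition B first, then A

orders = {p: ("A", "B") for p in group_ab} | {p: ("B", "A") for p in group_ba}
print(orders[participants[0]], orders[participants[20]])  # ('A', 'B') ('B', 'A')
```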
investigator effects
- any effect of investigators behaviour on outcome of study
e.g. design of study/interaction with ppts/order of experimental conditions etc
standardisation + randomisation
standardisation- using exactly same procedures + instructions for all ppts in research
randomisation- use of chance methods to control effects of bias when designing + planning experiment
pilot studies
- small-scale trials that are run to test some/all aspects of an investigation –> basically a ‘dress rehearsal’ of the procedure conducted b4 research to identify any issues which could arise
- enables researcher to identify any problems + fix them to suitable alternatives
- identify if it’s worth time, money + effort to run the experiment
types of experiments
- laboratory
- field
- quasi
- natural
laboratory experiment
- research methods where researcher has high control over environmental factors/what happens in process etc
–> effect of the IV on the DV can be observed + measured
- uses standardised procedures= ensures replicability + reliability
- only the IV changes, everything else constant –> means DV can be measured exactly with quantitative data
S: - easy to establish cause-effect relationships between IV + DV –> high internal validity due to control + objective nature
- use standardised procedures= replicable + reliable
W: - lacks ecological validity due to artificial task –> hard to generalise
- demand characteristics- limits generalisability of findings –> ppt knows they’re in a study= alter behaviour –> lower external validity
field experiment
- research method in a natural setting (not lab)
- researcher has less control due to real-world location –> more extraneous variables
- still involve IV + DV
- collect quantitative data (can collect qualitative as well to comment on quantitative findings etc)
S: - artificiality reduced- high external validity as ppts are more likely to act normally
- ppts= less likely to experience demand characteristics
L: - extraneous variables= more likely to interfere with findings= decrease reliability
- difficult to replicate = low reliability
natural experiment
- based on naturally occurring phenomena (researcher can't manipulate the IV)
- takes place in natural setting + natural changes are observed + measured
- IV= naturally occurring
- may be conducted in real world settings
- may collect qualitative data
S: - allows researcher to investigate topics which would otherwise be unethical to study in a lab e.g. mental illness etc –> more ethical
- high ecological validity–> has mundane realism + no control
L: - causal relationships= hard to determine due to array of variables + no control–> reduce reliability
- may have bias= lowers validity
–> sample bias, confirmation bias, social desirability bias
quasi experiment
- doesn’t manipulate IV, uses naturally occurring phenoma
- researcher has less control over experimental process as can’t randomly allocate ppts to a condition
- collects quantitative data as can be run in same way a lab experiment (just the IV can’t be controlled by researcher)
- diff. to natural experiment as DV can be measured in a lab
S: - little manipulation of IV= results have higher external validity
- experiments follow a true experimental design= can be replicated with ppts that match the original sample demographics (age etc)
L:
- ppts can’t be randomly allocated to condition= ppt variables= hard to determine causality
- lack internal validity as other factors may explain the results
ethical issues
- ethical considerations put in place to protect ppt and researcher
- BPS (British Psychological Society)- publishes a code of conduct that all psychologists must adhere to in order for their research to be approved by a funding body + to maintain professional reputation
ethical issues - informed consent + right to withdraw
informed consent:
- ppts should be given detailed info. about what they will be required to do= they can make informed decision about taking part in research
- 16 + younger need parental consent
- ppl on drugs/alcohol can’t give informed consent
right to withdraw:
- ppts should be made aware that they have the right to withdraw at any time during the research –> even after the research they can withdraw, + the data collected + any personal details are destroyed
ethical issues- deception + protection from harm
deception
- when ppts are informed of a false aim/task, or when the researcher introduces fake elements to the procedure
- deliberately misleading/withholding info
- may be necessary for the validity of the aim –> in this scenario full informed consent can't be given beforehand, so it must be dealt with afterwards (e.g. full debrief + retrospective consent)
protection from harm
- ppts must be protected from harm before, during + after the experiment
- harm (physical, psychological, emotional damage) inflicted on ppt during research
- way to protect ppt= ensure they’ve given full consent + are aware of their right to withdraw
- debrief at end of study
- researcher should provide counselling if required
ethical issues- privacy + confidentiality
- privacy= any invasion of an individual's private space/environment which goes beyond the boundaries of what is acceptable
- keep individuals anonymous so they aren't personally identifiable
- confidentiality- ppts' details/data should not be disclosed/available to anyone outside the research
- confidential data can't be traced back to ppt
- published research must have no info on who the ppts were
- ppts may be referred to as numbers (not names)
- in debrief ppts= reassured on confidentiality
observations (techniques)
- observation= non-experimental method –> involves observing + recording behaviours –> happens in a natural or controlled setting
- observers can only investigate observable behaviours –> can’t infer motive, intention, feeling or thought from an observation –> can only record a behaviour then link to topic of investigation with no assumption of cause-effect
naturalistic observations (techniques)
- one where researcher observes + records behaviours in a natural setting (away from lab) with no manipulation/complete absence of IV
- used when it would be inappropriate to run an experiment to investigate topic
- ppts may be unaware they’re being observed
S: - ppts= observed in natural + unforced daily activities + unaware they’re being observed= high ecological validity
L: - ethical concerns (ppts can't give informed consent/have a right to withdraw as they're unaware of being observed)
- can’t be replicated as researcher can’t control variables –> method may be overly subjective
controlled observations (techniques)
- one where researcher implements a level of control + replicable procedures (+ sometimes an IV)
- procedures of observation= carefully designed + have predetermined behavioural categories to be measured
- ppts know they’re participating in a controlled observation as they are recruited for study + set a specific task
covert observations (techniques)
- ppts= unaware that they’re being observed
- ppts may not be able to see researcher observing them
- more likely to occur in naturalistic observation
S: - high ecological validity as ppts= unaware so act in a natural real way –> investigator effects unlikely
L: - ethical issues (ppts can’t give informed consent)
- problematic to replicate
overt observations (techniques)
- ppts are aware they’re being observed (as they’re informed in advance)
- ppts may not be able to see researcher
- most likely to occur in controlled lab env.
S: - good ethics as inform ppt in advance + can withdraw
L: - demand characteristics more likely + investigator effects= lowers validity
- researcher bias (look for behaviours that support hypotheses etc)
participant observation (techniques)
- researcher (+confederates) join group they are observing (become part of them)
- ppts may be unaware that the researcher is an 'outsider' observing them
S:
- obtain in-depth data as in close proximity to ppts= unlikely to overlook behaviours –> high validity as can access real thoughts, feelings + convos
L: - researcher may have restricted view on what they observe + miss some important behaviours
- if researcher too immersed they may lose objectivity as they may begin to identify with ppl they’re observing= lowers validity
non-participant observation (techniques)
- researcher= separate + apart from group they’re observing
- ppts may be aware or unaware they’re being observed
S: - objective distance kept= less biased/more objective behaviour recordings= higher validity
- demand characteristics + investigator effects= less likely
L: - due to distance from ‘action’= observation may lack key detail + insight= lacks explanatory power
- could misinterpret some behaviours= lowers validity
observational (design)
structured observation
unstructured observation
behavioural categories
sampling methods
structured observation (design)
- used normally in large samples in busy environments
- allows researcher to observe few, specific + clearly defined behaviours rather than trying to make sense of too much info
- emphasis on gathering quantitative data
- researchers conducting structured observations= interested in limited set of behaviours
S: - quantitative data= quick + easy method + can be presented to show + compare trends
- using predetermined categories= researcher less likely to become distracted
L: - quantitative data only focuses on what and not ‘why’
- predetermined categories= behaviours relevant to the study but outside the categories may be ignored
unstructured observation (design)
- used normally in small samples with more intimate environment where interpersonal interaction= focus
- allows observers to observe everything= not restrictive
- more flexible + open ended (don’t use pre-determined behavioural categories)
- gather more qualitative data
S: - gain rich, insightful, detailed data= higher ecological validity
- good to use in a case study
L: - personal + subjective= loses objectivity= unreliable –> researcher may be biased towards certain ppts they get close to, may overlook key details
- analysing data= time-consuming + down to interpretation
behavioural categories
- used to record specific behaviours during observation session
- categories must be observable behaviours + designed with no ambiguity about what's being observed
- categories have to be operationalised to ensure they’re specific + can’t be confused
S: - clearly defined, unambiguous categories= subjectivity removed + researcher can be objective
- can use more than one observer= increase inter-observer reliability
L: - predetermined categories may be limiting
sampling methods
- helps structure + organise observation session
- event sampling- researcher records every time a behaviour from specific category occurs
- time sampling- researcher records behaviour at set time intervals/within a set time frame (e.g. every 60 seconds)
S: - event sampling= specific behaviours won’t be overlooked
- time sampling= allows for flexibility to record behaviours + gives the opportunity to record unexpected behaviours for future reference
- too many specific behaviours occurring at the same time= complex + hard to capture= lower validity
- time sampling misses behaviours outside time frame= lowers validity
self-report techniques
questionnaires
interviews
questionnaires what are they?
- ppts answer a range of questions designed to collect their thoughts, feelings, attitudes, attributes + opinions
- can consist of open (offers freedom of response, generates qualitative data) + closed questions (offers limited options for ppt response, generate quantitative data)
questionnaires what must researcher consider when designing a questionnaire?
- Aim (purpose of it + how it will aid research)
- length (too short= lacks data, too long= ppts will become bored + not answer with care/attention)
- questions- need to be clear + concise, can’t be leading (provide expected answers) + emotive (more neutral), can’t be misunderstood
questions:
- fixed choice- asking ppt to choose from one of the options provided e.g. yes or no
- Likert scale- ppts agree or disagree with a statement (e.g. strongly agree –> strongly disagree)
- rating scale- ppts select a value that corresponds to how strongly they feel about an idea/topic (e.g. scale of 1-10)
- avoid double-barrelled questions
questionnaires strengths + weaknesses
S:
- quick, easy, convenient method of collecting data (from large samples, increase reliability)
- use standardised questions= can be replicated= increase reliability
- closed questions= provide quantitative data= easy to analyse + spot trends as can be presented graphically
- open questions allows for expansion + explanations= explanatory power
- can be completed without researcher present
L:
- tendency for ppl to under report negatives + over-report positive aspects of themselves= questionnaires can lead to ppl succumbing to social desirability bias (demand characteristics)
- too few open questions= limits usefulness –> quantitative data lacks detail + insight
- open questions= hard to analyse due to subjective nature= left to interpretation= lacks objectivity + reliability
- ppts may have response bias/not read questions properly –> only few may be willing to fill out= sample bias –> need to be able to read + write
interviews what is it?
- involves ppt answering range of questions put to them by a researcher –> one-to-one process (over phone, face-to-face, online etc)
- designed to collect thoughts, feelings, attitudes, opinions
types of interviews (structured)
- structured
made up of pre-determined questions asked in a fixed order
- open/closed questions- researcher writes down ppt response/records it
S:
- standardised questions= interview can be replicated= limits researcher effects
- may generate more quantitative data than unstructured= can be statistically analysed= increases reliability
W: - pre-determined questions= restrictive= limits usefulness + richness of data
types of interviews (unstructured + semi-structured)
unstructured
- no prepared questions –> researchers have open mind as how interview will proceed
- researcher writes/records ppt answers
- interview= treated as a convo= ppts have freedom in responses etc –> normally has open questions –> produces qualitative data only
S:
- ecological validity- as ppl have freedom to respond how they want. Ppt has no manipulation from researcher
- flexibility to pursue any interesting topics –> opens up insight
W:
- ppt may go in depth on irrelevant topics to research
- researcher may lose objectivity due to intimate nature –> may feel too close to ppt + identify with them + present them in best positive light
- interviewer bias
- requires skilled interviewer
semi-structured- e.g. job interview- list of questions worked out in advance BUT interviewers are free to ask follow-up questions etc
designing interviews
- questions must be clear + concise + on-topic
- record interview (make notes or audio/video record)
- interviewer mustn’t pass judgement
- presence of interviewer (whether they seem interested or not) may affect amount of info provided (listening skills)
- ppt wants/needs to feel comfortable in env + with interviewer
content analysis what is it?
- method used to analyse qualitative data by turning it into quantitative data, does this by coding
- it uses pre-recorded examples of spoken interactions, written word, media etc e.g. transcripts, text messages, interview audio recordings
- aim= to summarise main ideas presented via structured methods
CODING:
assigns each behaviour to a ‘code’ that can be analysed numerically
1. researcher formulates research questions
2. researcher selects a pre-existing qualitative data source
3. researcher decides on coding categories e.g. terms for certain words
4. researcher works through data using a tally
5. researcher checks reliability via:
test-retest reliability (researcher runs the content analysis again on the same material + compares results)
inter-rater reliability (a second person conducts the content analysis on the same sample + results are compared)
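- worked example (made-up transcript + categories)- a minimal Python sketch of the coding steps above: tallying how often each predetermined category appears in a qualitative source, turning it into quantitative data:

```python
from collections import Counter

transcript = "i felt anxious before the exam but proud and happy afterwards, less anxious now"
coding_categories = {
    "anxiety": ["anxious", "worried", "nervous"],
    "positive": ["proud", "happy", "pleased"],
}

tally = Counter()
for word in transcript.lower().split():
    word = word.strip(",.")
    for category, terms in coding_categories.items():
        if word in terms:
            tally[category] += 1

print(dict(tally))  # e.g. {'anxiety': 2, 'positive': 2} -> quantitative data
```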
content analysis strengths + weaknesses
S:
- analyses qualitative + quantitative data = will have richer meaning which can be easily compared= reliable + valid
L:
- uses material produced outside research process
–> true context may not be known, researcher may be making assumptions= (affects validity)
- converting data from qualitative to quantitative= original data likely to be lost= lowers validity
thematic analysis what is it?
- method used to analyse qualitative data
- inductive method –> themes emerge from data, no hypothesis testing
- allows researchers to analyse + report common themes from a data set
- theme= any feature of the data which recurs throughout
- researcher identifies themes in data –> reviews them to see if they explain the behaviour + answer the research question –> then categorises + defines each theme
thematic analysis strengths + weaknesses
s:
- solely qualitative data= provides insight into ‘why’= ecological validity
- researcher can quote directly from source –> real, subjective
l:
- time-consuming
- researcher prone to confirmation bias (overlooks themes which don’t align with their focus + ideas)