Research methods Flashcards
what is ordinal data?
made-up scale. subjective scale based on judgement, e.g. ratings of attractiveness (as opposed to size of eyes) or ratings of aggression (as opposed to number of punches)
what is interval data?
precise scale with equal units. someone who took 10 secs to complete a task took twice as long as someone who took 5 secs, but someone who scores 10 on an attractiveness scale is not necessarily twice as attractive as someone who scored 5.
what is ratio data?
the same as interval data but with an absolute zero; there can be no negative scores, e.g. number of words recalled or height.
what is a significant difference
a difference that is unlikely to be due to chance
can reject the null hypothesis
what does just by chance mean?
the null hypothesis is true
what level of significance does a difference have to reach to be deemed significant in psychology
5% p<0.05
what level of significance is used in medical trials
1% p<0.01
p means
the probability that the result is due to chance, i.e. the probability of obtaining the observed result if the null hypothesis were true
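a rough sketch of where a p value comes from, using SciPy's independent-samples t-test on invented recall scores (the data and variable names are illustrative only, not from any real study):

```python
# Illustrative only: two invented sets of recall scores, one per condition.
from scipy.stats import ttest_ind

group_a = [12, 15, 14, 10, 13, 16, 11, 14]   # e.g. recall with a memory aid
group_b = [9, 11, 8, 10, 12, 9, 7, 10]       # e.g. recall without it

t_stat, p_value = ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# psychology usually calls the result significant if p < 0.05;
# a medical trial would typically demand the stricter p < 0.01
if p_value < 0.05:
    print("significant at the 5% level - reject the null hypothesis")
else:
    print("not significant - retain the null hypothesis")
```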
content analysis (2 marks)
a technique for analysing qualitative data of various kinds. Data can be placed in categories and counted (quantitative) or analysed by themes (qualitative)
application of content analysis to a Q
psychologists observe/watch/read whatever it is
this enables them to identify potential categories of different types of x which emerge
give examples of what the categories could be
then watch/read/observe again and count the number of examples which fall into each category to provide quantitative data.
how to assess the reliability of a content analysis
inter rater reliability
the two psychologists could carry out the content analysis of the films separately and compare their results to see if they got the same/similar tallies for each category; if they did, then the inter-rater reliability is high
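a minimal sketch of how the two raters' tallies could be compared; the categories and counts are invented, and Pearson's r from SciPy is used as the measure of agreement:

```python
# Hypothetical tallies from two psychologists coding the same films separately.
from scipy.stats import pearsonr

categories = ["verbal aggression", "physical aggression", "prosocial act", "neutral"]
rater_1 = [14, 9, 22, 31]
rater_2 = [13, 10, 20, 33]

r, _ = pearsonr(rater_1, rater_2)
print(f"inter-rater correlation r = {r:.2f}")  # close to +1 means high agreement
```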
justify the use of repeated measures design
to remove the effects of individual differences on the DV that would occur if an independent groups design were used.
to avoid potential difficulties in matching participants
to reduce the number of participants needed for the experiment
experimental extraneous variables
- situational variables: environmental factors that affect ppt behaviour, such as noise, temperature, lighting etc. (should be controlled by standardised procedures so the conditions are the same for every ppt, including standardised instructions)
- participant variables, e.g. mood, intelligence, anxiety, nerves, concentration
these could affect performance and therefore the results of the experiment
- investigator effects: because the investigator knows the aim of the experiment, they may interpret behaviour in a way that is biased and fits with what they were expecting
- demand characteristics: clues in the experiment that suggest to the ppt the purpose of the research
to minimise the effects of this, the environment should be as natural as possible
how to control order effects
counterbalancing: give half the ppts condition A first while the other half get condition B first
this prevents improvement due to practice or poorer performance due to boredom
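a minimal sketch of counterbalancing, assuming eight invented participants and two conditions A and B:

```python
# Hypothetical counterbalancing: half the ppts do A then B, the other half B then A.
participants = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"]
half = len(participants) // 2

orders = {}
for p in participants[:half]:
    orders[p] = ["A", "B"]   # condition A first
for p in participants[half:]:
    orders[p] = ["B", "A"]   # condition B first

print(orders)
```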
how can participant variables be controlled
random allocation to conditions
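a minimal sketch of random allocation to two conditions, again with invented participant labels:

```python
# Hypothetical random allocation: shuffle the ppts, then split into two groups.
import random

participants = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"]
random.shuffle(participants)
group_a = participants[:len(participants) // 2]
group_b = participants[len(participants) // 2:]

print("condition A:", group_a)
print("condition B:", group_b)
```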
external validity
how far findings from an experiment can be generalised to real life situations
e.g. a hazard perception test on a computer doesn't resemble a real life driving situation (no noise, stress etc.), so it is ecologically invalid
example of extraneous variable and how affects experiment
the conversation with the psychologist was not controlled, so the difficulty or the number of questions could have varied; this would influence the DV as more or less attention would be required
possible ethical issues:
- protection from harm
- informed consent: participants should be given full info about the nature of the task before deciding whether or not to participate
- debriefing: at the end of the experiment, give feedback on performance and allow the ppt to ask questions
- freedom to withdraw: ppts should be made aware of their right to withdraw before and during the experiment
- confidentiality: ppts should not be identifiable and should retain anonymity (use of initials or numbers instead of names)
writing a set of standardised instructions
- you will take part in the [named] test, and how long it will take
- what you have to do is
- do you have any questions
they have to be written so they can be read out, and in a formal tone
variables in treatment studies
use of different therapists across conditions
length of time before assessment
the interaction between sex of therapist and patient
individual differences such as age and gender
whether patients were receiving other forms of therapy or medication.
validity
how well a test measures what it says it measures- so is it true and accurate
and can it be generalised beyond the research setting within which it was found
assessing validity e.g a questionnaire used to measure the severity of symptoms
CONCURRENT
take another measure of symptoms from the same ppts, e.g. from a doctor or family member, and compare the two sets of scores. if they agree then the measure has high concurrent validity
CONTENT
ask an expert to assess the questions to see if they are an accurate measure of panic attacks
CONSTRUCT
assess how closely the questions relate to underlying theoretical constructs i.e. how well they relate to panic symptoms
giving fully informed consent- what should participants be told first
ppts should be informed about the trial (use the stem to give details on what they’ll be required to do)
AND
data will be anonymised so that they are not identifiable in the results
make them aware they are free to withdraw themselves or their data from the clinical trial if they want
purpose of the abstract in a psychological report
to provide a short summary of the study, that is sufficient to establish whether it is worth reading the full report.
recording data from interviews
audio recording (less intrusive than filming the patient), so the patient is more likely to agree to take part or be honest
also making written notes during the interview could be off putting for the interviewee
likewise with filming.
how would you analyse qualitative data from the interviews
content analysis: involves identifying important categories from a sub-sample of interview responses (for example references to homework or warmth in the therapist, i.e. things they found helpful in therapy)
they would then work through the written data and count the number of occurrences of each of the categories to produce quantitative data
thematic analysis: this method involves reading and rereading (familiarisation) the written transcripts carefully. coding would involve looking for words which cropped up repeatedly in the transcripts. these could then be combined to reduce the number of codes to three or four themes. the data would stay in qualitative form and not be reduced to numbers.
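a rough sketch of the counting step of a content analysis; the transcripts and category words (homework, warmth) are invented for illustration:

```python
# Hypothetical content analysis: count occurrences of each category word
# across invented interview transcripts to turn qualitative data into tallies.
transcripts = [
    "the homework really helped and the therapist was warm",
    "I liked the homework tasks, and the warmth made me open up",
]
categories = ["homework", "warmth"]

tallies = {c: 0 for c in categories}
for text in transcripts:
    for c in categories:
        tallies[c] += text.lower().count(c)

print(tallies)  # quantitative data produced from the qualitative transcripts
```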
sampling techniques
opportunity - quicker and easier than a random sample; uses people who are in the surroundings, e.g. students in a school
random sample - participant selection is less biased because every student has an equal chance of being selected (see the sketch after this list)
volunteer sampling- participants would be interested in the task and likely to take it seriously
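a minimal sketch of drawing a random sample, assuming an invented register of 200 students:

```python
# Hypothetical random sample: every student on the register has an equal
# chance of being selected.
import random

school_register = [f"student_{i}" for i in range(1, 201)]  # invented sampling frame
sample = random.sample(school_register, 20)                # draw 20 names at random
print(sample)
```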
importance of pilot studies (general)
make the answer specific to the stem, but in general check:
time appropriate
length of study- not too tiring
make sure instructions are clear (if stem is complex)
order of things, e.g. if stimuli are shown to ppts (the order should probably be randomised)
number of stimuli appropriate- sufficient to give reliable data
reliability
refers to the consistency of findings/data or participant responses.
external reliability is whether the test is consistent over time (whether a measure varies from one use to another).
and internal is whether results across items within a test are consistent
assessing the reliability of a measure e.g. a questionnaire
split half (internal reliability): scores on half of the items are correlated with scores on the other half, and the higher the correlation the more reliable the questionnaire
test-re test (external reliability)
the questionnaire is given to the same participants again on a different occasion; the scores on the two tests are correlated and the higher the correlation the more reliable the questionnaire
so the similarity of scores between the tests
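a rough sketch of both checks, using invented questionnaire scores and Pearson's r from SciPy:

```python
# Hypothetical reliability checks on a questionnaire.
from scipy.stats import pearsonr

# split-half (internal): each ppt's total on one half of the items vs the other half
half_1 = [10, 14, 9, 12, 15, 8, 11]
half_2 = [11, 13, 9, 12, 14, 7, 12]
r_split, _ = pearsonr(half_1, half_2)

# test-retest (external): the same ppts' totals on two separate occasions
time_1 = [21, 27, 18, 24, 29, 15, 23]
time_2 = [22, 26, 19, 23, 28, 16, 24]
r_retest, _ = pearsonr(time_1, time_2)

print(f"split-half r = {r_split:.2f}, test-retest r = {r_retest:.2f}")
# the closer each r is to +1, the more reliable the questionnaire
```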
reasons for doing correlational research
- when it’s not practical to do an experiment as you cannot manipulate the IV e.g individual differences
- when it’s not ethical to manipulate the IV e.g stressful life events
- preliminary research: establish that variables are related before conducting an experiment
intra-rater rel
similarity of ratings made by the same observer
inter-rater rel
similarity of rating made by different observers
experimental realism
extent to which participants are engaged in and take seriously the experimental task (maybe more likely with volunteer samples)
ecological validity
extent to which the results of a study generalise to other contexts
population validity
extent to which the results of a study generalise to other people who weren’t part of the sample
standardisation
the process of keeping procedures and conditions the same for every participant in a study
randomisation
alternative to standardisation for controlling extraneous variables in a study
correlation
a statistical measure of the relationship between two sets of scores, often used to test the reliability of a measure (the higher the correlation, the higher the reliability of the test)
face validity
a basic form of validity in which a measure is scrutinised to determine whether it appears to measure what it is supposed to measure; for instance, does a test of anxiety look like it measures anxiety
concurrent validity
extent to which a score is similar to a score on another test that is known to be valid
(an existing similar measure)
predictive validity
extent to which a score predicts future behaviour
types of external validity
ecological (other settings/situations)
population (other people)
temporal (generalising to other historical times/eras)
types of internal validity
operationalisation (does the study validly measure/manipulate the variables?)
control - are extraneous variables controlled?
experimental validity/realism- are participants engaged realistically in the experimental/research task
types of validity of measurement
face validity
concurrent
predictive
threats to validity
demand characteristics: participants who know they are in a study may guess the aim and therefore behave differently to how they would if they didn't know
social desirability bias - ppts may behave/respond in socially acceptable ways, either to appear a better person to the researcher or to themselves, so they do not respond truthfully
order effects- what ppts experience early in a study may affect their behaviour later (e.g learning from practice)
Hawthorne effect - participants who know they are in a study may try harder than they would in everyday life
what do you do if the results are not statistically significant
retain the null hypothesis
type 1 error
when results are accepted as significant when in fact they are not; the alternative hypothesis is accepted when it is false (a false positive)
i.e. the researcher rejects the null hypothesis (or accepts the research/alternative hypothesis) when in fact the effect is due to chance; often referred to as the error of optimists
what decreases the chance of a type 1 error
if the results are significant at p<0.01 since that is a stringent level of significance
the more stringent the level the less chance of a type 1 error
the likelihood of a psychologist making a type 1 error is then 1 in 100 or less
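a simulation sketch (invented data, SciPy t-tests) of why the stricter level cuts type 1 errors: when the null hypothesis really is true, about 5% of tests come out "significant" at p < 0.05 but only about 1% at p < 0.01:

```python
# Simulate many experiments where the null hypothesis is true (both groups
# come from the same population), and count how often each level is crossed.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_experiments = 2000
false_pos_05 = false_pos_01 = 0

for _ in range(n_experiments):
    a = rng.normal(0, 1, 20)
    b = rng.normal(0, 1, 20)   # no real difference, so any effect is chance
    _, p = ttest_ind(a, b)
    false_pos_05 += p < 0.05
    false_pos_01 += p < 0.01

print("type 1 rate at p<0.05:", false_pos_05 / n_experiments)
print("type 1 rate at p<0.01:", false_pos_01 / n_experiments)
```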
type 2 error
false negative
when results are accepted as non-significant when in fact they are significant (a real effect is missed)
the null hypothesis is retained when it is false
what decreases the chance of a type 2 error
increase the sample size
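a simulation sketch (invented effect size and data) of why a bigger sample reduces type 2 errors: when a genuine effect exists, larger samples detect it far more often:

```python
# Simulate experiments with a real (moderate) effect present and see how often
# a t-test detects it at p < 0.05 for different sample sizes.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

def detection_rate(n, n_experiments=2000, effect=0.5):
    hits = 0
    for _ in range(n_experiments):
        a = rng.normal(0, 1, n)        # control group
        b = rng.normal(effect, 1, n)   # group with a genuine effect
        _, p = ttest_ind(a, b)
        hits += p < 0.05
    return hits / n_experiments

for n in (10, 30, 100):
    rate = detection_rate(n)
    print(f"n = {n:>3}: effect detected in {rate:.0%} of experiments "
          f"(type 2 rate roughly {1 - rate:.0%})")
```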