Research methods Flashcards
define operationalisation of variables
- clearly defining variables and how they are intended to be measured.
define extraneous variable (EV).
- a variable that is not the IV but may affect the DV, eg light, sound
Define confounding variable
- an EV that cannot be controlled and that varies systematically with the IV.
Define and explain investigator effects
- any conscious/unconscious behaviour from the researcher that may affect answers from pps
what are the 3 types of experimental design?
- independent groups
- matched pairs
- repeated measures
Describe what an independent groups design is.
- each group only takes part in one condition
- the mean DV score is compared between these different groups.
Give pros and cons of using an independent groups design .
- +: more time efficient, and less chance of order effects and demand characteristics
- -: pp variables are not controlled between the groups, and more pps are needed.
define a repeated measures design
each pp takes part in every condition, and their results across the conditions are compared.
give a pro and con of repeated measures
+: pp variables are controlled, so higher validity
-: order effects may come into play, and demand characteristics are more likely as pps experience both conditions.
define a matched pairs design
- pps are partnered with someone with a similar, relevant variable, so pps only take part in one condition
give a pro and con of matched pairs
- pre tests and other matching process may be time consuming
- order effects and demand characteristics are reduced.
define a quasi experiment
- they have an IV that is based on an already existing factor that can’t be changed- eg gender.
give the strengths and limitations of quasi experiments
- +: have some control over the procedure, which increases validity.
- -: pps cannot be randomly allocated, so cannot determine for sure if the IV is what caused the change in the DV.
define a natural experiment
- an experiment where the researcher uses an IV that is already in existence, but an environmental one that MAY be manipulated, eg the preference of coffee over tea.
Give the pros and cons of a natural experiment.
- high ecological validity: since the IV occurs naturally, findings can be applied to real life well.
- lack of control: there may not be control over EVs and confounding variables, leading to low internal validity.
Compare and contrast a NATURAL and QUASI experiment.
- both involve choosing already occurring variables.
- however, in a natural experiment the IV is an environmental variable (eg the preference of coffee over tea), whereas in a quasi experiment the IV is an innate characteristic that cannot be manipulated whatsoever (eg age).
define systematic sampling.
- every nth person from the population is chosen, using a sampling frame.
define stratified sampling
- population divided into strata, with pps from each stratum being selected using random sampling until the sample size is reached.
give pros and cons of systematic sampling
+: little bias because after interval has been decided, researcher has little to no control over who is selected.
-: time consuming and sampling frame needed.
give pros and cons of stratified sampling
- +: representative of the population, with each stratum present in proportion, so easier to generalise.
- -: time consuming to divide the population into strata, and some individuals may not fit neatly into one stratum, making them hard to group.
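The two sampling methods above can be sketched in Python. This is a minimal illustration, not a standard procedure from any textbook; the population, strata names and sizes are all hypothetical.

```python
import random

def systematic_sample(frame, n):
    """Systematic sampling: take every nth person from the sampling frame,
    starting from a randomly chosen point within the first interval."""
    start = random.randrange(n)
    return frame[start::n]

def stratified_sample(strata, sample_size):
    """Stratified sampling: randomly sample from each stratum in proportion
    to that stratum's share of the population."""
    total = sum(len(group) for group in strata.values())
    sample = []
    for group in strata.values():
        k = round(sample_size * len(group) / total)
        sample.extend(random.sample(group, k))
    return sample

population = [f"pp{i}" for i in range(1, 101)]   # hypothetical sampling frame
strata = {"year12": population[:60], "year13": population[60:]}

print(systematic_sample(population, 10))  # 10 pps, evenly spaced
print(stratified_sample(strata, 10))      # 6 from year12, 4 from year13
```

Note how the systematic sample leaves the researcher no choice over individuals once the interval is fixed, while the stratified sample keeps the 60/40 year-group proportions of the population.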
What are the BPS ethical guidelines? (4)
- protection from physical and mental harm
- informed consent
- no deception
- confidentiality and privacy.
How do we get around informed consent?(3)
- presumptive consent: a similar group of people, not in the study, are asked whether the procedure sounds acceptable
- Prior general consent: consenting to multiple studies, even those that involve deception.
- retrospective consent: asked for after the study is done, during the debrief.
Dealing with deception and protection from harm? (3)
- provide counselling if study was potentially traumatic.
- fully debrief pps
- pps should be made aware that they have the right to withhold their data.
How do we deal with confidentiality? (2)
- use initials when listing pp-specific data
- during debriefing, pps should be made aware that their data will be kept completely private and not shared with others unless their prior consent is given.
Define pilot studies
- a smaller scale investigation carried out before the actual investigation.
What are the aims of a pilot study? (3)
- to check that experimental design is appropriate.
- to check that any questionnaires are appropriate.
- to make changes to any technique if needed
Define a naturalistic observation
Pros and cons?
- observing behaviour in a natural environment, eg a classroom.
- +: higher external validity, can apply to real life because its in a natural environment.
- -: lack of control over EVs, so less likely to produce internally valid results.
Define a controlled observation
pros and cons?
- observing in an environment where some control is used, eg a lab.
- +: high internal validity due to control over EVs, and easier to replicate.
- -: low ecological validity, due to artificial stimuli and environment.
Define an OVERT observation
pros and cons?
when pps KNOW they’re being observed.
+: practical - can be used when the researcher would not fit into the group as a pp.
-: demand characteristics are likely, as pps know they are being watched
-: some ethical concerns remain, eg invasion of privacy
Define COVERT observation
pros and cons?
when pps DON'T KNOW they're being observed.
- +: less chance of demand characteristics, as pps are unaware they're being watched
- -: ETHICS - deception and invasion of privacy, as pps cannot give informed consent.
Define pp observation
pros and cons?
when the researcher becomes a part of the sample that they are observing (eg poses as a student)
- +: increased external validity because less chance of demand characteristics, gives real life insights.
-: researcher might lose objectivity, therefore conclusions might be biased.
What is a criticism of ALL types of observation?
how can it be minimised?
- observer bias: observer’s interpretation may affect results of study.
- can be reduced by involving several observers.
define behavioural categories
when a target behaviour is broken down into observable and measurable components
define event sampling
- data that is recorded every time an event occurs
define time sampling
- data that is collected at a certain time interval, eg every 5 mins
Define the following terms:
structured interviews
unstructured interviews
semi structured interviews
- fixed list of questions asked in a fixed order
- no fixed list of questions- pp’s response determines next question
- certain questions are fixed, but allows room for flexibility in changing questions.
Define acquiescence bias
- the tendency to agree ("yes-say") with questions without properly understanding them.
how do you design good interviews/questionnaires?
- avoiding overuse of jargon
- avoiding leading questions and emotive language
- avoiding double barrelled questions (two questions in one)
What are features of a sign test? What situations should it only be used in?
- repeated measure design
- looking at a difference in data
- data must be nominal (categorical data)
What is a type 1 error
when a null hypothesis is rejected when it shouldn’t have been.
so false positive
What is a type 2 error?
- when a null hypothesis is accepted when it shouldn’t have been
- false negative
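One way to see what a Type 1 error rate means is a quick simulation (my own sketch, not from the flashcards): if the null hypothesis really is true, a test using a 5% significance level should still wrongly reject it in roughly 5% of studies.

```python
import random

random.seed(42)

# Simulate 2000 studies in which H0 is true: each of 20 pps "improves"
# or not purely by chance (p = 0.5). Using the two-tailed sign test
# critical value for n = 20 at p < 0.05 (reject H0 when the smaller
# sign count is <= 5), count how often H0 is wrongly rejected -
# i.e. how often a Type 1 error (false positive) occurs.
REJECT_AT = 5           # critical value for n = 20, two-tailed, p < 0.05
studies = 2000
type1_errors = 0
for _ in range(studies):
    improved = sum(random.random() < 0.5 for _ in range(20))
    if min(improved, 20 - improved) <= REJECT_AT:
        type1_errors += 1

rate = type1_errors / studies
print(f"false positive rate: {rate:.3f}")
```

The observed rate should come out a little under 0.05, because the exact two-tailed probability of a sign count <= 5 out of 20 is about 0.041.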
what 3 questions would you ask yourself when you determine what statistical test you should use?
- Diff or Correlation
- Nominal, ordinal or interval data.
- Experimental design (independent or related data?)
in what case would you use a sign test?
- Diff, Nominal, related (matched pairs or repeated measures)
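An exact sign test is simple enough to write with just the standard library. This is a minimal sketch; the before/after scores below are made up for illustration.

```python
from math import comb

def sign_test(before, after):
    """Exact two-tailed sign test for a difference between two related conditions.

    Each pp contributes a + or - sign depending on whether their score rose
    or fell (ties are dropped); the smaller sign count is the test statistic,
    and the p-value comes from the binomial distribution with p = 0.5.
    """
    diffs = [b - a for a, b in zip(before, after) if b != a]
    n = len(diffs)                       # ties excluded
    s = min(sum(d > 0 for d in diffs),   # smaller sign count = statistic S
            sum(d < 0 for d in diffs))
    # P(X <= s) under H0, doubled for a two-tailed test
    p = 2 * sum(comb(n, k) for k in range(s + 1)) / 2**n
    return s, min(p, 1.0)

# hypothetical mood ratings before and after an intervention, one pair per pp
before = [5, 3, 6, 2, 4, 5, 7, 3, 4, 6]
after  = [7, 4, 6, 5, 6, 8, 9, 3, 6, 8]
s, p = sign_test(before, after)
print(f"S = {s}, p = {p:.4f}")
```

Here two pps tie (and are dropped), and all eight remaining differences are positive, so S = 0 and the result is significant at p < 0.05.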
In what case could you use a Wilcoxon test?
- diff, ordinal, related data
In what case would you use related t test?
- Diff, interval, related
In what case would you use a chi squared test?
- independent, nominal, diff
In what case would you use a Mann-Whitney test?
- independent, ordinal, diff
In what case would you use the unrelated t test?
- independent, interval, diff
in what case would you use a chi squared test when looking for the correlation?
- nominal, correlation, related data
In what case would you use a Spearman’s rho?
- related, correlation, Ordinal
In what case would you use Pearson’s r?
- Interval, correlation, related data.
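The three questions (difference or correlation, level of measurement, design) fully determine the test, so the whole decision grid from the cards above fits in one small lookup function. A sketch; the string labels are my own.

```python
def choose_test(purpose, level, design=None):
    """Pick the statistical test from the three questions.

    purpose: "difference" or "correlation"
    level:   "nominal", "ordinal" or "interval"
    design:  "related" or "unrelated" (only needed for tests of difference)
    """
    if purpose == "correlation":
        return {"nominal": "chi-squared",
                "ordinal": "Spearman's rho",
                "interval": "Pearson's r"}[level]
    related = {"nominal": "sign test",
               "ordinal": "Wilcoxon",
               "interval": "related t-test"}
    unrelated = {"nominal": "chi-squared",
                 "ordinal": "Mann-Whitney",
                 "interval": "unrelated t-test"}
    return (related if design == "related" else unrelated)[level]

print(choose_test("difference", "ordinal", "related"))   # Wilcoxon
print(choose_test("correlation", "interval"))            # Pearson's r
```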
Nominal data
- categorical data; discrete.
Ordinal data
- data that can be ordered, eg scores on a test, on a scale, not equal intervals.
Interval data
- data with equal intervals, usually with units.
give the format of a psychological research report.
- abstract
- intro
- method
- results
- discussion
- refs
Define reliability
- the extent to which a test produces consistent results.
Define internal reliability
- extent to which something is consistent within itself
Define external reliability
- extent to which a test measures consistently over time.
Define inter-observer reliability
- consistency between findings/observations of multiple observers in a study.
How do you improve observational reliability?
- use multiple observers
- train observers to know what to look for
- Have clearly defined criteria.
Give a method for assessing internal reliability of questionnaires
- split-half method: the same pps do both halves of the questionnaire
- assess the correlation between the answers to the 2 halves using Pearson's r
- if the results are consistent (a strong positive correlation), the questionnaire has good internal reliability
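Pearson's r for the split-half method can be computed by hand. A minimal sketch; the half-scores below are hypothetical, with one pair of scores per pp.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical totals for the two halves of a questionnaire, one pair per pp
half_a = [12, 15, 9, 18, 11, 14]
half_b = [13, 14, 10, 17, 10, 15]
r = pearson_r(half_a, half_b)
print(f"split-half r = {r:.2f}")  # close to +1 suggests good internal reliability
```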
Give a method to assess external reliability using questionnaires
- test-retest method: give the pps the same test on 2 separate occasions; a strong correlation between the two sets of scores indicates good external reliability.
Give 3 ways to improve reliability in self-report methods. (interviews)
- use precise questions (eg closed qs)
- use the same interviewer, or train the interviewer.
- Pilot questionnaire beforehand to check clarity of questions
Give 3 methods to improve reliability for controlled research
- use same method for pps
- use same conditions
- when replicated, researchers need to use the same method each time.
Define validity
- the extent to which something measures what it claims to.
Define internal validity
- whether the results are due to the manipulation of the IV and not due to confounding variables.
Name the 3 types of external validity.
- Ecological
- temporal
- population
Define ecological validity
- whether findings from a controlled experiment can be generalised to other settings and everyday life.
Define temporal validity
- whether findings from a controlled experiment can be generalised beyond the period of time of the study.
Define population validity
- whether results can be generalised to other groups.
Define face validity
- used with self-report methods: a quick "eyeballing" of the test to see if it appears to measure what it's supposed to measure
Define concurrent validity
- when a non-established test is compared against an established test of the same thing.
- if the non-established test produces similar results to the established one, we can say the test has concurrent validity.
Give 2 ways of improving validity when using questionnaires.
- assure responses are anonymous
- revise the questions if the questionnaire is found to have low concurrent validity.
give 3 ways of improving validity in experimental research.
- use a control group to serve as a comparison - shows whether manipulation of the IV is what changes the DV.
- standardise procedures
- Double blind studies to reduce DCs.
give 1 way of improving validity when using observations
- make sure categories aren’t too broad or overlapping.
Give 2 ways of improving validity when using qualitative methods.
- use direct quotes and be coherent in reporting.
- triangulation can be used - eg using different sources: personal diaries, family etc.
Describe what is meant by content analysis.
- researcher making their observations based on indirect methods, such as books, films, diaries etc.
give the 3 steps to go through content analysis.
- repeatedly go through the data - eg listen to recordings several times, and read over reports again.
- identify themes in the data
- tally the themes
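The tallying step turns qualitative data into quantitative data, and can be sketched with a simple counter. The themes and codings below are hypothetical.

```python
from collections import Counter

# hypothetical content analysis: each entry is the theme a piece of data
# (eg a diary extract or a few seconds of recording) was coded as
coded = ["aggression", "play", "aggression", "sharing", "play",
         "aggression", "sharing", "play", "play"]

tallies = Counter(coded)  # tally how often each theme appears
for theme, count in tallies.most_common():
    print(theme, count)
```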
Define thematic analysis
- an analytical qualitative method for organising, describing and interpreting data.
What is a case study
An in-depth analysis of a single individual or group over time - often idiographic and individualistic
Describe one disadvantage of a case study
- findings cannot be generalised to other individuals, as case studies are unique to the person or group studied.
give 2 disadvantages, other than generalisability when using case studies
- researcher may develop bias toward subject as they get to know them very well.
- Case studies are extremely specific to one individual or group, and thus cannot be replicated - so they lack reliability
Describe a method to carry out content analysis.
- Create a checklist of categories
- count/tally frequency of behaviour.
- Analyse the data using quantitative methods - eg representing tallies over time of study etc.
What is the advantage of thematic analysis over content analysis?
- themes are identified after reviewing the content, rather than being decided in advance, which may reduce observer bias.
Describe what is meant by investigator effects.
- when a researcher’s potential biases can impact results of the study.
Give ways to prevent investigator effects.
- inter-rater reliability - use another researcher and compare these results.
- use a double blind method
State and explain one advantage of using observation
- allows observation of real behaviour rather than spoken responses - people may lie or misremember when reporting their own behaviour.
What are the stages of scientific theory.
- observation
- constructing a hypothesis
- collecting experimental data
- Proposing a theory that may explain the results.
Outline what is involved in self report technique
- when pps report their own thoughts/feelings
- This can be through questionnaires or surveys
- can also involve open/closed questions
define Paradigm
- a shared set of assumptions and methods, derived from a generally accepted scientific theory.
Describe how criteria may help refine observational techniques (4)
- may help provide quantitative data - so easier to analyse
- may help researcher have clear goal as to what they are looking for
- may improve reliability
- can allow tallying into pre-arranged groupings
How can you deal with socially sensitive research?
- be aware of implications of research if published
- make sure pps are aware that they have the right to withdraw
- ensure confidentiality of all pps involved
- assess the research question carefully - could it be biased against or harmful to any group?