Chapter 4 Flashcards
what is external validity
whether or not the findings generalise beyond the study (to other people, settings, and times)
what is measurement validity
how much a measurement tool actually measures what it is supposed to
what's the difference between a systematic review and a meta analysis
a systematic review looks solely at what the research says; a meta analysis goes beyond that and statistically combines the data from that research
the extent to which the measure overtly appears to be measuring the construct of interest
face validity
what are the two psychometric properties of a measurement tool
reliability and validity
how does a researcher develop an idea
either from something they're interested in or by identifying a gap in the literature
are the results of a magnitude that they represent a meaningful difference in participants' quality of life and/or daily functioning?
clinical significance
examines the association between variables
correlational research designs
the extent to which the results of a study are accurate and valid based on the type of statistical procedures used in the research
statistical conclusion validity
what are the 7 main threats to internal validity
- history
- maturation
- testing
- instrumentation
- statistical regression
- selection bias
- attrition
what are the two types of validity
internal validity and external validity
what is test-retest reliability
the stability over time of scores on a measure
what are the two main types of sampling
probability and non-probability
the degree to which elements of the measure are homogeneous and measure the same thing
internal consistency
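the card above doesn't name a statistic, but internal consistency is most often quantified with Cronbach's alpha (an assumption here, since the chapter may use a different index):

```latex
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^2_{Y_i}}{\sigma^2_X}\right)
```

where $k$ is the number of items, $\sigma^2_{Y_i}$ is the variance of item $i$, and $\sigma^2_X$ is the variance of the total score; values closer to 1 indicate more homogeneous items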
what are the five main threats to external validity
- sample characteristics
- stimulus characteristics
- reactivity of research arrangements
- reactivity of assessment
- timing of measurement
what are the six types of research designs
- case study
- single subject design
- correlational designs
- quasi experimental designs
- experimental
- meta analysis
what is a moderator
a variable that changes the strength or direction of the relationship between two other variables, depending on the level of the moderator
rater evaluations
info about participant is gathered from others who know the individual
what is a mediator
something that completely explains the relationship between two variables (the mechanism through which one influences the other)
is there researcher manipulation in correlational research
no, all participants experience the same study conditions
what is reliability
the consistency of a measurement tool -> how much it yields the same results if used again
what is inter-rater reliability
different people conducting the measurement getting the same results
the consistency of scores on a measure across different raters or observers
inter-rater reliability
a variable that influences the strength of relation between a predictor variable and a criterion variable
moderator
a comprehensive statistical procedure that involves testing all components of a theoretical model
structural equation modelling
what are the two main ways to do research synthesis
- systematic review
- meta analysis
what is probability sampling
random selection: each member of the population has a known (typically equal) chance of being included
an experiment in which research participants are randomly assigned to one of two or more treatment conditions
randomised controlled trials
Follows steps of systematic review and then extracts and analyses the data
meta analysis
who originally conceptualised the problems we classify as threatening internal, external, and statistical conclusion validity
donald campbell
what is internal validity
being confident that your results are due to the variable you manipulated or studied, rather than confounds
in a single subjects design, what does the A signify
the baseline level of behaviour with no intervention
what are the types of a single subject design
an AB design, an ABAB design, and a multiple baseline design
recommendations are accepted because the person delivering them is seen as an expert
eminence based practice
what's the difference between a quasi experimental design and an experimental design
quasi experiments do involve researcher manipulation; however, there is no random assignment to conditions (generally because there is an innate characteristic that cannot be randomised)
what are the two types of measurement validity
- content validity
- face validity
who developed the statistical work used in tools that help researchers determine the optimal number of participants (statistical power analysis)
jacob cohen
when is a case study used
generally with new phenomena
what are the six types of measurement tools
- self report
- rating by someone who knows the individual
- interviews
- performance on psychological test
- projective measures
- archival data (school reports etc)
Basing decisions on replicated research findings wherever possible
Evidence based practice
what's the difference between statistical significance and clinical significance
statistical significance is concerned with whether the difference is real; clinical significance is concerned with whether the difference actually matters
in a single subjects design, what does the B signify
the level of behaviour WITH intervention
what is the reliable change index
determines whether a participant's pre-treatment to post-treatment change is greater than would be expected as a result of measurement error alone
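the RCI itself is a short formula (the Jacobson and Truax version; the symbols here are the conventional ones, not necessarily the chapter's notation):

```latex
RCI = \frac{x_{\text{post}} - x_{\text{pre}}}{S_{\text{diff}}}, \qquad
S_{\text{diff}} = \sqrt{2\,S_E^2}, \qquad
S_E = s_1\sqrt{1 - r_{xx}}
```

where $s_1$ is the pre-treatment standard deviation and $r_{xx}$ is the measure's reliability; an RCI beyond $\pm 1.96$ suggests change greater than measurement error would produce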
what's a single subject design
systematic repeated assessment of behaviour over time where the subject is their own control group
using a systematic and explicit set of methods to identify, select, and critically appraise research studies
systematic review
what is the difference between a mediator and a moderator
a moderator changes the relationship, a mediator explains the relationship
the extent to which the measure fully and accurately represents all elements of the construct being assessed
content validity
what did neil jacobson do
he and colleagues developed the reliable change index
Involves an intensive, anecdotal observation and analysis of an individual
case study design
what's the difference between a single subject design and a case study
case studies are more descriptive, whereas a single subject design involves manipulation of some kind
what's the difference between content validity and face validity
face validity is just how the measure looks on the surface; content validity looks deeper at whether the items cover all elements of the construct
what is eminence based practice
based on tradition and authority
what are the three types of reliability
- internal consistency
- test-retest reliability
- inter-rater reliability
indicates that the observed difference is likely a real one and not just obtained by chance
statistical significance
a variable that explains the mechanism by which a predictor variable influences a criterion variable
mediator
what is non-probability sampling
convenience or volunteer sampling etc., where there isn't an equal chance for everyone to be included
participant completes a questionnaire describing some aspect of themselves
self report measures
a standardised metric that allows the results of research studies to be combined and analysed
effect size
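a widely used effect size metric is Cohen's d (assumed here as the example, since Jacob Cohen appears elsewhere in these cards; the chapter may use other metrics such as correlations or odds ratios):

```latex
d = \frac{\bar{X}_1 - \bar{X}_2}{s_{\text{pooled}}}, \qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
```

Cohen's conventional benchmarks: roughly 0.2 is small, 0.5 is medium, and 0.8 is large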
Both random assignment to conditions AND experimental manipulation
experimental design