week 7 - rigour and research Flashcards
rigour in research
- more rigour = more generalizable/transferable
- rigour is the quality, believability and trustworthiness of the study findings
- can be determined by validity and reliability of measurement tools
validity
measures what is intended to be measured
reliability
provides consistent results
components of observed scores
true variance (data) and error variance (random or systematic errors)
reliability coefficient
- expresses the relationship between the error variance, true variance and observed score
- can range from 0-1 (0= no relationship, 1= perfect relationship)
what is a desirable reliability coefficient?
> 0.70 indicates consistency and dependability of the measurement tool
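putting the two cards above together in symbols (a minimal sketch; the notation is assumed, not given on the cards): the observed score splits into true score plus error, and the reliability coefficient is the share of observed variance that is true variance

```latex
X = T + E \qquad \text{(observed score = true score + error)}
r_{xx} = \frac{\sigma^2_T}{\sigma^2_X} = \frac{\sigma^2_T}{\sigma^2_T + \sigma^2_E}
```

so a coefficient near 1 means the observed scores are almost all true variance, and a coefficient near 0 means they are mostly error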
correlation
- statistical technique used to measure and describe a relationship between two variables
- the correlation coefficient (r) describes the strength and direction of the relationship
what is a desirable correlation coefficient?
> (+/-) 0.7
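a minimal sketch of computing and reading r in Python with scipy (the data and variable names are made up for illustration):

```python
from scipy.stats import pearsonr

# hypothetical paired scores for two variables measured on the same participants
anxiety = [12, 18, 25, 30, 22, 15, 28, 20]
pain = [3, 5, 7, 9, 6, 4, 8, 5]

r, p_value = pearsonr(anxiety, pain)
print(f"r = {r:.2f}")  # sign gives the direction, magnitude gives the strength
print("strong relationship" if abs(r) > 0.7 else "weak-to-moderate relationship")
```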
components of reliability
stability, consistency and equivalence
stability
an instrument is stable when repeated administration of the instrument yields the same results
how is stability measured?
test-retest reliability
consistency
all items of a tool measure the same concept or characteristic
how is consistency measured?
Cronbach’s alpha
equivalence
consistency or agreement among observers using the same measurement tool or agreement among alternative forms of a tool
how is equivalence measured?
interrater reliability
test-retest reliability
- the stability of the scores of an instrument when it is administered more than once to the same participants under similar conditions
- scores from the repeated administrations are compared
- the comparison is expressed as a correlation coefficient (Pearson’s r)
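a minimal sketch of test-retest reliability, assuming the same instrument is scored twice for the same participants (hypothetical data, Python with scipy):

```python
from scipy.stats import pearsonr

# hypothetical total scores for the same participants at two time points
time_1 = [34, 40, 28, 45, 38, 31, 42]
time_2 = [36, 39, 27, 46, 37, 33, 41]

r, _ = pearsonr(time_1, time_2)
print(f"test-retest reliability (Pearson's r) = {r:.2f}")  # > 0.70 suggests a stable instrument
```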
Cronbach’s alpha
- most commonly used test of internal consistency
- each item in the scale is simultaneously compared with the others, and a total score is used to analyze the data
- many tools used to measure psychosocial variables and attitudes have a Likert-type scale response format, which is suitable for testing internal consistency
what is a desirable Cronbach’s alpha?
0.8-0.9 or higher
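a minimal sketch of Cronbach’s alpha computed from its usual variance formula (the function name and data are illustrative; rows are participants, columns are Likert-type items):

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2D array, rows = participants, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of participants' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# hypothetical responses: 5 participants x 4 Likert items
responses = [[4, 5, 4, 5],
             [2, 3, 2, 2],
             [5, 5, 4, 4],
             [3, 3, 3, 4],
             [1, 2, 2, 1]]
print(f"alpha = {cronbach_alpha(responses):.2f}")  # 0.8-0.9 or higher is desirable
```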
interrater reliability
- consistency of observations between two or more observers with the same tool
- used with direct measurements of observed behaviour
- important for minimizing bias
e.g. Cohen’s kappa
Cohen’s kappa
- a coefficient of agreement between two raters
- a Cohen’s kappa of 0.8 or better is generally assumed to indicate good interrater reliability
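a minimal sketch of Cohen’s kappa for two observers coding the same behaviour, assuming Python with scikit-learn (the raters and codes are made up):

```python
from sklearn.metrics import cohen_kappa_score

# hypothetical codes assigned by two observers to the same 10 behaviour episodes
rater_1 = ["agitated", "calm", "calm", "agitated", "calm",
           "calm", "agitated", "calm", "agitated", "calm"]
rater_2 = ["agitated", "calm", "calm", "calm", "calm",
           "calm", "agitated", "calm", "agitated", "calm"]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa = {kappa:.2f}")  # 0.8 or better is generally taken as good agreement
```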
components of validity
content validity, criterion validity and construct validity
content validity
refers to the degree to which the content of the measure represents the universe of content or the domain of a given behaviour (how well it covers all aspects of a concept)
e.g. face validity
face validity
- panel of judges indicate their level of agreement with the scope of the items and the extent to which the items reflect the concept under consideration
- relevancy and accuracy
criterion-related validity
consists of concurrent and predictive validity
concurrent validity
- the degree of correlation of two measures of the same construct administered at the same time
- a high correlation coefficient indicates agreement between the two measures
predictive validity
- the degree of correlation between the measure of the concept and a future measure of the same concept
- because of the passage of time, the correlation coefficients are likely to be lower for predictive validity studies
construct validity
how well a tool measures the concept it was intended to measure
relationship between reliability and validity
- an instrument that is not reliable cannot be valid
- an instrument can be reliable without being valid
internal validity
degree to which changes in the dependent variable (DV) can be attributed to changes in the independent variable (IV)
external validity
degree to which the study results can be generalized to samples other than the one being investigated
statistics
- a branch of mathematics focused on organization, analysis and interpretation of a group of numbers
- can provide a probability that you can rule out the play of chance
descriptive statistics
- summarize and describe a group of numbers from a research study
- allow researchers to arrange data to visually display meaning and aid understanding of the characteristics of the variables under study
- reduce data to manageable proportions
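a minimal sketch of descriptive statistics reducing raw data to a few summary numbers, assuming Python with numpy (hypothetical sample):

```python
import numpy as np
from collections import Counter

ages = np.array([21, 25, 30, 22, 27, 35, 29, 24])  # ratio-level data
sex = ["F", "M", "F", "F", "M", "F", "M", "F"]      # nominal data

print(f"mean = {ages.mean():.1f}, SD = {ages.std(ddof=1):.1f}, range = {ages.min()}-{ages.max()}")
print("frequencies:", Counter(sex))  # frequency counts summarize nominal variables
```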
inferential statistics
- used to make inferences and draw conclusions about a larger group (population) based on study data
- allow researchers to test hypotheses
- help determine the probability that a conclusion is true rather than due to chance
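a minimal sketch of one inferential test, an independent-samples t-test in Python with scipy; the choice of test and the data are illustrative, not from the cards:

```python
from scipy.stats import ttest_ind

# hypothetical outcome scores for an intervention group and a control group
intervention = [72, 75, 78, 70, 74, 77, 73]
control = [65, 68, 66, 70, 64, 67, 69]

t_stat, p_value = ttest_ind(intervention, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# a small p (e.g. < 0.05) suggests the group difference is unlikely to be due to chance alone
```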
levels of measurement
nominal, ordinal, interval and ratio
nominal level of measurement (examples)
sex, relationship status, religion, hair colour, political preferences (names)
ordinal level of measurement (examples)
education, size, satisfaction level, SES (ranking between categories)
interval level of measurement (examples)
body temperature (equal intervals between values; zero is arbitrary, not a true zero)
ratio level of measurement (examples)
weight, height, blood levels, heart rate, age (true zero, can say 10 is twice as old as 5)
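a minimal sketch of how the level of measurement limits which summaries make sense (a common rule of thumb, not taken from the cards):

```python
# rule-of-thumb mapping: each level allows everything below it plus something more
appropriate_summaries = {
    "nominal":  ["mode", "frequency counts"],                # named categories only
    "ordinal":  ["mode", "median", "percentiles"],           # ranked categories
    "interval": ["mode", "median", "mean", "SD"],            # equal intervals, no true zero
    "ratio":    ["mode", "median", "mean", "SD", "ratios"],  # true zero, ratios meaningful
}
for level, summaries in appropriate_summaries.items():
    print(f"{level:>8}: {', '.join(summaries)}")
```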