Chapter 5 - Identifying Good Measurements Flashcards
define conceptual definition
researchers’ definition of a variable at the theoretical level, also known as a construct
define operational definition
researcher's specific decision about how to measure or manipulate the conceptual variable
what is the process for operationalizing conceptual variables?
- state a definition of the construct
- create an operational definition
what are the 3 common types of measures?
self-report, observational and physiological
define self-report measures
a method of measuring a variable in which people answer questions about themselves in a questionnaire or interview. responses to the items are often averaged to create a score for each person.
define observational measures
a method of measuring a variable by recording observable behaviours or physical traces of behaviours
define physiological measures
a method of measuring a variable by recording biological data
what are the 2 types of operational variables and how is each measured?
categorical: a variable whose levels are categories (ex. male and female)
quantitative: a variable whose values can be recorded as meaningful numbers
what are the 3 kinds of quantitative variables?
ordinal, interval and ratio scale
define ordinal scale
a scale whose levels represent a ranked order and in which distances between levels are not equal
ex. order of when people finish a race
define interval scale
a scale that has no true zero and in which the numerals represent equal intervals between levels
ex. shoe size (no one can have a 0 shoe size)
define ratio scale
a scale in which the numerals have equal intervals and the value of zero means none of the variable being measured
ex. counting how many items people get right on a test; zero means getting nothing right
define reliability
consistency of the results of a measure
define validity
appropriateness of a conclusion or decision
what are the 3 types of reliability?
test-retest, interrater, and internal
define test-retest reliability
consistency in results each time a measure is used. applies to all measures but is most relevant when the construct is expected to be stable over time.
ex. scores on an IQ test taken on two occasions should be relatively consistent.
define interrater reliability
the degree to which two or more coders or observers give consistent ratings of a set of targets. most relevant for observational measures.
ex. counting how many times a child smiles in an hour. you and the other observer get the same number
define internal reliability
in a measure with several items, the consistency of the pattern of answers, no matter how each question is phrased.
ex. on a 5-item scale where each item is worded differently but meant to measure the same thing, the responses should correlate
define correlation coefficient
r, a value ranging from -1.0 to +1.0 that indicates the strength and direction of an association between two variables.
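As a quick illustration (a minimal sketch with made-up data, not from the chapter), Pearson's r can be computed by hand:

```python
# Pearson correlation coefficient r for two made-up quantitative variables.
# r ranges from -1.0 to +1.0: the sign gives the direction of the
# association and the magnitude gives its strength.
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linear data give r = +1.0
print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 2))  # prints: 1.0
```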
define average inter-item correlation
AIC, is a measure of internal reliability for a set of items. it is the mean of all possible correlations computed between each item and the others
Cronbach's alpha combines the average inter-item correlation with the number of items on the scale (low alpha = poor internal reliability, high alpha = good internal reliability)
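A sketch of how AIC and alpha relate (this uses the standardized-alpha formula, alpha = N*AIC / (1 + (N-1)*AIC), and invented pairwise correlations):

```python
# Average inter-item correlation (AIC): the mean of all pairwise
# correlations between items on a scale. Standardized Cronbach's alpha
# is then computed from the AIC and the number of items N.

def average_inter_item_corr(pairwise_corrs):
    # pairwise_corrs: dict mapping item pairs to their correlation
    # (hypothetical values here, not real data)
    return sum(pairwise_corrs.values()) / len(pairwise_corrs)

def cronbach_alpha(n_items, aic):
    # standardized alpha formula
    return (n_items * aic) / (1 + (n_items - 1) * aic)

# Hypothetical 3-item scale: all possible pairwise item correlations
pairwise = {("item1", "item2"): 0.5,
            ("item1", "item3"): 0.6,
            ("item2", "item3"): 0.4}
aic = average_inter_item_corr(pairwise)
alpha = cronbach_alpha(3, aic)
print(round(aic, 2), round(alpha, 2))  # prints: 0.5 0.75
```

More items and higher inter-item correlations both push alpha up, which is why alpha is read alongside the AIC rather than on its own.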
define face validity
the extent to which a measure is subjectively considered a plausible operationalization of a conceptual variable in question.
it should appear to measure the construct to someone familiar with the content
define content validity
the extent to which a measure captures all parts of a defined construct
define criterion validity
an empirical test of validity that evaluates a measure's association with a relevant behavioural outcome (the criterion). it compares the measure against an outcome it should predict.
ex. having good high school grades should correlate with high grades at university
define known-groups paradigm
a method for establishing criterion validity in which a researcher tests two or more groups known to differ on the variable of interest, to ensure the measure distinguishes between them.
define convergent validity
empirical test of the extent to which a self-report measure correlates with other measures of a similar construct. what measure should correlate with it?
define discriminant validity
an empirical test of the extent to which a self-report measure does not correlate with measures of dissimilar constructs. what measures should not correlate with it?
when is observational better than self-report and vice-versa?
Observational is better when participants are in situations where they might be influenced to change their responses for the researcher (e.g., to appear socially desirable).
Self-report is better for internal attributes that cannot be observed, like self-esteem.
what are the two subjective validities?
face and content
what are the 3 objective validities?
criterion, convergent, discriminant
is reliability necessary?
yes, it is necessary but not sufficient for validity