WEEK 3 - L6.2 RD - Measurement Flashcards
purpose of measurement
to link theories to the real world
connecting the two levels: turning abstract concepts into indicator variables
concept is..
a construct derived by mutual agreement from mental images that summarizes collections of related experiences/observations.
=> defining democracy should cover multiple attributes/indicators
constructs and indicators- what does “avoid reification of concepts” mean?
concepts such as democracy are still abstract even if we identify some common attributes; don’t treat them as if they were concrete, directly observable things
conceptual goodness according to Gerring (8 criteria)
**1. familiarity:** established usage
**2. resonance:** does it have a cognitive click?
**3. parsimony:** as simple as possible
**4. coherence:** internal consistency (are the different attributes related to one another?)
**5. differentiation:** external differentiation/boundedness
**6. depth:** ability to bundle many different characteristics/attributes
**7. theoretical utility:** is it useful for theory building?
**8. field utility:** can it capture new entities and allow reconceptualization without losing meaning and becoming empty?
measurement examples (2)
- simple concepts (unidimensional): age
- complex concepts (multidimensional): corruption, democracy, prejudice
what is corruption as a concept?
the misuse of position for private gain
what are the indicators of corruption?
perception of corruption by business people, experience of corruption by the public, prosecution of public officials
what’s the observation of corruption?
expert survey, public survey, court records
what are the steps to conceptualize multidimensional concepts?
- concept
- indicator
- observation
measures should be.. (2)
1. unbiased: free of systematic errors = accuracy = validity
2. efficient: low variance and random errors = precision = reliability
unbiased means…
high validity
efficient means…
high reliability
being high on reliability means…
the dots are very close to each other; we always get the same results (we can rely on them), but are the results valid? we don’t know
being high on validity means…
although the dots are not close to each other, they are centered around the actual reality. we can’t rely on them being similar, but we can say that the results are valid.
different types of validity and reliability
**1. research design**
- internal validity: causal inferences
- external validity: generalizability

**2. measurement**
- measurement reliability: consistency and precision
- measurement validity: accuracy
“measurement reliability and validity are necessary requirements for internal and external validity”
correct!
measurement error
observed measurement = true score + error score
error score = random error + systematic error
types of measurement error, which is always wrong?
- random error (decreases reliability)
- systematic error (decreases validity; biased in the same direction every time, so always wrong)
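a minimal simulation sketch of the two error types (the true score of 50, the error sizes, and the seed below are all hypothetical): random error spreads the observed scores around their mean and hurts reliability, while systematic error shifts all of them in the same direction and hurts validity.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
true_score = 50.0          # hypothetical true score
n = 1_000                  # number of repeated measurements

# observed measurement = true score + random error + systematic error
random_error = rng.normal(loc=0.0, scale=5.0, size=n)  # mean zero: cancels out on average
systematic_error = 3.0                                 # constant bias: always wrong
observed = true_score + random_error + systematic_error

print(observed.mean())  # ~53.0: shifted away from 50 by the systematic error (low validity)
print(observed.std())   # ~5.0: spread caused by the random error (low reliability)
```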
how can measurement reliability be assessed? (3)
types of measurement reliability:
- test-retest reliability (stability over time; see the sketch below)
- internal consistency (across different indicators)
- intercoder/rater reliability (consistency across researchers)
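test-retest reliability is commonly quantified as the correlation between two administrations of the same measure; a minimal sketch, with hypothetical scores:

```python
import numpy as np

# hypothetical data: the same six people measured at two points in time
time1 = np.array([10, 12, 9, 15, 11, 13])
time2 = np.array([11, 12, 10, 14, 11, 14])

# test-retest reliability: correlation between the two administrations
r = np.corrcoef(time1, time2)[0, 1]
print(round(r, 2))  # close to 1.0 = stable over time
```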
how can overall reliability be reported?
for internal consistency: a reliability coefficient
ranges from .00 to 1.00
.7 is the minimum, .8 is good
what are examples of measurement reliability coefficients?
Cronbach’s alpha and the split-half method
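a minimal sketch of Cronbach’s alpha, computed as k/(k−1) × (1 − Σ item variances / variance of the summed scale); the survey scores below are made up:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each single item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# hypothetical data: 5 respondents answering 3 related survey items
scores = np.array([[4, 5, 4],
                   [2, 2, 3],
                   [5, 4, 5],
                   [3, 3, 2],
                   [4, 4, 4]])
print(round(cronbach_alpha(scores), 2))  # ~.9 here; .7 is the minimum, .8 is good
```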
three ways of assessing measurement validity
more difficult because we don’t know the true score
**1. face validity:** judgement-based (not reported)
**2. content validity** (theory-based): does it cover all dimensions? deals with what we intend to measure
**3. criterion/construct validity:**
a. concurrent: the criterion is available now - there’s an external criterion that correlates with the measure
b. predictive: the criterion will be available in the future (e.g., SAT scores predicting later college grades) - there’s an external criterion that correlates with the measure
c. convergent: correlates with existing measures of the same concept
d. discriminant: doesn’t overlap with theoretically different concepts - does your concept distinguish itself?
what are the types of criterion validity?
- concurrent/predictive
- convergent
- discriminant
when the score is always the same, but it’s wrong
reliability: yes
validity: none
when there’s some random error, but it’s close to the real score
reliability: yes with random error
validity: yes
is reliability necessary for validity?
yes reliability is necessary for validity
3 types of triangulation
- data triangulation- using different sources or measures
- investigator triangulation- using different researchers
- methodological triangulation- using different methods
3 possible outcomes of triangulation
- convergence - same
- inconsistency- some differences
- contradiction- opposite
why should we use triangulation?
it helps to identify problems- a learning process: finding inconsistencies allows us to develop better measures
principles of data quality
- transparency
- replication- new data collection
- verification- re-analysis of existing data
(…) is necessary for (…)
reliability is necessary for validity
what’s face validity
the indicator intuitively seems like a good measure of the concept
what’s content validity
the extent to which the indicator covers the full range of the concept, covering each of its different aspects.
what’s construct validity
examines how well the measure conforms to our theoretical expectations
looks at to what extent it’s associated with theoretically relevant factors
has 3 types: concurrent/predictive, convergent, discriminant
random error decreases…
reliability
which error type is always wrong?
systematic error