SOWO 914 Flashcards
Describe the DeVellis process for scale development (there are approx. 8 steps).
- Determine what to measure based on theory, literature, specificity of construct
- Generate initial item pool and determine response format
- Solicit expert feedback
- Consider inclusion of validation items
- Collect data from a large sample
- Evaluate the items (e.g., item-scale correlations, frequencies, variances, means, reliability; see the sketch after this list)
- Conduct exploratory factor analysis
- Conduct confirmatory factor analysis
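A minimal sketch of the item-evaluation step in Python, assuming responses to a draft scale sit in a pandas DataFrame with one column per item (the data values and column names below are made-up placeholders, not from any real study):

```python
import pandas as pd

# Placeholder: responses to a 4-item draft scale from the development sample
items = pd.DataFrame({
    "q1": [3, 4, 2, 5, 4, 3, 2, 4],
    "q2": [3, 5, 2, 4, 4, 3, 1, 4],
    "q3": [2, 4, 3, 5, 3, 3, 2, 5],
    "q4": [4, 4, 2, 4, 5, 2, 2, 4],
})

# Item-level descriptives: frequencies, means, variances
print(items.describe())

# Corrected item-total correlations: each item vs. the sum of the remaining items
total = items.sum(axis=1)
item_total = {col: items[col].corr(total - items[col]) for col in items.columns}
print(item_total)

# Cronbach's alpha from item variances and the total-score variance
k = items.shape[1]
alpha = (k / (k - 1)) * (1 - items.var(ddof=1).sum() / total.var(ddof=1))
print(f"Cronbach's alpha: {alpha:.2f}")
```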
What are the advantages of the DeVellis approach to scale development (over Bowen & Guo)?
- More widely known, cited, and used
- More cost- and time-effective
Describe the Bowen & Guo process of scale development (there are approx. 10 steps).
- Conduct a literature review of the construct
- Interview intended respondents about the construct
- Generate initial item pool and determine response format
- Solicit expert feedback (i.e., scholars and community experts)
- Pilot test the measure with a small sample
- Evaluate pilot test data (e.g., item-scale correlations, frequencies, variances, means, reliability)
- Collect data from a large sample
- Conduct exploratory factor analysis (see the sketch after this list)
- Conduct confirmatory factor analysis
- Conduct additional validity testing
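A rough sketch of the exploratory factor analysis step using the Python factor_analyzer package; the file name, item columns, and the two-factor choice are illustrative assumptions, not part of either author's procedure:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Placeholder: item responses from the large development sample
items = pd.read_csv("scale_responses.csv")  # hypothetical file

# Check whether the correlation matrix is suitable for factoring
chi_sq, p_value = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_overall = calculate_kmo(items)
print(f"Bartlett p = {p_value:.3f}, overall KMO = {kmo_overall:.2f}")

# Extract factors with an oblique rotation; the number of factors is a
# judgment call informed by eigenvalues, scree plot, and theory
efa = FactorAnalyzer(n_factors=2, rotation="oblimin")
efa.fit(items)
print("Eigenvalues:", efa.get_eigenvalues()[0])
print("Rotated loadings:\n", efa.loadings_)
```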
What are the advantages of the Bowen & Guo approach to scale development?
- Mixed methods
- Community-engaged
- More pilot testing and qualitative work on the front end to ensure construct, cultural, and respondent-related validity
What are the disadvantages of the Bowen & Guo approach to scale development?
- Costly and time-consuming
- Less well known to experts and funders; may not be seen as a priority by some
Internal validity
The degree to which causality can be established based on the design of a study
External validity
The degree to which study results can be generalized to other people and settings
Content validity (definition)
Quality of the evidence related to content and internal structure
Criterion-related validity (definition) and some examples
Quality of evidence based on relations b/w scores on the measure and scores on other measures of related/unrelated constructs
- Concurrent/convergent
- Divergent/discriminant
- Discriminative/known-groups
- Predictive
Respondent-related validity (definition)
Quality of evidence related to how respondents understand, interpret, and respond to instructions/items
Practice-related validity (definition)
AKA consequential validity
Quality of evidence related to relevance for social benefit, utility in practice settings, intended/unintended consequences
How would you gather and evaluate evidence for content validity?
- Evidence: items capture the construct's major dimensions, and dimensions are supported by statistical relationships
- Lit review
- Review extant measures
- Expert review
- EFA & CFA
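A hedged sketch of how CFA evidence for internal structure might be obtained with the factor_analyzer package; the file, item names, and two-factor structure are hypothetical, and dedicated SEM software is more commonly used when full fit statistics are needed:

```python
import pandas as pd
from factor_analyzer import ConfirmatoryFactorAnalyzer, ModelSpecificationParser

# Placeholder: confirmation-sample responses to an 8-item, two-factor scale
items = pd.read_csv("confirmation_sample.csv")  # hypothetical file

# Hypothesized structure: which items load on which factor
model_dict = {
    "factor1": ["q1", "q2", "q3", "q4"],
    "factor2": ["q5", "q6", "q7", "q8"],
}
spec = ModelSpecificationParser.parse_model_specification_from_dict(items, model_dict)

# Fit the confirmatory model and inspect the estimated loadings
cfa = ConfirmatoryFactorAnalyzer(spec, disp=False)
cfa.fit(items.values)
print(cfa.loadings_)
```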
What are some examples of criterion-related validity?
- Concurrent/convergent: Measure scores associated with scores on other measures of the same/related construct
- Divergent/discriminant: Measure scores NOT associated with scores on measures of a different/unrelated construct
- Discriminative/known-groups: Measure scores can discriminate b/w groups known to differ on the construct
- Predictive: Measure scores predict scores on a later outcome measure
How would you gather and assess evidence related to criterion-related validity?
- Evidence: statistical association (or lack of association) b/w measure scores and scores on measures of the same, related, or unrelated constructs, or differences b/w known groups
- Data collection with a large sample involving the measure and comparison measures, followed by statistical analysis
- Data collection from a large sample involving the measure, followed by analysis of scores by respondent group membership
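A compact sketch of these analyses, assuming a hypothetical validation dataset with columns for the new measure, comparison measures, a known-group indicator, and a later outcome (all file and column names are placeholders):

```python
import pandas as pd
from scipy import stats

# Placeholder: one row per respondent in the validation sample
df = pd.read_csv("validation_sample.csv")  # hypothetical file and columns

# Concurrent/convergent: association with a measure of the same/related construct
r_conv, p_conv = stats.pearsonr(df["new_scale"], df["established_scale"])

# Divergent/discriminant: (lack of) association with an unrelated construct
r_disc, p_disc = stats.pearsonr(df["new_scale"], df["unrelated_scale"])

# Discriminative/known-groups: do groups known to differ on the construct differ in scores?
group_a = df.loc[df["known_group"] == 1, "new_scale"]
group_b = df.loc[df["known_group"] == 0, "new_scale"]
t_stat, p_group = stats.ttest_ind(group_a, group_b)

# Predictive: do scale scores predict a later outcome?
slope, intercept, r_pred, p_pred, se = stats.linregress(df["new_scale"], df["later_outcome"])

print(f"Convergent r={r_conv:.2f}, discriminant r={r_disc:.2f}, "
      f"known-groups t={t_stat:.2f}, predictive r={r_pred:.2f}")
```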
How would you gather and assess evidence of respondent-related validity?
- Evidence: content or format appear appropriate AND/OR statistical qualities apply across groups
- Cognitive interview data
- Literature review on the developmental capabilities of intended respondents
- Readability testing (see the sketch after this list)
- Expert review specifically regarding appropriateness for intended respondents
- Translation and back-translation
- Multiple group factor analysis
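A sketch of two of these checks: readability testing via the textstat package, and a rough per-group comparison of factor structure. The item text, file, and grouping column are hypothetical, and a formal invariance test would use multiple-group CFA rather than separate EFAs:

```python
import pandas as pd
import textstat
from factor_analyzer import FactorAnalyzer

# Readability testing: check that item wording suits intended respondents
item_text = "During the past week, I felt hopeful about the future."  # example item
print("Flesch-Kincaid grade level:", textstat.flesch_kincaid_grade(item_text))
print("Flesch reading ease:", textstat.flesch_reading_ease(item_text))

# Rough check of whether the factor structure looks similar across groups:
# fit the same model separately in each group and compare loading patterns
df = pd.read_csv("validation_sample.csv")  # hypothetical file
item_cols = [c for c in df.columns if c.startswith("q")]
for group, subset in df.groupby("language_group"):  # hypothetical grouping column
    efa = FactorAnalyzer(n_factors=2, rotation="oblimin")
    efa.fit(subset[item_cols])
    print(f"Loadings for group {group}:\n", efa.loadings_)
```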