Exam 2 pt 2 Flashcards
translating concepts of interest in a study into something observable & measurable
Operationalizing a Concept
a method to measure (quantify) a concept or variable(s) of interest
instrument
(survey) via mail, in-person, email, phone
questionnaire
systematic coding, specified time
observation
researcher is present
interview scale
existing records
document review
instruments used in phenomenology
in depth interviews
diaries
artwork
instruments used in grounded theory
observations
open-ended questions (interview)
individuals or small groups
instruments used in ethnography
observation, open-ended questions (interviews), diagrams, documents, photographs
instruments used in historical
open-ended questions (interviews), documents, photographs, artifacts
examines causes of certain effects
experimental/clinical trial
examines why certain effects occur
quasi-experimental
examines relationships among variables
correlational
answers "what" questions; describes frequency of occurrence
exploratory/descriptive
unsystematic error arising from a transient state in the subject, the context of the study, or the administration of the instrument (affects reliability)
random error
error that consistently alters the measurement of true responses in some way (affects validity)
systematic error
Measurement Error theoretical formula
observed score = true score + error
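In standard classical test theory notation (a conventional rendering, not taken from these cards), the same relationship is written as
\[ X = T + E \]
where X is the observed score, T is the true score, and E is the error component (random plus systematic).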
what is reliability concerned with
the repeatability or consistency with which an instrument measures the concept of interest
three types of reliability
stability, equivalence, internal consistency
(test-retest or intra-rater reliability)
stability
(inter-rater or alternate-form reliability)
equivalence
homogeneity: split-half reliability, item-to-total correlation, Kuder-Richardson coefficient, or Cronbach's coefficient alpha
internal consistency
how is reliability reported
as a reliability coefficient
how are reliability coefficients (r) expressed
are expressed as positive correlation coefficients ranging from 0 to +1
how do you interpret r
r = 0.80 or higher is acceptable for existing instruments
r = 0.70 or higher is acceptable for a newly developed instrument
is a necessary, though insufficient, condition for validity
high reliability
Instruments may have good reliability even if they
are not valid (they do not measure what they are supposed to measure)
is concerned with the consistency of repeated measures under the same circumstances
stability
what is stability also called
test-retest reliability or intra-rater reliability
what is equivalence focused on comparing
two versions of the same instrument (alternate-form reliability) or two observers measuring the same event (inter-rater reliability); consistency among raters using discrete categories
a reliability coefficient of what indicates good agreement
0.75 or greater
addresses the correlation of items within a single instrument (homogeneity); all items are measuring the same concept
internal consistency
internal consistency is also called what
split-half reliability, item-to-total correlation, Kuder-Richardson coefficient, or Cronbach's coefficient alpha
divide items on the instrument in half to make two versions and use the Spearman-Brown formula to compare the halves (see the formula below)
split half reliability
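A conventional statement of the Spearman-Brown correction (standard formula, assumed here rather than quoted from the cards) estimates whole-test reliability from the correlation between the two halves:
\[ r_{\text{full}} = \frac{2\,r_{\text{half}}}{1 + r_{\text{half}}} \]
For example, if the two halves correlate at 0.70, the estimated reliability of the full instrument is 2(0.70)/(1 + 0.70), which is approximately 0.82.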
each item on instrument is correlated with the total score; strong items have high correlations with the total score
item to total correlation
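As a minimal notational sketch (standard notation, not from the cards), for item score x_i and instrument total X:
\[ r_{i\text{-total}} = \operatorname{corr}(x_i, X) \]
In practice a corrected version is often reported, correlating x_i with the total of the remaining items so the item does not inflate its own correlation.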
divide an instrument with dichotomous (yes/no) responses in half every possible way
Kuder-Richardson or KR-20
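A standard form of the KR-20 formula (conventional notation, assumed rather than quoted) for an instrument of k dichotomous items:
\[ KR\text{-}20 = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma_X^2}\right) \]
where p_i is the proportion answering item i "yes" (or correctly), q_i = 1 - p_i, and \sigma_X^2 is the variance of the total scores.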
divide the instrument with interval or ratio responses in half every possible way
Cronbach’s alpha (coefficient alpha)
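The standard formula for Cronbach's coefficient alpha (conventional notation, not from the cards) for k items:
\[ \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_X^2}\right) \]
where \sigma_i^2 is the variance of item i and \sigma_X^2 is the variance of the total score; with dichotomous items this reduces to KR-20.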
the extent to which an instrument accurately measures what it is supposed to measure
validity
types of validity
Content or face validity
Construct validity
Criterion-related validity
how well the instrument's scores compare to those of an established instrument measured at the same time (concurrent validity) or predict future events, behaviors, or outcomes (predictive validity)
criterion related validity
how representative is the instrument of the concept(s); determined by a panel of experts
content validity
extent to which the instrument performs as theoretically expected; how well the instrument measures the concept
construct validity
ways to determine construct validity
Hypothesis testing
Convergent, divergent, or multitrait-multimethod testing
Known group(s) testing
Factor analysis
use two or more instruments to measure the same concept (example: two pain scales)
convergent testing
compare scores from two or more instruments that measure opposite concepts (example: depression vs. happiness)
divergent testing
Administer instrument to subjects known to be high or low on the characteristic being measured
known groups (construct validity)
Use complex statistical analysis to identify multiple dimensions of a concept
factor analysis (construct validity)
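A hedged sketch of the model behind this card (the standard common factor model, not quoted from the cards): each observed item score is modeled as a weighted combination of m underlying factors plus unique error,
\[ x_i = \lambda_{i1} f_1 + \lambda_{i2} f_2 + \cdots + \lambda_{im} f_m + \epsilon_i \]
Items with large loadings \lambda on the same factor are interpreted as measuring the same dimension of the concept.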
what are the data collection methods
interviews
observation
text
sampling
what is crucial for qualitative data collection
trustworthiness