Unit 11: Evaluating Measurements and Data Quality Flashcards
measurement
involves rules for assigning numeric values to qualities of objects to designate the quantity of the attribute
> attributes are not constant
advantages of measurement
> removes guesswork when gathering information
> yields reasonably precise information
> measurement is a language of communication
> less vague than words
errors of measurement
obtained score = true score ± error
true score is
the true value that would be obtained if it were possible to have an infallible measure of the target attribute
error of measurement
the difference between the true and obtained scores; it reflects extraneous factors that affect the measurement and distort the results
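To make the error model concrete, here is a minimal Python sketch; the true score, error, and numbers are all invented purely for illustration:

```python
# Measurement-error model: obtained score = true score +/- error.
# All numbers here are hypothetical, purely for illustration.
true_score = 75      # the value an infallible measure of the attribute would yield
error = -3           # distortion from extraneous factors (e.g., fatigue, noise)

obtained_score = true_score + error
print(obtained_score)                 # 72
print(obtained_score - true_score)    # -3, the error of measurement
```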
main factors contributing to measurement error
> situational contaminants > response-set biases > transitory personal factors > administration variations > item sampling
reliability
is the consistency with which an instrument measures the attribute
stability
an important factor in reliability; the extent to which the same scores are obtained when the instrument is used with the same people on separate occasions
test-retest reliability
an assessment of stability: researchers administer the same measure to a sample of people on two occasions and then compare the scores
reliability coefficient
a numeric index (r) of a measure’s reliability that objectively indicates how small the differences are; coefficients range from 0.00 to 1.00, and the higher the value, the more reliable the instrument
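As an illustration, this Python sketch computes a test-retest reliability coefficient as the Pearson correlation between two administrations; the scores are invented, not from any real study:

```python
import numpy as np

# Hypothetical test-retest data: the same 6 people measured on two occasions.
time1 = np.array([10, 12, 9, 15, 11, 14])
time2 = np.array([11, 12, 10, 14, 11, 15])

# The reliability coefficient r is the Pearson correlation between the two
# sets of scores; the closer it is to 1.00, the more stable the instrument.
r = np.corrcoef(time1, time2)[0, 1]
print(round(r, 2))
```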
internal consistency
scales that involve summing items usually are evaluated for their internal consistency
> a scale is reliable to the extent that all of its subparts measure the same characteristic
Cronbach’s alpha/coefficient alpha
this method gives an estimate of split-half correlation for all possible ways of dividing the measure into halves, not just odd versus even items
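A minimal sketch of the standard coefficient alpha formula, alpha = k/(k − 1) × (1 − sum of item variances / variance of total scores); the respondent ratings below are invented for illustration:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for a score matrix of shape (respondents, items)."""
    k = items.shape[1]                          # number of items on the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point ratings: 4 respondents x 3 items on one scale.
scores = np.array([
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 4],
    [3, 3, 3],
])
print(round(cronbach_alpha(scores), 2))  # ~0.96: subparts measure the same thing
```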
equivalence
an approach to estimating reliability, used primarily with observational instruments, that determines the consistency or equivalence of the instrument when used by different observers or raters
interrater/interobserver reliability
the degree of error can be assessed by having two or more trained observers make simultaneous, independent observations
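One simple index is the proportion of observations on which the raters agree; the codings below are invented for illustration (chance-corrected indices such as Cohen’s kappa are also common):

```python
# Two trained observers independently code the same 8 observations
# (1 = target behavior present, 0 = absent). Hypothetical codings.
rater_a = [1, 0, 1, 1, 0, 1, 0, 1]
rater_b = [1, 0, 1, 0, 0, 1, 0, 1]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = agreements / len(rater_a)
print(percent_agreement)  # 0.875 -- the raters agreed on 7 of 8 observations
```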
interpretation of reliability coefficients
reliability coefficients are an important indicator of an instrument’s quality
> reliability estimates vary according to the procedure used to obtain them
> an instrument’s reliability is related to sample heterogeneity
validity
is the degree to which an instrument measures what it is supposed to be measuring.
> reliability and validity of an instrument are not totally independent
face validity
refers to whether the instrument looks as though it is measuring the appropriate construct
three types of validity
content, criterion, construct
content validity
is concerned with adequate coverage of the content area being measured
> especially relevant in measuring complex psychosocial traits
content validity index
an index that indicates the extent of expert agreement; ultimately, though, the experts’ subjective judgment must be relied on
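As an illustration, an item-level CVI is often computed as the proportion of experts rating the item as relevant (e.g., 3 or 4 on a 4-point scale); the ratings below are invented:

```python
# Hypothetical relevance ratings for one item from 5 content experts,
# each on a 4-point scale (1 = not relevant ... 4 = highly relevant).
expert_ratings = [4, 3, 4, 2, 4]

relevant = sum(1 for rating in expert_ratings if rating >= 3)
cvi = relevant / len(expert_ratings)
print(cvi)  # 0.8 -- 4 of 5 experts judged the item content-relevant
```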
criterion-related validity
in these assessments, researchers seek to establish a relationship between scores on an instrument and some external criterion
validity coefficient
an index of criterion-related validity computed with a mathematical formula that correlates scores on the instrument with scores on the criterion variable
predictive validity
a form of criterion-related validity: an instrument’s ability to differentiate between people’s performance or behavior on some future criterion
concurrent validity
a form of criterion-related validity that refers to an instrument’s ability to distinguish among people who differ in their present status on some criterion
construct validity
is concerned with the question: what construct is the instrument actually measuring?
> the more abstract the concept, the more difficult it is to establish the validity of the measure
> one approach is the known-groups technique: groups expected to differ on the critical attribute are administered the instrument, and the group scores are compared (see the sketch after this list)
> another approach to construct validation employs a statistical procedure known as factor analysis
> construct validation employs both logical and empirical procedures
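A hedged sketch of the known-groups comparison using an independent-samples t test; the scale, groups, and scores are all hypothetical:

```python
from scipy import stats

# Hypothetical known-groups check: a fear-of-labor scale should score higher
# for first-time mothers than for mothers with prior births.
first_time  = [32, 28, 35, 30, 33, 29]
experienced = [22, 25, 20, 24, 23, 21]

# If the instrument captures the construct, the known groups should differ.
t, p = stats.ttest_ind(first_time, experienced)
print(f"t = {t:.2f}, p = {p:.4f}")
```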
interpretation of validity
an instrument’s validity is not proved but rather is supported by an accumulation of evidence
> validation is a never-ending process: the more evidence that can be gathered that an instrument is measuring what it is supposed to be measuring, the greater the confidence researchers have in its validity
reliability, sensitivity, specificity and validity
the most important criteria for evaluating quantitative instruments
sensitivity
is the ability of an instrument to correctly identify a “case,” that is, to correctly screen in or diagnose a condition
specificity
is the instrument’s ability to correctly identify noncases, that is, to correctly screen out those without the condition
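Both indices fall out of a 2 x 2 screening table; here is a minimal Python sketch with invented counts:

```python
# Hypothetical 2 x 2 results from screening 100 people with an instrument.
true_positives  = 45   # cases correctly screened in
false_negatives = 5    # cases the instrument missed
true_negatives  = 40   # noncases correctly screened out
false_positives = 10   # noncases wrongly flagged

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)
print(sensitivity)  # 0.9 -- proportion of true cases identified
print(specificity)  # 0.8 -- proportion of noncases screened out
```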
assessment of qualitative data
credibility
> prolonged engagement and persistent observation
> triangulation
> external checks
> searching for disconfirming evidence
> researcher credibility
dependability
confirmability
transferability
credibility
refers to confidence in the truth of the data and interpretation of them
> involves two aspects:
- carrying out the investigation in a way that enhances believability
- taking steps to demonstrate credibility
prolonged engagement and persistent observation
a first and very important step in establishing credibility
the investment of sufficient time in data collection activities to gain an in-depth understanding of the culture, language, or views of the group under study and to test for misinformation
> also involves building trust and rapport with informants
triangulation
enhances credibility; refers to the use of multiple referents to draw truthful conclusions
four types of triangulation
- data source triangulation: using multiple data sources in a study
- investigator triangulation: using more than one person to collect, analyze, or interpret a set of data
- theory triangulation: using multiple perspectives to interpret a set of data
- method triangulation: using multiple methods to address a research problem
external checks
peer debriefing and member checks
searching for disconfirming evidence
a way to enhance credibility
the search for disconfirming evidence occurs through purposive sampling but is facilitated by other processes already described, such as prolonged engagement and peer debriefing
negative case analysis
a process by which researchers revise their hypotheses through the inclusion of cases that appear to disconfirm earlier hypotheses
researcher credibility
the faith that can be put in the researcher
dependability
of qualitative data refers to data stability over time and over conditions
one approach is stepwise replication, which is conceptually similar to a split-half technique; it involves having several researchers who can be divided into two teams
another technique relating to dependability is the inquiry audit: a scrutiny of the data and relevant supporting documents by an external reviewer
confirmability
refers to the objectivity or neutrality of the data, that is, the potential for congruence between two or more independent people about the data’s accuracy, relevance, or meaning.
inquiry audits can be used to establish both the dependability and the confirmability of the data
an audit trail is a systematic collection of documentation that allows an independent auditor to come to conclusions about the data
auditability
the extent to which an outside person can follow the researcher’s methods, decisions, and conclusions because an adequate decision trail has been maintained
decision trail
articulates the researchers’ decision rules for categorizing data and making inferences in the analysis
transferability
refers to the extent to which the findings from the data can be transferred to other settings and is thus similar to the concept of generalizability
thick description
refers to a rich, thorough description of the research setting and of the transactions and processes observed during the inquiry