U11 Flashcards

1
Q

Measurement

A

involves rules for assigning numeric values to qualities of objects to designate the quantity of the attribute

2
Q

Errors of Measurement

A

Even the best instruments have a certain amount of error

3
Q

Calculating Error

A

Obtained Score = True Score ± Error
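The obtained-score equation can be illustrated with a tiny simulation (a sketch only: the true score of 80 and the Gaussian error spread are made-up values for illustration):

```python
import random

random.seed(1)

true_score = 80  # hypothetical "true" value of the attribute

# Measurement error is modeled here as random noise around zero.
errors = [random.gauss(0, 2) for _ in range(5)]
obtained = [true_score + e for e in errors]

for o, e in zip(obtained, errors):
    # Obtained score = True score +/- Error
    print(f"obtained = {o:.1f} (true 80, error {e:+.1f})")
```

Each obtained score differs from the fixed true score only by its error term.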

4
Q

Most common factors contributing to Error

A
Situational contaminants
Response-set biases
Transitory personal factors
Administration variations
Item sampling
5
Q

Reliability

A

the consistency with which an instrument measures the target attribute. 3 related concepts:
Stability, the Reliability Coefficient, and Test-Retest reliability

6
Q

Stability

A

of a measure is the extent to which the same scores are obtained when the instrument is used with the same people on separate occasions. Assessed through Test-retest reliability procedures

7
Q

Reliability coefficient

A

a numeric index of a measure’s reliability. Ranges from 0.00 to 1.00

The closer to 1.00, the more reliable (stable) the measuring instrument

8
Q

Test-retest reliability

A

estimated by administering the same measure to the same people on two occasions and correlating the scores; the further apart the administrations are, the more the reliability tends to decline.

9
Q

Internal Consistency

A

relevant for scales that involve summing items; ideally, all items are measures of the same critical attribute. An instrument is internally consistent to the extent that all of its subparts measure the same characteristic

10
Q

Split-half technique

A

items are split into two halves (e.g., odd- vs. even-numbered items), and the two half-scores are used to generate a reliability coefficient. If both halves are really measuring the same attribute, the correlation between them will be high

11
Q

Cronbach alpha (Coefficient alpha)

A

gives an estimate of the split-half correlation for all possible ways of dividing the measure into two halves. The index ranges from 0.00 to 1.00; the closer to 1.00, the more internally consistent the measure.
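Coefficient alpha can be sketched with the standard variance-based formula (the item responses are invented):

```python
from statistics import pvariance

# Hypothetical item responses: one row per person, four items.
responses = [
    [3, 4, 3, 4],
    [1, 2, 2, 1],
    [4, 4, 5, 4],
    [2, 2, 1, 2],
    [5, 4, 5, 5],
]

k = len(responses[0])              # number of items
items = list(zip(*responses))      # transpose: one tuple per item
item_vars = sum(pvariance(col) for col in items)
total_var = pvariance([sum(row) for row in responses])

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total-score variance)
alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```

When the items covary strongly, the total-score variance dwarfs the summed item variances and alpha approaches 1.00.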

12
Q

Equivalence

A

used primarily with observational instruments; determines the consistency or equivalence of the instrument across different observers or raters. Can be assessed with interrater reliability, estimated by having 2+ observers make simultaneous, independent observations and comparing the resulting scores.
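A simple equivalence index can be sketched as the proportion of exact agreements between two raters (the ratings are invented; more refined indexes such as Cohen's kappa additionally correct for chance agreement):

```python
# Two observers independently code the same ten observations
# (hypothetical categorical ratings).
rater_a = ["yes", "no", "yes", "yes", "no", "yes", "no", "no", "yes", "yes"]
rater_b = ["yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes", "yes"]

# Interrater agreement as the proportion of identical codes.
agreements = sum(a == b for a, b in zip(rater_a, rater_b))
interrater_agreement = agreements / len(rater_a)
print(f"interrater agreement = {interrater_agreement:.0%}")
```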

13
Q

Interpretation of Reliability Coefficients

A
  • hypotheses that are not supported may be due to (d/t) unreliability of the measuring tool
  • low reliability prevents an adequate testing of the research hypothesis
  • reliability estimates obtained via different procedures for the same instrument are NOT identical
  • sample heterogeneity increases the reliability coefficient (instruments are designed to measure differences; sample homogeneity = decreased reliability coefficient)
  • longer instruments (more items) tend to have higher reliability than shorter instruments
14
Q

Validity

A

degree to which an instrument measures what it is supposed to be measuring
*an instrument can have high reliability with no evidence of validity
(high reliability does not = high validity)
*low reliability IS evidence of low validity
(low reliability = low validity)

Validity coefficient: a score greater than or equal to .70 is desirable (range .00 to 1.00)

15
Q

Face validity

A

whether the instrument looks as though it is measuring the appropriate construct

16
Q

Content Validity

A

adequate coverage of content area being measured

  • crucial in tests of knowledge
  • based on judgement
17
Q

Criterion Validity

A

researchers seek to establish a relationship between (btw) scores on an instrument and some external criterion (if the scores correspond, the instrument is said to be valid)

18
Q

Construct validity

A

challenging to establish; asks what construct the instrument is actually measuring

19
Q

Interpretation of validity

A

not an all or nothing characteristic of an instrument.

20
Q

Sensitivity and specificity

A

Sensitivity is the ability of an instrument to correctly identify a case, that is, to correctly screen in or diagnose a condition

Specificity is the instrument's ability to correctly identify noncases, that is, to correctly screen out those without the condition.
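Both indexes can be sketched from a small set of hypothetical screening results (the data below are invented for illustration):

```python
# Each pair is (screened_positive, actually_has_condition).
results = [
    (True, True), (True, True), (False, True),      # cases
    (False, False), (False, False), (True, False),  # noncases
    (True, True), (False, False),
]

tp = sum(s and c for s, c in results)              # true positives
fn = sum((not s) and c for s, c in results)        # false negatives (missed cases)
tn = sum((not s) and (not c) for s, c in results)  # true negatives
fp = sum(s and (not c) for s, c in results)        # false positives

sensitivity = tp / (tp + fn)  # proportion of cases correctly screened in
specificity = tn / (tn + fp)  # proportion of noncases correctly screened out
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```

Here the instrument catches 3 of 4 true cases and clears 3 of 4 noncases.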

21
Q

Assessment of qualitative data

A

Gold-standard criteria for qualitative researchers: to establish trustworthiness of qualitative data you need Dependability, Credibility, Confirmability, and Transferability

22
Q

Credibility

A

refers to confidence in the truth of data and interpretations of them
To Demonstrate Credibility:
1- prolonged engagement
2 - persistent observation

“If prolonged engagement provides scope, persistent observation provides depth”

23
Q

Triangulation

A
can increase credibility
*to overcome the intrinsic bias that comes from single-method, single-observer, and single-theory studies; yields a more complete and contextualized portrait of the phenomenon under study.
4 Types of Triangulation:
Data Source 
Investigator
Theory
Method
24
Q

Dependability

A

data stability over time and over conditions

There can be no credibility in the absence of dependability

25
Q

Confirmability

A

objectivity or neutrality of the data. Congruence between (btw) 2+ people about the data's accuracy, relevance, or meaning

26
Q

Transferability

A

extent to which findings from the data can be transferred to other settings; similar to the concept of generalizability

27
Q

Critiquing data quality

A

Qual/Quant:
  • can I trust the data?
  • does the data accurately reflect the true state of the phenomenon?
Quant:
  • consider the reliability and validity of the measures
  • be wary when no information on data quality is reported, or when reported reliability/validity is unfavourable
  • also be wary when hypotheses are not confirmed
Qual:
  • be on high alert about data quality when a single researcher collects, analyzes, and interprets all of the data