Module 4 Section 2 Flashcards

1
Q

Measurement

A

-assigning numbers to represent the amount of an attribute present in a person or object

2
Q

Advantages of Measurement

A
  • removes guesswork from gathering data
  • tends to be more objective
  • reasonably precise
3
Q

Errors in measurement

A

-measurement error is the systematic and random error associated with a person's score on a measure, reflecting factors other than the construct being measured

4
Q

Common factors contributing to measurement error

A

1. Situational contaminants
-awareness of being observed may affect respondents' behaviours
2. Response-set biases
-characteristics of the respondents can interfere with accurate measurement of the target attribute
3. Transitory personal factors
-fatigue, hunger, anxiety, and mood can affect motivation to cooperate
4. Administration variations
-altering collection methods can alter results
5. Item sampling
-the sampling of items used to measure a characteristic can contribute to error

5
Q

Reliability

A
  • the accuracy and consistency of a measuring instrument
6
Q

Stability

A

-the ability to give consistent results when the same people are tested at different times

7
Q

Measuring stability with test-retest reliability

A

-researcher administers the same test to the same sample on two occasions and then compares the results

8
Q

Reliability coefficient

A

-a measure of reliability with values ranging from 0 to 1; higher values indicate greater reliability
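The two cards above can be sketched in code: the test-retest reliability coefficient is the Pearson correlation between scores from the two administrations. A minimal sketch, with made-up illustrative scores (not from the cards):

```python
# Test-retest reliability as the Pearson correlation between
# the same test given to the same people on two occasions.

def pearson_r(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x)
    sy = sum((b - my) ** 2 for b in y)
    return cov / (sx * sy) ** 0.5

time1 = [12, 15, 9, 20, 17, 11]   # scores at first administration (made up)
time2 = [13, 14, 10, 19, 18, 12]  # same subjects, second administration

r = pearson_r(time1, time2)
print(round(r, 2))  # → 0.98
```

A value this close to 1 would indicate highly stable scores; values near 0 would indicate poor stability.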

9
Q

Limitations of test-retest

A

-subjects may remember the items being tested
-traits of interest may change over time
-repeating the test could change the responses given
-repetition could bore the participants, causing them to respond differently

10
Q

Internal consistency

A

-extent to which all items on an instrument measure the same critical variable or attribute

11
Q

Measuring techniques

A

Split-half technique
-split the instrument into two halves and compare the scores
Spearman-Brown formula
-used to adjust the split-half correlation to estimate full-test reliability
Item-total correlations
-correlation between each item and the total score; values above 0.25 are acceptable
Kuder-Richardson (KR-20)
-used for items with yes/no responses
Cronbach's alpha
-coefficient that measures how Likert-scale items correlate with each other simultaneously
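Two of the techniques named above can be sketched in code: Cronbach's alpha and a split-half correlation stepped up with the Spearman-Brown formula. The Likert-style item scores below are made-up illustrative data (rows of `items` are per-item score lists over the same five respondents):

```python
def variance(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def pearson_r(x, y):
    """Pearson correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x)
    sy = sum((b - my) ** 2 for b in y)
    return cov / (sx * sy) ** 0.5

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

def spearman_brown(r_half):
    """Step a split-half correlation up to an estimate for the full test."""
    return 2 * r_half / (1 + r_half)

items = [
    [3, 4, 2, 5, 1],  # item 1 scores for respondents 1-5 (made up)
    [2, 4, 3, 5, 1],  # item 2
    [3, 5, 2, 4, 2],  # item 3
    [4, 4, 3, 5, 2],  # item 4
]

alpha = cronbach_alpha(items)

# Split-half: total the first two items and the last two items per
# respondent, correlate the halves, then correct with Spearman-Brown.
half1 = [a + b for a, b in zip(items[0], items[1])]
half2 = [a + b for a, b in zip(items[2], items[3])]
full_r = spearman_brown(pearson_r(half1, half2))

print(round(alpha, 2), round(full_r, 2))  # → 0.95 0.96
```

KR-20 is the special case of Cronbach's alpha for dichotomous (yes/no) items, so `cronbach_alpha` applied to 0/1 item scores gives the KR-20 value.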

12
Q

Equivalence

A

-the degree to which two or more observers using a single instrument obtain the same results
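The simplest check of equivalence is the proportion of cases on which two observers record the same result. A minimal sketch with made-up ratings (more refined statistics, such as Cohen's kappa, correct for chance agreement):

```python
# Inter-rater agreement: fraction of cases where two observers,
# using the same instrument, record the same category.

rater_a = ["yes", "no", "yes", "yes", "no", "yes", "no", "no"]
rater_b = ["yes", "no", "no",  "yes", "no", "yes", "no", "yes"]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
agreement_rate = agreements / len(rater_a)
print(agreement_rate)  # → 0.75
```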

13
Q

Validity

A

-an instrument measures what it is supposed to measure
-it addresses the question of how well we measure social reality using our constructs about it
4 major types of validity:
1. Face validity
2. Content validity
3. Criterion-related validity
4. Construct validity

14
Q

Face Validity

A

-the instrument looks as though it measures the target construct

Test for with: expert opinion and a panel of experts

15
Q

Content Validity

A

-the extent to which the instrument covers the factors or situations under study
Test for with: verifying against other evidence

16
Q

Criterion related validity

A

-the extent to which a subject's performance on an instrument coincides with actual behaviour (the criterion)
Includes:
1. Concurrent validity
-one measure correlates with another measure of the same phenomenon
-test for with: testing the instrument against another valid instrument that measures the same phenomenon
2. Predictive validity
-the instrument can accurately predict a phenomenon
-test for with: using the instrument in a study and comparing with a future outcome

17
Q

Construct validity

A
  • the extent to which the test measures a theoretical concept
  • test for with: hypothesis testing, convergent and divergent approaches, contrasted groups, factor analysis, and causal modelling
18
Q

Trustworthiness of qualitative data

A
  1. Credibility
  2. Dependability
  3. Confirmability
  4. Transferability
19
Q

Credibility

A

-the believability of findings

20
Q

Ways to ensure credibility

A
  1. Prolonged engagement and persistent observation
    -increased time invested = increased credibility
  2. Triangulation
    -uses multiple methods to distinguish true information from information with errors
  3. Peer debriefing
    -receive feedback about data quality and interpretation from peers
  4. Member checks
    -obtain feedback from participants
  5. Searching for disconfirming evidence
    -occurs through purposive sampling
    -facilitated through processes such as prolonged engagement and peer debriefing
  6. Researcher credibility
    -whether the researcher is qualified to conduct the research
21
Q

Dependability

A
  • the stability and reliability of data
  • will the results be consistent over time?

22
Q

Confirmability

A
  • objectivity or neutrality of the data and the data interpretation
  • two or more independent people would agree on the data's relevance and importance
23
Q

Transferability

A

-the extent to which the qualitative findings have applicability in other settings and groups

24
Q

Selecting measurement tools

A
  • psychometrics
    • describes expertise in instrument construction; a developer should possess comprehensive subject expertise and knowledge and skill in test and scale construction
25
Q

Characteristics of a good instrument

A
  1. Uniform set of items and response possibilities
  2. Clear and concise statements
  3. Only one idea per statement
  4. Avoids negative items and double negatives (confusing)
  5. Restricted to a few variations
  6. Does not provide clues to other items
  7. Covers a broad area of the defined behaviour
  8. Adequately covers the defined behaviour
  9. Measures what it is intended to measure (validity)
26
Q

Steps to develop measurement instruments

A
  1. Define the concept to be measured
  2. Formulate the items
  3. Assess the items for content validity
  4. Develop instructions for respondents and users
  5. Pretest and pilot test the items
  6. Estimate reliability and validity