Measurement Flashcards
Define measurement
The act or process of measuring
- An amount, extent, or size
- A system of measures based on a particular standard
What are the 2 important steps in measurement?
a. Conceptualization
- Defining abstract ideas with specific characteristics
b. Operationalization
- Specifying how a variable or concept will be measured in a specific study
How do you operationalize variables?
putting a variable into valid, precise, and measurable terms.
- Have appropriate codes
Define continuous variables
a. Any value is acceptable (it is not confined to groups)
b. Typically the dependent variable in a research design
c. However, independent variables can be continuous as well
Define categorical/discrete variables
a. Things that belong in groups
b. Independent variables can be categorical; dependent variables usually are not
What is the general rule of categorization?
a. Measured variables fall into one of 4 scale categories: (NOIR)
i. Nominal
ii. Ordinal
iii. Interval
iv. Ratio
What is the purpose of nominal scales?
to assign a label to a variable
i. descriptive
ii. variables are assigned to categories
iii. cannot make inferences on greater or less than
What is the purpose of ordinal scales?
suggests a preference (things put in order)
i. ordinal scales retain the labelling property of nominal scales, plus order
ii. magnitude is present (I like x more than y)
- does not provide much info about how much more or how much less
- ex. Rank your favourite golfers (5 is best)
- a(1), b(2), c(3), d(4), e(5)
What is the purpose of interval scales?
assigns numbers so that equal differences between values are meaningful
i. Contains characteristics of nominal and ordinal scales
ii. The difference between numerical assignments is meaningful
- (ex. the difference between 10 and 20 is 10, and so is the difference between 20 and 30)
a. Ex. On a 1-5 scale (5 is best), rate each item relative to your favourite
a(1.5), b(2.3), c(3), d(4.5), e(4.9)
Meaningful differences between ranks
What is the purpose of ratio scales?
ranks with meaningful intervals, plus an absolute zero
i. Zero represents the absence of the trait
ii. Provides meaningful differences
iii. Contains characteristics of the other scales
- Ex. Number of PGA Tour wins
a. Quantities can be compared against a true value of 0
Explain validity
how accurately a method measures what it is intended to measure
Explain reliability
consistency of a measurement
i. gives the same result on multiple occasions
ii. need to determine reliability to see how variable the results are
What are the sub-types of reliability?
a. inter-rater
b. test-retest
c. parallel forms
d. split half
e. internal consistency
What are the sub-types of validity?
construct
face
content
criterion
Explain inter-rater
comparison of how 2 people rate the same thing
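A simple inter-rater statistic is percent agreement: the proportion of items both raters coded identically. A minimal sketch, using made-up categorical ratings:

```python
# Hypothetical categorical ratings from two raters on the same 5 items.
rater_a = ["pass", "fail", "pass", "pass", "fail"]
rater_b = ["pass", "fail", "fail", "pass", "fail"]

# Percent agreement: proportion of items both raters coded the same way.
agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = agreements / len(rater_a)
print(percent_agreement)  # 0.8 (raters agree on 4 of 5 items)
```

Note that percent agreement does not correct for agreement expected by chance; chance-corrected statistics such as Cohen's kappa exist for that purpose.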
Explain test-retest
Individual takes same test at 2 different times
i. Correlate the scores from the two administrations
ii. Use a percent agreement score for categorical data
iii. A strong correlation indicates high test-retest reliability
iv. So does a high percent agreement
It is important to consider the time between tests
Explain parallel forms
Use of 2 tests measuring the same thing
i. Compare the results of both tests to see how consistent they are
- The 2 forms should be different but equivalent versions of the same test
Explain split half
Divide measure into 2 sections
i. Give participants the half measures
ii. Compare results on both halves
- Useful when you have a longer test
- Also useful when no other measure exists
Explain internal consistency
a. Works with items on a test/scale
b. Looks at how unified the items on a questionnaire are
i. You want the items that measure the same construct to be related
c. Measured with the Cronbach's alpha coefficient
d. Typically reported in the method section
e. Cut-off for a reliable scale: > .70
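Cronbach's alpha can be computed from the item variances and the variance of the total scores: alpha = k/(k-1) * (1 - sum of item variances / variance of totals). A minimal sketch with hypothetical questionnaire data:

```python
from statistics import pvariance

# Hypothetical responses: rows = 5 participants,
# columns = 4 items intended to measure the same construct.
data = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [1, 2, 2, 1],
    [3, 3, 3, 4],
]

k = len(data[0])                                    # number of items
item_vars = [pvariance(col) for col in zip(*data)]  # variance of each item
total_var = pvariance([sum(row) for row in data])   # variance of total scores

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))  # 0.96 — above the .70 cut-off, so the scale is reliable
```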
How to increase reliability?
a. Increase sample size
b. Eliminate unclear questions
c. Standardize testing conditions and instructions
d. Moderate the degree of difficulty of the tests
e. Minimize the effects of external events
f. Maintain consistent scoring procedures
Explain construct validity
a. How well a test measures the concept it was designed to evaluate
b. Making sure the test is measuring the construct it is supposed to
i. Need to consider operational definitions of a concept
ii. Should not measure unrelated constructs
iii. Should be useful in predicting behaviours
c. To establish construct validity:
i. Correlate the new test with an established test
ii. Show that people who differ on the construct score differently on the test
Explain face validity
Whether a test appears to measure what it's supposed to measure
- How the public sees the test
i. Does the participant see the link between the test and what is being measured?
High face validity = the test's purpose is easily detectable by participants
Explain content validity
Assesses whether the test is representative of all aspects of the construct
- Does the test capture all of the behaviours related to the construct?
i. If it does = high content
ii. If it does not = low content
Explain criterion validity
Evaluates how accurately a test measures the outcome it was designed to measure, by comparing scores against an external criterion (e.g., a later outcome or an established measure)