Measurement Flashcards
what is measurement?
rules for assigning numbers to objects (concepts) in such a way as to represent quantities of attributes
why is assigning measurement units important?
give numbers to the objects of our study, so that we can describe them and analyze them statistically
what are the four processes that form a part of psychological measurement?
conceptualization, operationalization, reliability, validity
what is the conceptualization process of psychological measurement?
defining what you want to study - a good definition, aligned with theory, is essential and must always be your starting point
what is the operationalization process of psychological measurement?
translating the concept into a plan for measurement - a sequence of operations that takes you from an abstract concept to a number representing a certain level of the construct
what is reliability?
the extent to which the measure gives the same answer on repeated trials
what is the reliability process of psychological measurement?
all measurement contains some error: X = t + e
X - observed score (true score plus error)
t - true score, which we can never know precisely
e - random errors or inconsistencies that occur during testing
what is random error?
error we have no control over - unpredictable, averages out
what is meant by random error being unpredictable?
Random errors fluctuate from one test administration to another. They can be positive or negative, meaning they can slightly inflate or deflate your observed score relative to your true score.
what is meant by random error averaging out?
In a well-designed test with a large sample size, random errors tend to cancel each other out. This is because some errors might push your score up slightly, while others might pull it down slightly. Overall, their net effect on the average score of the group tends to be minimal
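A minimal sketch (Python with numpy; the numbers and library choice are illustrative, not from these cards) showing X = t + e and how random error averages out over many hypothetical administrations:

import numpy as np

rng = np.random.default_rng(0)
true_score = 50                          # t: the true score, which we can never observe directly
errors = rng.normal(0, 5, size=1000)     # e: random errors across 1000 hypothetical administrations
observed = true_score + errors           # X = t + e for each administration
print(observed.mean())                   # close to 50: positive and negative errors cancel out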
what are some examples of random error?
Test anxiety, fatigue, momentary lapses in concentration, distractions in the testing environment
what is systematic error?
consistent, repeatable inaccuracies that arise from flaws in the measurement process or instrument, rather than from random chance
what are the outcomes of systematic error?
consistent bias, impacted reliability
what is meant by consistent bias in systematic error?
Systematic errors consistently affect your score in the same direction, either inflating or deflating it relative to your true score
why does systematic error impact reliability?
Systematic errors can significantly impact the reliability of a test because they don’t cancel out and can lead to biased results
what are examples of systematic error?
A faulty measuring scale that consistently reads 5kg too high, a test question with unclear wording that everyone misinterprets in the same way, a scoring bias by the marker who tends to be harsher on certain types of responses
what are the types of reliability?
test-retest reliability, parallel forms reliability, split half reliability, internal consistency
what is test-retest reliability?
administering the same test to the same person at two different points in time
what does test-retest reliability measure?
It measures the consistency of scores over time
how does test-retest reliability work?
assess the correlation between scores from the two administrations of the test; you'd expect a high correlation (high correlation = high test-retest reliability)
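A minimal sketch (Python with numpy; the simulated scores are made up for illustration) of this correlation check - the same computation applies to parallel forms reliability below:

import numpy as np

rng = np.random.default_rng(1)
true_score = rng.normal(50, 10, size=100)          # each person's stable true score
time1 = true_score + rng.normal(0, 3, size=100)    # first administration: true score + random error
time2 = true_score + rng.normal(0, 3, size=100)    # second administration: true score + random error
r = np.corrcoef(time1, time2)[0, 1]                # Pearson correlation = test-retest reliability estimate
print(round(r, 2))                                 # high r indicates high test-retest reliability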
what is parallel forms reliability?
two versions of the same test that measure the same construct
what does parallel forms reliability measure?
consistency of results between two equivalent versions of the same test. It assesses whether different versions of an assessment produce similar scores when administered to the same group of people under similar conditions.
how does parallel forms reliability work?
assess the correlation between the scores on the two versions; the higher the correlation, the better the reliability (high correlation = high parallel forms reliability)
what is split half reliability?
divide a test in half to see if the items on one half correlate with the items on the other half
what does split half reliability measure?
measures the internal consistency of a test, assessing how well the items that compose the test yield consistent results
how does split half reliability work?
The scores on both halves are compared to see if they are similar. High correlation between the halves indicates good reliability (high correlation between test halves = high split-half reliability).
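A minimal sketch of the split-half computation (Python with numpy; item responses are simulated for illustration), correlating odd-numbered items with even-numbered items:

import numpy as np

rng = np.random.default_rng(2)
true_level = rng.normal(0, 1, size=(200, 1))            # each person's underlying construct level
items = true_level + rng.normal(0, 1, size=(200, 10))   # 10 items, each measuring it with noise
odd_half = items[:, 0::2].sum(axis=1)                   # total score on items 1, 3, 5, ...
even_half = items[:, 1::2].sum(axis=1)                  # total score on items 2, 4, 6, ...
r_halves = np.corrcoef(odd_half, even_half)[0, 1]       # high correlation = high split-half reliability
print(round(r_halves, 2))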
what is internal consistency for reliability?
the extent to which all items or components of a test measure the same construct and produce similar results
what does internal consistency measure?
it’s a score that tells us the extent to which each item correlates with every other item. Most often we use Cronbach’s alpha to assess this
how does internal consistency work?
assessing how well the items within a test or measurement instrument correlate with each other, indicating that they are all measuring the same underlying construct
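A minimal sketch of the standard Cronbach's alpha formula (Python with numpy; the helper name cronbach_alpha and the simulated data are illustrative) applied to a respondents-by-items score matrix:

import numpy as np

def cronbach_alpha(items):
    # items: 2D array, rows = respondents, columns = test items
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                           # number of items
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# alpha close to 1 means the items correlate strongly and appear to measure the same construct
rng = np.random.default_rng(3)
true_level = rng.normal(0, 1, size=(200, 1))
items = true_level + rng.normal(0, 1, size=(200, 10))
print(round(cronbach_alpha(items), 2))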
what is validity?
the extent to which a measure does what it’s intended to do
what are the types of validity?
criterion validity, construct validity, content validity
what is criterion validity?
Criterion validity assesses how well a test or measure relates to an external outcome, essentially gauging its usefulness in real-world scenarios.
what are the types of criterion validity?
predictive and concurrent
what is predictive criterion validity?
how well a test predicts a future outcome
what is concurrent criterion validity?
how well a test correlates with another measure of the same construct at the same time
what is content validity?
how well the content of a test covers what it’s designed to measure - it focuses on the test content itself and ensures it adequately covers the relevant aspects of the target construct
what is construct validity?
assesses whether the test truly measures the underlying theoretical concept it’s supposed to measure
what does convergent construct validity refer to?
the measure shows a relationship to things that are theoretically related to it
what does divergent construct validity refer to?
the measure shows little or no relationship to things that are theoretically unrelated to it (it discriminates between unrelated constructs)
what are the levels of measurement?
nominal, ordinal, interval, ratio
what is nominal data?
represents categories with no inherent order or ranking - the number is just a label for categories that are different from each other
what is ordinal data?
ranking the categories in a specific order - it tells you which is higher or lower, but not by how much - the number represents categories that are different from each other, and order matters
what is interval data?
equal intervals between each value on the scale - you can order the data and determine the difference between values; differences between numbers are meaningful, so numbers may be added or subtracted, but there is no true zero (e.g. temperature in Celsius)
what is ratio data?
has all the characteristics of interval data (ordered, equal intervals), but with a true zero point - the zero point represents a complete absence of the quantity being measured, so you can add, subtract, multiply, or divide (e.g. height, weight, reaction time)