test 2 Flashcards

1
Q

what is a latent variable?

A

another word for construct: something we are interested in that we hope is influencing the measurements we obtain on the basis of our operational definition for our research.

2
Q

Compare the difference between having your data in a circle versus a square.

A

circle: represents a latent variable, something we are interested in but do not directly observe
square: represents an observed variable, an actual score that we measure

3
Q

true score vs error

A

true score: the real, expected influences (we hope to have much more of this)
error: undefined/unexpected influences (we hope to have much less of this)

4
Q

explain the idea of the tennis ball

A

Take 3 flawed measurements and average them. Even if each one is individually wrong, together they can work: because the error is randomly distributed, the errors tend to cancel and the average lands near the true score.
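The tennis-ball idea can be checked with a quick simulation (a minimal sketch in Python with NumPy; the true value of 10.0 and the error scale are made-up illustration numbers): when error is random, averaging flawed measurements pulls the result toward the true score.

```python
import numpy as np

rng = np.random.default_rng(42)
true_height = 10.0  # the "true score" we hope to recover

# Three flawed measurements: each is the true value plus random error
measurements = true_height + rng.normal(scale=2.0, size=3)
print(measurements.mean())  # already closer to 10 than a typical single reading

# With many measurements, the random errors cancel out almost entirely
many = true_height + rng.normal(scale=2.0, size=100_000)
print(abs(many.mean() - true_height))  # close to 0
```

Each individual measurement can be off by a couple of units, but the errors are as likely to be positive as negative, so their mean shrinks as more measurements are combined.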

5
Q

reliability

A

the degree to which the result of a measurement, calculation, or specification can be depended on to be accurate. CONSISTENCY

6
Q

How do we figure out what is error & what is true score?

A

We are interested in the reliability of a questionnaire as a whole, not the reliability of an individual question. By including multiple questions in our questionnaire, we broaden our basis for capturing the true score.

7
Q

The classical measurement model assumption 1

A

The individual items of a questionnaire each have error and true score.

  • The amount of error varies randomly
  • The mean error across items is 0 (given a large enough sample)
8
Q

The classical measurement model assumption 2

A

the error in one item is not correlated with the error in any other item
- why must this be true, given the previous assumption? If error varies purely at random, it cannot be systematically related to the error in any other item

9
Q

The classical measurement assumption 3

A

the error in the items is not correlated with the true score

10
Q

Parallel test model

A
  • extends the classical measurement model with two additional assumptions:
    1) the latent variable influences all items equally (all item/construct correlations are the same)
    2) each item has the same amount of random error (the combined influence of all other factors is the same for every item)
11
Q

Parallel test model assumptions

A

1) only random error
2) errors are not correlated with each other
3) errors are not correlated with true score
4) latent variable affects all items equally
5) amount of random error for each item is equal

12
Q

to achieve perfect reliability you have to…

A

eliminate error in your measurement

  • extreme measurement error prevents you from observing any associations
  • the degree of error in a measure can be estimated by correlating that measure with itself
13
Q

what are the forms of reliability

A

Cronbach’s alpha: most commonly reported; easiest to use with SPSS
Split-half: less common, also easy to use
Test-retest: optimal when possible, but requires 2x the resources
Alternate form: even less common; requires 2 identical measures
Omega: newest form; doesn’t require tau-equivalence
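The split-half form can be sketched in a few lines (assuming NumPy; the odd/even item split and the simulated single-factor data are illustrative choices): correlate one half of the items with the other, then apply the Spearman-Brown correction to estimate the reliability of the full-length scale.

```python
import numpy as np

def split_half_reliability(items):
    """Split-half reliability with the Spearman-Brown correction.

    items: (n_respondents, n_items) matrix of item scores.
    """
    items = np.asarray(items, dtype=float)
    half1 = items[:, 0::2].sum(axis=1)  # odd-numbered items
    half2 = items[:, 1::2].sum(axis=1)  # even-numbered items
    r = np.corrcoef(half1, half2)[0, 1]  # correlation of the two half-scores
    return 2 * r / (1 + r)  # step up from half-length to full-length

# Simulated data: 6 items, each = one latent true score + random error
rng = np.random.default_rng(7)
true_score = rng.normal(size=300)
items = np.column_stack([true_score + rng.normal(scale=1.0, size=300)
                         for _ in range(6)])
print(round(split_half_reliability(items), 2))
```

The correction matters because each half contains only half the items; the raw half-to-half correlation underestimates what the whole scale achieves.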

14
Q

Cronbach’s alpha relies on several key assumptions

A

single factor model

  • essential tau-equivalence
  • error is random
  • equivalent influences of true score
  • equivalent inter-item correlations
  • inter-item correlations would all be equal in a large enough sample
  • items would have equal variability in a large enough sample
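Under these assumptions, alpha is computed from item variances and the variance of the total score, α = k/(k−1) · (1 − Σ item variances / total-score variance). A minimal sketch (assuming NumPy; the simulated single-factor data is illustrative):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Three items driven by one latent true score plus independent random error
rng = np.random.default_rng(0)
true_score = rng.normal(size=200)
items = np.column_stack([true_score + rng.normal(scale=0.8, size=200)
                         for _ in range(3)])
print(round(cronbach_alpha(items), 2))
```

When every item is an identical copy of the true score (no error at all), the formula returns exactly 1, matching the idea that perfect reliability requires eliminating error.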
15
Q

Dropping bad items

A

If an item is not “equivalent” to the others, then dropping it will affect alpha:

  • if it has a low inter-item correlation, alpha goes up
  • if it has a high inter-item correlation, alpha goes down
16
Q

identifying good items

A

inter-item correlations do not have to be very large

-there can be a lot of error in good items

17
Q

problems with alpha

A

Because of its rigid assumptions, alpha is often an underestimate of reliability.
- it captures only internal consistency
- like the correlation coefficient, it also assumes normality

18
Q

Deviations of normality

A

Skewness and kurtosis both threaten the validity of alpha:

  • Skewness: a longer-than-normal tail
  • Kurtosis: a taller/wider-than-normal spread
19
Q

Kurtosis vs. Normal

A

Platykurtic (negative kurtosis): distributions are wider, with a pulled-down peak

Leptokurtic (positive kurtosis): distributions are tall and skinny

20
Q

Skewness vs. Normal

A

Positive means right tail is pulled out

Negative means left tail is pulled out
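Both quantities can be computed from standardized moments (a minimal NumPy sketch; the example distributions are illustrative choices): skewness is the mean cubed z-score, and excess kurtosis is the mean fourth-power z-score minus 3, so a normal distribution scores near 0 on both.

```python
import numpy as np

def skewness(x):
    """Sample skewness: mean cubed z-score (0 for a symmetric distribution)."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean()

def excess_kurtosis(x):
    """Sample excess kurtosis: mean z^4 minus 3 (0 for a normal distribution)."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return (z ** 4).mean() - 3.0

rng = np.random.default_rng(1)
normal = rng.normal(size=100_000)                   # near 0 on both measures
right_skewed = rng.exponential(size=100_000)        # long right tail
heavy_tailed = rng.standard_t(df=5, size=100_000)   # leptokurtic: tall and skinny

print(skewness(right_skewed) > 0)        # positive skew: right tail pulled out
print(excess_kurtosis(heavy_tailed) > 0) # positive kurtosis
```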

21
Q

three types of validity

A

a test’s ability to measure what it is supposed to

1) content validity
2) criterion validity
3) construct validity

22
Q

Content validity

A

when its items cover all aspects of the construct being measured.
Key issue: representation of range
- what are all the key thoughts, behaviours, skills, etc.?
Representation by importance
- how much of the total score is represented by each?
Success of representation is usually determined by:
1) in advance, with a prior literature review
2) expert review

23
Q

Criterion Validity

A

refers to the degree to which a measurement can accurately predict a relevant outcome (the criterion);
evaluation of your measure through comparison with an important “gold standard” outcome:
- delinquency assessment: correlation with number of offences
- job suitability: employee ratings on the job
- Graduate Record Examinations: grad school GPA

24
Q

Construct Validity

A

is the degree to which a test measures the construct that it is supposed to measure.
Two definitions:
1) An overall category encompassing all other forms of validity
2) The relationship of your measure with other theoretically relevant measures
Overall category:
- a consideration of all evidence to indicate validity
- everything we have discussed, and will discuss
- contains the second definition
- establishes whether we measured the ‘thing’ we wanted
Relationship to other measures: convergent validity & discriminant validity

25
Q

Convergent validity

A

Shows appropriately strong correlation with related constructs

  • related constructs, not facets of the construct you’re measuring
    ex. anxiety should show positive relations with depression and stress
26
Q

Face validity

A

does my measure seem to measure the right thing?
- must consider whether there are other reasonable interpretations
Very commonly argued for, as it is the easiest to assess
- but potentially the least useful

27
Q

Internal validity

A
  • It is the extent to which you can account for other plausible explanations
  • You need to either rule them out or show they’re actually implausible
  • error will again be a concern
28
Q

Error and validity

A
  • We use correlations between our measure & another measure to quantify validity.
  • We expect our correlations to be more modest than for reliability assessments
  • An R of about .10 is only good for showing discriminant validity
  • An R of about .60 is very good for showing convergent validity
  • An R of about .85 or higher is probably bad
29
Q

Threats to validity

A

Several common sources of error can cause problems (especially when systematic):
Motivation:
- we could end up with incomplete data
- we could have different context effects
- we could end up with only random data
Distractions:
- we need our participants to be able to concentrate
Sampling bias:
- we need to ensure we’ve sampled the right people
Criterion validity is often particularly sensitive to maturation threats.
Both reliability and validity can suffer from instrumentation threats.
Order effects: practice, fatigue, and boredom.