Quiz 1 that I created Flashcards

1
Q

What is a variable?

A

A property that can take different values

2
Q

Operational definition is

A

converting a construct into a measurable variable

3
Q

A construct variable is

A

an abstract (non-observable) concept

4
Q

Continuous variable is

A

a variable that can take any value along a continuum within a defined range

5
Q

A discrete variable is

A

described in whole units; it cannot be halved

6
Q

What are the levels of measurement?

A

Ratio, interval, ordinal, and nominal

7
Q

Ratio is

A

numbers that represent units with equal intervals, measured from a true zero, with no negative values. The highest level; carries the most information. Ex: distance, age, time

8
Q

Interval is

A

numbers have equal intervals but no true zero. Ex: calendar years, temperature

9
Q

Ordinal

A

numbers indicate a rank order

10
Q

Nominal

A

numbers are category labels only. Can be dichotomous (yes/no answers)

11
Q

What is an independent variable?

A

What you can manipulate/specify.

12
Q

What are dependent variables?

A

What you measure

13
Q

Types of independent variable

A

Active and attribute

14
Q

Attribute IV

A

cannot be manipulated. Ex: gender

15
Q

Active IV

A

can be manipulated. Ex: treatment given to a group

16
Q

Repeated factors

A

The same group/people are measured at all levels of an IV; they serve as their own controls (within-subjects)

17
Q

Independent factors

A

Different groups are used for each level of the IV (between-subjects)

18
Q

Single factor design

A

Just one independent variable

19
Q

Multifactorial design

A

two or more IVs

20
Q

Univariate design

A

only 1 dependent variable

21
Q

Multivariate design

A

multiple dependent variables

22
Q

What is reliability?

A

the extent to which a measurement is consistent & free from error

23
Q

What is measurement error?

A

The recognition that every measurement has a margin of error.

Observed score = true score +/- error

24
Q

Types of measurement error are

A

Systematic and random

25
Q

What is systematic error?

A

Error that is constant/predictable; it always either overestimates or underestimates the true value

26
Q

What is random error?

A

Error that is due to chance; the measurements vary unpredictably
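
A minimal sketch (not from the original deck) of how the two error types play out under the card 23 formula, Observed score = true score +/- error; all numbers are made up for illustration:

```python
# Hypothetical simulation: a constant systematic bias plus random error.
import random

random.seed(1)
true_score = 50.0
systematic_bias = 2.0  # assumed: the instrument always reads about 2 units high

for trial in range(5):
    random_error = random.gauss(0, 1.5)  # varies unpredictably by chance
    observed = true_score + systematic_bias + random_error
    print(f"trial {trial + 1}: observed = {observed:.1f}")
```

Every observed value sits above the true score by roughly the same amount (systematic error), while the trial-to-trial scatter reflects random error.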

27
Q

Sources of measurement error

A

Rater, instrument, or variability of the characteristic being observed

28
Q

Ways to improve reliability

A

Standardize measurement methods, train & test observers, refine & calibrate instruments, and blind raters to reduce bias.

29
Q

What are the two reliability coefficients, and what are they used for?

A

ICC (intraclass correlation coefficient): for continuous scale scores

Cohen’s kappa: for categorical scale scores
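
A minimal sketch (not part of the original deck) of how Cohen’s kappa can be computed for two raters’ categorical scores, using the standard formula kappa = (observed agreement - chance agreement) / (1 - chance agreement); the ratings below are made up, and in practice a library routine such as scikit-learn's cohen_kappa_score would normally be used:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n  # p_o
    c1, c2 = Counter(rater1), Counter(rater2)
    # chance agreement p_e: product of each rater's marginal proportions
    expected = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))
    return (observed - expected) / (1 - expected)

# hypothetical categorical ratings from two clinicians
r1 = ["normal", "impaired", "normal", "normal", "impaired", "normal"]
r2 = ["normal", "impaired", "impaired", "normal", "impaired", "normal"]
print(round(cohens_kappa(r1, r2), 2))  # 0.67
```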

30
Q

What is MDC?

A

Minimal Detectable Change: the smallest amount of change that exceeds measurement error (the ability of an instrument to detect change above measurement error).
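
A minimal sketch (an assumption beyond the card) of how MDC is commonly derived from test-retest data, using the usual formulas SEM = SD * sqrt(1 - ICC) and MDC95 = 1.96 * sqrt(2) * SEM; the numbers are made up:

```python
import math

sd = 8.0    # hypothetical standard deviation of baseline scores
icc = 0.90  # hypothetical test-retest reliability (ICC)

sem = sd * math.sqrt(1 - icc)      # standard error of measurement
mdc95 = 1.96 * math.sqrt(2) * sem  # smallest change exceeding error at 95% confidence
print(f"SEM = {sem:.2f}, MDC95 = {mdc95:.2f}")
```

Any observed change smaller than this MDC95 could plausibly be measurement error rather than real change.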

31
Q

Types of reliability

A

test-retest, inter-rater, intra-rater, alternate/parallel, internal consistency, split-half

32
Q

What is test-retest reliability?

A

Used to establish that an instrument is capable of measuring a variable consistently; it ignores the rater.
Assumes the condition being measured has not changed between tests.

33
Q

What is Inter-rater reliability?

A

Making sure that two or more raters can agree on a measurement for the same group. Best assessed in a single trial.

(Between raters)

34
Q

What is intra-rater reliability?

A

The same rater takes measurements of the same group on multiple occasions.
The main issue is rater bias, which can be reduced by blinding.

35
Q

What is alternate/parallel reliability?

A

Reliability between two alternate forms of a test or two comparable instruments.

Measured with correlation coefficients.

36
Q

What is internal consistency reliability?

A

Looks at whether all the items on an instrument are internally consistent, i.e., how well the items reflect the same result.
Mostly used on questionnaires.
Also checks that there is no redundancy.
Usually measured with Cronbach’s alpha.
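
A minimal sketch (not from the original deck) of Cronbach’s alpha, using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total score); the questionnaire data are made up for illustration:

```python
import numpy as np

# rows = respondents, columns = questionnaire items (hypothetical 1-5 ratings)
items = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)      # variance of each item
total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed score
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(round(alpha, 2))  # ~0.94 here: the items hang together well
```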

37
Q

What is split-half reliability?

A

Taking half of the items provided and comparing them with the other half
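
A minimal sketch (an assumption beyond the card) of split-half reliability: sum one half of the items, sum the other half, correlate the two, and, as is commonly done, adjust with the Spearman-Brown formula 2r / (1 + r); the data are made up:

```python
import numpy as np

# rows = respondents, columns = 6 hypothetical items
items = np.array([
    [4, 5, 4, 4, 5, 4],
    [2, 3, 2, 3, 2, 2],
    [5, 4, 5, 5, 4, 5],
    [3, 3, 2, 3, 3, 2],
    [4, 4, 5, 4, 5, 4],
])

half1 = items[:, ::2].sum(axis=1)    # odd-numbered items
half2 = items[:, 1::2].sum(axis=1)   # even-numbered items
r = np.corrcoef(half1, half2)[0, 1]  # correlation between the two halves
split_half = 2 * r / (1 + r)         # Spearman-Brown adjustment to full length
print(round(split_half, 2))
```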

38
Q

Which types of reliability are most relevant for clinicians?

A

Test-retest, inter-rater, and intra-rater

39
Q

What types of reliability are mostly for questionnaires, surveys and comparing different types of tests?

A

Alternate/parallel, internal consistency, and split-half reliability

40
Q

What is measurement validity?

A

the extent to which an instrument measures what it is intended to measure

41
Q

A test cannot be _____ if it is ____, but it can be ____ but not ____

A

valid, unreliable. Reliable but not valid

42
Q

Types of measurement validity

A

Face validity, content validity, criterion validity, and construct validity

43
Q

What is face validity?

Subjective or objective?

A

when an instrument appears to test what it’s supposed to.

Least rigorous, subjective, scientifically weak

44
Q

What is content validity?

What is it used for?

A

Do the items adequately represent the concept being measured (and exclude unrelated concepts)? Used in questionnaire development

45
Q

What is criterion validity?
Subjective or objective?
How is it measured?

A

Can the outcomes of the instrument be substituted for an established gold standard?
The highest and most objective form of validity.
Measured by correlation coefficients between the target measure & the criterion (gold-standard) value
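
A minimal sketch (not from the original deck) of how that correlation is computed: pair each score from the target instrument with the gold-standard value for the same subject and take the correlation coefficient; the paired scores are made up:

```python
import numpy as np

target_measure = np.array([12.0, 15.5, 9.8, 20.1, 17.3, 11.2])  # new instrument (hypothetical)
gold_standard = np.array([11.5, 16.0, 10.2, 19.4, 18.0, 10.8])  # established criterion (hypothetical)

r = np.corrcoef(target_measure, gold_standard)[0, 1]
print(f"criterion validity coefficient r = {r:.2f}")
```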

46
Q

Types of criterion validity

A

Concurrent validity and predictive validity

47
Q

What is concurrent validity?

A

Measurements from the target test and the criterion test are taken at approximately the same time and compared

48
Q

What is predictive validity?

A

establishes that the outcome of the target test can be used to predict a future score/outcome

49
Q

What is construct validity?

A

how well a tool measures an abstract concept/construct.

The ways to test it are not ideal

50
Q

Types of construct validity

A

Known-groups validity and convergent validity

51
Q

What is known-groups validity?

A

Do test results differ between two different known groups?

52
Q

What is convergent validity?

A

Is there a correlation with similar tests?

53
Q

______ is often the primary focus of research outcomes; we must be able to trust that the change is “real”.

A

Measuring change

54
Q

What are the issues affecting the validity of change?

A
  • Level of measurement (ordinal vs. ratio): e.g., is a change from 5 to 4 the same as a change from 2 to 1?
  • Reliability: is the observed change just measurement error?
  • Stability: are there meaningless natural fluctuations?
  • Baseline score: floor effect (minimum) or ceiling effect (maximum)

55
Q

What is responsiveness?

A

the ability of an instrument to detect minimal change over time.

56
Q

What is MCID?

A

Minimal Clinically Important Difference: the ability of an instrument to detect minimally important change; the smallest difference that signifies an important rather than a trivial difference. Should be larger than the MDC