Exam 2 - Study Material (Quiz 4) Flashcards

1
Q

What is measurement error?

A

The difference between the actual value of a quantity and the value obtained by a measurement

2
Q

What is error variance?

A

The extent to which the variance in test scores is attributable to error rather than a true measure of the behaviors.

3
Q

What is an observed test score?

A

A score derived from a set of items; it actually consists of the true score plus error.
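In classical test theory (the standard framework behind this card), the relationship is conventionally written with the symbols X, T, and E; the notation below is assumed, not quoted from the card.

```latex
% Classical test theory: the observed score X is the sum of
% the true score T and the measurement error E.
X = T + E
```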

4
Q

What are the 2 types of errors with instruments?

A

1. CHANCE – TRANSIENT

  • Subject factors: calm versus anxious
  • Instrument factors: misplaced cuff (if an instrument is 99% accurate, it is inaccurate 1% of the time)

2. SYSTEMATIC – CONSISTENT

  • Subject factors: social desirability
  • Instrument factors: not calibrated correctly
5
Q

What is reliability?

A
  • Degree of consistency with which an instrument measures an attribute (concept)
  • Precision, accuracy, stability, equivalence, and homogeneity over repeated measures
6
Q

What is a reliability coefficient?

A
  • Ranges from 0 to 1
  • The higher the error, the lower the coefficient
  • Must have a coefficient of at least .70
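As a rough illustration of "the higher the error, the lower the coefficient," here is a minimal Python sketch assuming the standard classical-test-theory view of reliability as the proportion of observed-score variance that is true-score variance; the variance values are made up.

```python
# Minimal sketch (assumes the classical-test-theory relationship):
# reliability = true-score variance / observed-score variance
#             = 1 - (error variance / observed-score variance)

def reliability(true_variance: float, error_variance: float) -> float:
    """Higher error variance -> lower reliability coefficient."""
    observed_variance = true_variance + error_variance
    return true_variance / observed_variance

print(reliability(true_variance=8.0, error_variance=2.0))  # 0.8 -> meets the .70 cutoff
print(reliability(true_variance=8.0, error_variance=8.0))  # 0.5 -> below the .70 cutoff
```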
7
Q

What are the 3 measures of reliability for an instrument?

A

1. Stability

  • Test-retest
  • Parallel or alternate form

2. Homogeneity

  • Item-total
  • Split-half
  • KR-20
  • Cronbach’s alpha

3. Equivalence

  • Parallel or alternate form
  • Interrater reliability
8
Q

What is stability and what does it measure?

A
  • Same results are obtained with repeated measures over a period of time
  • Measures the concept consistently over a period of time
  • Test-retest reliability
    • Example: test-retest interval was 2 weeks and r = .77
  • Parallel or alternate form
    • Two forms of the same test, e.g., the Partner Relationship Inventory: “I am able to tell my partner how I feel” vs. “My partner tries to understand how I feel”
9
Q

What is test-retest reliability?

A

Administration of the same instrument twice to the same subjects under the same conditions within a prescribed time interval, with a comparison of the paired scores to determine the stability of the measure.
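A minimal sketch of how a test-retest coefficient is usually obtained: compute Pearson's r on the paired scores from the two administrations. The score lists below are invented illustration data.

```python
from statistics import mean, stdev

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson correlation of paired scores from two administrations."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

time1 = [12, 15, 9, 20, 17, 11]   # first administration
time2 = [13, 14, 10, 19, 18, 12]  # same subjects, same conditions, two weeks later
print(round(pearson_r(time1, time2), 2))  # a value near +1 indicates a stable measure
```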

10
Q

What is homogeneity?

A
  • Whether the instrument measures the same attribute under similar conditions
  • Internally consistent: all items measure the same attribute (concept)
    • Item to total: relationship between each item and the total scale
    • Split-half: comparison of two halves of the test
    • Kuder-Richardson (KR-20): dichotomous response format, e.g., “yes/no” or “true/false”
    • Cronbach’s alpha: every possible split half; Likert-scale items
11
Q

What is split-half reliability?

A

An index of the comparison between the scores on one half of a test with those on the other half to determine the consistency in response to items that reflect specific content
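A hedged sketch of one common split-half procedure: correlate scores on the odd-numbered items with scores on the even-numbered items, then apply the Spearman-Brown correction (a standard adjustment the card does not name). The response matrix is illustrative.

```python
from statistics import mean, stdev

def pearson_r(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

def split_half_reliability(item_scores):
    """item_scores: one row of item responses per subject (illustrative data)."""
    odd = [sum(row[0::2]) for row in item_scores]   # score on odd-numbered items
    even = [sum(row[1::2]) for row in item_scores]  # score on even-numbered items
    r_half = pearson_r(odd, even)
    return 2 * r_half / (1 + r_half)                # Spearman-Brown correction

scores = [[4, 5, 4, 4], [2, 1, 2, 2], [5, 5, 4, 5], [3, 3, 2, 3], [1, 2, 1, 1]]
print(round(split_half_reliability(scores), 2))
```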

12
Q

What is the Kuder-Richardson (KR-20)?

A

It is an estimate of homogeneity used for instruments that have a dichotomous response format.
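A sketch of the textbook KR-20 formula for dichotomous items; the 0/1 responses (one row per subject) are illustrative.

```python
from statistics import pvariance

def kr20(responses: list[list[int]]) -> float:
    """Kuder-Richardson 20 for 0/1 (yes/no, true/false) item responses."""
    k = len(responses[0])                     # number of items
    n = len(responses)                        # number of subjects
    totals = [sum(row) for row in responses]  # each subject's total score
    p = [sum(row[i] for row in responses) / n for i in range(k)]  # proportion "yes" per item
    pq = sum(pi * (1 - pi) for pi in p)       # sum of item variances
    return (k / (k - 1)) * (1 - pq / pvariance(totals))

data = [[1, 1, 0, 1], [0, 0, 0, 1], [1, 1, 1, 1], [0, 1, 0, 0], [1, 1, 1, 0]]
print(round(kr20(data), 2))
```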

13
Q

What is Cronbach’s alpha?

A

Test of internal consistency that simultaneously compares each item in a scale to all others.
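A sketch of the usual Cronbach's alpha computation: compare the sum of the item variances with the variance of the total scores. The Likert-type responses are illustrative.

```python
from statistics import pvariance

def cronbach_alpha(responses: list[list[int]]) -> float:
    """Rows are subjects, columns are Likert-type items (illustrative data)."""
    k = len(responses[0])                                   # number of items
    item_cols = list(zip(*responses))                       # columns = item score lists
    item_var = sum(pvariance(col) for col in item_cols)     # sum of item variances
    total_var = pvariance([sum(row) for row in responses])  # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

likert = [[4, 5, 4, 4], [2, 1, 2, 2], [5, 5, 4, 5], [3, 3, 2, 3], [1, 2, 1, 1]]
print(round(cronbach_alpha(likert), 2))  # values of at least .70 are usually considered acceptable
```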

14
Q

What is equivalence?

A

Degree to which different investigators with the same instrument obtain the same results

  • Parallel or alternate forms
  • Interrater reliability: consistency of observations, expressed as kappa, which ranges from +1 to 0; kappa above .80 indicates good agreement and .68 to .80 is acceptable
15
Q

What is interrater reliability?

A

The consistency of observations between two or more observers; often expressed as a percentage of agreement between raters or observers or a coefficient of agreement that takes into account the element of chance. This usually is used with the direct observation method.

16
Q

What is kappa?

A

Expresses the level of agreement observed beyond the level that would be expected by chance alone. Kappa (K) ranges from +1 (total agreement) to 0 (no agreement). K greater than .80 generally indicates good reliability. K between .68 and .80 is considered acceptable/substantial agreement. Levels lower than .68 allow only tentative conclusions to be drawn.
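A sketch of Cohen's kappa, the agreement-beyond-chance statistic described here; the two raters' yes/no observations are invented.

```python
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # raw agreement
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_chance = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes"]
b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "yes"]
print(round(cohens_kappa(a, b), 2))  # > .80 good; .68-.80 acceptable (per this card)
```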

17
Q

What is validity?

A
  • Degree to which an instrument measures what it is supposed to measure
  • If a tool is not reliable, it cannot be valid.
  • Are investigators measuring what they think they are measuring?
18
Q

What are the 3 types of validity?

A
1. Content

2. Criterion-related

  • Concurrent
  • Predictive

3. Construct

  • Six types
19
Q

What is content validity?

A
  • The degree to which the content of the measure represents the universe of content, or the domain, of a given behavior.
  • Ability of the instrument to adequately represent the domain of the concept being tested.
20
Q

What is the content validity index?

A

A calculation of the agreement among a panel of experts who rated the ability of the instrument to represent its content; acceptable values range from about .70–.80 up to 1.0.
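A sketch of one common way an item-level CVI is computed: the proportion of expert panelists who rate the item as relevant (here assumed to be a rating of 3 or 4 on a 4-point relevance scale). The scale, threshold, and ratings are illustrative assumptions.

```python
def item_cvi(ratings: list[int], relevant_threshold: int = 3) -> float:
    """Proportion of experts rating the item as relevant (assumed 4-point scale)."""
    relevant = sum(r >= relevant_threshold for r in ratings)
    return relevant / len(ratings)

expert_ratings = [4, 3, 4, 4, 2]  # five experts rating one item (made-up data)
print(item_cvi(expert_ratings))   # 0.8, within the commonly cited .70-.80 to 1.0 range
```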

21
Q

What is criterion-related validity?

A

It is the extent to which an instrument corresponds to some other observation (the criterion) that accurately measures the concept or phenomenon of interest.

22
Q

What are the 2 types of criterion related validity?

A
  1. Concurrent
  2. Predictive
23
Q

What is Concurrent validity?

A

The degree of correlation of two measures (or tests) of the same concept that are administered at the same time.

For example, an established (old) test versus a new test of the same concept.

24
Q

What is predictive validity?

A

The degree of correlation between the measure of the concept and some future measure of the same concept.

A test that predicts a future measure of the concept.

25
Q

What is construct validity?

A

It is the extent to which a test measures a theoretical construct or trait (or concept). It validates the underlying theory of the measurement.

  1. Hypothesis-testing
  2. Convergent, divergent, and multitrait-multimethod approaches
  3. Contrasted groups (known groups)
  4. Factor analytic approach
26
Q

What is a hypothesis-testing approach?

A
  • It is a method used when an investigator uses the theory or concept underlying the measurement instrument to validate the instrument.
  • The investigator does this by developing hypotheses regarding the behavior of individuals with varying scores on the measurement instrument, collecting data to test the hypotheses, and determining whether the findings support the underlying theory.
27
Q

What is convergent validity?

A

A strategy for assessing construct validity in which two or more tools that theoretically measure the same construct are administered to subjects. If the measures are positively correlated, convergent validity is said to be supported.

28
Q

What is divergent validity?

A

Uses measurement approaches that differentiate one construct from others that may be similar

29
Q

What is a multitrait-multimethod approach?

A

A type of validity that uses more than one method to assess the accuracy of an instrument (e.g., observation and interview of anxiety).

30
Q

What is the critiquing criteria for an instrument?

A

A. Do instruments selected for the study adequately measure study variables?

B. Do the instruments in the study have adequate validity and reliability?

C. Have strengths & weaknesses of the reliability and validity of each instrument been presented?

D. What additional reliability or validity testing is needed to improve instrument quality?

E. Are the strengths and weaknesses related to reliability and validity of instruments addressed in the Discussion, Limitations or Recommendations of the study?

31
Q

What is face validity?

A

A type of content validity that uses an expert’s opinion to judge the accuracy of an instrument. (Some would say that face validity verifies that the instrument gives the subject or expert the appearance of measuring the concept.)

32
Q

What is internal consistency?

A

The extent to which items within a scale reflect or measure the same concept.

33
Q

What is a Likert scale?

A

A scale that is formatted to ask subjects to respond to a question on a scale of varying degrees of intensity between two extremes.

34
Q

What is systematic (constant) error?

A

Measurement error that is attributable to relatively stable characteristics of the study sample that may bias their behavior and/or cause incorrect instrument calibration. Such error has a systematic biasing influence on the subject’s responses and thereby influences the validity of the instruments

35
Q

What are the different levels of measurement?

A

1. Nominal – Exhaustive, mutually exclusive categories

  • Dichotomous: two true values, e.g., male/female
  • Categorical: more than two true values, e.g., marital status
  • Ex: diagnoses

2. Ordinal – Variables defined to have an order or relative (not equal) ranking of objects

  • Ex: edema rank

3. Interval – Equal or standard intervals between the numbers, no absolute zero

  • Ex: temperature

4. Ratio – Equal intervals and an absolute zero

  • Ex: blood pressure
36
Q

What is nominal measurement?

A

Level used to classify objects or events into categories without any relative ranking (e.g., gender, hair color).

37
Q

What is a dichotomous variable?

A

A nominal variable that has two categories (e.g., male/female).

38
Q

What is a categorical variable?

A

A variable that has mutually exclusive categories but has more than two values.

39
Q

What is an ordinal measurement?

A

Level used to show rankings of events or objects; numbers are not equidistant (equally distant), and zero is arbitrary (e.g., class ranking).

40
Q

What is an interval measurement?

A

Level used to show rankings of events or objects on a scale with equal intervals between numbers but with an arbitrary zero (e.g., centigrade temperature).

41
Q

What is ratio measurement?

A

Level that ranks the order of events or objects, and that has equal intervals and an absolute zero (e.g., height, weight).

42
Q

What are descriptive statistics?

A

Organize, summarize, and describe data by measuring:

  • Frequency distribution
  • Central tendency
43
Q

What is frequency distribution?

A
  • Descriptive statistical method for summarizing the occurrences of events under study.
  • Each event is counted and grouped by frequency of occurrence. Frequency distributions take the following forms:
  1. Table format
  2. Histogram
  3. Frequency polygon
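A minimal sketch of a frequency distribution in table format, built from invented pain ratings.

```python
from collections import Counter

pain_ratings = [2, 3, 3, 1, 4, 3, 2, 2, 5, 3]   # illustrative data
frequencies = Counter(pain_ratings)             # count each event's occurrences

print("Score  Frequency")
for score in sorted(frequencies):
    print(f"{score:>5}  {frequencies[score]:>9}")
```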
44
Q

What are the measures of central tendency?

A

  • MODE – most frequent score
    • All levels of measurement
  • MEDIAN – middle score; 50% of scores fall above and 50% below
    • Ordinal, interval, and ratio
  • MEAN – average of all scores
    • Interval and ratio
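A short sketch of the three measures on invented scores, using Python's statistics module.

```python
from statistics import mean, median, mode

scores = [70, 75, 75, 80, 85, 90, 95]  # illustrative interval-level data
print(mode(scores))    # 75    - most frequent value (usable at any level of measurement)
print(median(scores))  # 80    - middle score; 50% fall above and 50% below
print(mean(scores))    # ~81.4 - arithmetic average (interval/ratio data)
```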

45
Q

What is a normal distribution?

A

Data group themselves about a midpoint in a distribution closely approximating the normal curve

“Theoretical concept”

46
Q

What is variability or dispersion?

A
  • Relates to the spread of the data
  • Enables you to evaluate the homogeneity or heterogeneity of the sample
47
Q

What is a standard deviation?

A
  • Average deviation of scores from the mean
  • Most frequently used measure of variability
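A minimal sketch on invented scores; stdev below is the sample standard deviation.

```python
from statistics import mean, stdev

scores = [4, 7, 6, 5, 8, 6]      # illustrative data
print(round(mean(scores), 2))    # 6.0
print(round(stdev(scores), 2))   # ~1.41 -> scores typically fall about 1.4 points from the mean
```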
48
Q

What are inferential statistics?

A
  • Assess the probability that what was found (the statistic) in one group (the sample) really occurs in the larger group (the population)
    • Parameter: population characteristic
    • Statistic: sample characteristic
  • Allow us to test hypotheses using negative inference and assess the level of significance
  • When using inferential statistics, you are actually testing both the research (scientific) hypothesis and the null hypothesis
49
Q

What is hypothesis testing?

A
  • Decide whether differences/associations found represent chance (or error) or whether the independent variable produced the change in the dependent variable (probability)
  • Based on negative inference
  • Underlying the research hypothesis is the null hypothesis, and the null is what is tested
50
Q

What is sampling error?

A

The tendency of statistics to fluctuate from one sample to another.

51
Q

What is probability?

A

The probability of an event is the event’s long-run relative frequency in repeated trials under similar conditions.

52
Q

What is level of significance? (Alpha level)

A

The risk of making a type I error; it is set by the researcher before the study begins.

53
Q

What is a type 1 error?

A
  • Rejecting the null hypothesis when it should not be rejected (the risk is set by alpha, e.g., .05)
  • Rejecting a true null hypothesis
  • The rejection of a null hypothesis that is actually true.
54
Q

What is a type 2 error?

A
  • The acceptance of a null hypothesis that is actually false.
  • Accepting a false null hypothesis
55
Q

What are the different types of significance?

A
  • Statistical significance – numeric significance
  • Clinical significance – how much change, cost, etc.
56
Q

What are nonparametric statistics and what are some characteristics of this?

A

Statistics that are usually used when variables are measured at the nominal or ordinal level because they do not estimate population parameters and involve less restrictive assumptions about the underlying distribution.

  • Used with nominal and ordinal data
  • Used with small samples
  • Distribution-free (does not require assumption of normal distribution)
  • Not based on population parameters
57
Q

What are parametric statistics and their characteristics?

A

Inferential statistics that involve the estimation of at least one parameter, require measurement at the interval level or above, and involve assumptions about the variables being studied. These assumptions usually include the fact that the variable is normally distributed.

  • Require interval or ratio data
  • Assume variables are normally distributed
  • More powerful because they assume variables are normally distributed
58
Q

Describe statistical decision making.

A

Always statistically test the null hypothesis.

  • How often, or with what probability, could these results have occurred by chance?
  • Decide whether to reject the null or not.

Rejecting the null gives support to the research hypothesis.
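A minimal sketch of the decision rule these cards describe: compare the obtained p-value with the alpha level set before the study (both numbers are illustrative).

```python
alpha = 0.05    # level of significance chosen by the researcher in advance
p_value = 0.03  # probability the observed result occurred by chance alone

if p_value < alpha:
    print("Reject the null hypothesis -> supports the research hypothesis")
else:
    print("Fail to reject the null hypothesis")
```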

59
Q

What is a continuous variable?

A

A variable that can take on any value between two specified points (e.g., weight).

60
Q

What is modality?

A

The number of peaks in a frequency distribution.

61
Q

What are Multivariate Statistics?

A

A statistical procedure that involves two or more variables.

62
Q

What is a parameter?

A

A characteristic of a population.

63
Q

What is a scientific hypothesis (research hypothesis)?

A

The researcher’s expectation about the outcome of a study; also known as the research hypothesis.