PSYC-549 Applied Measurement Techniques Flashcards

1
Q

Achievement test

A

Tests an individual on previous learning. Generally used in schools and other educational settings. Measures or evaluates previously acquired knowledge. Can be multiple choice or essay, standardized, and may rank test takers (1st, 2nd, 3rd); higher scores represent greater mastery.

Ex: school math test, statewide standardized exam

2
Q

Aptitude test

A

Test designed to measure one’s potential for learning a specific skill

Measures the potential to learn, or a natural ability to perform tasks or react to different situations. Often used to gauge high school students' potential for college. Prone to bias.

Ex: career aptitude test, IQ test

3
Q

Assessment interview

A

Initial information-gathering interview

Conducted with the intent to develop a treatment plan, typically during the initial session/meeting.
May be structured, with specific questions asked in a specific order and strictly adhered to, in order to provide better reliability and validity.
Or may be unstructured, where the interviewer follows their own line of questioning, allowing them to pursue relevant topics as they arise.
May be used in conjunction with other assessment techniques (e.g., psychological tests, behavioral observations) to form a fuller picture of the individual, diagnosis, and treatment.

4
Q

Clinical vs. Statistical significance

A

Clinical - Practical importance under real-world conditions: is the obtained result important or meaningful? Associated with Evidence-Based Treatment. Will it be meaningful in the real world? Looks at the question from a therapeutic standpoint and at the bigger picture.

Statistical - The degree to which a result is not reasonably attributable to chance: is the obtained result likely attributable to chance factors? Associated with Empirically Supported Treatment. Looks at the question from an experimental, data-driven standpoint. (A numeric sketch of the distinction follows this card.)
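
A minimal numeric sketch of the distinction, assuming made-up data: with a very large sample, even a tiny group difference crosses the threshold of statistical significance, while the effect size suggests little clinical importance. The group means, SD, and sample size below are hypothetical.

```python
# Illustrative simulation with made-up numbers: a 0.5-point shift on a
# 100-point scale reaches statistical significance when n is huge,
# yet the effect size suggests little clinical importance.
import random
from statistics import mean, stdev

random.seed(0)
n = 20_000
control = [random.gauss(50.0, 10.0) for _ in range(n)]
treated = [random.gauss(50.5, 10.0) for _ in range(n)]  # tiny hypothetical shift

# Two-sample t statistic computed by hand (standard library only)
m1, m2 = mean(control), mean(treated)
s1, s2 = stdev(control), stdev(treated)
standard_error = ((s1 ** 2) / n + (s2 ** 2) / n) ** 0.5
t = (m2 - m1) / standard_error

# Cohen's d as a rough gauge of practical (clinical) importance
pooled_sd = ((s1 ** 2 + s2 ** 2) / 2) ** 0.5
d = (m2 - m1) / pooled_sd

print(f"t = {t:.2f}  (|t| > 1.96 -> statistically significant at p < .05)")
print(f"Cohen's d = {d:.2f}  (a very small effect in practical terms)")
```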

5
Q

Construct

A

Characteristic which varies from individual to individual, but which is not directly observable.
The characteristic is an internal event or process that must be inferred from external behavior. Constructs may be derived from theory, research, or observation.
Tests generally are designed to measure an internal construct.

Example: The counselor administered a paper and pencil assessment measure that solicited responses related to fidgeting, excessive worrying, difficulty concentrating - all representing the construct of anxiety.

6
Q

Correlation vs. Causation

A

Correlation refers to the relationship between two variables (not causation)

Causation refers to one variable's influence on another

7
Q

Criterion-referenced scoring/tests

A

Test-taker demonstrates a specific ability with performance measured against a fixed set of predetermined standards

To establish a cut-off score, the test is given to two groups - a group that has knowledge in the area/has been taught and a group that has not. The least frequent score in the combined distribution between the two groups' modes - the antimode - establishes the point at which mastery begins (see the sketch after this card).

Ex: achievement tests - school math test, driver's test, golf (and other single-player sports)
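
A minimal sketch of the antimode procedure described above, using hypothetical score lists for an untaught and a taught group; the scores and the resulting cut-off are illustrative only.

```python
# Find the antimode: the least frequent score lying between the two
# groups' modes in the combined score distribution.
from collections import Counter

untaught = [2, 3, 3, 4, 4, 4, 5, 5, 6]        # hypothetical scores
taught = [6, 7, 8, 8, 9, 9, 9, 10, 10]        # hypothetical scores

combined = Counter(untaught + taught)
mode_untaught = Counter(untaught).most_common(1)[0][0]  # most frequent untaught score (4)
mode_taught = Counter(taught).most_common(1)[0][0]      # most frequent taught score (9)

between = range(mode_untaught + 1, mode_taught)
antimode = min(between, key=lambda s: combined.get(s, 0))
print(f"Mastery begins at a score of {antimode}")
```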

8
Q

Criterion-related validity

A

How well a measure predicts the outcome of another measure or performance in a specific setting (e.g., GRE scores and graduate school performance). Evidence is provided by a high correlation between the test and a well-defined criterion.

Ex: on-road driver's test; comparison of scores on the SAT with first-semester grade point average (GPA) in college, which assesses the degree to which SAT scores are predictive of college performance (predictive validity; see the sketch after this card)
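
A hedged sketch of gathering predictive-validity evidence: correlating a predictor (SAT score) with a criterion (first-semester GPA). The paired values below are hypothetical; the validity coefficient is simply the correlation between predictor and criterion.

```python
from statistics import correlation  # Pearson's r; requires Python 3.10+

sat = [1050, 1120, 1190, 1260, 1330, 1400, 1470, 1540]   # predictor (hypothetical)
gpa = [2.4, 2.6, 2.9, 3.0, 3.3, 3.2, 3.6, 3.8]           # criterion (hypothetical)

r = correlation(sat, gpa)
print(f"Validity coefficient r = {r:.2f}")  # higher r = stronger criterion-related evidence
```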

9
Q

Cross-validation

A

Cross-Validation is a statistical method of evaluating and comparing learning algorithms by dividing data into two segments: one used to learn or train a model and the other used to validate the model.

More generally, evaluating a measure by segmenting the data and running the analysis on each separate sample (sketched after this card).
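
A minimal two-segment cross-validation sketch with hypothetical data: a simple linear model is fit on the training segment and its prediction error is checked on the held-out validation segment (requires Python 3.10+ for statistics.linear_regression).

```python
from statistics import linear_regression, mean  # Python 3.10+

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 13.9, 16.1]

train_x, train_y = x[:4], y[:4]  # segment used to train the model
valid_x, valid_y = x[4:], y[4:]  # segment used to validate it

slope, intercept = linear_regression(train_x, train_y)
errors = [abs(slope * xi + intercept - yi) for xi, yi in zip(valid_x, valid_y)]
print(f"Mean absolute error on the held-out segment: {mean(errors):.2f}")
```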

10
Q

Normal curve

A

A symmetrical, bell-shaped distribution

Also known as the normal distribution; refers to the bell-shaped curve a histogram forms when the data are normally distributed. The curve is symmetrical, with most data concentrated near the mean/average and less toward the extremes. Scores from large random samples of many traits tend to follow the normal curve, which parametric statistics assume (see the sketch after this card).
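
A quick sketch of the bell-curve idea: draw a large random sample from a normal distribution (an IQ-like scale with mean 100 and SD 15, chosen only for illustration) and count how many values fall near the mean.

```python
import random
from statistics import mean, stdev

random.seed(1)
scores = [random.gauss(100, 15) for _ in range(100_000)]

m, sd = mean(scores), stdev(scores)
within_1sd = sum(m - sd <= s <= m + sd for s in scores) / len(scores)
within_2sd = sum(m - 2 * sd <= s <= m + 2 * sd for s in scores) / len(scores)
print(f"Within 1 SD of the mean: {within_1sd:.0%}")  # roughly 68%
print(f"Within 2 SD of the mean: {within_2sd:.0%}")  # roughly 95%
```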

11
Q

Norm-referenced scoring/tests

A

Standardized tests that are designed to compare and rank test takers in relation to one another (a percentile-rank sketch follows this card).

Norm-referenced tests report whether test takers performed better or worse than a hypothetical average student, which is determined by comparing scores against the performance results of a statistically selected group of test takers, typically of the same age or grade level, who have already taken the exam.

Ex: cognitive tests - SAT, GRE, IQ, behavior assessments
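
A small sketch of norm-referenced interpretation, assuming a hypothetical norm group: one examinee's raw score is expressed as the percentage of the norm group scoring below it.

```python
norm_group = [41, 47, 52, 55, 58, 60, 63, 66, 70, 74]  # hypothetical comparison-group scores
raw_score = 63

percent_below = 100 * sum(s < raw_score for s in norm_group) / len(norm_group)
print(f"A raw score of {raw_score} exceeds {percent_below:.0f}% of the norm group")
```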

12
Q

Objective tests

A

Not open to interpretation: there is a correct and an incorrect answer, and scoring is standardized.
Reduces the bias and subjectivity of both the grader and the test taker.

Ex: school tests - multiple choice, true/false

13
Q

Projective tests

A

Unstructured, interpretive tests used to subjectively interpret deeper meaning

Tests in which the stimulus, the required response, or both are ambiguous. The general idea behind projective tests is that a person's interpretation of an ambiguous stimulus reflects his or her unique characteristics.

Have fallen out of favor in recent years. Usually these types of tests require extensive training to accurately interpret responses.

Ex: the Rorschach inkblot test and the Thematic Apperception Test (TAT), among others.

14
Q

Reliability (types of)

A

extent to which a test or measure yields consistent results across administrations

Test-Retest: consistency of one person’s scores over multiple attempts

Parallel-Forms: multiple separate but equivalent forms of a test, constructed in the same way from the same content domain, are developed and their scores are correlated to assess the consistency of results.

Split-Half: one test is divided in half and the two halves are correlated (see the sketch after this card)

Inter-Rater: correlation between the scores of two independent raters
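
A hedged sketch of split-half reliability with hypothetical 0/1 item responses: each person's test is split into odd and even items, and the two half-scores are correlated across people (requires Python 3.10+ for statistics.correlation).

```python
from statistics import correlation  # Python 3.10+

responses = [            # rows = people, columns = items (1 = correct)
    [1, 1, 0, 1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 0, 0],
    [1, 1, 1, 0, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 0],
]

odd_half = [sum(person[0::2]) for person in responses]   # items 1, 3, 5, 7
even_half = [sum(person[1::2]) for person in responses]  # items 2, 4, 6, 8
print(f"Split-half correlation r = {correlation(odd_half, even_half):.2f}")
```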

15
Q

Standard deviation

A

Measure of variability in a set of scores:
the average amount by which scores differ from the mean of the distribution.

Gives an approximation of how far a typical score falls above or below the average score.
Represents the spread of scores (smaller = scores cluster closer to the mean; larger = a wider distribution/spread), as in the sketch after this card.
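
A short worked example with hypothetical test scores, showing the standard deviation as a summary of spread around the mean (and its square, the variance).

```python
from statistics import mean, pstdev, pvariance

scores = [70, 75, 80, 85, 90]                        # hypothetical test scores
print(f"Mean = {mean(scores):.1f}")                  # 80.0
print(f"Standard deviation = {pstdev(scores):.2f}")  # about 7.07 (population formula)
print(f"Variance = {pvariance(scores):.1f}")         # 50.0, in squared units
```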

16
Q

Standard error of measurement

A

Estimate of how much an individual's score would be expected to change on re-testing with the same or an equivalent form of the test.

Creates a confidence interval (the range within which the true score would be expected to fall; see the sketch after this card). The true score is always unknown because no measure can be constructed that perfectly reflects it. Error can occur systematically due to how the instrument was constructed, or the user may make an error in administering it.
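
A sketch of the confidence interval the SEM creates around an observed score, using the standard formula SEM = SD × √(1 − reliability); the test SD, reliability coefficient, and observed score below are hypothetical.

```python
import math

test_sd = 15.0        # standard deviation of scores on the test (hypothetical)
reliability = 0.91    # e.g., a test-retest or split-half coefficient (hypothetical)
observed = 104        # one examinee's obtained score (hypothetical)

sem = test_sd * math.sqrt(1 - reliability)
lower, upper = observed - 1.96 * sem, observed + 1.96 * sem  # 95% confidence band
print(f"SEM = {sem:.1f}; the true score likely falls between {lower:.0f} and {upper:.0f}")
```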

17
Q

Standard scores

A

Standardized scores with a fixed mean and SD to which raw scores can be converted for comparison

Used to make comparisons among scores; raw scores can be converted into standard scores to make objective comparisons (see the sketch after this card).

Ex: score on most school tests
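
A sketch of converting raw scores to standard scores so scores from different tests can be compared, assuming a small hypothetical score set: z-scores fix the mean at 0 and SD at 1, and can be rescaled to any other fixed mean and SD (50 and 10 are used here only as an example).

```python
from statistics import mean, pstdev

raw = [62, 70, 74, 81, 88]                    # hypothetical raw scores
m, sd = mean(raw), pstdev(raw)

z_scores = [(x - m) / sd for x in raw]        # fixed mean 0, SD 1
rescaled = [50 + 10 * z for z in z_scores]    # fixed mean 50, SD 10 (example choice)
print(["{:+.2f}".format(z) for z in z_scores])
print(["{:.0f}".format(t) for t in rescaled])
```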

18
Q

Standardization sample

A

Representative group of people who take the test and establish the norms

A comparison group consisting of individuals who have been administered a test under standard conditions - that is, with the instructions, format, and general procedures outlined in the test manual for administering the test. Creates the norms for the test and serves as the frame of reference for score interpretation on norm-referenced tests. It may not always be representative, but it should be.

Ex: a sample of cadets from The Citadel must be representative of (reflect or mimic) the entire corps of cadets

19
Q

Test bias

A

the tendency of scores on a test to systematically over- or underestimate the true performance of individuals to whom that test is administered, particularly because they are members of specific groups (e.g., ethnic minorities, one or the other gender).

Difference in test scores that can be attributed to demographic variables

20
Q

Validity (types of)

A

the extent to which a test measures what it claims to measure

Content: how well a test's items encompass the full domain it aims to measure

Criterion-Related: compares the measure to some external standard (criterion) with which it should be associated if it is valid

Construct: how well a test measures the construct it purports to measure

21
Q

Variance

A

A measure of variability based on the squared deviations of the data values about the mean

How much scores vary from the mean (based on their distance from the mean); variance, range, and standard deviation are all measures of variability.

Ex: how much a sample of peoples’ heights varies from the average height