Test1 Flashcards

Don't Fail

1
Q

Define Test

A

A measurement instrument that consists of a SAMPLE OF BEHAVIOR obtained under STANDARDIZED conditions and evaluated using established scoring rules

2
Q

3 Test User Qualifications

A

Level A: Limited range; ex: achievement/educational; no specialized training required beyond perhaps a bachelor's degree
Level B: Some specialized training required; ex: aptitude/personality; requires a master's degree and coursework on testing
Level C: Extensive training required; ex: intelligence/projective; requires advanced training, a doctoral degree, and licensure

3
Q

2 Main Reasons for using Tests

A
  1. Efficiency

2. Objectivity

4
Q

3 Uses of Tests

A
  1. Classification
  2. Research
  3. Diagnosis and treatment planning
5
Q

5 Major Categories of Tests

A
  1. Mental Ability- cognitive functioning
  2. Achievement- what they know
  3. Personality- normal or psychopathological
  4. Interests, attitudes, and values- career
  5. Neuropsychological- CNS/Brain viewing
6
Q

Major Source of Info About Tests

A

Published: Tests in Print
Unpublished: Directory of Unpublished Experimental Mental Measures
The ETS Test Collection covers both published and unpublished tests

7
Q

2 Systematic Reviews

A
  1. Mental Measurements Yearbook

2. Test Critiques

8
Q

5 Factors Affecting Responses to Assessment

A
  1. Motivation
  2. Anxiety
  3. Coaching
  4. Physical/Psychological Conditions
  5. Social Desirability
9
Q

3 Levels of defining a variable

A
  1. Construct- general definition of a variable
  2. Measure- operational definition- often a test
  3. Raw Data- numbers resulting from the measure
10
Q

Why do we convert raw scores to normed (z) scores?

A

Raw scores mean nothing by themselves! Converting them to z-scores gives them context; from z-scores they can then be converted to t-scores, percentiles, or other standardized scores to make meaning of the data
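As a minimal sketch (the raw scores below are made up for illustration), the raw-to-z conversion is just (score − mean) / SD:

```python
# A minimal sketch of the raw-score -> z-score conversion.
# The raw scores below are made up for illustration.
from statistics import mean, pstdev

raw_scores = [70, 80, 85, 90, 100]

m = mean(raw_scores)     # 85.0
sd = pstdev(raw_scores)  # 10.0 (population SD)

# z says how many SDs a score sits above or below the mean
z_scores = [(x - m) / sd for x in raw_scores]
print(z_scores)  # [-1.5, -0.5, 0.0, 0.5, 1.5]
```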

11
Q

Why is a normal curve important?

A

Everything is based on a normal curve because it is the assumed distribution- allows comparison to others

12
Q

Why do we convert z-scores to standardized scores?

A

Z-scores are hard to interpret (negative values, decimals), so we convert them to standardized scores to get clearer meaning from the data when comparing to others
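A minimal sketch of two common conversions, assuming a normal distribution: the t-score transformation (mean 50, SD 10) and the normal-curve percentile.

```python
# A minimal sketch of turning z-scores into friendlier standardized scores.
from statistics import NormalDist

def t_score(z):
    # t-scores rescale z to mean 50, SD 10
    return 50 + 10 * z

def percentile(z):
    # percent of a normal distribution at or below z
    return 100 * NormalDist().cdf(z)

print(t_score(1.0))            # one SD above the mean -> 60.0
print(round(percentile(1.0)))  # roughly the 84th percentile
```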

13
Q

Age Norms

A

normal scores for a particular age

ex: normal height for a 10yo

14
Q

Grade Norms

A

Normal scores for a particular grade

ex: typical reading level for 5th grade

15
Q

National Norms

A

Represents the entire national population; ex: SAT/GRE or the WISC

16
Q

International Norms

A

Developed in the context of international studies of school achievement

17
Q

Convenience Norms

A

Based on a group from a single geographic location; limited range

ex: a self-concept test based on 250 8th graders from one northeastern city

18
Q

User Norms

A

Based on the groups who actually took the test

ex: SAT

19
Q

Subgroup Norms

A

Taken from the total norm group; separate norms may be provided by sex, race, etc.
ex: Zac is at the 60th percentile nationally but the 30th percentile in his subgroup

20
Q

Local Norms

A

Scores are compared not only nationally but also in relation to the scores of other people within a local group
ex: how this year's seniors compare to seniors of previous years at the same school

21
Q

Difference between norm-referenced and criterion referenced

A

Norm-Referenced: scores are compared to those of a representative sample of individuals (i.e., to each other)
Criterion-Referenced: scores are compared to a predetermined criterion/standard; ex: licensure exams

22
Q

X = T + E

A

From classical test theory
X = obtained score
T = true score
E = measurement error

The consistent (true) score plus the score effect of inconsistency (error) equals the obtained score
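The relationship can be sketched with a small simulation (all distribution parameters below are made up): the proportion of obtained-score variance that comes from true scores is the test's reliability.

```python
# A small simulation of classical test theory, X = T + E.
# The parameters (mean 100, SD 15 for T; SD 5 for E) are made up.
import random
from statistics import pvariance

random.seed(0)
n = 10_000

true_scores = [random.gauss(100, 15) for _ in range(n)]  # T: stable ability
errors = [random.gauss(0, 5) for _ in range(n)]          # E: mean-zero noise
obtained = [t + e for t, e in zip(true_scores, errors)]  # X = T + E

# Reliability = proportion of obtained-score variance due to true scores;
# here it should land near 15**2 / (15**2 + 5**2) = 0.9
reliability = pvariance(true_scores) / pvariance(obtained)
print(round(reliability, 2))
```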

23
Q

3 sources of unsystematic measurement errors

A
  1. Item selection (test content)- what questions should be included?
  2. Test administration- room temp, lighting, noise level
  3. Test Scoring- subjectivity in projective/essay tests
24
Q

Reliability

A

consistency in measurement- NOT perfect or absolute but a matter of degree

25
Q

Correlation

A

Quantitative index of the magnitude and direction of a relationship

26
Q

Test-retest reliability

A

Reliability coefficient obtained by giving the same test to the same individuals on two separate occasions

27
Q

Inter-Scorer Reliability

A

AKA inter-observer or inter-rater reliability

2 or more scorers/raters work independently to keep from influencing each other

28
Q

Alternate Form Reliability

A

2 or more forms of a test with a similar number of items, time limits, and content, given to the same examinees

29
Q

Split-Half Reliability

A

Shows internal consistency by comparing scores on one half of the test to the other half (e.g., first vs. second half, or odd vs. even items)

30
Q

What does reliability coefficient tell us?

A

Degree of reliability: the proportion of variance in obtained test scores that is accounted for by variability in true scores; with tests it also indicates the consistency of obtained scores

31
Q

Examples of standardized scores

A

Percentile, t-scores, stens, stanines

32
Q

Validity

A

Measuring what we intend to measure; validity concerns the interpretation of scores for a particular purpose and is a matter of degree (not all or none)
ex: a Rorschach scale for depression

33
Q

Face validity

A

Does the test APPEAR to measure what it is intended to measure? Not whether it actually does

34
Q

Content Validity

A

Deals with the relationship between the content of the test and a well-defined domain of knowledge/behavior: does the test cover a good representative sample of all possible content in the domain?
Application: education/employment

35
Q

Criterion-Related Validity

A

Indicates the degree of relationship between the predictor (test) and the criterion (the level of performance the test is trying to predict). 2 kinds:
Concurrent: criterion data collected before or at the same time as the test is given
Predictive: criterion data collected after the test is given

36
Q

Construct Validity

A

General validity of the measurement tool: does the instrument measure the construct it is intended to measure?
ex: if altruism is defined by 3 qualities, you must test whether those three things are actually qualities of altruism

37
Q

Which is more important- reliability or validity?

A

Validity- who cares if it is reliable if it doesn’t measure what you need it to measure?

38
Q

Sir Francis Galton

A

Founder of psychological testing; measured sensory characteristics in a large makeshift lab (his Anthropometric Laboratory) to evaluate mental ability

39
Q

James McKeen Cattell

A

Furthered Galton's research; his battery of tests made him the conceptual grandfather of the ACT/SAT. He coined the phrase "mental test"

40
Q

Alfred Binet

A

Father of intelligence testing; first to use genuinely mental tests (rather than sensory ones), such as word usage and making connections

41
Q

Lewis Terman

A

"Benchmark" of intelligence testing: brought Binet's test to the USA as the Stanford-Binet

42
Q

Robert Yerkes

A

Created the (notoriously flawed) Army Alpha and Beta intelligence tests for WWI placement