Week 8 - The Assessment Process Flashcards

1
Q

What is a basic explanation of Norm-Referenced Assessments?

A

A person's score is compared to the scores of other people.

2
Q

How are Norms obtained?

A

By testing a clearly defined, standard group and looking at the scores its members obtain.

3
Q

How do you work out where a person ranks among the Norm group?

A

By converting the raw score into a derived score.
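
As an illustrative example with hypothetical numbers: a raw score of 30 on a test where the norm group's mean is 25 and standard deviation is 5 converts to a z-score of (30 − 25) / 5 = 1.0, placing the person at roughly the 84th percentile of the norm group.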

4
Q

What type of Norm Assessment is useful in achievement testing?

A

Age Norms.

5
Q

What type of Norm Assessment is useful in testing the achievement levels of school children?

A

Grade Norms

6
Q

What are Percentile Ranks (in Norm Assessment)?

A

The percentage of the norm group that scores the same as or lower than the person.
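
For example (illustrative numbers only): if 850 of the 1,000 people in the norm group score the same as or lower than the person, the person's percentile rank is 85.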

7
Q

What are the 3 Benefits of having Norms?

A
  1. Provides information about the person’s level of functioning.
  2. Takes little time to administer for the amount of information obtained.
  3. Provides more information than could be obtained through just observation.
8
Q

What are the 4 Cons of Norms?

A
  1. Norms are based on samples that DO NOT adequately represent the population to which the scores are compared.
  2. Grade and age norms can be taken as saying that the person should perform at that level both on the current test and on tests in other areas of functioning.
  3. Quickly out of date.
  4. Sample size needs to be adequate.
9
Q

What are the three steps in developing Norms?

A
  1. DEFINE the target population
  2. SELECT the (representative) sample
  3. STANDARDISATION. This then generates the Norms.
10
Q

What are the 4 ways in which we can evaluate the Norm Groups?

A
  1. REPRESENTATIVENESS of the group (i.e. Cultural Norms)
  2. DATE the norms were established
  3. SIZE of the norm group
  4. RELEVANCE of the Norms to the test taker
11
Q

How large should the sample size of the Norm group be?

A

Over 100, but usually over 1,000.

12
Q

Size of the Norm Group should be large enough to:

A
  1. Ensure stability of the test
  2. Ensure inclusiveness

13
Q

What are Criterion-Referenced Tests concerned with?

A

With what a person does, not how they compare with others.

14
Q

What type of test assesses performance in comparison to an established standard?

A

Criterion-Referenced Tests

15
Q

What kind of test would you use to establish whether a person can or can't use public transport?

A

Criterion-Referenced Test

16
Q

What test focuses on specific content domains?

A

Domain-Referenced Tests

17
Q

What are the benefits of Criterion-Referenced Tests?

A

It provides information that is relevant when making decisions such as: i) Is the person ready to go on to the next level? ii) Are there any sub-skills that need more attention?

18
Q

What are the Cons of Criterion-Referenced Tests?

A
  1. Focuses on specific facts, not understanding
  2. Need to ask whether the items fully represent the knowledge you want to assess
  3. Puts performance into yes/no, black/white categories
  4. Reporting results can be quite lengthy
19
Q

What are the 4 steps when writing a Criterion-Referenced Test?

A
  1. State the general objective.
  2. Task analyse the skill.
  3. For each task, write a test item.
  4. Decide on the level of success required for mastery of each component.
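
For example (an illustrative sketch, not from the lecture material): for the general objective "can use public transport", the task analysis might list reading a timetable, buying a ticket, boarding the correct bus, and signalling the stop; a test item would then be written for each task, and a mastery level (e.g. 4 out of 5 correct attempts) decided for each.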
20
Q

What is a Terminal Objective?

A

When writing a Criterion-Referenced Test, it is the level of success that test authors decide is required for mastery.

21
Q

What is involved in the task analysis component of writing a Criterion-Referenced Test?

A

Listing the component tasks that make up the skill.

22
Q

What are three ways in which to reexamine a test to gauge reliability?

A
  1. Same test at different times (re-test reliability)
  2. Use different items that measure the same construct
  3. Take the test under different conditions
23
Q

What are the Assumptions of the theory of reliability?

A
  1. People’s traits are stable
  2. Errors are random and occur because of natural variance.
  3. The observed score is the person's true score plus error.
24
Q

What is an OK reliability coefficient?

A

Anything above .80.

25
Q

What are the types of reliability coefficient?

A
  1. Test-Retest
  2. Parallel Form
  3. Internal Consistency
26
Q

What does Test-Retest reliability measure?

A

It measures stability over time.

27
Q

How long should the interval be in Test-Retest reliability?

A

2-4 Weeks

28
Q

What are some of the issues with Test-Retest reliability?

A

Results are influenced by:

  1. How the test is given
  2. How much the person remembers or has learnt.
  3. Practice effects
  4. Memory
29
Q

Describe Parallel-Form Reliability?

A

Alternate forms of the same test are given to the same participant. There should be equivalent means and variances and a high reliability coefficient.

30
Q

What are the Pros of Parallel-Form Reliability?

A
  1. Memory and Practice effects are reduced
  2. Good for follow-up studies

31
Q

What are the Cons of Parallel-Form Reliability?

A
  1. Some tests (like the WISC) don’t have alternate forms
  2. Carryover effects: the strategies used on one test can be used on the other

32
Q

What are the 2 ways in which Internal Consistency is obtained?

A
  1. Split-Half Reliability
  2. Kuder-Richardson Reliability and Coefficient Alpha

33
Q

Define Split-Half Reliability?

A

It is a way of obtaining Internal Consistency. The test is divided into two halves and the scores on the two halves are correlated with each other.
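
As a minimal illustrative sketch (not part of the original cards; the data, function name, and odd/even split are all assumptions), the Python below totals each person's score on the odd-numbered and even-numbered items of a small made-up test and correlates the two halves:

```python
import numpy as np

def split_half_reliability(item_scores):
    """Correlate totals on odd-numbered items with totals on even-numbered items."""
    items = np.asarray(item_scores)          # rows = people, columns = items
    odd_half = items[:, 0::2].sum(axis=1)    # each person's total on the odd-numbered items
    even_half = items[:, 1::2].sum(axis=1)   # each person's total on the even-numbered items
    return np.corrcoef(odd_half, even_half)[0, 1]

# Hypothetical data: 5 people answering 6 items scored 0/1
scores = [[1, 1, 0, 1, 1, 0],
          [1, 0, 0, 1, 0, 0],
          [1, 1, 1, 1, 1, 1],
          [0, 0, 0, 1, 0, 0],
          [1, 1, 1, 0, 1, 1]]
print(split_half_reliability(scores))
```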

34
Q

When should you use the Kuder-Richardson?

A

If the two halves of the test are not equivalent (e.g. if the first half of the test is harder than the second half).

35
Q

What is the KR-20 good for measuring?

A

Dichotomous items

36
Q

What is Cronbach's Alpha good for measuring?

A

Things measured on a continuous variable (e.g. height, weight).
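
As a minimal illustrative sketch (assumed data; the formula is the standard coefficient alpha, which reduces to KR-20 for dichotomous 0/1 items), alpha compares the sum of the individual item variances with the variance of people's total scores:

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Coefficient alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores)."""
    items = np.asarray(item_scores, dtype=float)    # rows = people, columns = items
    k = items.shape[1]                              # number of items
    item_variances = items.var(axis=0, ddof=1)      # variance of each item across people
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of total scores across people
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical ratings from 4 people on 3 continuously scored items
ratings = [[2.0, 3.0, 3.0],
           [4.0, 4.0, 5.0],
           [3.0, 3.5, 4.0],
           [5.0, 4.5, 5.0]]
print(cronbach_alpha(ratings))
```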

37
Q

What are the 5 factors that affect reliability?

A
  1. Test length- Longer test = higher reliability
  2. Test-Retest interval- Longer interval = lower reliability
  3. Variability of Scores
  4. Guessing
  5. Variation within test situation
38
Q

What is the formula for SEM?

A

SEM = SD × √(1 − r), where SD is the standard deviation of the test scores and r is the reliability coefficient.
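
As an illustrative worked example with hypothetical numbers: for a test with SD = 15 and reliability r = 0.91, SEM = 15 × √(1 − 0.91) = 15 × 0.3 = 4.5.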

39
Q

What does the SEM actually measure?

A

It measures the average amount by which a person's observed score deviates from their true score.

40
Q

What is the formula for CI?

A

CI = Obtained Score ± z × SEM
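
As an illustrative worked example with hypothetical numbers (carrying over SEM = 4.5 from the earlier example): for an obtained score of 100 and z = 1.96 for a 95% confidence interval, CI = 100 ± 1.96 × 4.5 = 100 ± 8.8, i.e. roughly 91 to 109.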

41
Q

What are the four key things to remember about Validity?

A
  1. Must be established in reference to the purpose of the test
  2. Is a matter of degree, not just a yes/no answer
  3. Must not use just one type of validity, no one type covers everything
  4. Tests are not valid if used in isolation from the social system in which they are applied.
42
Q

What are the three types of Validity?

A
  1. Face Validity
  2. Content Validity
  3. Criterion Related Validity
43
Q

Describe Content Validity?

A

Refers to whether the items on the test represent the domain being tested.

44
Q

What are the core issues of Content Validity?

A
  1. APPROPRIATENESS of the items
  2. COMPREHENSIVENESS. Do the items cover the whole domain?
  3. MASTERY. Does it cover all levels of mastery?
45
Q

What are the three Procedures for establishing Content Validity?

A
  1. Specialists map the domain
  2. Make the test accurate
  3. Put the validation process in the test manual
46
Q

What is/isn’t Content Validity good for?

A

Good for achievement and occupational tests.

Not good for personality or aptitude tests.

47
Q

Define Criterion-Related Validity?

A

When you compare the test scores with scores on another measure (the criterion).

48
Q

What are the two types of Criterion-Related Validity?

A
  1. Concurrent- Uses a current criterion as the comparison.
  2. Predictive- Uses the test score at one time and compares it to a criterion measured at a future time.
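
For example (hypothetical scenario): correlating scores on a selection test with supervisor performance ratings gathered at the same time would be concurrent validity, while correlating them with ratings gathered a year later would be predictive validity.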