Psychological Testing & Assessment; Assumptions & Norms Flashcards

1
Q

The gathering and integration of psychology-related data for the purpose of making a psychological evaluation, accomplished through the use of tools.

A

Psychological Assessment

2
Q

The process of measuring psychology-related variables by means of devices or procedures designed to obtain a sample of behavior.

A

Psychological Testing

3
Q

Its objective is typically to answer a referral question, solve a problem or arrive at a decision through the use of tools of evaluation.

A

Psychological Assessment

4
Q

In psychological assessment, the _ is the key to the process of selecting tests and other tools of evaluation, as well as in drawing conclusions.

A

Assessor

5
Q

What is the typical outcome of Psychological testing?

A

Test scores

6
Q

What are the 2 different approaches to assessment?

A

Collaborative Psychological Assessment
Dynamic Assessment

7
Q

In this approach, the assessor and assessee may work as partners from the initial contact through final feedback.

A

Collaborative Psychological Assessment

8
Q

An interactive approach to psychological assessment that usually follows a model of evaluation → intervention of some sort → evaluation. It provides a means for evaluating how the assessee processes or benefits from some type of intervention.

A

Dynamic Assessment

9
Q

A measuring device or procedure.

A

Test

10
Q

Psychological tests almost always involve analysis of _.

A

Sample of behavior

11
Q

The subject matter of the test.

A

Content

12
Q

Form, plan, structure, arrangement and layout of test items as well as related considerations. It also refers to the form in which a test is administered.

A

Format

13
Q

Demonstration of various kinds of tasks demanded of the assessee, as well as trained observation of an assessee’s performance.

A

Administration procedures

14
Q

Tests designed for administration on a _ may require an active and knowledgeable test administrator.

A

One-to-one basis

15
Q

The process of assigning evaluative codes or statements to performance on tests, tasks, interviews or other behavior samples.

A

Scoring

16
Q

Most tests of intelligence come with __ that are explicit about scoring criteria and the nature of interpretations.

A

Test manuals

17
Q

Refers to how consistently and how accurately a psychological test measures what it purports to measure, as well as the usefulness or practical value that a test or other tool of assessment has for a particular purpose.

A

Psychometric soundness

18
Q

The method of gathering information through direct communication involving reciprocal exchange.

A

Interview

19
Q

Samples of one’s ability and accomplishment.

A

Portfolio

20
Q

Refers to records, transcripts and other accounts in written, pictorial or other form that preserve archival information, official and informal accounts and other data and items relevant to an assessee.

A

Case History Data

21
Q

A report or illustrative account concerning a person or an event that was compiled on the basis of case history data.

A

Case study

22
Q

Monitoring the actions of others or oneself by visual or electronic means while recording quantitative and/or qualitative information regarding those actions.

A

Behavioral observation

23
Q

Observing the behavior of humans in the natural settings in which that behavior would typically be expected to occur.

A

Naturalistic Observation

24
Q

A tool of assessment wherein assessees are directed to act as if they were in a particular situation. Assessees may then be evaluated with regard to their expressed thoughts, behaviors, abilities and other variables.

A

Role-Play Tests

25
Q

They can serve as test administrators and as highly efficient test scorers.

A

Computers

26
Q

What are the different types of scoring reports?

A

Simple scoring report
Extended scoring report
Interpretive report
Consultative report
Integrative report

27
Q

A scoring report that includes statistical analysis of the testtaker’s performance.

A

Extended scoring report

28
Q

A scoring report that includes numerical or narrative interpretive statements. Some contain relatively little interpretation and simply call attention to certain high, low or unusual scores that need to be focused on.

A

Interpretive report

29
Q

A scoring report that is usually written in language appropriate for communication between assessment professionals and may provide expert opinion concerning analysis of the data.

A

Consultative report

30
Q

A scoring report that integrates data from sources other than the test itself into the interpretive report.

A

Integrative report

31
Q

Creates tests or other methods of assessment.

A

Test developer

32
Q

Psychological tests and assessment methodologies are used by a wide range of professionals. These are called _.

A

Test user

33
Q

Anyone who is the subject of an assessment or an evaluation.

A

Testtaker

34
Q

Reconstruction of a deceased individual’s psychological profile on the basis of archival records, artifacts and interviews previously conducted with the deceased or people who knew him or her.

A

Psychological autopsy

35
Q

Test that evaluates accomplishment or the degree of learning that has taken place.

A

Achievement test

36
Q

Tool of assessment used to help narrow down and identify areas of deficit to be targeted for intervention.

A

Diagnostic test

37
Q

Nonsystematic assessment that leads to the formation of an opinion or attitude.

A

Informal evaluation

38
Q

In this setting, tests are mandated early in school life to help identify children who may have special needs.

A

Educational settings

39
Q

In this setting, tests and many other tools of assessment are used to help screen for or diagnose behavior problems.

A

Clinical settings

40
Q

Group testing in clinical settings is primarily used for _ - identifying those individuals who require further diagnostic evaluation.

A

Screening

41
Q

In this setting, the ultimate objective of many such assessments is the improvement of the assessee in terms of adjustment, productivity or some related variables.

A

Counseling setting

42
Q

In these settings, a wide range of achievement, aptitude, interest, motivational and other tests may be employed in the decision to hire as well as in related decisions regarding promotion, transfer, job satisfaction and eligibility for further training.

A

Business and Military Setting

43
Q

What is a well-known application of measurement in governmental settings?

A

Governmental licensing and certification

44
Q

An observable action or the product of an observable action including test-or assessment-related responses.

A

Overt Behavior

45
Q

The more a testtaker responds in a particular direction keyed by the test manual as correct or consistent with a particular trait, the higher that testtaker is presumed to be on the targeted ability or trait.

A

Cumulative scoring
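
The cumulative scoring model above can be sketched in a few lines of Python; the answer key and responses below are invented for illustration.

```python
# Cumulative scoring: one point for each response keyed as correct, so a
# higher total implies more of the targeted ability or trait.
# The key and responses here are hypothetical.
answer_key = ["b", "a", "d", "c", "a"]
responses  = ["b", "a", "c", "c", "a"]

score = sum(1 for key, resp in zip(answer_key, responses) if resp == key)
print(score)  # 4 of the 5 items match the key
```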

46
Q

Understanding of behavior that has already taken place; this is typical of the use of psychological tests in forensic matters.

A

Postdict

47
Q

Refers to the fact that factors other than what a test attempts to measure will influence performance on the test.

A

Error

48
Q

Component of a test score attributable to sources other than the trait or ability measured.

A

Error variance

49
Q

What are the potential sources of error variance?

A

Assessee
Assessor
Measuring instruments

50
Q

The test performance data of a particular group of testtakers that are designed for use as a reference when evaluating or interpreting individual test scores.

A

Norms

51
Q

The group of people whose performance on a particular test is analyzed for reference in evaluating the performance of individual testtakers.

A

Normative sample

52
Q

The process of deriving norms.

A

Norming

53
Q

A method of evaluation and a way of deriving meaning from test scores by evaluating an individual testtaker’s score and comparing it to the scores of a group of testtakers.

A

Norm-referenced testing and assessment

54
Q

The process of administering a test to a representative sample of testtakers for the purpose of establishing norms.

A

Standardization

55
Q

In the process of developing a test, a test developer targets some defined group as the population for which the test is designed, then selects a portion of that population for testing.

A

Sampling

56
Q

The complete universe or set of individuals with at least one common, observable characteristic.

A

Population

57
Q

A portion of the universe of people deemed to be representative of the whole population.

A

Sample

58
Q

The process of selecting the portion of that universe deemed to be representative of the whole population.

A

Sampling

59
Q

A sampling method in which differences with respect to some characteristic of subgroups within a defined population are proportionately represented in the sample. It helps prevent sampling bias and ultimately aids in the interpretation of findings.

A

Stratified sampling
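
The proportionate version of this idea is easy to show in code. The population counts and sample size below are hypothetical.

```python
# Proportionate stratified sampling sketch: each subgroup (stratum) appears
# in the sample in the same proportion as in the population.
# Population counts are invented for illustration.
population = {"freshman": 400, "sophomore": 300, "junior": 200, "senior": 100}
sample_size = 50
total = sum(population.values())

sample_plan = {stratum: round(sample_size * n / total)
               for stratum, n in population.items()}
print(sample_plan)  # {'freshman': 20, 'sophomore': 15, 'junior': 10, 'senior': 5}
```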

60
Q

Arbitrarily selecting some people because they are believed to be representative of the population.

A

Purposive sampling

61
Q

People who are most available to participate in the study.

A

Incidental sample or convenience sample

62
Q

What are the 6 types of norms?

A

Percentile
Developmental Norms
National Norms
National Anchor Norms
Subgroup Norms
Local Norms

63
Q

An expression of the percentage of people whose score on a test falls below a particular raw score.

A

Percentile
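
The definition above translates directly into a percentile-rank calculation; the reference-group scores below are invented.

```python
# Percentile rank: the percentage of scores in a reference group that fall
# below a given raw score. The scores are hypothetical.
scores = [40, 45, 50, 55, 60, 65, 70, 75, 80, 85]

def percentile_rank(raw, group):
    below = sum(1 for s in group if s < raw)
    return 100 * below / len(group)

print(percentile_rank(70, scores))  # 60.0 -> 6 of the 10 scores fall below 70
```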

64
Q

Applied broadly to norms developed on the basis of any trait, ability, skill or other characteristic that is presumed to develop, deteriorate or otherwise be affected by chronological age, school grade or stage of life.

A

Developmental Norms

65
Q

A developmental norm that indicates the average performance of different samples of testtakers who were at various ages at the time the test was administered.

A

Age norms

66
Q

A developmental norm designed to indicate the average test performance of testtakers in a given school grade.

A

Grade norms

67
Q

Derived from a normative sample that was nationally representative of the population at the time the norming study was conducted.

A

National Norms

68
Q

A type of norms that provides some stability to test scores by anchoring them to other test scores.

A

National anchor norms

69
Q

A normative sample that is segmented by any of the criteria initially used in selecting subjects for the sample.

A

Subgroup norms

70
Q

A type of norm that provides normative information with respect to the local population’s performance on some test.

A

Local Norms

71
Q

The distribution of scores obtained on the test from one group of testtakers is used as the basis for the calculation of test scores for future administrations of the test.

A

Fixed-reference group scoring system

72
Q

Evaluating the test score in relation to other scores on the same test. The usual area of focus is how an individual performed relative to other people who took the test.

A

Norm-referenced

73
Q

A method of evaluation and a way of deriving meaning from test scores by evaluating an individual’s score with reference to a set standard.

A

Criterion referenced testing and assessment

74
Q

It is an index of reliability, a proportion that indicates the ratio between the true score variance on a test and the total variance.

A

Reliability coefficient
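
The ratio described above is simple arithmetic; the variance figures below are hypothetical.

```python
# Reliability coefficient as the ratio of true-score variance to total
# (observed) variance. Variance values are invented for illustration.
true_variance = 80.0
error_variance = 20.0
total_variance = true_variance + error_variance

reliability = true_variance / total_variance
print(reliability)  # 0.8 -> 80% of observed variance is true-score variance
```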

75
Q

The degree of the relationship between various forms of a test that can be evaluated by means of an alternate-forms or parallel-forms coefficient of reliability.

A

Coefficient of Equivalence

76
Q

The means and the variances of the observed test scores are equal.

A

Parallel forms

77
Q

The estimate of the extent to which item sampling and other errors have affected test scores on versions of the same test when, for each form of the test, the means and variances of observed test scores are equal.

A

Parallel Forms reliability

78
Q

Typically designed to be equivalent with respect to variables such as content and level of difficulty.

A

Alternate forms

79
Q

Refers to an estimate of the extent to which these different forms of the same test have been affected by item sampling error or other error.

A

Alternate forms reliability

80
Q

It refers to the degree of correlation among all the items on a scale. Calculated from a single administration of a single form of a test.

A

Internal consistency reliability

81
Q

An index of _ is useful in assessing the homogeneity of the test.

A

Inter-item consistency

82
Q

The statistic of choice for determining the inter-item consistency of dichotomous items, primarily those items that can be scored right or wrong.

A

Kuder-Richardson Formula 20 or KR-20
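
The standard KR-20 formula can be computed by hand from a small score matrix; the item responses below are invented.

```python
# Kuder-Richardson Formula 20 (KR-20) for dichotomous (right/wrong) items:
# KR-20 = (k / (k-1)) * (1 - sum(p*q) / variance of total scores).
# Rows are testtakers, columns are items; the data are hypothetical.
data = [
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
]
k = len(data[0])  # number of items
n = len(data)     # number of testtakers

p = [sum(row[i] for row in data) / n for i in range(k)]  # proportion correct
sum_pq = sum(pi * (1 - pi) for pi in p)

totals = [sum(row) for row in data]
mean = sum(totals) / n
var_total = sum((t - mean) ** 2 for t in totals) / n     # population variance

kr20 = (k / (k - 1)) * (1 - sum_pq / var_total)
print(kr20)  # 0.75
```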

83
Q

It is appropriate for use on tests containing nondichotomous items.

A

Coefficient alpha
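
Coefficient (Cronbach's) alpha generalizes KR-20 to nondichotomous items; the Likert-type ratings below are invented.

```python
# Cronbach's coefficient alpha:
# alpha = (k / (k-1)) * (1 - sum(item variances) / variance of total scores).
# Rows are testtakers, columns are items; the ratings are hypothetical.
data = [
    [3, 4, 5],
    [2, 3, 4],
    [4, 5, 5],
    [1, 2, 3],
]
k = len(data[0])
n = len(data)

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

item_vars = [variance([row[i] for row in data]) for i in range(k)]
total_var = variance([sum(row) for row in data])

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))  # 0.98
```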

84
Q

Because negative values of alpha are theoretically impossible, it is recommended under such circumstances that the alpha coefficient be reported as _.

A

Zero

85
Q

It is a measure used to evaluate the internal consistency of a test that focuses on the degree of difference that exists between item scores.

A

Average proportional distance
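
One common description of the APD procedure: average the absolute differences between all pairs of item scores for each testtaker, divide by the number of response options minus 1, then average across testtakers. This sketch follows that description; the 7-point ratings are invented, and the exact procedure in a given text may differ.

```python
# Average proportional distance (APD) sketch. Low values suggest small
# differences between item scores, i.e. good internal consistency.
# Rows are testtakers on a hypothetical 7-point scale.
from itertools import combinations

data = [
    [5, 6, 5],
    [4, 4, 5],
    [6, 7, 6],
]
options = 7  # number of response options

def proportional_distance(row):
    diffs = [abs(a - b) for a, b in combinations(row, 2)]
    return (sum(diffs) / len(diffs)) / (options - 1)

apd = sum(proportional_distance(row) for row in data) / len(data)
print(round(apd, 3))  # 0.111
```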

86
Q

The higher the reliability of a test, the _ the SEM/SEE.

A

Lower
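
The inverse relationship above follows from the standard formula SEM = SD × √(1 − r); the standard deviation of 15 and the reliability values below are hypothetical.

```python
# Standard error of measurement: SEM = SD * sqrt(1 - r).
# As reliability r rises, SEM falls. SD and r values are invented.
import math

sd = 15.0
for r in (0.70, 0.85, 0.95):
    sem = sd * math.sqrt(1 - r)
    print(f"r = {r}: SEM = {sem:.2f}")  # 8.22, then 5.81, then 3.35
```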

87
Q

They yield insights regarding a particular population of testtakers as compared to the norming sample described in a test manual.

A

Local validation studies

88
Q

It describes a judgment of how adequately a test samples behavior representative of the universe of behavior that the test was designed to sample.

A

Content validity

89
Q

He developed a formula termed content validity ratio.

A

C.H. Lawshe

90
Q

A method for gauging agreement among raters or judges regarding how essential a particular item is.

A

Quantification of content validity

91
Q

When fewer than half the panelists indicate “essential” in content validity ratio.

A

Negative CVR

92
Q

When exactly half the panelists indicate “essential” in content validity ratio.

A

Zero CVR

93
Q

When more than half the panelists indicate “essential” in content validity ratio.

A

Positive CVR
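
The three cases above come straight out of Lawshe's formula, CVR = (n_e − N/2) / (N/2), where n_e is the number of panelists rating an item "essential" and N is the total number of panelists. The panel sizes below are hypothetical.

```python
# Lawshe's content validity ratio. Panel counts are invented.
def cvr(n_essential, n_panelists):
    half = n_panelists / 2
    return (n_essential - half) / half

print(cvr(4, 10))  # -0.2  fewer than half say "essential" -> negative CVR
print(cvr(5, 10))  # 0.0   exactly half -> zero CVR
print(cvr(9, 10))  # 0.8   more than half -> positive CVR
```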

94
Q

The content validity ratio ranges between _.

A
-1.00 and +1.00

95
Q

The standard against which a test or a test score is evaluated.

A

Criterion

96
Q

A judgment of how adequately a test score can be used to infer an individual’s most probable standing on some measure of interest.

A

Criterion- related validity

97
Q

What are the 3 characteristics of a criterion?

A

Relevant
Valid
Uncontaminated

98
Q

The term applied to a criterion measure that has been based, at least in part, on predictor measures.

A

Criterion contamination

99
Q

It is a type of criterion-related validity that indicates the extent to which test scores may be used to estimate an individual’s present standing on a criterion.

A

Concurrent validity

100
Q

It is a type of criterion-related validity. It is the measure of the relationship between the test scores and a criterion measure obtained at a future time.

A

Predictive validity

101
Q

Judgments of criterion-related validity are based on 2 types of statistical evidence:

A

Validity coefficient
Expectancy data

102
Q

A correlation coefficient that provides a measure of the relationship between test scores and scores on the criterion measure.

A

Validity coefficient
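
In practice this is usually Pearson's r between test scores and criterion scores; the paired scores below are invented.

```python
# A validity coefficient: the correlation (here Pearson's r) between scores
# on a test and scores on the criterion measure. Data are hypothetical.
import math

test_scores = [50, 60, 70, 80, 90]
criterion_scores = [2.0, 2.5, 3.0, 3.2, 3.9]

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(round(pearson_r(test_scores, criterion_scores), 2))  # 0.99
```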

103
Q

The degree to which an additional predictor explains something about the criterion measure that is not explained by predictors in use.

A

Incremental validity

104
Q

A table that illustrates the likelihood that a testtaker will score within some interval of scores on a criterion measure, an interval that may be seen as “passing” or “acceptable”.

A

Expectancy table
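
A minimal sketch of building such a table: cross-tabulate test-score bands against later criterion outcomes. The score bands and pass/fail records below are invented.

```python
# Expectancy table sketch: for each test-score band, the percentage of
# testtakers who later reached an "acceptable" criterion outcome.
# Records are hypothetical (band, outcome) pairs.
records = [
    ("low", "fail"), ("low", "fail"), ("low", "pass"),
    ("mid", "fail"), ("mid", "pass"), ("mid", "pass"),
    ("high", "pass"), ("high", "pass"), ("high", "pass"),
]

table = {}
for band, outcome in records:
    counts = table.setdefault(band, {"pass": 0, "fail": 0})
    counts[outcome] += 1

for band in ("low", "mid", "high"):
    counts = table[band]
    pct = 100 * counts["pass"] / (counts["pass"] + counts["fail"])
    print(f"{band}: {pct:.0f}% passed")  # 33%, 67%, 100%
```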

105
Q

A graphic representation of an expectancy table.

A

Expectancy chart

106
Q

A judgment about the appropriateness of inferences drawn from test scores regarding individual standings on a variable called a construct.

A

Construct Validity

107
Q

An informed, scientific idea developed or hypothesized to describe or explain behavior. Unobservable, presupposed traits that a test developer may invoke to describe test behavior or criterion performance.

A

Construct

108
Q

Viewed as the unifying concept for all validity evidence.

A

Construct validity

109
Q

It refers to how uniform a test is in measuring a single concept.

A

Homogeneity

110
Q

It may be used in estimating the homogeneity of a test composed of multiple-choice items.

A

Coefficient alpha

111
Q

What are the types of evidence of construct validity?

A

Homogeneity
Changes with age
Pretest-posttest changes
Distinct groups
Convergent evidence
Discriminant evidence
Factor Analysis

112
Q

A validity coefficient showing little relationship between test scores and other variables with which scores on the test being construct-validated should not theoretically be correlated. What kind of evidence does this provide?

A

Discriminant evidence

113
Q

An experimental technique useful for examining both convergent and discriminant validity. It is the matrix or table that results from correlating variables (traits) within and between methods.

A

Multitrait-multimethod matrix

114
Q

It is designed to identify factors or specific variables that are typically attributes, characteristics or dimensions on which people may differ. It is employed as a data reduction method in which several sets of scores and the correlations between them are analyzed.

A

Factor analysis

115
Q

Factor analysis involves three key concepts:

A

Exploratory factor analysis
Confirmatory factor analysis
Factor loading

116
Q

A factor analysis that typically entails estimating or extracting factors, deciding how many factors to retain, and rotating factors to an interpretable orientation.

A

Exploratory factor analysis

117
Q

Factor analysis wherein the researchers test the degree to which a hypothetical model includes factors that fit the actual data.

A

Confirmatory factor analysis

118
Q

It conveys information about the extent to which the factor determines the test scores.

A

Factor loading

119
Q

High factor loadings would provide _ evidence of construct validity.

A

Convergent

120
Q

Moderate to low factor loadings would provide _ evidence of construct validity.

A

Discriminant

121
Q

A factor inherent in a test that systematically prevents accurate, impartial measurement.

A

Bias

122
Q

A numerical or verbal judgment that places a person or an attribute along a continuum identified by a scale of numerical or word descriptors, known as a rating scale.

A

Rating

123
Q

A judgment resulting from the intentional or unintentional misuse of a rating scale.

A

Rating error

124
Q

3 types of rating errors:

A

Leniency or Generosity error
Severity error
Central Tendency Error

125
Q

One way to overcome restriction-of-range rating errors. It is a procedure that requires the rater to measure individuals against one another instead of against an absolute scale.

A

Ranking

126
Q

The extent to which a test is used in an impartial, just and equitable way.

A

Fairness