Final Exam Review Flashcards

1
Q

ratio

A

has a TRUE ZERO, unlike the rest

2
Q

most psychological tests

A

tend to be ORDINAL, but we treat them as INTERVAL

3
Q

central tendency

A

statistic that indicates the average or midmost score between the extreme scores in a distribution
○ Mean
○ Median: most useful when there are outliers
○ Mode: when two scores occur with the highest frequency, it is called BIMODAL
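
The three measures above can be computed directly with Python's statistics module. The score list is a made-up example chosen so that two values tie for the highest frequency (a bimodal case):

```python
# Hypothetical score list; 3 and 5 each occur twice, so the
# distribution is BIMODAL.
from statistics import mean, median, multimode

scores = [2, 3, 3, 4, 5, 5, 7, 9]

print(mean(scores))       # arithmetic mean -> 4.75
print(median(scores))     # midmost score -> 4.5 (robust to outliers)
print(multimode(scores))  # -> [3, 5], i.e. bimodal
```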

4
Q

variability

A

indication or degree to which scores are scattered or dispersed in a distribution

5
Q

Range

A

difference between highest and lowest scores

6
Q

Interquartile range

A

difference between third and first quartiles of distribution

7
Q

Semi-interquartile range

A

interquartile range divided by 2
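
Cards 5-7 can be illustrated in one short sketch (made-up scores; note that statistics.quantiles uses the "exclusive" method by default, so a hand calculation with a different quartile convention may differ slightly):

```python
from statistics import quantiles

scores = [1, 3, 5, 7, 9, 11, 13, 15]          # hypothetical distribution

q1, _, q3 = quantiles(scores, n=4)            # first and third quartiles
full_range = max(scores) - min(scores)        # range: highest minus lowest
iqr = q3 - q1                                 # interquartile range
siqr = iqr / 2                                # semi-interquartile range

print(full_range, iqr, siqr)  # 14 9.0 4.5
```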

8
Q

Average deviation

A

the average deviation of scores in a distribution from the mean

9
Q

Variance

A

the arithmetic mean of squares of the differences between the scores in a
distribution and their mean

10
Q

Standard deviation

A

the square root of average squared deviations around the mean. The
square root of the variance. TYPICAL DISTANCE OF SCORES FROM THE MEAN
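
A minimal sketch of the two cards above, with a made-up score list (this computes the population variance, i.e. dividing by N):

```python
from math import sqrt

scores = [2, 4, 4, 4, 5, 5, 7, 9]             # hypothetical scores
m = sum(scores) / len(scores)                 # mean = 5.0

# variance: arithmetic mean of squared deviations from the mean
variance = sum((x - m) ** 2 for x in scores) / len(scores)
sd = sqrt(variance)                           # typical distance from the mean

print(variance, sd)  # 4.0 2.0
```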

11
Q

skewness

A

the nature and extent of asymmetry in a distribution

12
Q

positive skew

A

relatively few scores fall at the high end of a distribution (most scores at low end), hard test

13
Q

negative skew

A

relatively few scores fall at the low end of a distribution, most scores at high end, easy test

14
Q

kurtosis

A

the steepness of a distribution in its center

15
Q

platykurtic

A

relatively flat

16
Q

leptokurtic

A

relatively peaked

17
Q

mesokurtic

A

somewhere in the middle

18
Q

normal curve

A

bell shaped, smooth, mathematically defined curve that is highest in its center and perfectly symmetrical

19
Q

Area under the normal curve

A

can be divided into areas of defined units of standard deviations

20
Q

What percent of scores fall between one standard deviation above and below the mean?

A

68% of scores fall between one SD above and below the mean

21
Q

What percent of scores fall between two standard deviations above and below the mean?

A

95% of scores fall between two SD above and below the mean

22
Q

What percent of scores fall between three standard deviations above and below the mean?

A

about 99.7% of scores fall between three SD above and below the mean
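
These three area-under-the-curve facts can be checked with statistics.NormalDist (the exact values are about 68.3%, 95.4%, and 99.7%):

```python
from statistics import NormalDist

nd = NormalDist(mu=0, sigma=1)                # the standard normal curve
for k in (1, 2, 3):
    area = nd.cdf(k) - nd.cdf(-k)             # area within +/- k SD of the mean
    print(f"within +/-{k} SD: {area:.1%}")    # 68.3%, 95.4%, 99.7%
```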

23
Q

Positive correlation

A

as one variable increases or decreases, so does the other

24
Q

Negative correlation

A

as one variable increases, the other decreases

25
Q

Weak correlation

A

variables do not have strong relationship with one another

26
Q

restriction of range

A

leads to weaker correlations

27
Q

Correlation coefficient

A

varies between -1 and +1; the absolute value (magnitude) indicates the strength of the relationship

28
Q

correlation of 0

A

no relationship

29
Q

Pearson r

A

a method of computing correlation when both variables are linearly related and
continuous

30
Q

Coefficient of determination​:

A

the variance that variables share with one another (found by
squaring r)

31
Q

spearman rho

A

a method for computing correlation used primarily when sample sizes are small or
variables are ordinal in nature

32
Q

raw score

A

unaltered measurement

33
Q

Standardized score

A

raw score that has been converted from one scale to another scale, where the
latter scale has some arbitrarily set mean and standard deviation

34
Q

What are standard scores good for?

A

○ Scores are easier to interpret
○ Can compare individuals across different studies
○ Highly skewed data

35
Q

Z score

A

conversion of raw score into a number indicating how many SDs the raw score is
above or below the mean
z = (X − X̄) / s
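
The formula above as a one-liner (the values below are invented; a raw score of 65 on a scale with mean 50 and SD 10):

```python
def z_score(x, mean, sd):
    """Number of SDs the raw score x lies above (+) or below (-) the mean."""
    return (x - mean) / sd

print(z_score(65, 50, 10))   # 1.5  -> one and a half SDs above the mean
print(z_score(35, 50, 10))   # -1.5 -> one and a half SDs below the mean
```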

36
Q

T score

A

can be called a fifty plus or minus ten scale, mean set at 50, SD set at 10
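
Converting a z score onto the T scale is just the reverse mapping:

```python
def t_score(z):
    """Map a z score onto the 'fifty plus or minus ten' T scale."""
    return 50 + 10 * z

print(t_score(0.0))    # 50.0 -> exactly at the mean
print(t_score(1.5))    # 65.0 -> 1.5 SDs above the mean
print(t_score(-2.0))   # 30.0 -> 2 SDs below the mean
```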

37
Q

Normalizing a distribution

A

involves “stretching” the skewed curve into the shape of a
normal curve and creating a corresponding scale of standard scores

38
Q

meta analysis

A

a family of techniques to statistically combine information across studies to
produce single estimates of the data under study
○ Estimates are in the form of effect size which is often expressed as a correlation
coefficient
○ Useful because it examines the relationship between variables across many separate
studies
○ Important consideration: the quality of the studies included

39
Q

Psychological testing assumptions:

A

○ 1. Psychological states and traits exist
○ 2. Psychological states/traits can be quantified or measured
○ 3. Test related behavior predicts non test related behavior
■ Responses predict real-world behavior as well as future behavior
○ 4. Tests and other measurement techniques have strengths and weaknesses
■ Appreciate limitation of tests
○ 5. Various sources of error are part of the assessment process
■ Error: long standing assumption that factors other than what a test attempts to measure will influence performance on a test
■ Error variance: component of a test score attributable to sources other than the trait or ability being measured
○ 6. Testing and assessment can be conducted in a fair manner
■ Some problems are more political than psychometric
○ 7. Testing and assessment benefit society

40
Q

traits

A

a trait is any distinguishable, relatively enduring way in which one individual varies from another
Relatively stable, but may change over time
Nature of the situation influences how traits are manifested

41
Q

states

A

less enduring

42
Q

constructs

A

an informed, scientific concept developed to describe or explain behavior
Cannot see/touch constructs, but can infer their existence from overt behavior
Constructs -> traits -> states

43
Q

reliability

A

the ​CONSISTENCY​ of the measuring tool, the precision with which the test measures and the extent to which error is present in the measurement

44
Q

validity

A

test measures what it intends to measure

45
Q

Reliability is NECESSARY

A

but not SUFFICIENT for validity

46
Q

Norm referenced testing

A

deriving meaning from test scores by evaluating an individual test taker’s score and comparing it to a group of test takers

47
Q

Norms

A

test performance data from a specific group of test takers, designed for use as a reference when evaluating individual test scores

48
Q

Normative sample

A

the reference group to which test takers are compared

49
Q

standardization sample

A

the representative sample of test takers to whom a test is administered for the
purpose of establishing norms

50
Q

​Stratified Sampling​:

A

Sampling that includes different subgroups, or strata, from the population

51
Q

​ Stratified-random Sampling​

A

stratified sampling in which members are selected at random, so that every member of each stratum has an equal chance of being included in the sample

52
Q

​Purposive sampling​:

A

arbitrarily selecting a sample that is believed to be representative of the population and it doesn’t use probability

53
Q

​Incidental/convenience sample​

A

sample that is convenient or available for use
○ May not be representative of the population
○ Generalization of findings from convenience samples must be made with caution

54
Q

​Standardization​:

A

the process of administering a test to a representative sample of test takers for the purpose of establishing norms

55
Q

IN ORDER TO STANDARDIZE A TEST:

A

○ Standardize the administration (including the instructions)
○ Recommend a setting for the administration and the required materials
○ Collect and analyze data
○ Summarize data using descriptive statistics (ex. Measures of central tendency
and variability)
○ Clearly describe the standardization sample characteristics

56
Q

stratified sampling method

A

involves the division of a population into smaller groups known as strata. In stratified random sampling, the strata are formed based on members’ shared attributes and/or characteristics

57
Q

What are some cultural considerations in test construction/standardization?

A

○ Become aware of the cultural assumptions on which the test is based
○ Consider consulting with members of the particular cultural communities regarding the appropriateness of particular assessment techniques, tests, or test items
○ Strive to incorporate assessment methods that complement the worldview and lifestyle of assessees who come from a specific cultural and linguistic population
○ Be aware of equivalence issues across cultures, including equivalence of language used and the constructs measured
○ Score, interpret, and analyze assessment data in its cultural context with due consideration of cultural hypotheses as possible explanation for findings

58
Q

criterion referenced test

A

test takers are evaluated as to whether they meet a set
standard or threshold (ex. a driving exam, performance on a licensing exam)

59
Q

​Random Error​:

A

a source of error in measuring a targeted variable caused by
unpredictable fluctuations and inconsistencies of other variables in the measurement process (ex. Noise)

60
Q

systematic error

A

a source of error in measuring a variable that is typically constant or proportional to what is presumed to be the true value of the variable being measured

61
Q

test construction

A

variation may exist within items on a test or between tests
○ (ex. Item sampling, or content sampling)

62
Q

test administration

A

sources of error from the testing environment
○ Also, test taker variables such as pressing emotional problems, physical discomfort, lack of sleep, and effects of drugs/medication
○ Examiner related variables such as physical appearance and demeanor may
play a role

63
Q

Test Scoring and Interpretation:​

A

computer testing reduces error in test scoring but many tests still require expert interpretation (ex. Projective tests)
○ Subjectivity in scoring can enter into behavioral assessment

64
Q

test retest reliability

A

an estimate of reliability obtained by correlating pairs of scores from the same people on 2 different administrations of the same test
● Most appropriate for variables that should be stable over time (ex. personality) and not appropriate for variables expected to change over time (ex. mood)
● Estimates tend to decrease as time passes
● With intervals over 6 months, the estimate of test-retest reliability is called the coefficient of stability

65
Q

parallel forms

A

for each form of the test, the means/variances of observed test scores are equal

66
Q

alternate forms

A

different versions of a test that have been constructed so as to be parallel
○ DO NOT meet the strict requirements of parallel forms but typically item content
and difficulty are similar between tests

67
Q

coefficient of equivalence

A

the degree of the relationship between various forms of a test
Reliability is checked by administering two forms of a test to the same group
○ Scores may be affected by error related to the state of test takers (ex. practice, fatigue, etc.) or item sampling
○ Split-half reliability + Spearman-Brown formula

68
Q

split-half reliability

A

obtained by correlating two pairs of scores obtained from
equivalent halves of a single test administered once.
○ 3 STEPS:
1. Divide the test into equivalent halves
2. Calculate a Pearson r between scores on the two halves of the test
3. Adjust the half-test reliability using the Spearman- Brown Formula

69
Q

spearman brown formula

A

allows test developer/user to estimate internal consistency reliability from a correlation of two halves on a test
● SB: used to estimate the effect of shortening test length; sees how well homogenous items correlate with one another

70
Q

inter-item consistency

A

degree of relatedness of items on a test
○ Form of measuring test consistency without developing an alternate form of the test
○ Able to gauge the homogeneity of a test
○ Ideal in some cases because it is cost efficient

71
Q

coefficient alpha

A

mean of all possible split-half correlations, corrected by the Spearman-Brown formula
○ Most popular approach for internal consistency
○ Values range from 0 to 1
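
A rough sketch of coefficient alpha, computed via its equivalent variance form, alpha = (k/(k-1)) * (1 - sum of item variances / total-score variance), with invented item scores:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list per test item; each list holds every
    test taker's score on that item (same order throughout)."""
    k = len(items)
    totals = [sum(person) for person in zip(*items)]   # total score per person
    item_var_sum = sum(pvariance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))

# Three hypothetical items answered by five test takers
alpha = cronbach_alpha([[3, 4, 3, 5, 2],
                        [2, 4, 4, 5, 1],
                        [3, 5, 3, 4, 2]])
print(round(alpha, 2))  # 0.91 -> high inter-item consistency
```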

72
Q

average proportional distance (APD)

A

Focuses on the degree of difference between scores on test items. It involves averaging the difference between scores on all the items and then dividing by the number of response options on the test, minus one

73
Q

inter scorer reliability

A

the degree of agreement/consistency between two or more scorers with regard to a particular measure
○ Often used with behavioral measures
○ Guards against biases or idiosyncrasies in scoring
○ Coefficient of inter-scorer reliability: scores from different raters are correlated
with one another

74
Q

Understand how homogeneity vs heterogeneity of test items impacts reliability

A

● The more homogenous a test is, the more inter-item consistency it can be expected to have
● Test homogeneity is desirable because it allows relatively straightforward test score interpretation

75
Q

Know the relation between range of test scores and reliability

A

● IF THE VARIANCE OF EITHER VARIABLE IN A CORRELATIONAL ANALYSIS IS RESTRICTED BY THE SAMPLING PROCEDURE USED, THEN THE RESULTING CORRELATION COEFFICIENT TENDS TO BE LOWER
● IF THE VARIANCE OF EITHER VARIABLE IN A CORRELATIONAL ANALYSIS IS INFLATED BY THE SAMPLING PROCEDURE USED, THE RESULTING CORRELATION COEFFICIENT TENDS TO BE HIGHER

76
Q

What is the impact of a speed test or power test on reliability?

A

● Speed test: designed so that test takers cannot finish all the items in the time allotted; reliability should be estimated from two independent administrations rather than from a single sitting
● Power test: the time limit is long enough to attempt all items, but item difficulty varies, so some items are rarely answered correctly

77
Q

Classical Test Theory CTT (AKA True-Score Model):

A

the most widely used model because of its simplicity
■ CTT assumptions are more readily met than Item Response Theory (IRT)
■ Problematic assumption of CTT has to do with equivalence of items on a
test
■ Typically yields longer tests

78
Q

true score

A

value that according to classical test theory genuinely reflects an
individual’s ability (or trait) level as measured by a particular test

79
Q

Item Response Theory (IRT):

A

Provides a way to model the probability that a person with X ability will be able to perform at a level of Y
○ Refers to a family of methods and techniques
○ Incorporates considerations of item difficulty and discrimination

80
Q

Generalizability Theory

A

A person's test scores vary from testing to testing because of variables in the testing situation
○ Cronbach encouraged test developers and researchers to describe the details of
the particular test situation or universe leading to a specific test score
○ A universe is described in terms of its facets, including the number of items in the test, the amount of training the test scorers have had, and the purpose of the test
administration

81
Q

Standard Error of Measurement

A

the amount of error inherent in an observed score of measurement
○ Usually, higher reliability means a lower SEM value

82
Q

Be able to calculate the confidence interval if given the standard error of measurement and the confidence level index

A

● CI = X ± index (z score) × SEM
○ Confidence interval = observed score ± index (z-score value) × standard error of
measurement
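
Putting the formula to work (all numbers invented; the SEM here is derived from the standard relation SEM = SD * sqrt(1 - reliability), which is not stated on the card but is the usual way SEM is obtained):

```python
from math import sqrt

sd, reliability = 15, 0.91          # hypothetical test SD and reliability
sem = sd * sqrt(1 - reliability)    # standard error of measurement, ~4.5

observed = 100                      # the test taker's observed score
z = 1.96                            # index for a 95% confidence level

lower = observed - z * sem
upper = observed + z * sem
print(f"95% CI: {lower:.1f} to {upper:.1f}")   # ~91.2 to ~108.8
```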

83
Q

Be able to use confidence interval information to interpret test scores

A

● Confidence intervals tell you the interval within which the true score is assumed to lie

84
Q

Standard error of difference​:

A

a measure that can aid a test user in determining how large a difference in test scores should be expected before it is considered statistically significant

85
Q

face validity

A

a judgment concerning how relevant the test items appear to be

○ If a test appears to measure what it's supposed to measure ("on the face of it"), it could be high in face validity
○ A perceived lack of face validity may contribute to a lack of confidence in a test

86
Q

Content Validity​:

A

how adequately a test samples behavior representative of the universe of behavior that the test was designed to sample
○ “Do the items in the test adequately represent the content that should be included in the test?”

87
Q

test blueprint

A

plan regarding the types of information to be covered by the
items, the # of items tapping each area of coverage, the organization of the items
in the test, etc.
○ Typically established by recruiting a team of experts on the subject matter and
obtaining expert ratings on the degree of item importance as well as scrutinize
what is missing from the measure

88
Q

Criterion-Related validity​:

A

A judgment of how adequately a test score can be used to infer an individual’s most probable standing on some measure of interest (i.e. the criterion)

89
Q

Construct Validity ​(umbrella for validity)

A

Ability of test to measure theorized
construct (ex. Intellect, personality, etc.) that it aims to measure. Measure of validity that is arrived at by executing a comprehensive analysis of:
○ 1. How scores on a test relate to other test scores and measures
○ 2. How scores on the test can be understood within some theoretical framework
for understanding the construct that the test was designed to measure

90
Q

The Validity Coefficient​:

A

correlation coefficient between test scores and scores on the criterion measure
○ Validity coefficients are affected by restriction or inflation of range

91
Q

Incremental Validity​:

A

degree to which an additional predictor explains something about the criterion measure that isn’t explained by predictors already in use
○ “To what extent does a test predict the criterion over and above other variables?”

92
Q

Understand what constitutes good face validity and what happens if it is lacking/why we might not want a test to be face valid

A

● If a test seems subjectively relevant and transparent from the perspective of the test taker, it has good face validity
● A test developer may not want a test to be face valid when transparency would make it easy for test takers to distort or fake their responses

93
Q

criterion

A

the standard against which a test/test score is evaluated
● An adequate criterion is valid for the matter at hand, valid for the purpose it is being used, and uncontaminated, meaning it is not part of the predictor

94
Q

​Concurrent validity​

A

an index of the degree to which a test score is related to some criterion measure obtained at the same time (concurrently)

95
Q

​Predictive validity​

A

an index of the degree to which a test score predicts some criterion, or outcome, measure in the future. Tests are evaluated as to their predictive validity.

96
Q

Base Rate​:

A

extent to which the phenomenon exists in a population

97
Q

Hit Rate​:

A

accurate identification (true-positive/negative)

98
Q

​Miss Rate​:

A

failure to identify accurately (false positive/false negative)

99
Q

False positive

A

a miss wherein the test predicted that the test taker did possess the
particular characteristic or attribute being measured when in fact the test taker did not

100
Q

False negative​:

A

is a miss wherein the test predicted that the test taker did not possess the particular characteristic or attribute being measured when the test taker actually did
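
The four outcomes on cards 97-100 can be counted from a tiny invented screening table of (prediction, actual status) pairs:

```python
# (prediction, actually has the attribute?) for six hypothetical test takers
results = [
    ("pos", True),   # true positive  (hit)
    ("pos", True),   # true positive  (hit)
    ("pos", False),  # false positive (miss)
    ("neg", False),  # true negative  (hit)
    ("neg", False),  # true negative  (hit)
    ("neg", True),   # false negative (miss)
]

# A "hit" is any accurate identification: true positive or true negative
hits = sum((pred == "pos") == actual for pred, actual in results)
misses = len(results) - hits

print(hits, misses)             # 4 2
print(hits / len(results))      # hit rate = 4/6
```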

101
Q

What happens to the validity coefficient when you restrict or inflate the range of scores?

A

● Using a full range of test scores enables you to obtain a more accurate validity coefficient, which will usually be higher than the coefficient you obtained using a restricted range of scores
Increased range = higher validity coefficient

102
Q

Incremental validity​

A

degree to which an additional predictor explains something about the criterion measure that isn’t explained by predictors already in use

103
Q

“To what extent does a test predict the criterion over and above other variables?”

A

IMPORTANCE: adding another predictor changes how well the criterion is predicted; the additional predictor is important if it adds predictive value beyond the predictors already in use

104
Q

If a test has high construct validity, what does this tell you about the test?

A

● It means the test is a valid measure of the construct, thus a good test
● Be familiar with the different types of evidence for construct validity

105
Q

Evidence of homogeneity:​

A

how uniform a test is in measuring a single concept

106
Q

Evidence of changes:

A

​some constructs are expected to change over time (ex.Reading rate)

107
Q

Evidence of pretest/posttest changes​:

A

test scores change as a result of some
experience between a pretest and posttest (ex. therapy)

108
Q

Evidence from distinct groups:

A

scores on a test vary in a predictable way as a
function of membership in some group (ex. Scores on the psychopathy checklist for prisoners vs. civilians)

109
Q

Convergent evidence​:

A

scores on the test correlate highly in the predicted direction with scores on older, more established tests designed to measure the same construct

110
Q

Discriminant evidence​:

A

showing little relationship between test scores and other variables with which scores on the test being construct validated should not theoretically be correlated

111
Q

factor analysis

A

A new test should load on a common factor with other tests of the same construct

112
Q

bias

A

A factor inherent in a test that systematically prevents accurate, impartial measurement
○ Implies systematic variation in scores
○ Prevention during test development is the best cure for bias

113
Q

Fairness​:

A

the extent to which a test is used in an impartial, just, equitable way

114
Q

How do bias and fairness relate? Can you have an unbiased, yet unfair test?

A

A biased test cannot be fair, but a test can be free of bias and still be used unfairly

115
Q

rater error

A

a judgment resulting from the intentional or unintentional misuse of a rating scale
○ Raters may either be too lenient (leniency error/generosity error), too severe
(severity error), or reluctant to give ratings at the extremes (central tendency
error)
■ EX. Leniency error: teacher being an easy grader
■ EX. Severity error: Movie critics panning everything they review
■ EX. Central tendency error: an employer will most likely rate most of their employees towards the middle between 1-10 instead of 1 or 10

116
Q

halo effect

A

the tendency to give a particular person a higher rating than they
objectively deserve because of a favorable overall impression
○ Reflects the rater's failure to discriminate among conceptually distinct and potentially independent aspects of a ratee's behavior

117
Q

utility

A

the usefulness or practical value of a test

118
Q

Psychometric soundness

A

Generally, the higher the criterion-related validity of a test, the greater the utility
○ A valid test is not necessarily a useful test

119
Q

costs

A

in context of the test are disadvantages or losses that present themselves in the construction or usefulness of a test.
○ Costs may include purchasing a test, a supply bank of test protocols, and computerized test processing
○ Other economic costs are more difficult to calculate such as the cost of not testing or testing with an inadequate instrument

120
Q

benefits

A

refers to profits, gains, or advantages that result from the test given
○ Higher worker productivity and profits for a company
○ Some potential benefits include: an increase in the quality of workers’
performance; an increase in the quantity of workers’ performance; a decrease in
the time needed to train workers; a reduction in the number of accidents; a
reduction in worker turnover

121
Q

​Economic Costs​:

A

include materials purchased to study for exam, administration of exam, computerized processing, test banks, etc.

122
Q

non-economic costs

A

usually societal consequences like location or accessibility
(online test)

123
Q

Economic Benefits​:

A

generally increase in quality of worker performance, quantity of worker performance, decrease in time needed to train workers, less worker turnover

124
Q

utility analysis

A

a cost–benefit analysis designed to determine the usefulness and/or practical value of an assessment tool
○ This type of analysis results in us knowing if there is a utility gain in these
methods whether it is monetary or not
○ “Which test gives us the most bang for the buck?”
○ Some utility analyses are straightforward, while others are more sophisticated, employing complicated mathematical models
○ Endpoint of a utility analysis yields an educated decision as to which of several alternative courses of action is most optimal (in terms of costs and benefits)

124
Q

​Non-Economic Benefits​:

A

include overall better work environment and worker morale

125
Q

The decision theory and utility

A

(Cronbach & Gleser, 1965): a practical method in which the test is tailored to the abilities of the applicant rather than the other way around
○ Classify decision problems
○ Look at selection strategies (single step process or multistep process)
○ Quantitative analysis (test utility, selection method, cost of test, expected test
outcome)
○ Adaptive treatment, recommendation of job requirement dependent on individual’s abilities

126
Q

​The Expectancy Table:​

A

the likelihood that a test taker will score within some interval of criterion scores for the measure of interest

127
Q

​Brogden-Cronbach-Gleser formula​:

A

is used to calculate the dollar amount of a utility gain resulting from the use of a particular selection instrument under specified conditions
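The card names the formula without stating it. One common textbook form is: utility gain = (N)(T)(r_xy)(SDy)(Z̄m) − (N)(C). A sketch with entirely hypothetical numbers:

```python
def bcg_utility_gain(n_hired, tenure_years, validity, sd_y,
                     mean_z_selected, cost_per_applicant, n_applicants):
    """Brogden-Cronbach-Gleser utility gain (one common textbook form):
    benefit = N hired * average tenure T * test validity r_xy
              * SDy (dollar value of one SD of job performance)
              * mean standardized test score of those selected,
    minus the total cost of testing all applicants."""
    benefit = n_hired * tenure_years * validity * sd_y * mean_z_selected
    cost = cost_per_applicant * n_applicants
    return benefit - cost

# Hypothetical values: hire 10 people expected to stay 2 years,
# validity .40, SDy = $10,000, mean selected z score 1.0,
# $25 per test administered to 200 applicants.
gain = bcg_utility_gain(10, 2, 0.40, 10_000, 1.0, 25, 200)
print(gain)  # 75000.0 -> estimated dollar gain from using the test
```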

128
Q

fixed cut scores

A

are made on the basis of having achieved a minimum level of proficiency on a test (e.g., a driving license exam)

129
Q

Multiple cut scores​ –

A

The use of multiple cut scores for a single predictor (e.g., students may achieve grades of A, B, C, D, or F).

130
Q

Multiple hurdles ​-

A

achievement of a particular cut score on one test is necessary in order to advance to the next stage of evaluation in the selection process (e.g., Miss America contest).

131
Q

​Discriminant analysis​:

A

a family of statistical techniques used to shed light on the relationship between identified variables (such as scores on a battery of tests) and two (and in some cases more) naturally occurring groups (such as persons judged to be successful at a job and persons judged unsuccessful at a job).

132
Q

IRT Based Methods​:

A

In an IRT framework, each item is associated with a particular level of difficulty
○ In order to “pass” the test, the test taker must answer items that are deemed to be above some minimum level of difficulty, which is determined by experts and serves as the cut score

133
Q

The Known Groups Method​:

A

entails collection of data on the predictor of interest from groups known to possess, and not to possess, a trait, attribute, or ability of interest
○ After analysis of a data, a cut score is chosen that best discriminates between groups.
○ One problem with known groups method is that no standard set of guidelines exist.
○ (EX. A math placement course: the cut score that would be selected is the score at the point of least difference between groups)
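The "point of least difference between groups" idea can be made concrete as a simple search: try each observed score as a candidate cut and keep the one that classifies the two known groups most accurately. A sketch with invented math-placement data (not a standard published procedure, just an illustration of the principle):

```python
def best_cut_score(known_group_scores, non_group_scores):
    """Search candidate cut scores and pick the one that best
    discriminates between the two known groups (highest accuracy)."""
    candidates = sorted(set(known_group_scores + non_group_scores))
    best, best_acc = None, -1.0
    n = len(known_group_scores) + len(non_group_scores)
    for c in candidates:
        # Correct classifications: group members at/above the cut,
        # non-members below it.
        hits = sum(s >= c for s in known_group_scores) + \
               sum(s < c for s in non_group_scores)
        acc = hits / n
        if acc > best_acc:
            best, best_acc = c, acc
    return best, best_acc

# Hypothetical data: scores of students who later succeeded in the
# course vs. scores of students who did not.
passed = [78, 82, 85, 90, 95]
failed = [55, 60, 65, 70, 80]
cut, acc = best_cut_score(passed, failed)
print(cut, acc)  # 78 0.9
```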

134
Q

The Angoff Method​:

A

the judges’ predictions are averaged to yield cut scores for the test
○ Can be used for personnel selection, traits, attributes, and abilities
○ Problems arise if there is low agreement between experts
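In the usual Angoff procedure, each judge estimates, item by item, the probability that a minimally competent test taker would answer correctly; each judge's estimates sum to a suggested cut score, and the suggestions are averaged. A sketch with hypothetical ratings:

```python
def angoff_cut_score(judge_ratings):
    """judge_ratings: one list per judge of per-item probabilities
    that a minimally competent test taker answers the item correctly.
    Each judge's ratings sum to that judge's suggested cut score;
    the final cut score is the average across judges."""
    per_judge = [sum(r) for r in judge_ratings]
    return sum(per_judge) / len(per_judge)

# Hypothetical: three judges rating a five-item test.
ratings = [
    [0.9, 0.8, 0.6, 0.5, 0.7],  # judge 1 -> suggests 3.5
    [0.8, 0.8, 0.5, 0.4, 0.6],  # judge 2 -> suggests 3.1
    [1.0, 0.9, 0.7, 0.5, 0.8],  # judge 3 -> suggests 3.9
]
cut_score = angoff_cut_score(ratings)
print(cut_score)  # ~3.5 correct answers out of 5
```

Low agreement between the judges (the problem the card flags) would show up here as widely scattered per-judge sums.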

135
Q

Utility Gain​:

A

refers to an estimate of the benefit (monetary or otherwise) of using a particular test or selection method

136
Q

What can influence the size of the job applicant pool?

A

● Economic climate
● Complexity of the job
● The job offer overall may not be taken by top performers

137
Q

What is the relationship between cutoff scores and selection ratio?

A

● The selection ratio is the proportion of applicants who are selected (e.g., hiring only the top 10% of scorers). Raising the cutoff score lowers the selection ratio; lowering the cutoff score raises it.

138
Q

Are false positives seen more often at higher or lower cut scores?

A

● False positives are seen more often at lower cut scores; higher cut scores produce more false negatives instead

139
Q

Know the difference between relative and fixed cut scores

A

● A relative (norm-referenced) cut score is set relative to other test takers' performance (e.g., the top 10% of scores)
● A fixed (absolute) cut score is a minimum score required to pass (ex. Driving test)

140
Q

What are the relations between the five stages of making a test?

A

● FIVE STAGES:
○ 1. Test conceptualization
■ The impetus for developing a new test is often the thought that “there ought to be a test for…”
■ The motivation could be a desire to improve psychometric problems with other tests, a new social phenomenon, or any number of things
■ There may be a need to assess mastery in an emerging occupation
○ 2. Test construction
○ 3. Test tryout
○ 4. Analysis
○ 5. Revision
○ TTTAR

141
Q

What questions need to be considered during the ‘conceptualization’ phase?

A

-What is the test designed to measure?
-What is the objective of the test?
-Is there a need for this test?
-Who will use this test?
-Who will take this test?
-What content will the test cover?
-How will the test be administered?
-What is the ideal format of the test?
-Should more than one form of the test be developed?
-What special training will be required of test users for administering or interpreting the test?
-What types of responses will be required of test takers?
-Who benefits from an administration of this test?
-Is there any potential for harm as the result of an administration of this test?
-How will meaning be attributed to scores on this test?

142
Q

pilot work

A

create a prototype of a test and receive feedback; focus groups; expert panels

143
Q

scaling

A

designing or calibrating the measure
○ Type of scales: unidimensional, multidimensional, categorical, ordinal, etc
○ Rating scale: words, statements, or symbols on which test takers can indicate
the strength of a particular trait, attitude, or emotion
○ (ex. Likert Scale)

144
Q

comparative scaling

A

entails a judgment of a stimulus in comparison with every other stimulus on the same scale

145
Q

categorical scaling

A

stimuli are placed into one of two or more alternative categories (e.g., sorting index cards into piles)

146
Q

Guttman scale

A

Items range sequentially from weaker to stronger expressions of the attitude, belief, or feeling being measured.
○ All respondents who agree with the stronger statements of the attitude will also
agree with milder statements.

147
Q

unidimensional scales

A

only one dimension is presumed to underlie the ratings

148
Q

multidimensional scales

A

more than one dimension is thought to underlie the ratings

149
Q

selected

A

multiple choice, surveys

150
Q

constructed

A

short response, interviews

151
Q

What are some benefits of using computerized adaptive testing?

A

● Computerized adaptive testing (CAT) - an interactive, computer-administered test- taking process wherein items presented to the test taker are based in part on the test taker’s performance on previous items
● CAT is able to provide economy in testing time and number of items presented
● CAT tends to reduce floor effects and ceiling effects.
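The adaptive logic can be sketched in a few lines. This is a deliberately minimal illustration (real CATs select items with IRT-based information functions, not a fixed step up/down):

```python
def adaptive_test(item_bank, answer_fn, n_items=5):
    """Minimal CAT sketch: start at medium difficulty; step difficulty
    up after a correct response and down after an incorrect one.
    item_bank: difficulties sorted ascending.
    answer_fn(difficulty) -> True if the test taker answers correctly."""
    idx = len(item_bank) // 2
    administered = []
    for _ in range(n_items):
        difficulty = item_bank[idx]
        correct = answer_fn(difficulty)
        administered.append((difficulty, correct))
        idx = min(idx + 1, len(item_bank) - 1) if correct else max(idx - 1, 0)
    return administered

# Hypothetical test taker who can answer items up to difficulty 6.
bank = [1, 2, 3, 4, 5, 6, 7, 8, 9]
history = adaptive_test(bank, lambda d: d <= 6)
print(history)  # [(5, True), (6, True), (7, False), (6, True), (7, False)]
```

Note how the test quickly homes in on the examinee's level (items 6-7) instead of wasting time on very easy or very hard items, which is why CAT reduces testing time and floor/ceiling effects.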

152
Q

​Floor effect​ –

A

the diminished utility of an assessment tool for distinguishing test takers at the low end of the ability, trait, or other attribute being measured

153
Q

Ceiling effect​ –

A

the diminished utility of an assessment tool for distinguishing test takers at the high end of the ability, trait, or other attribute being measured

154
Q

Cumulatively scored​:

A

an assumption that the higher the score on the test, the higher the test taker is on the ability, trait, or other characteristic that the test purports to measure

155
Q

Class (category) scoring:

A

responses earn credit toward placement in a particular class or category with other test takers whose pattern of responses is presumably similar in some way
○ Ex. diagnostic testing

156
Q

Ipsative scoring​:

A

comparing a test taker’s score on one scale within a test to another scale within that same test

157
Q

interactionism

A

-The concept by which ​heredity​ and ​environment​ are presumed to interact and influence the
development of one’s intelligence. Nature + Nurture

158
Q

Wechsler’s definition of intelligence​

A

-Intelligence is the aggregate or global capacity of the individual to act purposefully, to think rationally
and to deal effectively with his environment.

159
Q

Galton

A

most intelligent people are the ones with the best sensory abilities; first to publish on the heritability of intelligence. Intelligence is in the genes.

160
Q

Binet

A

Test scores are a measure of performance not strictly TRUE intelligence. Intelligence is a relative contribution of abilities (all interact to produce a solution). Builds on Galton

161
Q

Terman

A

pioneer of educational psychology, test construction and standardization; revised the Binet-Simon scale to what is now known as the Stanford-Binet

162
Q

Wechsler

A

Intelligence is not the mere sum of abilities; Wechsler believed it was important to measure several aspects (in 1939 he developed an intelligence test that included non-verbal tasks). Builds on Binet's initial beliefs

163
Q

Piaget

A

intelligence as an evolving biological adaptation to the outside world; stages of cognitive development.

164
Q

What is the ​Stanford-Binet intelligence test​? What are its main features?

A

-It was the first published intelligence test to provide organized and detailed administration and scoring
instructions.
-It was also the first American test to employ the concept of IQ. And it was the first test to introduce the concept of an ​alternate item​, an item to be substituted for a regular item under specified conditions (such as the situation in which the examiner failed to properly administer the regular item).
- An individually administered intelligence test that aims to gauge intelligence through 5 factors of cognitive ability. ​(​These five factors include fluid reasoning, knowledge, quantitative reasoning, visual-spatial processing and working memory)
- Uses CHC Theory
- Doesn’t place a ton of emphasis on g (unlike Wechsler)
- Adaptive testing
-Fluid reasoning
-Knowledge
-Quantitative reasoning
-Visual-spatial processing
-Working memory
-Deviation IQ: full-scale IQ with a mean of 100 and SD of 15
-Subtests measure the five factors; subtest scores are consistent, with M = 10 and SD = 3
-Both verbal and nonverbal responses are measured

165
Q

What is ​factor analysis​?

A

-Factor-analytic theories of intelligence​: identify the ability or ​groups of abilities​ that constitute intelligence
-Factor analysis​: statistical techniques designed to determine the existence of underlying relationships between sets of variables; a class of mathematical procedures designed to identify factors or specific variables that are typically attributes, characteristics, or dimensions on which people may differ

166
Q

Charles Spearman’s theory​ (g and s):

A
  • proposed the existence of a general intellectual ability factor (g) and specific factors of intelligence (s).
  • (g) was assumed to afford the best prediction of overall intelligence.
167
Q

Group factors:​

A

an intermediate class of factors common to a group of activities but not all; neither as general as (g) nor as specific as (s)

168
Q

Guilford and Thurstone​:

A

de-emphasized the value of (g)

169
Q

Gardner:​

A

​theory of seven intelligences

170
Q

Horn and Cattell​:

A

t​wo major types of cognitive abilities = crystallized (Gc) + fluid (Gf) intelligences

171
Q

Carroll​:

A

t​hree-stratum theory of cognitive abilities (​The top stratum or level in Carroll’s model is g, or general intelligence. The second stratum is composed of eight abilities and processes: fluid intelligence (Gf ), crystallized intelligence (Gc), general memory and learning (Y), broad visual perception (V), broad auditory perception (U), broad retrieval capacity (R), broad cognitive speediness (S), and processing/decision speed (T). Below each of the abilities in the second stratum are many “level factors” and/or “speed factors”—each different, depending on the second-level stratum to which they are linked. For example, three factors linked to Gf are general reasoning, quantitative reasoning, and Piagetian reasoning. The three-stratum theory is a hierarchical model, meaning that all of the abilities listed in a stratum are subsumed by or incorporated in the strata above.)

172
Q

Crystallized intelligence ​(Gc)

A

​ includes ​acquired skills and knowledge​ ​that are dependent on exposure to a particular culture as well as on formal and informal education

173
Q

Fluid intelligence (​ Gf):​

A

nonverbal, relatively ​culture-free​, and independent of specific instruction – ​ability to adapt​ in novel situations, problem solving
Crystallized intelligence is generally more formal and stable, whereas fluid intelligence is more affected by age and injury

174
Q

What is the CHC model​?

A

stands for ‘Cattell-Horn-Carroll’
Moves away from the large emphasis on g seen in earlier models.
g is removed because it has little practical relevance to cross-battery assessment and interpretation.
Featuring 10 “broad-stratum” abilities and over 70 “narrow-stratum” abilities → the McGrew-Flanagan CHC model (Kevin S. McGrew)

175
Q

What are Thorndike’s three clusters of ability?

A

Social intelligence, concrete intelligence, and abstract intelligence
Defined general mental ability as the number of modifiable neural connections

176
Q

Simultaneous (parallel) processing​:

A

the integration of information occurs ​all at one time;​ integrated, synthesized at once, as a whole

177
Q

Successive (sequential) processing​:

A

information is individually processed in a sequence​; logical, analytic

178
Q

test administration

A

-test manual with normative data and cutoffs
-environmental controls
-instructions and opportunities for examinee to ask questions
-teaching items
-scripted prompts
-careful recording of responses and behavioral observation
-reversing and discontinuation rules
-importance of rapport building

179
Q

How do ceiling and floor effects apply to intelligence testing?

A

The term ​floor​ refers to the lowest level of the items on a subtest.
The highest-level item of the subtest is the ​ceiling​.

180
Q

What is a basal level?

A

The baseline level of performance required to continue a test: the examinee must achieve a certain score or mark at that level to move on to higher-level items

181
Q

Know the importance of developmental level and age on intelligence testing and interpretation.

A

-ratio IQ
-deviation IQ= a comparison of the performance of the individual with the performance of others of the same age in the standardization sample
-Age scales were used formerly; most modern tests use a point scale

182
Q

Who constructed the Stanford-Binet-5 and what theory was it based upon?

A

Based on the CHC Model of intellectual abilities
- Revised from its original by Lewis Terman and his student (and later colleague) Maud Merrill 1937
- Routing test (each had teaching items [not graded])
- The SB5 is exemplary in terms of what is called adaptive testing

183
Q

Who constructed the WAIS-IV? What are its main features? How is it different/similar to the Stanford-Binet-5?

A
  • David Wechsler
  • Comprised of 4 index scores: Verbal Comprehension, Working Memory, Processing Speed, and Perceptual Reasoning
  • Co-normed​ with the Wechsler Memory Scale (WMS)
  • The WAIS-IV has 10 core and 5 supplemental subtests
  • Contains practice or teaching items
  • Good internal consistency reliability and validity
  • Built on a factor-analytic structure, yielding the four index scores (e.g., Perceptual Reasoning) and a Full Scale IQ
184
Q

Be familiar with Wechsler’s different tests (WAIS, WISC, etc.) and when would you use them.

A
  • WISC – Wechsler Intelligence Scale for Children (first published in 1949)
  • WPPSI – Wechsler Pre-School and Primary Scale of Intelligence (first published in 1967)
  • Wechsler Adult Intelligence Scale (WAIS) - co-norm Wechsler Memory Scale-Third
    Edition (WMS-III)
  • Wechsler-Bellevue 1 (W-B 1) -point scale, items were classified by subtests rather than
    by age. The test was organized into six verbal subtests and five performance subtests, and all the items in each test were arranged in order of increasing difficulty. The test suffered from some problems: (1) the standardization sample was rather restricted; (2) some subtests lacked sufficient inter-item reliability; (3) some of the subtests were made up of items that were too easy; and (4) the scoring criteria for certain items were too ambiguous
  • Wechsler Abbreviated Scale of Intelligence (WASI) – first published in 1999; as of 2011, revised as the WASI-2
185
Q

What is the formula for IQ ratio?

A

IQ ratio = mental age/chronological age x 100
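A quick worked example of the formula (ages are hypothetical):

```python
def ratio_iq(mental_age, chronological_age):
    """Ratio IQ = mental age / chronological age * 100."""
    return mental_age / chronological_age * 100

# A 10-year-old performing at the level of a typical 12-year-old:
iq = ratio_iq(12, 10)
print(iq)  # 120.0
```

A child performing exactly at age level always gets a ratio IQ of 100, which is why 100 is the scale's anchor point.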

186
Q

What are the cautions and challenges of short forms of intelligence tests?

A
  • Short forms reduce reliability and validity of their corresponding longer forms, and assumptions, observations and conclusions made using the results of short forms must be seen as “estimates” and made with caution.
  • Used as a screening tool for the full version
187
Q

Army Alpha Test​ ​-

A

administered to Army recruits who could read. It contained tasks such as general information questions, analogies, and scrambled sentences to reassemble.

188
Q

Army Beta Test

A

designed for administration to foreign-born recruits with poor knowledge of English or to illiterate recruits (defined as “someone who could not read a newspaper or write a letter home”). It contained tasks such as mazes, coding, and picture completion (wherein the examinee’s task was to draw in the missing element of the picture)

189
Q

Armed Service Vocational Aptitude Battery (ASVAB)

A

Administered to prospective new recruits in all the armed services (also available to high-school students and other young adults who seek guidance and counseling about their future education and career plans)

190
Q

Flynn effect

A

=The progressive rise in intelligence test scores that is expected to occur on a normed intelligence test
from the date when the test was first normed; “​intelligence inflation​”

191
Q

What are some ways of reducing culture loading on tests?

A
  • Panels of experts may evaluate the potential bias inherent in a newly developed test
  • The test may be devised so that relatively few verbal instructions are needed to
    administer it or to demonstrate how to respond.
  • A tryout or pilot testing with ethnically mixed samples of test takers may be undertaken.
  • If differences in scores emerge solely as a function of ethnic group membership,
    individual items may be studied further for possible bias
192
Q

Dynamic testing- ​

A

Assessors do everything in their power to help the test taker master material in preparation for retesting (target areas test taker struggles in, etc.)

193
Q

Achievement test:​ ​

A

measures previous learning: once something is known about a person's innate abilities, how well do they translate those abilities to a specific task or domain (ex: an assessment of grammar abilities) that requires skills such as deductive reasoning to reach an answer? Are they performing at the level we would expect given their ability level?

194
Q

Aptitude testing:​ ​

A

determining ​readiness,​ likelihood/potential to succeed, predicts someone’s performance later on; broader skills/abilities (i.e. predictive validity)

195
Q

What are the challenges of educational testing with young children?

A

Early development is marked with spurts and lags, making people at similar chronological ages possibly on very different levels, depending on the test.

196
Q

Barnum effect

A
  • ​A generalization such that “there is something for everyone.” (e.g. horoscopes, fortunes, and many
    personality assessments)
197
Q

personality

A

An individual’s unique constellation of psychological traits that is relatively stable over time.
“Psychological qualities that contribute to an individual’s enduring and distinctive patterns of feelings, thinking, and behaviors”

198
Q

Trait:​

A

​“Psychological qualities that contribute to an individual’s enduring and distinctive patterns of feelings, thinking, and behaviors”

199
Q

Contribute to:

A
  • factors that ​causally​ influence an individual’s distinctive and enduring
    tendencies (e.g ​to some extent​ they determine how the person feels,
    thinks, and behaves)
200
Q

Enduring:

A
  • Qualities that are at least somewhat ​consistent​ over time (e.g. a talkative
    person)
201
Q

Distinctive:

A
  • Features that ​differentiate​ people from one another (e.g. being more/less
    talkative than most people)
  • Individual Differences
202
Q

State:​ ​

A

the ​transitory​ exhibition of some personality trait; a relatively temporary predisposition.

203
Q

Type​:

A

a​ constellation of traits that is similar in pattern to one identified category of personality within a taxonomy of personalities.

204
Q

What is self-concept?

A

One’s attitudes, beliefs, opinions, and related thoughts about oneself
- Some self-concept measures are based on the notion that states and traits related to self-concept are to a large degree ​context-dependent

205
Q

Self-concept differentiation

A

the degree to which a person has different self-concepts in different roles
Low self-concept differentiation​ indicates a person who perceives him/herself
similarly across social roles, and therefore tends to have a more cohesive sense of self and is generally healthier psychologically.

206
Q

Faking Bad​

A
  • want someone to believe you are worse off (more symptoms, more problems, lower ability level,etc.) Situations that this would be advantageous for would be when being assessed for disability benefits, etc.
207
Q

impression management

A

Attempting to manipulate others’ opinions and impressions through the selective exposure of some information, including false information, usually coupled with the suppression of other information

208
Q

What is the purpose of validity scales?

A

to assist in judgments regarding how honestly the test taker responded and whether responses were products of response style, carelessness, deception, or misunderstanding

209
Q

Socially Desirable:

A

​present oneself in a favorable, socially acceptable or desirable light

210
Q
  • Acquiescence:
A

​agree with whatever is presented

211
Q
  • Nonacquiescence:
A

​disagree with whatever is presented

212
Q
  • Deviance: ​
A

make unusual or uncommon responses

213
Q
  • Extreme:
A

​make extreme (as opposed to middle) ratings on a rating scale

214
Q
  • Gambling/Cautiousness:
A

guess (or not guess) when in doubt

215
Q
  • Overly Positive:
A

​claim extreme virtue through self-presentation in a superlative manner

216
Q

Be able to provide examples of self-report (objective) and performance-based (projective) tests

A

Objective-​The MMPI is one of the most common

Projective- ​Rorschach Inkblot ​Test​, the Thematic Apperception ​Test​ (TAT), the Contemporized-Themes Concerning Blacks ​test​, the TEMAS (Tell-Me-A-Story), and the Rotter Incomplete Sentence Blank (RISB)

217
Q

Criterion group:

A

​ a reference group of test takers who share specific characteristics and whose responses to test items serve as a standard according to which items will be included or discarded from the final version of a scale

218
Q

Development of a test by means of empirical criterion keying involves the following:

A
  • Creation of a large preliminary pool of test items from which the final form of the test will be selected.
  • Administration of the preliminary pool to at least two groups of people: (1) a criterion group of people known to possess the measured trait; and (2) a random sample.
  • Conduct an item analysis to select items indicative of membership in the criterion group.
  • Obtain data on test performance from a standardization sample of test takers who are
    representative of the population from which future test takers will come.
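The item-analysis step can be illustrated with a toy sketch: keep the items whose endorsement rate differs substantially between the criterion group and the comparison sample. The data, item pool, and 0.20 threshold below are all invented for illustration:

```python
def select_keyed_items(criterion_responses, comparison_responses, min_diff=0.20):
    """Empirical criterion keying sketch: keep items whose endorsement
    rate differs between a criterion group and a comparison sample by
    at least min_diff. Responses are lists of 0/1 item endorsements,
    one inner list per person."""
    n_items = len(criterion_responses[0])

    def rate(group, i):
        return sum(person[i] for person in group) / len(group)

    keyed = []
    for i in range(n_items):
        diff = rate(criterion_responses, i) - rate(comparison_responses, i)
        if abs(diff) >= min_diff:
            keyed.append((i, round(diff, 2)))
    return keyed

# Hypothetical 4-item pool; items 0 and 3 discriminate the groups.
criterion  = [[1, 0, 1, 1], [1, 1, 1, 1], [1, 0, 0, 1]]
comparison = [[0, 0, 1, 0], [0, 1, 1, 0], [1, 0, 0, 0]]
keyed = select_keyed_items(criterion, comparison)
print(keyed)  # [(0, 0.67), (3, 1.0)]
```

Items 1 and 2 are discarded because the two groups endorse them at the same rate, so they carry no information about criterion-group membership.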
219
Q

Be familiar with MMPI and its 3 validity scales

A

*Used in forensic psychology, personnel selection, and therapeutic assessment
*Validity scales, clinical scales, and supplemental scales
*566 items – typically 60-90 minutes
*6th-grade reading level
*True/False response
*Raw scores converted to T scores
*Configural interpretation – profile patterns
*2-point elevation code type interpretation is the most common strategy
*Appropriate for individuals 14 years old and older
*The MMPI has three scales built into the measurement to combat the problems inherent in self-report methods: ​the L scale​ (the Lie scale),​ the F scale​ (the Frequency scale), and the​ K scale (the Correction scale).
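The raw-score-to-T-score step uses the general linear T transformation (T = 50 + 10z). Note this is an illustrative linear conversion with made-up norms, not the published MMPI norms (the MMPI-2 actually uses uniform T scores for many clinical scales):

```python
def linear_t_score(raw, norm_mean, norm_sd):
    """Linear T score: the raw score's z score rescaled so the
    normative sample has mean 50 and SD 10."""
    z = (raw - norm_mean) / norm_sd
    return 50 + 10 * z

# Hypothetical scale with normative mean 12 and SD 4:
t = linear_t_score(20, 12, 4)
print(t)  # 70.0 -> two SDs above the normative mean
```

On the MMPI, T scores around 65 or higher are conventionally treated as clinically elevated, which is why the T metric is used for profile interpretation.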

220
Q

The L scale ​

A

will call into question the ​examinee’s honesty

221
Q

The F scale ​

A

contains items that are infrequently endorsed by non psychiatric populations and do not fall into any known pattern of deviance, which can help determine how serious an examinee takes the test as well as identify malingering

222
Q

The K score​

A

is associated with ​defensiveness​ and social desirability. Reflection of the frankness of the test taker’s self-report

223
Q

What was the rationale for the MMPI-2 and MMPI-RF revisions?

A

*The MMPI-2 was normed on a more representative standardization sample.
*Some content was rewritten to correct grammatical errors and make the language more contemporary and less discriminatory.
*Items were added that addressed topics such as drug abuse, suicidality, marital adjustment, attitudes toward work, and Type A behavior patterns.
*Three additional validity scales were added:​ Back-Page Infrequency (Fb)​, ​True Response Inconsistency (TRIN)​, and​ Variable Response Inconsistency (VRIN).
*567 items, 394 identical to the original (107 new items)
*Appropriate for 18 years and older

224
Q

○ What was the approach to test development for the PAI?

A
  • Used in forensic psychology, personnel selection, therapeutic assessment
    *Validity scales, clinical scales, treatment consideration scales, and interpersonal scales
    *Emphasis on content and discriminant validity in test construction
    *344-items
    *4-point Likert scale
225
Q

OCEAN

A

openness, conscientiousness, extraversion, agreeableness, neuroticism

226
Q

Data reduction methods ​

A

are used in the design of personality measures to aid in the identification of the minimum number of variables or factors that account for the intercorrelations in observed phenomena.

227
Q

openness

A

Imaginative and creative, Openness to ideas or experiences, Artistic interests, Not the same thing as intellect/IQ

228
Q

Conscientiousness:​

A

Competence, Order, Dutifulness, Achievement-striving, Self-discipline, Deliberation

229
Q

Extraversion​-

A

Warmth, Gregariousness, Assertiveness, Activity, Excitement-seeking, Positive emotions

230
Q

Agreeableness​-

A

Trust, Straightforwardness, Altruism, Compliance, Modesty,Tender-mindedness

231
Q

Neuroticism​-

A

Anxiety, Angry hostility, Depression, Self-consciousness, Impulsiveness,
Vulnerability to stress

232
Q

How can culture impact interpretation of personality testing?

A

-Some cultures value conformity and cooperation while others value individuality and competition; people in different cultures hold differing views on matters small and large, and these differences have a major effect on one’s motivational and incentive systems, so they must be considered when interpreting test results.

233
Q

Instrumental values​-

A

guiding principles to help one attain some objective (honesty, imagination, ambition)

234
Q

Terminal values​-

A

guiding principles and a mode of behavior that is an endpoint objective (comfortable life, an exciting life, a sense of accomplishment)
- Tied to identity and worldview

235
Q

Acculturation:

A

= ​an ongoing process by which an individual’s thoughts, behaviors, values, worldview, and identity develop in relation to the thinking, behavior, customs, and values of a particular cultural group
-Acculturation begins at birth and proceeds throughout development.
-Different cultures have different values.
-People’s interpretation is influenced by experiences and cultural background, so it is important to take into consideration their cultural background when assessing their personality.

236
Q

What is a projective test?

A

*Response is a reflection of the individual’s own unique patterns of conscious and unconscious needs, fears, desires, impulses, conflicts, and perceptions (true). We get these patterns from showing an individual ​ambiguous ​stimuli and having them interpret it. Their interpretation reveals to us their personality.

237
Q

What are the five categories of projective tests and how are they used?

A
  1. Inkblots as Projective Stimuli
  2. Pictures as Projective Stimuli
  3. Words as Projective Stimuli
  4. The Production of Figure Drawings
  5. Projective Methods in Perspective
238
Q

*Association techniques

A

-Word Association Test, Rorschach Inkblot Test

239
Q

*Construction techniques

A

-Draw-A-Person test, the Thematic Apperception Test (TAT)

240
Q

*Completion techniques

A

-Sentence-Completion Tests

241
Q

*Arrangement or selection of stimuli

A

-Pick your favorite color, picture, or other stimuli

242
Q

*Expression techniques

A

-​Puppet play, artwork

243
Q

○ What are the strengths and weaknesses?

A

-Reliability and interpretation might be problematic, thus the validity might be a weakness.

244
Q

What is the difference between clinical and counseling psychology?

A

● Clinical- prevention, diagnosis, and treatment of disordered behavior, specific (more severe form of psychopathology)
● Counseling- more general, everyday concerns

245
Q

What settings would each be typically employed in?

A

● Clinical- hospital, mental health centers, academia
● Counseling- marriage, family, career

246
Q

What is a clinical interview?

A

● interview designed to assist clinicians and researchers in diagnostic decision-making

247
Q

Clinical Interview:

A

*To arrive at a diagnosis
*To pinpoint areas that must be addressed in psychotherapy
*To determine whether an individual will harm himself or others
*General interview questions
*Demographic data
*Reason for referral
*Medical history and present medical conditions
*Familial medical history
*Psychological history and present psychological conditions
*History with medical or psychological professionals

248
Q

Purpose of clinical interview:

A

=Arrive at diagnosis, pinpoint areas to address, determine if cause harm to self or others, what else needs to be done

249
Q

What is premorbid functioning?

A

= ​the level of psychological and physical performance before the development of a disorder, illness, or disability

250
Q

Describe the biopsychosocial model/why is it important?

A

=Examines biological, psychological, and social factors together to understand why a disorder develops. It is used in many fields (e.g., by physicians and social workers). To treat a person's health effectively, all three factors must be considered; focusing only on the physical symptom would not produce the outcome we want.

251
Q

What type of system is used in the DSM-5? Why is the DSM important?

A

Classification system: A common language for clinicians to communicate about their patients and establishes consistent and reliable diagnoses that can be used in the research of mental disorders.

252
Q

What are frequent tests used for assessment in clinical psychology?

A

● Interview – structured, semi-structured, and unstructured interviews
● Self-report measures
● Behavioral measures
● Cognitive measures
● Social and functioning measures

253
Q

What is the importance of medical/case history?

A

-data can be acquired from interviewing the assessee and significant others in his or her life, hospital, school, military, and employment records

254
Q

What are the kinds of things that are observed yet not explicitly asked in a clinical interview?

A

-Appearance
-eye movement
- behavior
-facial expression
-attitude
-The interviewer may jot down subjective impressions about the interviewee’s general appearance (appropriate?), personality (sociable? suspicious? shy?), mood (elated? depressed?), emotional reactivity (appropriate? blunted?), thought content (hallucinations? delusions? obsessions?), speech (normal conversational? slow and rambling? rhyming? singing? shouting?), and judgment (regarding such matters as prior behavior and plans for the future).

255
Q

What is a mental status examination?

A
  • used to screen for intellectual, emotional, and neurological deficits
  • includes questioning/observation with regards to appearance, behavior, orientation, memory, consciousness, affect, mood, personality, sensory abilities, psychomotor activity, etc.
256
Q

What does it mean to be oriented times 3?

A

-Oriented to person, place, and time. A person is noted as “oriented ×3” if able to correctly state his or her name, where he or she is, and the date.

257
Q

What are the most common cultural considerations in clinical assessment?

A

-acculturation, values, identity, worldview, language

258
Q

A.D.R.E.S.S.I.N.G. – (what does each letter stand for?)

A

-Age, Disability, Religion, Ethnicity, Social status, Sexual orientation, Indigenous heritage, National origin, Gender

259
Q

Know some of the basic differences between the perspectives of psychology and law

A

○ Descriptive – psychology describes human behavior vs. Prescriptive – law prescribes how people should act
○ Empirical – psychology vs. Authoritative – law

260
Q

For what purpose is therapeutic assessment used vs. forensic assessment?

A

Therapeutic- for the client
Forensic- for a 3rd party (judge, attorney, insurance company)

261
Q

Additional considerations and applications of clinical assessment

A

○ Danger to self (suicide) or other (homicide)
○ Competency and responsibility (defendant’s ability to understand the charges against him and assist in his own defense)
○ Readiness and injury (probation or not, evaluating alleged emotional injury [The court will evaluate the findings in light of all of the evidence and make a determination regarding whether the alleged injury exists and, if so, the magnitude of the damage])
○ Profiling (criminals leave psychological clues about who they are, personality traits they possess, and how they think. Interview and case history are used to match psychological clues with possible suspects.)
○ Custody evaluation (a psychological assessment of parents or guardians and their parental capacity and/or of children and their parental needs and preferences)
○ Child and elder abuse

262
Q

What are the two symptom clusters of ADHD

A

-Symptoms of inattention
-Symptoms of hyperactivity and impulsivity

263
Q

What are some of the challenges in diagnosing ADHD?

A

-Client characteristics:
*Medication seeking
*Academic accommodations
*Testing accommodations
-“Popularity of a diagnosis” among MDs and psychiatrists:
*A child who acts out => ADHD
*A student with bad grades => ADHD
*Someone who has concentration issues => ADHD
*Someone who procrastinates? => ADHD
*How many false positives?
-Lack of a standard for testing
-Comorbidity of symptoms (need symptoms from both the hyperactivity and inattention clusters)

264
Q

What is a referral question? Why is it important?

A

*A clear and specific question (or set of questions) agreed upon by both the assessor and the client; it is a mandate and a responsibility to your client
*As a clinical psychologist doing assessment, you want to make sure you answer the referral question but do not go beyond it
*Be aware of your limited role as an assessor, not a therapist

265
Q

What are the primary types of tests used to assess ADHD (e.g., self-report, cognitive, etc.)?

A

​*Semi-structured Interview
*Self-report of ADHD symptoms
*Self-report measure of psychopathology
*Cognitive (IQ) test
*Academic achievement test
*Neuropsychological test of attention in a low-stimulus setting
*Neuropsychological test of sustained mental effort
*(Personality tests)
*(Test of dyslexia)

266
Q

What are the sex differences in PTSD?

A
  • ~10% of women and ~4% of men
267
Q

-Establish self-report symptoms of ADHD:

A

1) Semi-structured interview-
● Developmental hx
● Family hx
● Social hx
● Educational trajectory
● Mental illness in the family
● Depression symptoms, anxiety symptoms, schizophrenia symptoms, etc.
2) Conners’ Adult ADHD Rating Scale Self-Report Long Version (CAARS-S:L)

268
Q

What are the three common ways to assess PTSD discussed in lecture?

A
  1. Structured Interviews
    *CAPS: Clinician-Administered PTSD Scale
    *PSS-I: PTSD Symptom Scale – Interview
    *SCID: Structured Clinical Interview for DSM-5
    *SIP: Structured Interview for PTSD
  2. Self-Report Measures
    *PCL: PTSD Checklist
    *Mississippi Scale for Combat-Related PTSD
    *DTS: Davidson Trauma Scale
    *PDS: Posttraumatic Stress Diagnostic Scale
  3. Screening
    *TSQ: Trauma Screening Questionnaire
    *PC-PTSD-5: Primary Care PTSD Screen for DSM-5
269
Q

Neurology-

A

branch of medicine that focuses on the nervous system and its disorders

270
Q

Neuropsychology-

A

branch of psychology that focuses on the relationship between brain function and behavior

271
Q

Prefrontal-

A

planning, decisions

272
Q

temporal

A
  • language, auditory
273
Q

occipital

A

vision

274
Q

parietal

A

sensory (touch)

275
Q

What are lesions? How can they manifest? Their impacts?

A

-alteration of tissue caused by injury or infection
-produces different behavioral deficits depending on whether the lesion is focal (one location) or diffuse (spread out)

276
Q

Be able to define the difference between brain damage and organicity

A

Brain damage- physical/functional impairment of the central nervous system that leads to cognitive, emotional, or sensory deficits
Organicity- brain damage of organic (physiological) origin

277
Q

When is neuropsychological evaluation indicated?

A

● referred to a psychologist for screening of suspected neurological problems
● a battery of tests will be conducted (most likely including an intelligence test, a personality test, and a perceptual-motor/memory test)
● if there are suspicious neurological signs and/or a history of head trauma, the person is referred for a more detailed evaluation

278
Q

Know the difference between hard and soft signs.

A

● Hard sign- definite neurological deficit (e.g., cranial nerve damage)
● Soft sign- merely suggestive of a neurological deficit (e.g., apparent inability to accurately copy a stimulus figure when attempting to draw it)

279
Q

-What kind of sign is it if someone cannot trace a shape with a pencil?

A

Soft

280
Q

-What about if someone’s reflexes don’t work properly?

A

Hard

281
Q

What are the primary goals or objectives of neuropsychologists?

A

to draw inferences about the structural and functional characteristics of a person’s brain by evaluating an individual’s behavior in defined stimulus-response situations

282
Q

Types of neuropsychological evaluation and tests and what type of functioning are they testing

A

● Tests of general intellectual ability (Wechsler tests)
● Tests of executive function: sorting tests measure one element of executive function, which may be defined as organizing, planning, cognitive flexibility, and inhibition of impulses and related activities associated with the frontal and prefrontal lobes of the brain (Tower of Hanoi, clock-drawing test [CDT])
● Tests to measure the ability to abstract (Wechsler Similarities subtest)
● Tests of perceptual, motor, and perceptual-motor function (Bender Visual-Motor Gestalt Test)
● Tests of verbal functioning
● Tests of memory

283
Q

What areas may a neuropsychologist employ knowledge from?

A

Neuroanatomy, neurochemistry, and neurophysiology to interpret findings and employ tests

284
Q

Be able to define the different types of long-term memory

A

Procedural (skills), declarative (factual material), semantic (type of declarative, general facts), and episodic (type of declarative, facts from a specific situation/experience)

285
Q

Fixed-

A

samples a broad range of neuropsychological functioning; less demanding to administer, more general.

286
Q

Flexible-

A

requires skill and expertise; tests are hand-picked to answer specific referral questions and to address the unique aspects of the patient

287
Q

What are some of the brain imaging tools we have access to?

A

○ Which examine brain structure?- CT scan, X-ray, MRI
○ Which examine brain function?- cortical stimulation, evoked potentials (EP), tDCS (transcranial direct current stimulation), PET (positron emission tomography), EEG (electroencephalogram), fMRI (functional magnetic resonance imaging)

288
Q

What are some limitations to brain imaging studies?

A

Expensive, time-consuming, difficult to localize precise areas, and the possibility of biased, cherry-picked results from desirable regions