Final Exam Flashcards

1
Q

Define Survey

A
  • Surveys focus on group outcomes
  • Surveys allow us to collect information so that we can describe and compare how people feel about things (attitudes), what they know (knowledge), and what they do (behavior)
2
Q

what factors determine the type of survey software to use

A

The choice depends on the specific needs, desires, and constraints of the company or person

3
Q

the characteristics of a good survey

A

o Have specific and measurable objectives
o Contain straightforward questions that can be understood similarly by most people
o Have been pretested to ensure that there are no unclear questions or incorrect skip patterns
o Have been administered to an adequate sample of respondents so that the results are reflective of the population of interest
o Include the appropriate reporting of results (both verbal and written)
o Have evidence of reliability and validity

4
Q

define experimental research techniques

A

help us determine cause and effect

5
Q

define descriptive research techniques

A

help us describe a situation or phenomenon

6
Q

what type of research techniques do surveys mostly use

A

descriptive research techniques

7
Q

why do we develop a new test

A
  • Meet the needs of a special group of test takers
    o There are subgroups that need to be assessed (ex. a new job that wasn’t there before)
  • Sample behaviors from a newly defined test domain
  • Improve the accuracy of test scores for their intended purpose (ex. an existing measure is low quality)
  • Tests need to be revised/modified (ex. old items, old norms)
    o Possibly uses wording that is no longer acceptable (ex. “multiple personality disorder” is now DID)
    o Old normative groups: you cannot compare someone from the modern day to someone from 10 years ago
  • Tests may assess clinically useful constructs but may be impractical for real-world clinical applications
    o Ex. can we look at IQ clinically the same way we do in business?
8
Q

what are the 4 distinct stages of developing a test

A
  1. Test conceptualization
  2. Test structure and format
  3. Standardization
  4. Plan implementation (revisions)
9
Q

what are the 2 questions that must be answered in order for you to know if there is a point creating a new test

A

1. Will the test improve practice/research?

2. Will it improve our knowledge of human behavior?

10
Q

what are the steps in Phase 1: test conceptualization

A
  1. conduct a review of the literature and develop a statement of need for the test
  2. describe the proposed uses and interpretations of results from the test
  3. describe who will use the test and why (including a statement of user qualifications)
  4. develop conceptual and operational definitions of the construct you intend to measure
  5. determine whether measures of dissimulation are needed and, if so, what kind
11
Q

steps to defining the test universe

A

o Prepare a working definition of the construct (more conceptual in nature)
o Locate studies that explain the construct
o Locate current measures of the construct

12
Q

what is included in the purpose of the test

A

what the test will measure and how the test users will use the test scores
the information that the test will provide to the test user

13
Q

what do you do if there are no studies or measures of the construct

A

you go to the theoretical model

14
Q

what do you do if there is no theoretical model

A

you go to the studies, measures, or theoretical models of similar constructs and create a new theoretical model

15
Q

define operational definitions

A

specific, observable behaviors that represent the construct being measured

16
Q

what does a test plan or table of specification include

A

a definition of the construct, the content to be measured (test domain), the format for the questions, and how the test will be administered and scored

17
Q

what are the steps of phase 2: specification of test structure and format

A
  1. age range appropriate for this measure
  2. testing format (ex. individual or group, print or computerized) and who will complete the test (ex. the examiner, the examinee, or some other informant)
  3. the structure of the test (ex. subscales, composite scores, etc.) and how the subscales (if any) will be organized
  4. written table of specifications
  5. item formats (given by subtests or subscales if any, with sample items illustrating ideal items) and a summary of instructions for administration and scoring
  6. written explanation of methods for item development (how items will be determined - will you need content experts to help write or review items?), tryout, and final item selection
18
Q

what does a test format refer to

A

refers to the type of questions the test will contain (usually one format per test for ease of test takers and scoring)

19
Q

what are the two elements of test formats

A

o Stimulus (ex. a question or phrase)
 The stimulus is what the test taker responds to
 Ex. in multiple choice, the stimulus is the question and the mechanism is the four or five possible answers to the question
o Mechanism for response (ex. multiple choice, true-false, essay, boarding/licensing exam)
o The format may be objective (scorers agree) or subjective (scorers may disagree)

20
Q

define structured record reviews

A
  • Forms that guide data collection from existing records (ex. using a form to collect information from personnel files)
21
Q

what are structured observations

A

forms that guide an observer in collecting behavioral information (ex. using a form to document the play behaviors of children on the playground)

22
Q

define objective/ structured test types

A

Items that have one correct answer or that provide evidence of a specific construct

23
Q

types of objective/ structured test types

A

o Selected response
o Multiple choice
o True false, forced choice
o Likert scales (also a typical response format)

24
Q

types of subjective/ free response test types

A

o Essay, short answer
o Interview questions
o Fill in the blank
o Projective techniques

25
Q

define subjective/ free response test types

A

Constructed-response items that do not have one correct answer. The interpretation of whether a response is correct, or provides evidence of a specific construct, is left to the judgement of the person who scores the test

26
Q

which is most preferred objective test types or subjective test types

A

objective test types

27
Q

how do objective and subjective test formats differ in sampling

A

o Objective tests are faster and therefore the test developer can cover a wider array of topics, thereby increasing the available evidence of validity based on test content
o When the testing universe covers a wide array of topics, objective tests are better

28
Q

how do objective and subjective test formats differ in test construction

A

o Objective items, especially M/C items, require extensive thought and development time to come up with all the balanced possible responses
o Subjective tests require fewer items and are easier to construct
o Subjective tests are better suited for testing higher-order skills such as creativity

29
Q

how do objective and subjective test formats differ in scoring

A

o Objective scoring is simple and can be done by a computer or an aide with a high degree of reliability and accuracy.
o Scoring subjective items requires time-consuming judgements by an expert

30
Q

how do objective and subjective test formats differ in response sets

A

o On objective tests, test takers can guess the correct answer and they can choose answers based on social desirability
o For subjective tests, test takers may bluff or pad answers with superfluous or excessive information. Scorers might be influenced by irrelevant factors such as poor verbal or writing skills

31
Q

what are distracters/alternatives

A

The wrong answers in a multiple choice test

32
Q

pros of a multiple choice test

A
  • More answer options (4-5) reduce the chance of guessing an item correctly
  • Many items can aid in comparing students, reduce ambiguity, and increase reliability
33
Q

cons of a multiple choice test

A
  • Measures narrow facets of performance
  • Reading time increased with more answers
  • Transparent clues (ex. verb tenses, or article use: “a” vs. “an”) may encourage guessing
  • Difficult to write four or five plausible choices
  • Takes more time to write questions; limit use of “none of the above” and “all of the above”, and take care with positively vs. negatively worded items (ex. “never”/“always”)
34
Q

advantages of structured response/ selected response test types

A
  • Great breadth (# of items, covering content)
  • Quick scoring
  • Decreases influence of possible factors that may influence error (ex. writing ability)
35
Q

disadvantages of structured response/ selected response test types

A
  • Limited depth
  • Hard to write
  • Difficult to assess higher levels of skills, and at times you cannot measure the skill at all (ex. writing ability or running ability)
  • Guessing/ memorization vs knowledge
36
Q

disadvantages of forced choice test types

A

has very little face validity, which may produce poor responses from test takers. Making a number of decisions between or among apparently unrelated words or phrases can become distressing, and test takers who want to answer honestly and accurately often become frustrated with forced choice questions

37
Q

advantages of forced choice test types

A

the items are more difficult for respondents to guess or fake

38
Q

where are forced choice items test types mostly used

A

used primarily in personality and attitude tests

39
Q

define structured interviews

A

have scoring and have criteria for scoring (like a rubric)

40
Q

define unstructured interviewing

A

have no scoring or criteria for scoring

41
Q

what are projective techniques

A
  • Projective techniques are often employed in clinical settings
    o Uses a highly ambiguous stimulus to elicit an unstructured response (i.e., the test taker “projects” his or her perceptions and perspective onto a neutral stimulus)

Ex. the Rorschach inkblot test

42
Q

advantages of subjective items/ free response/ constructed response items

A
  • Easier to write
  • Can test higher cognitive skills
  • Encourages organized/developed thoughts
  • Eliminates guessing
43
Q

disadvantages of subjective items/ free response/ constructed response items

A
  • Difficult to grade - influence of feigning and impact of writing ability
  • Judgement error (ex. interrater reliability)
  • Requires an objective scoring key prepared in advance
  • Fewer items
44
Q

what type of response item format is a likert scale

A

a typical response item format

45
Q

define performance assessments

A

require test takers to directly demonstrate their skills and abilities to perform a group of complex behaviors and tasks ex. an audition of a musician trying out for a band
o The setting in which these tasks are demonstrated is made as similar as possible to the conditions that will be found when the tasks are actually performed

46
Q

define simulations

A

require test takers to demonstrate their skills and abilities to perform a complex task
o the tasks are not performed in the actual environment in which the real tasks will be performed often due to safety or cost- related concerns

47
Q

define portfolios

A

a collection of work products that a person gathers over time to demonstrate his or her skills and abilities in a particular area

48
Q

what is dissimulation

A

o When a person misrepresents himself or herself in a positive or negative manner
o Decreases the reliability and validity of the measurement

49
Q

define response sets

A

Are patterns of responding that result in misleading information and limit the accuracy and usefulness of the test scores

50
Q

reasons why people lie, fake, or answer randomly on a test

A

o 1. The information requested is too personal
o 2. They answer items carelessly
o 3. They may feel coerced into completing the test or are not motivated to give maximum effort
o 4. They believe that is how they are supposed to answer

51
Q

define social desirability

A
  • Some test takers choose socially acceptable answers or present themselves in a favorable light
52
Q

define faking

A

some test takers may respond in a particular way to cause a desired outcome

53
Q

define random responding

A

responding to items in a random fashion by marking answers without reading or considering them

54
Q

reasons why people fake bad

A

o Cry for help
o Want to plea insanity in court
o Want to avoid draft in military
o Want to show psychological damage

55
Q

define acquiescence

A

A tendency to agree with the ideas or behaviors presented

They believe this is how they are supposed to answer

56
Q

suggestions for writing good test items

A

o Consider the time necessary to complete the test
o Prepare the answer key in advance
o Use multiple independent scorers/raters
o Score essays anonymously
o Identify item topics by consulting the test plan
o Be sure that each item is based on an important learning objective or topic
o Write items that assess information or skills drawn only from the testing universe
o Write each item in a clear and direct manner
o Use vocabulary and language appropriate for the target audience
o Avoid using slang or colloquial language
o Make all items independent
o Ask someone else (preferably a subject matter expert) to review items in order to reduce unintended ambiguity and inaccuracies

57
Q

what should administration instructions include

A

o Whether the test should be administered in a group or individual setting
o Requirements for location (ex. quiet, privacy)
o Required equipment (computer, pencil)
o Time limits or approximate completion time
o Script for the administrator and answers to questions test takers may ask
o Credentials or training required for the test administrator

58
Q

define population

A

all members of the target audience

59
Q

define sample

A

a representative subset of the population to whom the survey is administered

60
Q

define probability sampling

A

the type of sampling that uses statistics to ensure that a sample is representative of a population

61
Q

define simple random sampling

A

every member of a population has an equal chance of being chosen as a member of the sample
o Ex: if your population is every student at GH, you could put the name of every single student in a hat and randomly draw participants
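The hat-drawing idea above can be sketched in a few lines of Python (the roster names here are hypothetical, purely for illustration):

```python
import random

def simple_random_sample(population, n, seed=None):
    """Draw n members so that every member has an equal chance of selection."""
    rng = random.Random(seed)
    return rng.sample(population, n)

# Hypothetical roster standing in for "every student" in the population
students = [f"student_{i}" for i in range(100)]
sample = simple_random_sample(students, 10, seed=42)
```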

62
Q

simple random sampling, stratified random sampling, and cluster sampling are all examples of what sampling method

A

probability sampling methods

63
Q

define systematic random sampling

A

every nth person is chosen (ex. every 3rd person)
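A minimal sketch of the "every nth person" rule (the member numbers are hypothetical):

```python
def systematic_sample(population, step, start=0):
    """Select every `step`-th member, beginning at index `start`."""
    return population[start::step]

members = list(range(1, 31))                 # members numbered 1..30
every_third = systematic_sample(members, 3)  # picks persons 1, 4, 7, ...
```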

64
Q

define stratified random sampling

A

the population is divided into subgroups or strata (ex. age, gender, SES, race)
o a random sample is selected from each stratum
o a certain number of people have to be in the sample from each subgroup (ex. there must be at least 50% males and 50% females)
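The divide-then-sample steps above can be sketched as follows (the population, its gender attribute, and the per-stratum count are hypothetical):

```python
import random

def stratified_sample(population, key, per_stratum, seed=None):
    """Split the population into strata by `key`, then randomly sample each stratum."""
    rng = random.Random(seed)
    strata = {}
    for member in population:
        strata.setdefault(key(member), []).append(member)
    sample = []
    for members in strata.values():
        sample.extend(rng.sample(members, per_stratum))
    return sample

# Hypothetical population; 5 members sampled per gender subgroup
people = [{"id": i, "gender": "M" if i % 2 else "F"} for i in range(40)]
balanced = stratified_sample(people, key=lambda p: p["gender"], per_stratum=5, seed=1)
```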

65
Q

define cluster sampling

A

used when it is not feasible to list all the individuals who belong to a particular population; a method often used with surveys that have large target populations (ex. east, west, central, north, south)

Dividing the population into clusters and picking a certain number of clusters to be in your sample (ex. you divide the population into 5 clusters and pick 3 of those clusters to be part of your sample)
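A sketch of the "pick 3 of 5 clusters" example (the regional cluster names and members are hypothetical); note that every member of a chosen cluster ends up in the sample:

```python
import random

def cluster_sample(clusters, k, seed=None):
    """Randomly choose k whole clusters; every member of a chosen cluster is sampled."""
    rng = random.Random(seed)
    chosen = rng.sample(list(clusters), k)
    return {name: clusters[name] for name in chosen}

# Hypothetical regional clusters; 3 of the 5 end up in the sample
regions = {"east": ["e1", "e2"], "west": ["w1"], "central": ["c1", "c2"],
           "north": ["n1"], "south": ["s1", "s2", "s3"]}
sample = cluster_sample(regions, k=3, seed=7)
```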

66
Q

define non probability sampling

A

a type of sampling in which not everyone has an equal chance of being selected from the population

67
Q

define convenience sampling

A

the survey researcher uses any available group of participants to represent the population

68
Q

define sample size

A

refers to the number of people needed to represent the target population accurately
o The more similar the members of the population the smaller the sample needs to be. The more dissimilar the members of the population the larger the sample needs to be
o The fewer the people chosen to participate in the test, the more error the survey results are likely to include

69
Q

define homogeneity of the population

A

how similar the people in your population are to one another (more similar the smaller the size)

70
Q

define sampling error

A

a statistic that reflects how much error can be attributed to the lack of representation of the target population by the sample of respondents chosen

71
Q

define distributing the survey

A

how will the instrument/ test be given to the respondent (mail, phone, weblink, in person)

72
Q

what is a cumulative/summative model of scoring method

A

o Assumes that the more a test taker responds in a particular fashion the more he/she has of the attribute being measured (ex. more “correct” answers, or endorses higher numbers on a Likert scale)
o The test taker receives one point for each correct answer and the total number of correct answers becomes the raw score on the test
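The cumulative model is just "count the matches against the key"; a minimal sketch with a hypothetical 5-item answer key:

```python
def cumulative_score(responses, answer_key):
    """One point per response matching the key; the total is the raw score."""
    return sum(1 for given, correct in zip(responses, answer_key) if given == correct)

key = ["b", "a", "d", "c", "a"]                         # hypothetical answer key
raw = cumulative_score(["b", "a", "c", "c", "a"], key)  # 4 of 5 answers correct
```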

73
Q

define semantic differential

A

adjective pairs at each end of the continuum

74
Q

what is an ipsative model of scoring

A

the test taker is given 2 or more options to choose from
o The ipsative model only tells you where test takers stand relative to themselves on the constructs that the test is designed to measure

75
Q

what scoring method model is most commonly used

A

Cumulative/summative model

76
Q

what is the categorical model scoring method

A
is used to put the test taker in a particular group or class
o Test takers’ scores are not compared to those of other test takers; rather, the scores on various scales are compared within the test taker (which scores are high and low) or by pattern of responses
o Looks at the patterns within the individual to see if you are more or less of something and then puts you in a category
77
Q

what is a pilot test

A
  • A scientific evaluation of the test’s performance

- Administering the test to a sample of the test’s target audience and analyzing the data obtained from the pilot test

78
Q

in a pilot test, what are the depth and breadth of the pilot test dependent on

A

depends on the size and complexity of the target audience and the construct being measured

79
Q

define item analysis

A

how developers evaluate the performance of each test item

80
Q

define item difficulty

A

the percentage of test takers who respond correctly (out of the total # of people); expressed as the p value (percentage/proportion value)
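As a sketch with hypothetical item responses (1 = correct, 0 = wrong), the p value is just the proportion correct:

```python
def item_difficulty(item_responses):
    """p = proportion of test takers answering the item correctly (1 = correct, 0 = wrong)."""
    return sum(item_responses) / len(item_responses)

# Hypothetical item: 6 of 10 test takers answered correctly
p = item_difficulty([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
```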

81
Q

define discrimination index

A

compares the performance of those who obtained very high test scores (the upper group [U]) with the performance of those who obtained very low test scores (the lower group [L]) on each item.

82
Q

equation for calculating the upper group (U)

A

U = (# of people in the upper group who responded correctly)/(total # of people in the upper group) × 100

83
Q

equation for calculating the lower group (L)

A

L = (# of people in the lower group who responded correctly)/(total # of people in the lower group) × 100

84
Q

equation for calculating discrimination index

A

Discrimination Index = U - L
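The U, L, and D formulas from these cards combine into one short sketch (the counts of correct responders are hypothetical):

```python
def discrimination_index(upper_correct, upper_total, lower_correct, lower_total):
    """D = U - L, where U and L are the percentages answering the item correctly."""
    U = upper_correct / upper_total * 100
    L = lower_correct / lower_total * 100
    return U - L

# Hypothetical item: 9 of 10 high scorers vs. 4 of 10 low scorers answered correctly
D = discrimination_index(9, 10, 4, 10)  # U = 90, L = 40, D = 50
```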

85
Q

what number of discrimination index is best

A

.30 or higher

86
Q

what does it mean if you have a low or negative discrimination index

A
  • If the D value is low or negative, the item is not discriminating between high scorers and low scorers; in this case the test developers have to discard or rewrite the items that have low or negative D values
87
Q

define item-response theory (IRT)

A

an estimate of the ability of test takers that is independent of the difficulty of the items presented, as well as estimates of item difficulty

88
Q

define item characteristic curve (ICC)

A

the line that results when we graph the probability of answering an item correctly against the level of ability on the construct being measured
o We can determine the difficulty of an item on the ICC by locating the point at which the curve indicates a probability of .5 of answering correctly. The higher the ability level associated with this point, the more difficult the question
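For illustration only, here is one common way an ICC is drawn, the one-parameter logistic model (this specific model is an assumption, not necessarily the one your course uses); the curve crosses p = .5 exactly at the item's difficulty:

```python
import math

def icc(theta, difficulty):
    """One-parameter logistic ICC: probability of a correct answer at ability theta."""
    return 1 / (1 + math.exp(-(theta - difficulty)))

# At ability equal to the item's difficulty, the probability of success is .5;
# higher ability raises the probability, lower ability reduces it
p_at_difficulty = icc(theta=1.0, difficulty=1.0)
```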

89
Q

what is an optimal p value

A

0.5

90
Q

what does a p value of .7, .8, or .9 mean

A

question or test is too easy

91
Q

what does a p value of .2, .3, or .4 mean

A

question or test is too hard

92
Q

define item bias

A

when an item is easier for one group than for another group
o the preferred method of researchers involves computing item characteristic curves by group (ex. men and women) and using the ICCs to make decisions about item bias

93
Q

define interitem correlation matrix

A

displays the correlation of each item with every other item
o Provides important information for increasing the test’s internal consistency
o Ideally, each item should be correlated with every other item measuring the same construct and should not be correlated with items that do not measure the same construct

94
Q

define phi coefficients

A

the result of correlating two dichotomous (having only two values) variables
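A sketch of computing phi from the 2x2 cell counts of two 0/1 variables (the response vectors are hypothetical):

```python
import math

def phi_coefficient(x, y):
    """Correlation between two dichotomous (0/1) variables via the 2x2 cell counts."""
    a = sum(1 for xi, yi in zip(x, y) if xi == 1 and yi == 1)  # both correct
    b = sum(1 for xi, yi in zip(x, y) if xi == 1 and yi == 0)
    c = sum(1 for xi, yi in zip(x, y) if xi == 0 and yi == 1)
    d = sum(1 for xi, yi in zip(x, y) if xi == 0 and yi == 0)  # both wrong
    return (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical responses: two items answered identically correlate perfectly
phi = phi_coefficient([1, 1, 0, 0, 1, 0], [1, 1, 0, 0, 1, 0])
```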

95
Q

define item total correlation

A

a measure of the strength and direction of the relation between the way test takers respond to one item and the way they respond to all of the items as a whole
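As a sketch (the item responses and total scores are hypothetical), the item-total correlation is a Pearson correlation between one item's 0/1 responses and the total scores:

```python
def pearson(x, y):
    """Pearson correlation between two equal-length numeric lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

item = [1, 0, 1, 1, 0]          # hypothetical responses to one item
totals = [48, 22, 40, 45, 30]   # the same test takers' total scores
r = pearson(item, totals)       # positive here: correct answerers scored higher overall
```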

96
Q

what does a negative item total correlation mean

A

the people who answered a question correctly actually did worse on the test than people who got the question wrong
o These questions should be revisited and edited or taken out

97
Q

define validation

A

the process of obtaining evidence that the test effectively measures what it is supposed to measure (ie. Reliability and validity)

98
Q

define cross-validation

A

a final round of test administration to another sample from the target population

99
Q

what is written in the Manual

A
  • Gives all the information that was undertaken in the first 3 phases
  • Include an adequate description of the test development process so others can replicate what was accomplished and for users to evaluate the usefulness of the test for their purposes
    o The reason for this is so that the user is able to use the instrument and if they ever want to replicate what was done, they can
  • The specific contents will vary according to the types of test and its applications, and some measures may have special legal requirements
100
Q

pros and cons of publishing the test with a publisher

A

o Pros: you benefit from the publisher’s expertise
 The publisher has a larger network and range for marketing the test
o Cons: the test publisher owns your test; you’re just the author

101
Q

pros and cons of self publishing your test

A

o Pros: you own the test
o Cons: you most likely do not have a great reach or range and therefore cannot network or market as much. Publishers typically have a bigger network