Final Exam Flashcards

1
Q

What is secondary data analysis?

A

A type of research that analyzes data collected by others

2
Q

What is secondary data?

A

Data collected and recorded by someone else, before the current project and for a different purpose

3
Q

What is a systematic review?

A

A review of the evidence on a research question that uses systematic and explicit methods to identify, select, and critically appraise relevant primary research, and to extract and analyze data from the studies that are included in the review

4
Q

What is meta-analysis?

A
  • Quantitative techniques to summarize or integrate findings from a body of literature
  • Statistical analysis that combines the results of multiple quantitative studies
  • Uses the results of individual research projects on the same topic
5
Q

What does PRISMA stand for?

A

Preferred Reporting Items for Systematic Reviews and Meta-Analyses

6
Q

What is PRISMA?

A
  • a useful research tool that helps you declare or report how you chose the articles for your review
  • it consists of a 27-item checklist and a 4-phase flow diagram
  • the phases are identification, screening, eligibility, and included
7
Q

What are the advantages of secondary data analysis?

A
  • saves time and effort in research instrument development
  • does not require a pilot study
  • may include a standardized research tool with proven validity
  • may cover topics that you would not have asked about yourself
8
Q

What are the disadvantages of secondary data analysis?

A
  • requires review of the original instrument
  • may not have all the questions you need
  • questions may be in a format that is inappropriate for your study
  • contextual info (protocols, showcards, etc.) may be missing
9
Q

What is content analysis (general usage)?

A
  • Refers to research techniques for analyzing the content of written, spoken, or pictorial communication
  • The analysis may relate the occurrence of coded content to other factors (e.g. characteristics of the producer or effects on the receiver)
10
Q

What is content analysis (specific usage)?

A
  • refers to a quantitative technique that attempts to quantify the meaning of communication content (a minimal counting sketch follows below)
  • critical to answering the classic question: who is saying what, to whom, why, how, and with what effect?
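Illustrative sketch (not from the deck): a minimal Python example of quantifying communication content by counting coding-category keywords across texts. The categories, keywords, and responses are all hypothetical.

```python
# Hypothetical example: quantify communication content by counting
# how often each coding category's keywords occur in the texts.
from collections import Counter

CATEGORIES = {
    "price":   {"price", "cost", "expensive", "cheap"},
    "quality": {"quality", "durable", "reliable"},
}

def code_text(text):
    """Count occurrences of each category's keywords in one text."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return Counter({cat: sum(w in kws for w in words)
                    for cat, kws in CATEGORIES.items()})

responses = [
    "The price was too expensive for the quality.",
    "Very durable and reliable, worth the cost.",
]
totals = Counter()
for r in responses:
    totals += code_text(r)
print(totals)  # Counter({'price': 3, 'quality': 3})
```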
11
Q

What is the process in developing the coding frame?

A
  • read through all the responses
  • establish categories from the responses
  • can every response be assigned to a category? If not, refine the categories by merging similar ones and deleting redundant ones; if yes, the final categories emerge (see the sketch after this list)
  • cross-category analysis - concentrate on the relationships between the categories
  • the big picture - explain the contents of each category in relation to the others
  • present your (time- and context-bound) theory
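Illustrative sketch (not from the deck): one pass of the assignment check above in Python - try to place every response in a category and collect the ones that do not fit, which signals that the frame needs refining. All category names, keywords, and responses are hypothetical.

```python
# Hypothetical coding frame: each category is defined by keywords.
CODING_FRAME = {
    "convenience": {"fast", "easy", "nearby"},
    "service":     {"staff", "friendly", "helpful"},
}

def assign(response):
    """Return the first matching category, or None if unassignable."""
    words = set(response.lower().split())
    for category, keywords in CODING_FRAME.items():
        if words & keywords:
            return category
    return None

responses = ["The staff were friendly", "It was fast and easy", "Too noisy inside"]
unassigned = [r for r in responses if assign(r) is None]
print(unassigned)  # ['Too noisy inside'] -> merge/add categories and repeat
```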
12
Q

How long does it take to develop the coding frame?

A

Depends on the amount of knowledge you retrieve from the original data analysis

13
Q

What are the types of observation?

A
  • participant vs non-participant
  • standardized vs non-standardized
  • covert vs overt
  • direct vs indirect
  • natural vs contrived
  • personal vs mechanical
14
Q

Explain participant vs non-participant observational research

A

Participant
- as observers you are part of the observation, actively manipulating the situation you are observing
Non-participant
- you stay at a distance and don’t manipulate anything

15
Q

Explain standardized vs non-standardized observational research

A

Standardized
- very detailed; the actual behavioural patterns to be observed are described in advance
Non-standardized
- you have no coding scheme or observational schedule and don't know in advance what you will observe, so you try to notice every aspect that might be relevant to your research question

16
Q

Explain covert vs overt observational research

A

Covert
- you as the researcher behave like a regular customer who just watches and observes
Overt
- you don't hide yourself; you declare yourself as a researcher

17
Q

Explain direct vs indirect observational research

A

Direct
- you observe the process itself, e.g. observing people at a bus stop
Indirect
- you look at the outcomes of the process rather than the process as such; analysing those outcomes may reveal something new

18
Q

Explain natural vs contrived observational research

A

Natural
- natural environment
Contrived
- typically done in a lab

19
Q

Explain personal vs mechanical observational research

A

Personal
- observation done by a human observer, e.g. subject to your own limits
Mechanical
- e.g. audio recordings, video recordings
- with mechanical observation you don't use a separate instrument; what is captured depends on what the machine can pick up

20
Q

Can you observe attitudes?

A

No, because it’s something that happens in our heads, but you can observe the behavioural patterns

21
Q

What is an observation plan/schedule?

A
  • can be a simple list of things to look for in a particular situation
  • can be far more complex; a minute-by-minute count of events such as verbal interactions between subjects
22
Q

What goes into qualitative research designs?

A

In-depth interviews
- one to one communication
Focus groups
- researcher and group
- shows the group dynamic and arguments and counterarguments
Semi-structured interviews
- some questions must be asked in a standardized way
- intention is not to describe the sample or group but to detect issues at the individual level or make a social diagnosis
- one to one communication

23
Q

What are projective techniques?

A

Indirect interview methods that allow respondents to project their ideas, beliefs, and feelings onto a third party or into a task situation

24
Q

Describe projective techniques

A
  • the researcher sets up a situation for the respondents and asks them to express their own views, or to complete/interpret an ambiguous situation presented to them
  • involves situations in which participants are placed in simulated activities in the hope that they will reveal things about themselves that they might not reveal under direct questioning
25
Q

When are projective techniques generally used?

A

when language barriers, respondent illiteracy, or social or psychological barriers create difficulties

26
Q

Give the types of projective techniques and examples of each

A

Association techniques
- word association
Completion techniques
- sentence completion
- story completion
Construction techniques
- cartoon test
- collage
Expressive techniques
- role playing
- third person technique
- brand party
- obituary

27
Q

What is a word association task?

A
  • it records the first thought that comes to a respondent in response to a stimulus
  • respondents are presented with a list of words, one at a time, and asked to respond to each with the first word that comes to mind
28
Q

What is the sentence completion test?

A

Respondents are asked to complete a series of incomplete sentences, often related or neutral to the topic of interest

29
Q

What is the cartoon test?

A

The researcher shows an ambiguous picture and the respondent tells a story about it

30
Q

What is a collage?

A
  • a pattern (larger picture, story etc.) created by sticking pictures or materials onto a surface
  • respondents assemble images that represent their thoughts and feelings
31
Q

What are the expressive techniques?

A

Role playing
- it's important to make participants stop afterwards, because they might enjoy their role
Third person technique
Brand party
- respondents imagine that a brand threw a party and explain who would be there, what kind of music would play, who would not be invited, etc.
Obituary
- respondents are told that a certain brand has died and talk about its life, its failures, and its successes

32
Q

What are the guidelines for using projective techniques?

A
  • should be used because the required info cannot be obtained accurately by direct methods
  • should be used for exploratory research to gain initial insights and understanding
  • because of the complexity, they should not be used naively
33
Q

What are the 9 ways to evaluate questions?

A
  • desk based evals
  • expert panels
  • respondent debriefing
  • analysis of existing data
  • split-ballot experiment
  • behavioural coding
  • interviewer rating
  • standard field pilots
  • cognitive testing
34
Q

What are desk based evaluations and why might they be helpful?

A

The use of textbooks etc. as references to check whether your questions follow the rules
Actively try to find and solve problems with your questions
May be helpful for spotting mistakes that you didn't see before

35
Q

What are expert panels?

A

Bringing experts in to review your work/questionnaire and to get their input

36
Q

What is the risk with expert panels?

A

It may make the researcher inclined to prepare the survey for the experts, as opposed to for the general public
More likely to use jargon etc. rather than layman’s terms

37
Q

What is respondent debriefing?

A

Asking respondents to provide feedback on what it was like to take the survey

38
Q

What is analysis of existing data and what are the benefits?

A
  • similar to desk based evals
  • may use parts of surveys from previous questionnaires
  • you will have the data on the performance of the questions which is a big advantage (may for example see that a q didn’t provide variability which is problematic)
39
Q

What is a split-ballot experiment and name a disadvantage?

A

When you take a pool of respondents and divide them randomly but evenly into sub-groups
Then distribute a different instrument to each sub-group and see which performs best (a minimal assignment sketch follows below)
Example: see whether it is better to use a 5-point or a 10-point scale
Disadvantage: expensive
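Illustrative sketch (not from the deck): a minimal Python example of the random-but-even split; the pool size and scale versions are hypothetical.

```python
# Split-ballot assignment: randomly but evenly divide a respondent
# pool into two sub-groups, one per instrument version.
import random

respondents = list(range(100))     # pool of respondent IDs
random.seed(42)                    # reproducible assignment
random.shuffle(respondents)
half = len(respondents) // 2
group_a = respondents[:half]       # receives the 5-point version
group_b = respondents[half:]       # receives the 10-point version
print(len(group_a), len(group_b))  # 50 50 -> compare which performs best
```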

40
Q

What is behavioural coding?

A

when you observe the reactions of respondents to the questions (how long it takes, facial reaction etc.)

41
Q

What is interviewer rating?

A

when you get feedback from the interviewers on the performance of the questions

42
Q

What are standard field pilots?

A

when you test the questionnaire on a test group

43
Q

What is cognitive testing and why is it used?

A
  • Process used to study the manner in which target audiences understand, mentally process, and respond to survey questions
  • Used to reveal mental processes used when answering
  • Used to understand how respondents interpret the questions
  • We want to make qs as easy to understand and answer as possible
  • Helps to perfect the question
44
Q

What are the four stages in question response process?

A
  • understand the question
  • retrieval from memory
  • judgement
  • response
45
Q

How can cognitive interviewing be used to prevent problems in step 1 of question response process?

A
  • when trying to understand the question, the interviewee may translate the question to their own terms
  • this is not desirable as we want everyone to understand our phrasing to prevent people interpreting the question differently
  • we can use cognitive interviewing to check for this
  • this is generally not a very problematic step
46
Q

How can cognitive interviewing be used to prevent problems in step 2 of question response process?

A
  • can be a problematic step
  • to address this, ask additional qs such as “how did you decide your answer”
47
Q

How can cognitive interviewing be used to prevent problems in step 3 of question response process?

A
  • the respondent may not want to answer the question (possibly due to social desirability factor)
  • to address the problem, ask “is this distressing for you” etc. during cognitive interviewing
48
Q

How can cognitive interviewing be used to prevent problems in step 4 of question response process?

A
  • for example, in a scale question, the respondent may be deciding between 7 & 8
  • in cog. int. ask why they chose their answer, ask for other examples, ask if the question design provides them with adequate response options
49
Q

What are the cognitive interviewing methods?

A
  • Think aloud interviews
  • Probe interviews
50
Q

Describe think aloud interviews.

A

Requires that respondents verbalize their thought processes as they answer the survey

51
Q

Where are think aloud interviews more or less effective?

A

More:
- verbal info (not spatial or non-verbal)
- problem oriented questions
- conscious processing of info
Less:
- understanding of terms
It may rely on short term memory recall

52
Q

What are the types of probes used in probe interviewing?

A

Types of probes:
- scripted and unscripted
- concurrent (asked between questions) and retrospective (provided at the end of the survey)
- general and specific

53
Q

Give the pros and cons of scripted vs unscripted probes

A

Scripted:
- pros: interview is focused around objectives
- cons: might be too rigid
Unscripted/spontaneous:
- pros: allows more flexibility
- cons: no coordination of probing across interviewers

54
Q

Give the pros and cons of concurrent vs retrospective probes

A

Concurrent:
- pros: question is fresh on the mind
- cons: potential bias (switching of tasks can be distracting)
Retrospective:
- pros: avoids bias and task switching
- cons: long gap between question and probe

55
Q

Name some other types of probes

A
  • frame of reference (what were you thinking about while answering?)
  • encourage narrative (to learn what the short response to the question means)
  • redundancy (how is the phrase “give advice about X” different from the phrase “talk about X”)
  • acceptability (asking if respondent was offended by sensitive qs)
56
Q

What are the advantages of think aloud interviews?

A
  • less interviewer bias
  • less interviewer training
  • open-ended format (unanticipated answers)
  • interviewer more free to listen and less potential for bias
57
Q

What are the disadvantages of think aloud interviews?

A
  • need for subject training
  • less control
  • more difficult to tell if respondent can answer
  • potential bias in info processing
58
Q

What are the advantages of verbal probing?

A
  • more control (topics; depth)
  • little training of subject
59
Q

What are the disadvantages of verbal probing?

A
  • greater risk of reactivity
  • potential for bias
  • need for interviewer training
60
Q

What are the possible visual effects in scale development?

A

Horizontal x vertical
- horizontal is usually best; never use diagonal writing
One column x two or more columns
- avoid multiple columns, as respondents may read the information in rows and mix it up
Stand alone x matrix
- matrix is better on paper but not on screen
- matrix is hard to use if the items are not one-word items
- matrices can be big, which can frustrate respondents
Status bar, sections, return option
- a status bar can be inaccurate if there are skip questions or questions that require more info - demotivating for respondents
- keep the whole survey in one section so respondents can monitor their own progress
Reminders
- 2 reminders can double the response rate
- more than 2 reminders are unhelpful

61
Q

What are ways in which you can improve scale questions?

A
  • provide a meaningful scale
  • use a balanced scale (but you can rescale it)
  • mind the meaning of the mid category (don’t use not sure)
  • consider a “don’t know” response (if you don’t, people may choose mid option and skew your data)
62
Q

What should you take into account when trying to provide a meaningful scale?

A

Low vs high number of categories
- short is usually best for respondents; long is best for researchers (e.g. if they need to do factor analysis)
- sometimes long is better, as people may not want to fall into the extreme categories
Odd vs even number of categories
- odd provides a mid category, which many may choose as an "out" or safe option
All categories vs top boxes (outer categories) labeled
- don't label all categories if it's a long scale
- labels like "somewhat likely" might have different interpretations based on culture etc.

63
Q

What is gamification?

A

The process of taking game design features and incorporating them into learning activities or research

64
Q

What are the effects of gamification?

A

Higher attractiveness
- but more cognitively demanding, as respondents have to interpret the pictures; not recommended
Colours
- different perceptions of colours can cause problems
Context and images
- may affect the construal of a category
- powerful contextual stimuli

65
Q

How does the order in which the alternatives are listed affect the distribution of replies?

A
  • if you start with positive options, people will answer more positively
  • e.g. poor to excellent or the other way around
66
Q

What effect can fatigue have on responses?

A
  • surveys that took more than 17.5 minutes led to predicted completion rates of less than 70%
  • surveys with more than 30 screens/questions are predicted to exceed acceptable dropout rates
  • other reports state 55 clicks as a threshold level
67
Q

What are the types of constructs?

A
  • single item (single pointer) (can be risky to rely on just one question)
  • composite indicator/measure
  • observed characteristics
68
Q

How do you operationalize?

A
  • conceptualise the concepts being worked with
  • specify the method of measurement
  • define explanatory models
69
Q

What are the possible negative outcomes if operationalisation is done incorrectly?

A

Completely wrong operational definition
- we are measuring something other than what we intend
Imprecise operational definition
- the real characteristics and the indicators used are different, but there is some overlap
Reductionist operational definition
- all the items you have are part of the construct, but they don't cover the construct as a whole
Extensive operational definition
- covers too much; what you aim to cover is contained in it, but it also includes unrelated info

70
Q

What are the steps in the scale development process?

A
  • determine clearly what it is you want to measure
  • generate an item pool (through research or focus groups)
  • determine the format for measurement
  • have the initial item pool reviewed by experts (face validity etc.)
  • consider inclusion of validation items
  • administer the items to a development sample (you must also filter out items)
  • evaluate the items: the heart of scale development (you can run an exploratory factor analysis; a minimal item-analysis sketch follows this list)
  • optimize scale length
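Illustrative sketch (not from the deck): a minimal Python example of the item-evaluation step using corrected item-total correlations on simulated data; a weak item shows up as a near-zero correlation and becomes a removal candidate.

```python
# Simulated development sample: four items driven by one construct
# plus one unrelated (weak) item.
import numpy as np

rng = np.random.default_rng(0)
n = 200
latent = rng.normal(size=n)
items = np.column_stack(
    [latent + rng.normal(size=n) for _ in range(4)]  # related items
    + [rng.normal(size=n)]                           # weak item
)

# Corrected item-total correlation: each item vs the sum of the others.
k = items.shape[1]
for j in range(k):
    rest = np.delete(items, j, axis=1).sum(axis=1)
    r = np.corrcoef(items[:, j], rest)[0, 1]
    print(f"item {j}: r = {r:.2f}")  # item 4 should be near zero
```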
71
Q

What are the three stages of the scale development process?

A
  • item generation and selection phase
  • scale refinement phase
  • scale validation phase
72
Q

Explain the item generation and selection phase

A
  • contains item generation and selection
  • qualitative inquiry stage
  • content and face validity assessment
73
Q

Explain the scale refinement phase

A

Pilot test sample/stage
- item analysis
- exploratory factor analysis
- consistency and reliability assessment
Purification stage/calibration sample
- confirmatory factor analysis
- unidimensionality and reliability assessment
- convergent and discriminant validity assessment

74
Q

Explain the scale validation phase

A

Validation sample/stage
- replication of confirmatory factor analysis
- unidimensionality and reliability assessment
- convergent and discriminant validity assessment
- nomological validity assessment

75
Q

What are some questions you may ask during the planning of scale development?

A
  • how many items are necessary
  • which response scale is appropriate
  • how to score the test
  • which psychometric model is appropriate
  • what item evaluation process is suitable
  • how to administer the test
76
Q

What are the 3 measurement models?

A
  • unidimensional (all items in the matrix build one single construct)
  • oblique 3-factor model (3 interrelated dimensions; confirmatory factor analysis will help to see whether this is the right model and whether the items measure what they are meant to measure)
  • hierarchical model (tests whether deeper roots lie behind the first-order factors)
77
Q

What are the 3 critical interactions in operationalization?

A
  • client/consultant - researcher
  • research objective - items in the instrument
  • instrument - respondent (about understanding of questions and whether they are appropriate for the target group)
78
Q

What is reliability?

A
  • the degree to which a variable has nearly the same value when measured several times
  • precision: absence of random error
  • error: discrepancy between the observed result and the true value
79
Q

What is validity?

A
  • the degree to which a variable actually represents what it is supposed to represent
  • accuracy: getting the correct result
  • bias: systematic error; estimates are moved in one direction
80
Q

What is the best way to assess reliability?

A

Comparison among repeated measures

81
Q

What is the best way to assess validity?

A

Comparison with a reference standard

82
Q

What is reliability’s value to a study?

A

Increases the power of a study to detect effects

83
Q

What is validity’s value to a study?

A

Increases validity of conclusions

84
Q

What is reliability threatened by?

A

Random error (variance)

85
Q

What is validity threatened by?

A

Systematic error (bias)

86
Q

Describe reliability vs validity on a dartboard diagram

A
  • reliability is when all the points are close together, regardless of their positioning on the board
  • validity is when the points are at the bullseye
87
Q

What are some areas across which there must be consistency in order to be reliable?

A
  • items/subscales/total scales (internal consistency)
  • data collectors (inter-rater reliability or inter-observer agreement)
  • time (test-retest reliability)
88
Q

Define internal consistency

A

Extent to which the items on an instrument adequately and randomly sample one construct

89
Q

How do you assess internal consistency?

A

If the instrument adequately and randomly samples one construct, and if it were divided into two equal parts, both parts should correlate strongly

90
Q

What is the metric for internal consistency?

A

Cronbach's coefficient alpha
- the average split-half correlation based on all possible divisions of an instrument into two parts (a computational sketch follows below)
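Illustrative sketch (not from the deck): computing Cronbach's alpha in Python with the standard variance formula (equivalent to the average split-half correlation described above); the item matrix is simulated.

```python
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=300)
items = np.column_stack([latent + rng.normal(scale=0.8, size=300)
                         for _ in range(6)])
print(round(cronbach_alpha(items), 2))  # ~0.9 for these strongly related items
```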

91
Q

How should one interpret Cronbach’s alpha?

A

α ≥ 0.7 - adequate for measures under development
α ≥ 0.8 - adequate for basic research
α ≥ 0.9 - adequate for measures on which decisions are based

92
Q

Define inter-rater reliability/inter-observer agreement

A

Extent to which the instrument measures the same construct regardless of who collects the data

93
Q

How do you assess inter-rater reliability/inter-observer agreement?

A

If the same construct were observed by two data collectors, their ratings should be almost identical

94
Q

What is the metric for inter-rater reliability?

A

Cohen's kappa
- the percentage of agreement between two data collectors, corrected for the agreement expected by chance (a computational sketch follows below)
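Illustrative sketch (not from the deck): Cohen's kappa in Python, comparing observed agreement with chance-expected agreement; the two raters' codes are hypothetical.

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Chance-corrected agreement between two raters' codes."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[c] * c2[c] for c in c1 | c2) / n**2
    return (observed - expected) / (1 - expected)

rater1 = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
rater2 = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
print(round(cohen_kappa(rater1, rater2), 2))  # 0.58
```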

95
Q

How do you interpret Cohen’s kappa?

A

≥ 90% good
≥ 80% acceptable
< 80% problematic

96
Q

Define test-retest reliability

A

Extent to which the instrument yields consistent results at two points in time

97
Q

How do you assess test-retest reliability?

A
  • administer the measure at two points in time
  • the time interval is set so that no improvement is expected to occur between the first and second administrations
98
Q

What is the metric for test-retest reliability?

A

Pearson correlation coefficient
Expressed as a correlation between pairs of scores from the same schools obtained at the two measurement administrations (a computational sketch follows below)
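Illustrative sketch (not from the deck): Pearson's r for paired scores from two administrations; the scores are hypothetical.

```python
import numpy as np

# Paired scores from the same units at two administrations.
time1 = np.array([12, 15, 11, 18, 14, 16, 13, 17])
time2 = np.array([13, 14, 10, 19, 15, 17, 12, 16])
r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest r = {r:.2f}")  # r >= 0.7 is read as acceptable
```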

99
Q

How do you interpret the metric for test-retest reliability?

A

r ≥ 0.7 acceptable

100
Q

What are the two types of validity and their subgroups?

A

Theory-related validity
- face validity
- content validity
- construct validity
Criterion-related validity
- concurrent validity
- predictive validity

101
Q

What is face validity?

A
  • the instrument appears, on its face, to measure what it is supposed to measure
  • items are reviewed for appropriateness (by the client, sample respondents)
  • the least scientific measure of validity
102
Q

Define content validity

A

Extent to which the items on an instrument relate to the construct of interest, e.g. student behaviour

103
Q

How do you assess content validity?

A

Expert judgement as to whether the items measure content theoretically or empirically linked to the construct

104
Q

What is the metric for content validity and how do you interpret it?

A

Expressed as a percentage of expert agreement
≥ 80% agreement desirable

105
Q

Define construct validity

A

Extent to which the instrument measures what it is supposed to measure (e.g. the theorized construct “student behaviour”)

106
Q

How do you assess construct validity?

A
  • factor analyses yielding information about the instrument’s dimensions (e.g. aspects of "student behaviour"); a minimal dimensionality sketch follows this list
  • correlations between constructs hypothesized to impact each other (e.g. "student behaviour" and "student reading achievement")
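Illustrative sketch (not from the deck): a quick eigenvalue check of the inter-item correlations in Python (a principal-components-style look at dimensionality, one common input to a construct-validity argument); the item data are simulated so that one dominant eigenvalue is expected.

```python
import numpy as np

# Six items driven by a single simulated construct.
rng = np.random.default_rng(2)
latent = rng.normal(size=400)
items = np.column_stack([latent + rng.normal(size=400) for _ in range(6)])

corr = np.corrcoef(items, rowvar=False)       # inter-item correlation matrix
eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # largest first
print(np.round(eigenvalues, 2))               # one eigenvalue >> rest -> one dimension
```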
107
Q

What is the metric for construct validity and how do you interpret it?

A

Statistical model fit indices (e.g. chi square, PCA, CFA)
Interpret it through statistical significance

108
Q

Define criterion-related validity

A

Extent to which the instrument correlates with another instrument measuring a similar aspect of the construct

109
Q

How do you assess criterion-related validity?

A
  • concurrent validity: compare data from concurrently administered measures for agreement
  • predictive validity: compare data from subsequently administered measures for predictive accuracy
110
Q

What is the metric used when assessing criterion-related validity and how is it interpreted?

A

Expressed as a correlation between two measurements
Moderate to high correlations are desirable
Concurrent validity: very high correlations might indicate redundancy of measures

111
Q

How might you modify a scale?

A
  • wording
  • length
  • dimensions
  • multiple modifications