Lecture 5 Flashcards
variation =
true variation + variation due to error
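In classical test theory this decomposition is usually written as follows (a standard textbook formulation, not quoted from the lecture): $\sigma^2_{\text{observed}} = \sigma^2_{\text{true}} + \sigma^2_{\text{error}}$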
intra-interviewer variability =
whether the same interviewer is consistent in how they evaluate candidates
inter-interviewer variability =
the difference in evaluation and judgement among different interviewers who assess the same candidate
intra-observer variability =
the difference in repeated measurements by the same observer
inter-observer variability =
the difference in measurements between different observers
self-report biases:
- social desirability bias
- yes/no sayers bias
- logical error bias
- central tendency bias
social desirability bias =
the tendency of respondents to give answers they think are socially acceptable or desired, rather than providing their true answer
yes/no sayers bias =
the tendency some individuals have to consistently agree or say yes to survey questions, regardless of the content
logical error bias =
mistakes or errors that people make in their thinking or decision-making process
central tendency bias =
a common response bias where people gravitate toward the middle options of a rating scale and avoid the extremes, so their answers overlook the full range of the scale
what are two indirect biases?
- leniency bias
- halo effect bias
leniency bias =
the tendency in human judgment where individuals consistently evaluate things more positively than they are in reality
halo effect bias =
the tendency to form an overall positive or negative opinion about a person or place based on a single quality
probing =
the act of asking follow-up questions or seeking further clarification to dive deeper into a particular topic
prompting =
the act of saying something to remind or encourage someone to respond
contrived =
a situation specially designed for observing behavior
internal reliability =
are the items consistent with each other?
inter-rater reliability =
are coders consistent with each other?
Cronbach's alpha =
it helps researchers determine how well a set of questions works together and provides consistent results. A higher alpha value indicates stronger reliability, while a lower value suggests a need for improvement or further evaluation of the questions being used.
- If Cronbach's alpha is 0.7 or higher, the questions in the survey are working well together and measuring the same concept reliably.
- If Cronbach's alpha is below 0.7, there might be some issues with the questions, and they may not be consistently measuring the concept.
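As a minimal sketch of how this is computed (the function name and example responses below are illustrative, not from the lecture), the standard formula is alpha = k/(k-1) * (1 - sum of item variances / variance of total scores):

import numpy as np

def cronbach_alpha(items):
    # items: one row per respondent, one column per survey item (assumed layout)
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_var = items.var(axis=0, ddof=1)        # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_var.sum() / total_var)

# hypothetical 5-respondent, 3-item survey on a 1-5 scale
responses = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]]
print(cronbach_alpha(responses))  # 0.7 or higher suggests the items measure the same concept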
Cohen's kappa =
it measures how much two raters agree with each other beyond what we would expect by chance.
In general terms, a kappa value above 0.6 indicates substantial agreement between the raters, while a value below 0.4 suggests poor agreement.
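A minimal sketch in the same spirit (the rater labels and names below are hypothetical): kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e is the agreement expected by chance from each rater's label frequencies.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # observed agreement: share of items both raters coded identically
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement: product of each rater's marginal label proportions
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    labels = set(count_a) | set(count_b)
    p_e = sum((count_a[l] / n) * (count_b[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# hypothetical codings of six answers by two raters
ratings_a = ["yes", "no", "yes", "yes", "no", "yes"]
ratings_b = ["yes", "no", "no", "yes", "no", "yes"]
print(cohens_kappa(ratings_a, ratings_b))  # ~0.67: above 0.6, so substantial agreement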