PowerPoint 2: Measurement Issues Flashcards
what is narrative recording
minute-by-minute account of the target child's behaviour
what is time sampling
observing/recording specific behaviours on a checklist for a specific time period
what is event sampling
recording during specific events - only when target behaviour occurs
what are issues that come up with observational techniques
observer influence (reactivity to being watched) and observer bias
how to counteract observer influence in observational research
observer stays in the environment for a period of time before they record (ex: Jane Goodall)
Participant observation (someone already in the environment does the coding)
Hidden observer (one-way mirrors)
what are some ways to counteract observer bias?
score behaviours as specifically as possible
blinding
inter-observer reliability
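one common way to quantify inter-observer reliability is Cohen's kappa (percent agreement corrected for chance); a minimal Python sketch, with made-up observer codes:

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Chance-corrected agreement between two observers' codes."""
    n = len(rater1)
    # observed proportion of intervals where the two observers agree
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # agreement expected by chance, from each observer's marginal rates
    c1, c2 = Counter(rater1), Counter(rater2)
    p_expected = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))
    return (p_observed - p_expected) / (1 - p_expected)

# hypothetical codes for 10 observation intervals ("hit" vs "no")
obs_a = ["hit", "no", "hit", "hit", "no", "no", "hit", "no", "hit", "no"]
obs_b = ["hit", "no", "no", "hit", "no", "no", "hit", "no", "hit", "hit"]
print(cohen_kappa(obs_a, obs_b))  # 0.6 here: decent but imperfect agreement
```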
Explain converging operations as a concept in experimental design
different methods of studying the same concept should produce the same results… if they don't, you might be studying something else
What is measurement equivalence
as people age, a measure might no longer be suitable; ex: asking a 4-year-old to give up candy is very different from asking a 20-year-old… giving up their car might be a more equivalent measure
what is a nominal scale
categories ex: boys and girls
what is an ordinal scale
ordered scores ex: never, sometimes, always
what is an interval scale
ordered, equal intervals, no true zero; ex: thermometer (0 doesn't mean "nothing", but the distance from 5-10 is the same as 20-25)
what is a ratio scale
ordered, equal intervals, and a true zero point; ex: test score out of 10
internal consistency reliability?
consistency between different items on the same test that test the same concept
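internal consistency is often summarized with Cronbach's alpha; a minimal Python sketch, assuming a participants-by-items score matrix (the data below are made up):

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: rows = participants, columns = items on the same test."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# hypothetical responses: 5 participants answering 4 items on a 1-5 scale
data = [[4, 5, 4, 5],
        [2, 2, 3, 2],
        [3, 3, 3, 4],
        [5, 5, 4, 5],
        [1, 2, 2, 1]]
print(cronbach_alpha(data))  # close to 1 = items hang together well
```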
if a test tests what it claims to test then that test is ____
valid
content validity?
if it's good, the test adequately covers every part/facet of the construct it claims to measure; all relevant abilities of the subject are tested
criterion validity
the measure developed relates to other established measures (criteria) of the same construct
concurrent validity?
the test has good concurrent validity if it gives results similar to another test of the same construct (usually a well-established one) given at around the same time
predictive validity?
if the test gives results that predict performance on something else; ex: a cognitive-function test predicts job performance
construct validity
good if the test measures the construct you think it does; ex: does the DCCS task measure cognitive flexibility or motor separation/inhibition?
convergent validity
correlation with other theoretically related tasks
divergent/discriminant validity
lack of correlation with measures of different things
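convergent and discriminant validity are usually checked with simple correlations - the new measure should correlate with theoretically related tasks but not with unrelated ones; a sketch with simulated (made-up) scores:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical scores on a new cognitive-flexibility task
new_task = rng.normal(size=100)
# a theoretically related task: should correlate (convergent validity)
related_task = new_task + rng.normal(scale=0.5, size=100)
# a theoretically unrelated measure, e.g. shoe size (discriminant validity)
unrelated = rng.normal(size=100)

print(np.corrcoef(new_task, related_task)[0, 1])  # high, around 0.9
print(np.corrcoef(new_task, unrelated)[0, 1])     # near 0
```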
floor effects?
most participants do so poorly that there is very little variability because most fail the task
ceiling effects
measure is way too easy - almost everyone scores perfect or near perfect - no variability
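both effects show up as a collapse in score variability; a quick simulation sketch with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical "true" ability scores for 200 children
ability = rng.normal(loc=7, scale=2, size=200)

# a test scored out of 10 that is far too easy: scores pile up at 10
too_easy = np.clip(ability + 4, 0, 10)
# a test that is far too hard: scores pile up at 0
too_hard = np.clip(ability - 8, 0, 10)

print(np.var(too_easy))  # tiny variance -> ceiling effect
print(np.var(too_hard))  # tiny variance -> floor effect
```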
internal validity
the study is testing what it is supposed to be testing (the effect can be attributed to the manipulation rather than confounds)
can internal and external validity both be high?
not usually; there tends to be a trade-off - when one is high, the other is usually low
what is selection bias
when nonequivalent participants are assigned to different groups so they start off different
what is selective drop out
when we lose some participants in a study more than others due to non-random reasons
what is differential drop out
when you lose more participants in one condition than in another condition
what is selective data loss?
more experimenter errors occur with the most "difficult" participants or with the youngest children, so we lose more information about them
what are history effects?
effects of external events on a study
ex: 9/11, Hurricane Katrina, Christmas
what is experimenter drift?
when the variables or methods change over time so you aren't analyzing/scoring the data in the same way
what is a “good” participant
a participant who does what they think you want them to do
what is a “bright” participant?
a participant who responds to look smart
what are some response sets?
yes bias
picking the last-named option (counterbalance this)
positional responding
alternating answers to the same question
what is overstandardizing
when a researcher, in hopes of standardizing to make the data look "cleaner", limits their pool of participants; the data become very biased and only represent a single sample
ex: no females in the study because of hormones
differences in independent variable in lab vs field
lab: good control, increases internal validity, not as generalizable, decreases external validity
field: little control, generalizable, good external validity
differences in dependent variable in lab vs field
lab: ease of measurement, but reactivity and response sets to look out for
field: ecologically valid measurements, but difficult to measure
how do we make sure we have consent in developmental research
Assent from kids:
- they can stop at any time
- their needs are the most important
confidentiality in developmental research
- how many pieces of information does someone need to identify a participant? often just 3: location, gender, birthdate
- can't let parents have access to things you think children wouldn't want their parents to know
- this could also impact the results of your study
- have to tell parents in the consent form if they will not be allowed access
- careful with rare populations - easy to breach confidentiality
should we use peer nominations and evaluations in developmental research
can be useful, but can also be harmful
- make sure questions are asked in a positive form and watch for who is missed in the responses