WEEK #6 - research methods Flashcards
what is measurement ?
the assignment of a number to a characteristic of an object
what does measurement allow ?
measurement allows the characteristic in question to be compared between objects
in addition to physical objects, what else does measurement deal with ?
intangible characteristics
what are some examples of psychological construct variables that cannot be directly measured ?
intelligence, self-esteem, depression, pain, anxiety, etc.
what is the term used to describe variables that can't be directly measured ?
constructs
why can't constructs be observed directly ?
as they represent tendencies to think, feel or act in certain ways
what is the conceptual definition of a construct ?
describes the behaviours and internal processes that make up that construct and how it relates to other variables
what does conceptually mean ?
having a clear and complete conceptual definition of a construct is a prerequisite for good measurement; it allows you to make sound decisions about exactly how to measure the construct
what does operationally mean ?
defines how precisely a variable is to be measured and ensures that all researchers are measuring the construct using the same method
define operational ?
in order to accurately measure a variable or construct, an operational definition is required; clearly defining it is important because there may be multiple operational definitions for a given variable or construct
what is converging operations ?
when various operational definitions converge on the same construct and their scores are closely related to each other, it is evidence that the operational definitions are measuring the construct effectively
what are the three types of measure ?
- self-report measures
- behavioural measures
- physiological measures
what are self-report measures ?
participants report their own thoughts, feelings, and actions
what are examples of self-report measures ?
PHQ-9, GAD-7, SCAT5 symptom evaluation
what are behavioural measures ?
participants' behaviour is observed and recorded
what are examples of behavioural measures ?
allow children to play in a room and observe/record them
what are physiological measures ?
involve recording any of a wide variety of physiological processes
what are examples of physiological measures ?
HR, BP, SpO2
what are the types of data ?
continuous variables and discrete variables
what are continuous variables ?
- can assume any value
- example : distance, time, force
- accuracy of the data is dependent on the measuring device
what are discrete variables ?
- limited to certain numbers (typically whole numbers or integers)
are clinical variables continuous or discrete ?
clinical variables are discrete (when making a discrete diagnosis a person either has the condition or they do not)
how many categories can data be grouped into ?
4
what are the four categories that data can be grouped into ?
- nominal
- ordinal
- interval
- ratio
define nominal :
- mutually exclusive categories of subjects
- no quantitative differentiation between categories
- subjects are classified into one of the categories then counted
give an example of nominal :
students were classified as male or female then the number in each category was counted
define ordinal :
- also referred to as rank order scale
- quantitative ordering of the variables but does not indicate the magnitude of the relationship or difference between them
give an example of ordinal :
the top 3 finishers of a race are ranked first, second and third but there is no indication of how much faster first place was to second place and second place to third place
define interval :
- equal units of measurement with the same distance between each division of the scale
- there is no absolute zero point
give an example of interval :
the Fahrenheit scale; 60 degrees is hotter than 10 degrees, but 100 degrees is not twice as hot as 50 degrees since 0 degrees does not represent a complete absence of heat
define ratio :
- equal units of measurement between each division of the scale
- zero represents an absence of value
- since all units are proportional, ratio comparisons are appropriate
give an example of ratio :
all measurements of distance, force and time
what are the four levels of measurement ?
1) nominal
2) ordinal
3) interval
4) ratio
how many of the 4 levels of measurement are category labels ?
all 4
how many of the 4 levels of measurement are rank order ?
- 3 of the 4
- ordinal, interval, ratio
how many of the 4 levels of measurement are equal intervals ?
- 2/4
- interval and ratio
how many of the 4 levels of measurement are true zero ?
- 1/4
- ratio
define reliability :
- refers to the consistency of a measure
- does the measure consistently reflect changes in what it purports to measure ?
with reliability, what do we expect the measure to be stable across ?
time and circumstance
how many types of reliability are there ?
3
what are the three types of reliability ?
1) test-retest reliability
2) internal consistency
3) inter-rater reliability
define test-retest reliability :
consistency over time
define internal consistency :
consistency of responses across the items on a multiple-item measure
define inter-rater reliability :
consistency between different observers in their judgements
how do we measure reliability ?
- split-half correlation (this involves splitting the items into two sets, such as the first and second halves of the items or the even- and odd-numbered items)
- Cronbach's α (the mean of all possible split-half correlations for a set of items)
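The split-half and Cronbach's α ideas above can be sketched in a few lines of code. Below is a minimal Python illustration of Cronbach's α, using a made-up 5-participant × 4-item score matrix (all data hypothetical, not from the course):

```python
# Cronbach's alpha for a hypothetical 4-item questionnaire.
# Rows = participants, columns = items; the scores below are invented.
scores = [
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [5, 4, 4, 5],
    [4, 4, 3, 4],
    [1, 2, 2, 1],
]

def variance(xs):
    # population variance (divide by n); any convention works if used consistently
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(rows):
    k = len(rows[0])                      # number of items
    items = list(zip(*rows))              # one column of scores per item
    sum_item_vars = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in rows])
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

print(round(cronbach_alpha(scores), 3))
```

Because the items in this toy matrix rise and fall together across participants, α comes out high; uncorrelated items would drive it toward zero.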
define validity :
validity is the extent to which the scores from a measure represent the variable they are intended to measure
how many types of validity are there ?
4
what are the four types of validity ?
1) content validity
2) criterion validity
3) discriminant validity
4) face validity
define face validity :
- is the extent to which a test is subjectively viewed as covering the concept it purports to measure. It refers to the transparency or relevance of a test as it appears to test participants.
- face validity is at best a very weak kind of evidence that a measurement method is measuring what it is supposed to
define content validity :
- the extent to which a measure “covers” the construct of interest
TRUE OR FALSE
content validity is usually assessed quantitatively
FALSE
content validity is NOT usually assessed quantitatively
(assessed by carefully checking the measurement method against the conceptual definition of the construct)
define criterion validity :
- the extent to which people's scores on a measure are correlated with other variables (known as criteria) that one would expect them to be correlated with
- a criterion can be any variable that one has reason to think should be correlated with the construct being measured, and there will usually be many of them
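Criterion validity is typically quantified as a correlation between the measure and the criterion. A small Pearson-r sketch in Python, using an invented anxiety questionnaire score and an invented heart-rate criterion (both datasets hypothetical):

```python
# Pearson correlation between a measure and a criterion variable.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# hypothetical data: questionnaire scores and a physiological criterion
questionnaire = [12, 18, 7, 22, 15, 9]
heart_rate    = [68, 80, 62, 88, 74, 65]
print(round(pearson_r(questionnaire, heart_rate), 2))
```

A strong positive r here would count as evidence of concurrent criterion validity, since both variables are measured at the same time.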
what are the three types of criterion validity ?
1) concurrent validity
2) predictive validity
3) convergent validity
describe concurrent validity :
When the criterion is measured at the same time as the construct
describe predictive validity :
When the criterion is measured at some point in the future (after the construct has been measured)
describe convergent validity :
When the criteria are other measures of the same construct
what is discriminant validity ?
The extent to which scores on a measure are not correlated with measures of variables that are conceptually distinct.
what is efficiency ?
is the data precise and reliable, at the lowest possible cost ?
what is generality ?
can the method be applied successfully to a wide range of phenomena
how many types of measurement error are there ?
5
what are the 5 measurement errors ?
- parallax error
- calibration error
- zero error
- damage
- limit of reading of the measurement device
define parallax error :
incorrectly sighting the measurement
define calibration error :
if the scale is not accurately drawn
define zero error :
if the device doesn't have a zero or isn't correctly set to zero
define damage :
if the device is damaged or faulty
define limit of reading of the measurement device :
the measurement can only be as accurate as the smallest unit of measurement of the device
how many types of error are there ?
3
what are the three types of errors ?
1) gross errors
2) systematic errors
3) random errors
what are gross errors ?
gross errors mainly cover human mistakes in reading instruments and in recording and calculating measurement results
what are systematic errors ?
- instrumental errors
- environmental errors (external and environmental factors)
- observational errors (inaccurate readings, conversion error)
(systematic errors) what are instrumental errors ?
shortcoming, misuse, measurement accuracy
(systematic errors) what are environmental errors ?
external and environmental factors
(systematic errors) what are observational errors ?
inaccurate readings, conversion error
what are random errors ?
errors caused by disturbances about which we are unaware
what is the contingency table of hypothesis testing ?
a 2 x 2 table that crosses the sample result (reject Ho or fail to reject Ho) with the true state of the population (Ha true or Ho true); rejecting Ho when Ho is true is a Type I error, and failing to reject Ho when Ha is true is a Type II error
what is Ha true ?
difference between measures does exist
what is Ho true ?
difference between measures does not exist
TRUE OR FALSE
type 1 and type 2 errors have identifiable causes
TRUE
what are type 1 causes of error ?
- measurement error
- lack of random sample
- alpha value too liberal
- investigator bias
- improper use of one tailed test
what are type 2 causes of error ?
- measurement error
- lack of sufficient power (N too small)
- alpha value too conservative
- treatment effect not properly applied
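The logic of Type I and Type II errors can be demonstrated by simulation. This sketch uses a hypothetical setup (normal data with known sd = 1, a two-tailed z-test at alpha = .05) and estimates both error rates by repeated sampling:

```python
# Simulating Type I and Type II error rates with a simple two-sample z-test.
# All parameters below (means, sd = 1, sample sizes) are hypothetical.
import math
import random

random.seed(42)

def z_test_rejects(n, mean_a, mean_b, alpha_crit=1.96):
    # Draw two samples and test Ho: no difference between the group means.
    a = [random.gauss(mean_a, 1) for _ in range(n)]
    b = [random.gauss(mean_b, 1) for _ in range(n)]
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)  # sd known to be 1
    return abs(z) > alpha_crit                         # two-tailed, alpha = .05

trials = 2000
# Ho is actually true: every rejection here is a Type I error (~5% expected)
type1 = sum(z_test_rejects(30, 0.0, 0.0) for _ in range(trials)) / trials
# Ha is actually true but N is small: failures to reject are Type II errors
type2 = sum(not z_test_rejects(10, 0.0, 0.5) for _ in range(trials)) / trials

print(f"Type I rate:  {type1:.3f}")   # sits near the alpha level, 0.05
print(f"Type II rate: {type2:.3f}")   # large because power is low (N too small)
```

Raising the Type II sample size from 10 shrinks the Type II rate, illustrating the "lack of sufficient power (N too small)" cause listed above.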
define bias :
- factors that operate on a sample that make it unrepresentative of the population
- often subtle and may go undetected
- sufficiently large samples will eliminate unknown factors that cause bias
what are the expectancy effects of measurement bias ?
- confirmation bias
- recording biases
- halo effect
- social desirability bias
what is confirmation bias ?
finding what you were looking for
what is recording bias ?
- might be more accurate to call these 'recall biases' (occur when experimenters rely on imperfect records - e.g., their memory of their interview(s) with the participants)
- availability heuristic (more 'graphic' information is easier to recall - the 'vividness problem')
- primacy / recency effect (tendency to remember the first and last pieces of information presented during an interview)
what is the halo effect ?
- when non-experimental variables affect experimental measures
- very common in subjective appraisal of individual differences
(e.g., well-groomed individuals judged to be conscientious; attractive individuals judged to be healthy)
what is social desirability bias ?
- participant selectively reports 'positive' information to the experimenter
- impression management
what are the four expectancy effects of ‘participant types’ :
- the “good” participant
- the “bad” participant
- the “faithful” participant
- the “apprehensive” participant
define the “good” participant :
participant behaves in a way that ‘confirms’ the experimenters hypothesis
define the “bad” participant :
participant behaves in a way that ‘disconfirms’ the experimenter’s hypothesis
define the “faithful” participant :
the participant follows experimenter’s instructions scrupulously
define the “apprehensive” participant :
participant is unusually concerned with experimenter’s evaluation of him/her
what is the expectancy effect of the Pygmalion effect ?
when the experimenter causes real change in the participants due to (presumably unconscious) changes in his/her behaviour during the experiment
what is the expectancy effect of the Hawthorne effect ?
- studied the performance effects of changing a variety of working conditions
- is usually used to refer to a change in a positive direction
what is the expectancy effect of the novelty effect ?
- when used to describe behavioural changes within an experiment, is usually referring to ‘uncontrolled novelty of treatment’
- when the novelty of any new treatment is likely to cause an individual to demonstrate significant improvement in the short-term
- on average, tends to evaporate within 8 weeks of treatment presentation
what is the expectancy effect of the placebo effect ?
- the 'placebo effect' is actually a cluster of determinants:
- 'spontaneous remission' or 'maturation' (sometimes, symptoms just improve on their own, naturally)
- non-specific effects of treatment (the generalized effect of 'being in treatment')
- 're-interpretation' of outcome measures (temporary improvement confused with cure & cognitive re-appraisal of symptoms)
what are some other expectancy effects ?
biosocial experimenter cues and psychosocial experimenter cues
how do you reduce expectancy effects ?
- standardize experimenter-participant interaction
- use blinding techniques
- use deception (active or passive deception)
- convince participant that you can detect lying
talk about safeguards against misleading studies :
- competition for research funding (only “the best” projects are funded)
- results are disseminated in peer-reviewed journals (experts decide what is worthy of publication)
- replication, replication, replication! (guards against Type I error and “invisible bias”)
what are the sources of research funding ?
- private industry (e.g. drug companies)
- government agencies
- philanthropic organizations
- special interest groups
what are some problems with peer review ?
- non-democratic
- “error of central tendency”
- assumes that reviewers are consistent, competent, and timely in their reviews
describe the non-democratic problem with peer review :
- limited pool of reviewers
- generally consists of individuals with similar research objectives (i.e. individuals in competition with the scientist)
- selection of reviewer’s at editors discretion
(decision to accept/reject largely in the hands of one person)
describe the "error of central tendency" problem with peer review :
moderate viewpoints more fundable/publishable than more novel viewpoints
what is the “wastebasket effect” ?
non-significant findings often are not published