WEEK #6 - research methods Flashcards

1
Q

what is measurement ?

A

is the assignment of a number to a characteristic of an object

2
Q

what does measurement allow ?

A

measurement allows the characteristic in question to be compared between objects

3
Q

in addition to physical objects, what else does measurement deal with ?

A

intangible characteristics

4
Q

what are some examples of psychological construct variables that cannot be directly measured ?

A

intelligence, self-esteem, depression, pain, anxiety, etc.

5
Q

what is the term used to describe variables that cannot be directly measured ?

A

constructs

6
Q

why can't constructs be observed directly ?

A

as they represent tendencies to think, feel or act in certain ways

7
Q

what is the conceptual definition of a construct ?

A

describes the behaviours and internal processes that make up that construct and how it relates to other variables

8
Q

what does conceptually mean ?

A

having a clear and complete conceptual definition of a construct is a prerequisite for good measurement. it allows you to make sound decisions about exactly how to measure the construct

9
Q

what does operationally mean ?

A

defines how precisely a variable is to be measured and ensures that all researchers are measuring the construct using the same method

10
Q

define operational ?

A

in order to accurately measure a variable or construct, an operational definition is required; clearly stating the operational definition is important because a single variable or construct may have multiple operational definitions

11
Q

what are converging operations ?

A

when various operational definitions converge on the same construct and their scores are closely related to each other, it is evidence that the operational definitions are measuring the construct effectively

12
Q

what are the three types of measure ?

A
  • self-report measures
  • behavioural measures
  • physiological measures
13
Q

what are self-report measures ?

A

participants report their own thoughts, feelings, and actions

14
Q

what are examples of self-report measures ?

A

PHQ-9, GAD-7, SCAT5 symptom evaluation

15
Q

what are behavioural measures ?

A

participants' behaviour is observed and recorded

16
Q

what are examples of behavioural measures ?

A

allow children to play in a room and observe/record them

17
Q

what are physiological measures ?

A

involve recording any of a wide variety of physiological processes

18
Q

what are examples of physiological measures ?

A

HR, BP, SpO2

19
Q

what are the types of data ?

A

continuous variables and discrete variables

20
Q

what are continuous variables ?

A
  • can assume any value
  • example : distance, time, force
  • accuracy of the data is dependent on the measuring device
21
Q

what are discrete variables ?

A
  • limited to certain numbers (typically whole numbers or integers)
22
Q

are clinical variables continuous or discrete ?

A

clinical variables are discrete (when making a discrete diagnosis a person either has the condition or they do not)

23
Q

how many categories can data be grouped into ?

A

4

24
Q

what are the four categories that data can be grouped into ?

A
  • nominal
  • ordinal
  • interval
  • ratio
25
define nominal :
- mutually exclusive categories of subjects - no quantitative differentiation between categories - subjects are classified into one of the categories then counted
26
give an example of nominal :
students were classified as male or female then the number in each category was counted
27
define ordinal :
- also referred to as rank order scale - quantitative ordering of the variables but does not indicate the magnitude of the relationship or difference between them
28
give an example of ordinal :
the top 3 finishers of a race are ranked first, second and third but there is no indication of how much faster first place was than second place, or second place than third place
29
define interval :
- equal units of measurement with the same distance between each division of the scale - there is no absolute zero point
30
give an example of interval :
the Fahrenheit scale: 60 degrees is hotter than 10 degrees, but 100 degrees is not twice as hot as 50 degrees since 0 degrees does not represent a complete absence of heat (see the sketch below)
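the point about ratios can be checked with a quick calculation. this is a minimal sketch (not part of the original cards; the 50 and 100 degree values are just the ones from the example) showing that the apparent 2:1 ratio in Fahrenheit disappears once the values are converted to Kelvin, a ratio scale with a true zero:

```python
# illustrative sketch (not from the cards): why ratio statements fail on an interval scale.
# Fahrenheit has no true zero, so ratios of raw values are meaningless; converting to
# Kelvin (a ratio scale with an absolute zero) shows the actual ratio.

def fahrenheit_to_kelvin(f: float) -> float:
    """standard conversion: K = (F - 32) * 5/9 + 273.15"""
    return (f - 32.0) * 5.0 / 9.0 + 273.15

low_f, high_f = 50.0, 100.0

naive_ratio = high_f / low_f                                    # 2.0 -- looks like "twice as hot"
true_ratio = fahrenheit_to_kelvin(high_f) / fahrenheit_to_kelvin(low_f)

print(f"ratio of raw Fahrenheit values: {naive_ratio:.2f}")     # 2.00 (misleading)
print(f"ratio on the Kelvin scale:      {true_ratio:.3f}")      # ~1.098, i.e. only ~10% hotter
```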
31
define ratio :
- equal units of measurement between each division of the scale - zero represents an absence of value - since all units are proportional, ratio comparisons are appropriate
32
give an example of ratio :
all measurements of distance, force and time
33
what are the four levels of measurement ?
1) nominal 2) ordinal 3) interval 4) ratio
34
how many of the 4 levels of measurement are category labels ?
all 4
35
how many of the 4 levels of measurement are rank order ?
- 3 of the 4 - ordinal, interval, ratio
36
how many of the 4 levels of measurement are equal intervals ?
- 2/4 - interval and ratio
37
how many of the 4 levels of measurement are true zero ?
- 1/4 - ratio
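as a compact summary of cards 33-37, here is an illustrative sketch (an assumed encoding, not from the cards) of the cumulative properties of the four levels of measurement:

```python
# illustrative sketch (assumed encoding, not from the cards): the cumulative properties
# of the four levels of measurement summarised by cards 33-37.

PROPERTIES = {
    "nominal":  {"category labels"},
    "ordinal":  {"category labels", "rank order"},
    "interval": {"category labels", "rank order", "equal intervals"},
    "ratio":    {"category labels", "rank order", "equal intervals", "true zero"},
}

def levels_with(prop: str) -> list[str]:
    """return the measurement levels that have a given property."""
    return [level for level, props in PROPERTIES.items() if prop in props]

print(levels_with("category labels"))   # all 4
print(levels_with("rank order"))        # 3 of 4: ordinal, interval, ratio
print(levels_with("equal intervals"))   # 2 of 4: interval, ratio
print(levels_with("true zero"))         # 1 of 4: ratio
```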
38
define reliability ?
- refers to the consistency of a measure - does the measure consistently reflect changes in what it purports to measure ?
39
with reliability, what do we want the measure to be stable across ?
time and circumstance
40
how many types of reliability are there ?
3
41
what are the three types of reliability ?
1) test-retest reliability 2) internal consistency 3) inter-rater reliability
42
define test-retest reliability :
consistency over time
43
define internal consistency :
consistency of responses across the items on a multiple-item measure
44
define inter-rater reliability :
consistency between different observers in their judgements
45
how do we measure reliability ?
- split-half correlation (this involves splitting the items into two sets, such as the first and second halves of the items or the even- and odd-numbered items, and correlating the two totals) - Cronbach's α (the mean of all possible split-half correlations for a set of items) - see the sketch below
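a minimal sketch of both indices, assuming a small hypothetical set of item scores (the data and item count are made up for illustration):

```python
# illustrative sketch (hypothetical item scores): the two reliability indices named on
# this card -- a split-half correlation and Cronbach's alpha -- for a 6-item measure.
import numpy as np

# rows = respondents, columns = items (made-up questionnaire responses)
scores = np.array([
    [4, 5, 4, 3, 4, 5],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 4],
    [5, 4, 5, 5, 4, 5],
    [1, 2, 1, 2, 2, 1],
])

# split-half correlation: total on the odd-numbered items vs total on the even-numbered items
odd_total = scores[:, 0::2].sum(axis=1)
even_total = scores[:, 1::2].sum(axis=1)
split_half_r = np.corrcoef(odd_total, even_total)[0, 1]

# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total score)
k = scores.shape[1]
item_variances = scores.var(axis=0, ddof=1)
total_variance = scores.sum(axis=1).var(ddof=1)
cronbach_alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

print(f"split-half r     = {split_half_r:.2f}")
print(f"Cronbach's alpha = {cronbach_alpha:.2f}")
```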
46
define validity :
validity is the extent to which the scores from a measure represent the variable they are intended to measure
47
how many types of validity are there ?
4
48
what are the four types of validity ?
1) content validity 2) criterion validity 3) discriminant validity 4) face validity
49
define face validity :
- is the extent to which a test is subjectively viewed as covering the concept it purports to measure. It refers to the transparency or relevance of a test as it appears to test participants. - face validity is at best a very weak kind of evidence that a measurement method is measuring what it is supposed to
50
define content validity :
- the extent to which a measure "covers" the construct of interest
51
TRUE OR FALSE content validity is usually assessed quantitatively
FALSE content validity is NOT usually assessed quantitatively (it is assessed by carefully checking the measurement method against the conceptual definition of the construct)
52
define criterion validity :
- the extent to which people’s scores on a measure are correlated with other variables (known as criteria) that one would expect them to be correlated with. - A criterion can be any variable that one has reason to think should be correlated with the construct being measured, and there will usually be many of them
53
what are the three forms of criterion validity ?
1) concurrent validity 2) predictive validity 3) convergent validity
54
describe concurrent validity :
When the criterion is measured at the same time as the construct
55
describe predictive validity :
When the criterion is measured at some point in the future (after the construct has been measured)
56
describe convergent validity :
when the criterion is another measure of the same construct
57
what is discriminant validity ?
The extent to which scores on a measure are not correlated with measures of variables that are conceptually distinct.
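an illustrative sketch of convergent versus discriminant validity using simulated data (the variable names and effect sizes are hypothetical): a new measure should correlate strongly with another measure of the same construct and only weakly with a conceptually distinct variable:

```python
# illustrative sketch (simulated data, hypothetical variable names): convergent validity
# expects a strong correlation with another measure of the same construct; discriminant
# validity expects a weak correlation with a conceptually distinct variable.
import numpy as np

rng = np.random.default_rng(0)
n = 200

new_anxiety_scale = rng.normal(size=n)                                          # measure being validated
established_anxiety_scale = new_anxiety_scale + rng.normal(scale=0.5, size=n)   # same construct
shoe_size = rng.normal(size=n)                                                  # conceptually distinct

convergent_r = np.corrcoef(new_anxiety_scale, established_anxiety_scale)[0, 1]
discriminant_r = np.corrcoef(new_anxiety_scale, shoe_size)[0, 1]

print(f"convergent r   = {convergent_r:.2f}")    # should be high
print(f"discriminant r = {discriminant_r:.2f}")  # should be near zero
```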
58
what is efficiency ?
is the data precise and reliable, at the lowest possible cost ?
59
what is generality ?
can the method be applied successfully to a wide range of phenomena
60
how many measurement errors are there ?
5
61
what are the 5 measurement errors ?
- parallax error - calibration error - zero error - damage - limit of reading of the measurement device
62
define parallax error :
incorrectly sighting the measurement
63
define calibration error :
if the scale is not accurately drawn
64
define zero error :
if the device doesn’t have a zero or isn’t correctly set to zero
65
define damage :
if the device is damaged or faulty
66
define limit of reading of the measurement device :
the measurement can only be as accurate as the smallest unit of measurement of the device
67
how many types of error are there ?
3
68
what are the three types of errors ?
1) gross errors 2) systematic errors 3) random errors
69
what are gross errors ?
gross errors mainly cover human mistakes in reading instruments and in recording and calculating measurement results
70
what are systematic errors ?
- instrumental errors - environmental errors (external and environmental factors) - observational errors (inaccurate readings, conversion error)
71
(systematic errors) what are instrumental errors ?
shortcoming, misuse, measurement accuracy
72
(systematic errors) what are environmental errors ?
external and environmental factors
73
(systematic errors) what are observational errors ?
inaccurate readings, conversion error
74
what are random errors ?
errors caused by disturbances about which we are unaware
75
what is the contingency table of hypothesis testing ?
a table crossing the sample result against the population result: whether Ha is true or Ho is true
76
what is Ha true ?
difference between measures does exist
77
what is Ho true ?
difference between measures does not exist
78
TRUE OR FALSE type 1 and type 2 are causes of error
TRUE
79
what are the causes of type 1 error ?
- measurement error - lack of a random sample - alpha value too liberal - investigator bias - improper use of a one-tailed test
80
what are the causes of type 2 error ?
- measurement error - lack of sufficient power (N too small) - alpha value too conservative - treatment effect not properly applied
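a minimal simulation sketch (assumed sample size, effect size and alpha, not from the cards) tying the two error types back to the contingency table: a Type I error is a significant result when Ho is true, and a Type II error is a non-significant result when Ha is true. note how the Type II rate depends on power (N and effect size):

```python
# illustrative sketch (simulation with assumed N, effect size and alpha): estimating
# Type I and Type II error rates for a two-sample t-test.
# Type I error  = significant result when Ho is true.
# Type II error = non-significant result when Ha is true.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n, n_sims = 0.05, 20, 5000

type1 = type2 = 0
for _ in range(n_sims):
    # Ho true: both groups drawn from the same population
    a, b = rng.normal(0, 1, n), rng.normal(0, 1, n)
    if stats.ttest_ind(a, b).pvalue < alpha:
        type1 += 1

    # Ha true: a real difference of 0.5 SD exists between the groups
    a, b = rng.normal(0, 1, n), rng.normal(0.5, 1, n)
    if stats.ttest_ind(a, b).pvalue >= alpha:
        type2 += 1

print(f"Type I error rate  ~ {type1 / n_sims:.3f} (close to alpha = {alpha})")
print(f"Type II error rate ~ {type2 / n_sims:.3f} (shrinks as N or the effect size grows)")
```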
81
define bias :
- factors that operate on a sample that make it unrepresentative of the population - often subtle and may go undetected - sufficiently large samples will eliminate unknown factors that cause bias
82
what are expectancy effects of measurement bias :
- confirmation bias - recording biases - halo effect - social desirability bias
83
what is confirmation bias ?
finding what you were looking for
84
what is recording bias ?
- might be more accurate to call these ‘recall biases’ (occur when experimenters rely on imperfect records, e.g., their memory of their interview(s) with the participants) - availability heuristic (more ‘graphic’ information is easier to recall, the ‘vividness problem’) - primacy / recency effect (tendency to remember the first and last pieces of information presented during an interview)
85
what is the halo effect ?
- when non-experimental variables affect experimental measures - very common in subjective appraisal of individual differences (e.g., well-groomed individuals judged to be conscientious; attractive individuals judged to be healthy)
86
what is social desirability bias ?
- participant selectively reports ‘positive’ information to the experimenter - impression management
87
what are the four expectancy effects of 'participant types' :
- the "good" participant - the "bad" participant - the "faithful" participant - the "apprehensive" participant
88
define the "good" participant :
participant behaves in a way that ‘confirms’ the experimenter’s hypothesis
89
define the "bad" participant :
participant behaves in a way that ‘disconfirms’ the experimenter’s hypothesis
90
define the "faithful" participant :
the participant follows experimenter’s instructions scrupulously
91
define the "apprehensive" participant :
participant is unusually concerned with experimenter’s evaluation of him/her
92
what is the expectancy effect of the Pygmalion effect ?
when the experimenter causes real change in the participants due to (presumably unconscious) changes in his/her behaviour during the experiment
93
what is the expectancy effect of the Hawthorne effect ?
- named after studies of the performance effects of changing a variety of working conditions, where participants improved simply because they knew they were being observed - usually used to refer to a change in a positive direction
94
what is the expectancy effect of the halo effect ?
- when used to describe behavioural changes within an experiment, is usually referring to ‘uncontrolled novelty of treatment’ - when the novelty of any new treatment is likely to cause an individual to demonstrate significant improvement in the short-term - on average, tends to evaporate within 8 weeks of treatment presentation
95
what is the expectancy effect of the placebo effect ?
- the ‘placebo effect’ is actually a cluster of determinants: * ‘spontaneous remission’ or ‘maturation’ (sometimes, symptoms just improve on their own, naturally) * non-specific effects of treatment (the generalized effect of ‘being in treatment’) * ‘re-interpretation’ of outcome measures (temporary improvement confused with cure & cognitive re-appraisal of symptoms)
96
what are some other expectancy effects ?
biosocial experimenter cues and psychosocial experimenter cues
97
how do you reduce expectancy effects ?
- standardize experimenter-participant interaction - use blinding techniques - use deception (active or passive deception) - convince participant that you can detect lying
98
talk about safeguards against misleading studies :
- competition for research funding (only “the best” projects are funded) - results are disseminated in peer-reviewed journals (experts decide what is worthy of publication) - replication, replication, replication! (guards against Type I error and “invisible bias”)
99
what are the funding sources ?
- private industry (e.g. drug companies) - government agencies - philanthropic organizations - special interest groups
100
what are some problems with peer review ?
- non-democratic - "error of central tendency" - assumes that reviewers are consistent, competent, and timely in their reviews
101
describe the non-democratic problem with peer review :
- limited pool of reviewers - generally consists of individuals with similar research objectives (i.e. individuals in competition with the scientist) - selection of reviewers at the editor’s discretion (decision to accept/reject largely in the hands of one person)
102
describe the "error of central tendency" problem with peer review continued :
moderate viewpoints more fundable/publishable than more novel viewpoints
103
what is the "wastebasket effect" ?
non-significant findings often are not published