Chapter 2 and 3 Flashcards

1
Q

Sources of ideas

A

personal interest, observation, reading, problem solving

2
Q

Research that is directed toward solving practical problems is often classified as ____ research

A

Research that is directed toward solving practical problems is often classified as APPLIED research

3
Q

Studies that are intended to solve theoretical issues are classified as ___ research

A

Studies that are intended to solve theoretical issues are classified as BASIC research

4
Q

Primary vs Secondary Source

A

Primary: firsthand report of observations or research results written by the individual who actually conducted the research and made the observation
Secondary: description or summary of another person’s work. Written by someone who did NOT participate in the research or observations being discussed.

5
Q

Issue with secondary sources

A
  • they could be biased or inaccurate
  • not written directly by the researcher/observer
  • only pieces of the original study may be taken, and perhaps reshaped to fit the author's own writing
  • may share only part of the truth; the account can be distorted or false
6
Q

What is an abstract?

A

a brief summary of the publication, usually around 200 words or fewer

7
Q

Do full-text databases usually contain more or less info on a subject?

A

less

8
Q

What is in an introduction?

A
  • the intro discusses previous research that forms the foundation for the current research study
  • clear statement of the problem being investigated
  • hypothesis and prediction
9
Q

What is in the methods section?

A

details concerning the participants and the procedures used in the study

10
Q

Results section

A

presents details of the statistical analysis.

Not usually important for generating new research ideas

11
Q

Discussion section

A

summarizes the results of the study, states the conclusions, and notes potential applications.

  • hypothesis supported?
  • alternate explanations/limitations
12
Q

What qualities must a hypothesis possess?

A
  • must be LOGICAL: based on observation, previous research, etc.
  • must be POSITIVE: indicates that a relationship does exist.
  • must be TESTABLE: testable prediction for which data can be collected to support
  • must be SIMPLE
  • must be FALSIFIABLE: have a way to prove it wrong or support alternate hypotheses
13
Q

hypothesis vs prediction

A

hypothesis is a more general statement, while a prediction is MEASURABLE and SPECIFIC

14
Q

what is a testable hypothesis?

A

one for which all of the variables, events, and individuals can be defined and observed

15
Q

What is a refutable hypothesis?

A

one that can be demonstrated to be false, i.e., it is possible for the outcome to be different from the prediction

16
Q

What is the problem with this hypothesis?:

“For adults, there is no relationship between age and memory ability.”

A

It is not positive, ie. it does not indicate that a relationship exists between variables.
A PREDICTION THAT DENIES EXISTENCE IS UNTESTABLE.

17
Q

A researcher designs a study to determine whether the number of syllables per word influences people’s ability to recall a list of 20 words. This study can be classified as ___ research

A

basic

18
Q

features of pseudoscience

A

-hypotheses are not falsifiable
-scientific-sounding terminology
-supportive evidence is anecdotal or relies on “expert” testimony
-claims are vague and appeal to preconceived ideas
-claims are never revised to account for new data; conflicting data are ignored
-if tests are reported, the methodology is unscientific and the data are questionable

19
Q

4 GOALS OF PSYCHOLOGICAL SCIENCE RESEARCH

A
  1. DESCRIBING BEHAVIOUR: careful observation and measurement
  2. PREDICTING BEHAVIOUR: systematically related variables/events
  3. DETERMINING CAUSE OF BEHAVIOUR: cause and effect
  4. EXPLAINING BEHAVIOUR
20
Q

CRITERIA FOR CAUSAL CLAIMS

A
  • covariation of cause and effect
  • temporal precedence
  • elimination of alternative explanations
21
Q

What does “covariation of cause and effect” mean?

A

do the two events or variables happen together?

22
Q

temporal precedence?

A

does one event happen before the other

23
Q

in the claim: “violent crime and ice cream sales increase at the same time,” what is an alternate explanation/third variable?

A

warm weather causes an increase in both violent crime and ice cream sales

24
Q

Applied vs basic research

A

applied: solving practical problem
basic: gaining theoretical understanding

25
Q

What is a variable?

A

anything that can have more than one level (e.g., intelligence, GSR, happiness, etc.)

26
Q

Types of variable

A
  • situational
  • response
  • participant
  • mediating
27
Q

Situational variable

A

a feature of the environment or situation, e.g., time of day, lighting in the room

28
Q

Response variable

A
  • what is measured
  • e.g., behaviour, pressing a button, answering a test, etc.

29
Q

Participant variable

A
  • what makes an individual unique
  • e.g., English as a second language, age, gender, etc.

30
Q

Mediating variable

A
  • connects one variable to another
  • alternative explanation
  • e.g., poverty is associated with a shorter lifespan; a mediating variable would be access to healthcare
31
Q

What is an operational definition?

A
  • how am I going to measure/change a variable?
  • procedure for indirectly measuring and defining a variable that cannot be observed or measured directly.
  • specifies a measurement procedure (a set of operations) for measuring an external, observable behaviour, and uses the resulting measurements as a definition and a measurement of the hypothetical construct.
32
Q

What is a theory in behavioural science?

A

A theory is a set of statements about the mechanisms underlying a particular behaviour.
A good theory generates predictions about behaviour.

33
Q

What are constructs?

A

hypothetical variables or mechanisms that help explain and predict behavior in a theory.
e.g., anxiety or self-esteem

34
Q

Give an example of how a construct can be influenced by external stimuli and, in turn, can influence external behaviour

A

external factors such as rewards can affect motivation (a construct), and motivation can then affect performance (a behaviour)

35
Q

how can hunger be operationally defined?

A

as the number of hours of food deprivation.

e.g., in an experiment, measure how much food a rat eats; that amount defines how hungry it is.

36
Q

Limitations of operational definitions

A
  1. may leave out important characteristics of the construct
  2. may add extra characteristics
  3. the definition chosen can impact the conclusions
37
Q

most common way intelligence is operationally defined?

A

IQ test

38
Q

How can you reduce the problem of leaving out important components of a construct when operationally defining it?

A

One way to reduce this problem is to INCLUDE TWO OR MORE DIFFERENT PROCEDURES TO MEASURE THE SAME VARIABLE.

39
Q

Whenever the variables in a research study are hypothetical constructs, you must use ___ ___ to define and measure the variables

A

operational definitions

40
Q

Positive relationship

A

two measurements change together in the same direction

41
Q

Negative relationship

A

two measures change in opposite directions so that people who score high on one measure tend to score low on the other

42
Q

a consistent positive relationship produces a correlation near ___, a consistent negative relationship produces a correlation near ___, and an inconsistent relationship produces a correlation near ___

A

a consistent positive relationship produces a correlation near +1, a consistent negative relationship produces a correlation near -1, and an inconsistent relationship produces a correlation near 0.
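To make these values concrete, here is a minimal Python sketch with invented data (hours studied, exam scores, errors made, and shoe sizes are all made up for illustration) that computes Pearson's r for a consistent positive, a consistent negative, and an inconsistent relationship:

```python
# Illustrative sketch: Pearson correlation for three kinds of relationships.
# All data sets below are invented purely for demonstration.

def pearson_r(x, y):
    """Compute Pearson's correlation coefficient for two equal-length lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    var_x = sum((xi - mean_x) ** 2 for xi in x)
    var_y = sum((yi - mean_y) ** 2 for yi in y)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

hours_studied = [1, 2, 3, 4, 5]
exam_score    = [52, 61, 70, 78, 90]   # rises with hours studied
errors_made   = [20, 16, 11, 7, 3]     # falls as hours studied rise
shoe_size     = [8, 11, 7, 10, 9]      # unrelated to hours studied

print(round(pearson_r(hours_studied, exam_score), 2))   # ~ +1.0 (consistent positive)
print(round(pearson_r(hours_studied, errors_made), 2))  # ~ -1.0 (consistent negative)
print(round(pearson_r(hours_studied, shoe_size), 2))    # ~ +0.1 (inconsistent, near 0)
```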

43
Q

an inconsistent relationship, or a correlation near zero, would indicate that a test is not a ___ measure of a variable

A

valid

44
Q

what is validity?

A

Does the measurement procedure actually measure what it claims to be measuring?

45
Q

validity of research design

A

in an experiment claiming that X causes Y, how valid is that claim?

46
Q

6 kinds of validity of measurement:

A
  • face validity
  • concurrent validity
  • predictive validity
  • construct validity
  • convergent validity
  • divergent validity
47
Q

Face validity

A

an unscientific form of validity demonstrated when a measurement procedure superficially appears to measure what it claims to measure.
ie. does the measurement technique look like it measures the variable that it claims to measure?

48
Q

Concurrent validity

A

demonstrated when scores from a new measure are directly related to scores obtained from an established measure of the same variable

49
Q

Predictive validity

A

does the measure predict later behaviour?
When the measurements of a construct accurately predict behaviour, the measurement procedure is said to have PREDICTIVE VALIDITY

50
Q

Construct validity

A

Requires that the score obtained from a measurement procedure behave exactly the same as the variable itself.
Construct validity is based on many research studies that use the same measurement procedure and grows gradually as each new study contributes more evidence.
ie. does the measure match the theoretical construct?

51
Q

example of construct validity

A

a therapist gives the BDI to clients and finds that those they judged to be depressed score higher; this expert judgment supports the validity of the BDI

52
Q

Convergent validity

A

demonstrated by a strong relationship between the scores obtained from two or more different methods of measuring the SAME construct.
-ie. are scores on the measure similar to other measures of the same construct?

53
Q

Divergent validity

A

demonstrated by showing little or no relationship between the measurements of two DIFFERENT constructs

54
Q

When is a measurement procedure said to have reliability?

A

if it produces identical (or nearly identical) results when it is used repeatedly to measure the same individual under the same conditions.
RELIABILITY = IS THE MEASURE STABLE/CONSISTENT?

55
Q

The concept of reliability is based on the assumption that the variable being measured is ____ or ___

A

stable or constant.

ie. it won’t change dramatically from one day to the next

56
Q

equation to measure error in a measurement procedure

A

Measurement score = true score + error
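As a rough illustration (the true score and error spread below are invented values), the equation can be simulated by adding random error to a fixed true score; the observed scores scatter around the true value:

```python
import random

# Toy simulation of: measurement score = true score + error.
random.seed(1)

true_score = 50                      # the stable quantity we are trying to measure

def measure():
    error = random.gauss(0, 3)       # random error from observer, environment, participant
    return true_score + error

observations = [measure() for _ in range(5)]
print([round(obs, 1) for obs in observations])            # scores scattered around 50
print(round(sum(observations) / len(observations), 1))    # the mean falls near the true score
```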

57
Q

most common sources of error and their definitions

A
  • OBSERVER ERROR: individual who makes the measurements can introduce simple human error into the measurement process (e.g., millisecond errors from pressing handheld stopwatch)
  • ENVIRONMENTAL CHANGES: small changes in the environment from one trial to another may influence measurements. It is hard to replicate the exact environment multiple times.
  • PARTICIPANT CHANGES: participant can change between measurements. e.g., degree of focus and attention or feeling hunger during a test
58
Q

3 reliability measures:

A
  1. test-retest reliability: same individual should score similarly on 2 successive tests
  2. inter-rater reliability: degree of agreement between 2 observers who simultaneously record measurements of the behaviour.
  3. split-half reliability: splitting the items on a questionnaire or test in half, computing a separate score for each half, and then calculating the degree of consistency between the two scores for a group of participants. The two halves should score similarly (see the sketch below).
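A minimal sketch of split-half reliability, assuming made-up responses to a hypothetical 6-item test: split the items into odd and even halves, score each half per participant, and correlate the two half-scores.

```python
from statistics import correlation   # requires Python 3.10+

# Invented data: each row is one participant's responses (1 = correct, 0 = incorrect).
responses = [
    [1, 1, 1, 1, 1, 1],
    [1, 1, 1, 0, 1, 1],
    [1, 0, 1, 1, 0, 1],
    [0, 1, 0, 0, 1, 0],
    [0, 0, 1, 0, 0, 0],
]

odd_half  = [sum(r[0::2]) for r in responses]   # score on items 1, 3, 5
even_half = [sum(r[1::2]) for r in responses]   # score on items 2, 4, 6

print(round(correlation(odd_half, even_half), 2))   # ~0.88: the halves agree, suggesting consistency
```

In practice the split-half correlation is often adjusted upward (e.g., with the Spearman-Brown formula) because each half contains only half of the items.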
59
Q

Reliability vs validity

A

reliability: consistent scores
validity: measuring what is intended to be measured

60
Q

Is it possible to have a valid measure that is not reliable?

A

NO

61
Q

Is it possible to have a reliable measure that is not valid?

A

YES

62
Q

What is the Barnum effect?

A

people tend to agree with vague personality descriptions that appear to apply to them.

63
Q

How can the Barnum effect be cancelled?

A

by including negative attributes in the description (people are far less likely to accept unflattering statements as true of themselves)

64
Q

What is the accuracy of a measurement?

A

the degree to which the measurement conforms to the established standard

65
Q

can a measurement process be valid and reliable, but not accurate?

A

yes.
e.g., a car is travelling at 40 mph and the speedometer consistently (reliably) reads 50 mph; when the car is going 30 mph, the speedometer consistently reads 40 mph.
The speedometer correctly differentiates speeds, so it produces valid measurements of speed, and its consistency means it is reliable, but the readings are inaccurate because they do not match the true speed.
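The same speedometer example can be written as a tiny numeric check (values taken from the card, with a constant +10 mph bias assumed): the readings track differences in true speed exactly, so they are consistent and differentiate speeds, but they never match the established standard.

```python
# Speedometer from the card: consistently reads 10 mph above the true speed.
true_speeds = [30, 40, 50, 60]
readings    = [speed + 10 for speed in true_speeds]   # reliable: same bias every time

# Differences between readings mirror differences between true speeds (valid ordering)...
print([readings[i + 1] - readings[i] for i in range(len(readings) - 1)])  # [10, 10, 10]
# ...but no reading equals the established standard (inaccurate).
print(any(r == t for r, t in zip(readings, true_speeds)))  # False
```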

66
Q

The set of categories used for classification is called the ___ ___ ____

A

scale of measurement

67
Q

4 different types of measurement scales:

A
  1. nominal
  2. ordinal
  3. interval
  4. ratio
68
Q

Nominal scale

A
  • qualitative data arranged in discrete, unordered categories
  • categories have names but are not related to each other in any systematic way
69
Q

Ordinal scale

A
  • data arranged in order of magnitude
  • e.g., small, medium and large; letter grades; junior, sophomore, and senior year; etc.
  • with measurements from an ordinal scale, we can determine whether 2 individuals are different and the direction of that difference, but not the magnitude of the difference between the two individuals.
70
Q

Interval scale

A
  • series of equal intervals
  • ZERO IS ARBITRARY
    e.g., temperature in degrees Celsius, clock time, etc.
71
Q

Ratio scale

A
  • series of equal intervals
  • ZERO MEANS SOMETHING
    e.g., weight
72
Q

Why can’t you multiply or divide values in an interval scale?

A

because zero is arbitrary
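A small worked example (standard Celsius-to-Kelvin conversion) shows why ratios are meaningless when zero is arbitrary: 20 °C looks like "twice" 10 °C, but the same two temperatures on the Kelvin scale, where zero is a true zero, are nowhere near a 2:1 ratio.

```python
# Ratios on an interval scale depend on where the arbitrary zero sits.
c1, c2 = 10.0, 20.0
print(c2 / c1)                         # 2.0 -- looks like "twice as hot"
k1, k2 = c1 + 273.15, c2 + 273.15      # convert to Kelvin (a ratio scale with a true zero)
print(round(k2 / k1, 3))               # ~1.035 -- the 2:1 "ratio" was an artifact of the zero point
```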

73
Q

What is the issue with rating scales?

A

They are often treated like interval scales, when they should be ordinal.
Researchers assume that differences between labels are equal.
e.g., rating happiness from 1 to 5

74
Q

3 categories of modalities of measurement

A
  • self-report measures
  • physiological measures
  • behavioural measures
75
Q

What category of external expression to define fear is used in each scenario:

  1. individual claims to be afraid
  2. individual refuses to travel
  3. individual has an increased heartrate
A
  1. self-report
  2. behavioural
  3. physiological
76
Q

pros and cons of self-report measures

A

pros:
- can ask people (almost) anything
- useful when behaviour cannot be observed (e.g., past events)
- inexpensive and easy
cons:
- unreliable memories
- untruthfulness
- unwillingness to answer
- participants may not know how to answer

77
Q

pros and cons of physiological measures:

A

pros:
- objective
- measures what people cannot report or do not know how to report
cons:
- expensive
- there could be alternative explanations for physiological changes
- can sometimes be controlled (e.g., controlling heart rate during a lie-detector test)

78
Q

pros and cons of behavioural measures

A

pros:
- more control (e.g., in a lab)
- more generalizable/realistic (in naturalistic observations)
cons:
- behaviour may be temporary or situational

79
Q

applications of barnum effect

A

personality tests, horoscopes, fortune telling

80
Q

can the mode be determined for nominal, ordinal, interval, and ratio scales?

A

yes

81
Q

Can the median be determined for nominal, ordinal, interval, and ratio scales?

A

Only for ordinal, interval and ratio. Not for nominal scale.

*with an ordinal scale, you can determine the “center” category, but not the distances between categories

82
Q

Can the mean be determined for nominal, ordinal, interval, and ratio scales?

A

only for interval and ratio scales
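Putting the mode/median/mean cards together, a short sketch with invented data shows which central-tendency statistic makes sense for which scale, using Python's statistics module:

```python
from statistics import mode, mean

# Invented data, one list per scale of measurement.
eye_colour  = ["brown", "blue", "brown", "green", "brown"]   # nominal
shirt_size  = ["S", "M", "M", "L", "XL"]                     # ordinal (ordered labels)
temperature = [18.5, 20.0, 21.5, 19.0, 22.0]                 # interval (degrees Celsius)
weight_kg   = [61.0, 70.5, 82.3, 58.8, 77.1]                 # ratio

print(mode(eye_colour))              # mode works even on a nominal scale -> "brown"

order = ["S", "M", "L", "XL"]
ranked = sorted(shirt_size, key=order.index)
print(ranked[len(ranked) // 2])      # median = middle ordered category -> "M"

print(mean(temperature), mean(weight_kg))   # mean needs equal intervals: interval or ratio only
# A mean of eye_colour or shirt_size would be meaningless, since the "distances"
# between categories are undefined on nominal and ordinal scales.
```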

83
Q

Range effect

A

sensitivity problem when scores obtained in a research study tend to cluster at one end of the measurement scale. Includes ceiling and floor effects

84
Q

Ceiling effect

A

when the range of scores is restricted at the high end.

Usually the task is too easy

85
Q

floor effect

A

clustering of scores at the low end of the scale.

Usually the task is too hard
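A toy simulation (made-up ability values and an arbitrary 10-point test) shows how range effects hide real differences: when the test is too easy, scores pile up at the maximum (ceiling), and when it is too hard, scores pile up at zero (floor).

```python
# Toy illustration of ceiling and floor effects on a 10-point test.
# "abilities" are invented; each score is clamped to the 0-10 range of the scale.
abilities = [3, 5, 7, 9, 11, 13]                  # real differences we would like to detect

easy_test = [min(10, a + 4) for a in abilities]   # too easy: scores cluster at 10 (ceiling)
hard_test = [max(0, a - 8) for a in abilities]    # too hard: scores cluster at 0 (floor)

print(easy_test)   # [7, 9, 10, 10, 10, 10] -- high scorers become indistinguishable
print(hard_test)   # [0, 0, 0, 1, 3, 5]     -- low scorers become indistinguishable
```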

86
Q

What is an artifact in research?

A

a nonnatural feature accidentally introduced into something being observed.
Examples include experimenter bias, demand characteristics, and participant reactivity.

87
Q

Experimenter bias

A

when measurements obtained in a study are influenced by the experimenter’s expectations or personal beliefs regarding the outcome of the study

88
Q

strategies for limiting experimenter bias

A
  • standardize or automate the experiment
  • use a blind experiment (single or double)

89
Q

single-blind research study

A

a study in which the researcher does not know the predicted outcome

90
Q

double-blind study

A

a study in which both the researcher and the participants are unaware of the predicted outcome

91
Q

demand characteristics

A

any of the potential cues or features of a study that (1) suggest to the participants what the purpose and hypothesis are, and (2) influence the participants’ responses or behaviour in a certain way

92
Q

Reactivity

A

occurs when participants modify their natural behaviour in response to the fact that they are participating in a research study or the knowledge that they are being measured

93
Q

the good subject role

A

participants have identified the hypothesis and are trying to produce responses that support the investigator’s hypothesis

94
Q

the negativistic subject role

A

The participants have identified the hypothesis of the study and are trying to act contrary to the investigator’s hypothesis

95
Q

The apprehensive subject role

A

participants are overly concerned that their performance will be used to evaluate their abilities or personal characteristics. They give socially desirable and fashionable responses rather than truthful ones

96
Q

The faithful subject role

A

participants attempt to follow instructions to the letter and avoid acting on any suspicions they have about the purpose of the study

97
Q

field study

A

a study conducted in a place that the participant or subject perceives as a natural environment

98
Q

ways to limit effects of demand characteristics

A
  • use deception on participants
  • naturalistic strategy
  • add filler items to questionnaire
  • post-experiment interview to see if participants figured out the hypothesis