Chapters 2 and 3 Flashcards

1
Q

Sources of ideas

A

personal interest, observation, reading, problem solving

2
Q

Research that is directed toward solving practical problems is often classified as ____ research

A

Research that is directed toward solving practical problems is often classified as APPLIED research

3
Q

Studies that are intended to solve theoretical issues are classified as ___ research

A

Studies that are intended to solve theoretical issues are classified as BASIC research

4
Q

Primary vs Secondary Source

A

Primary: a firsthand report of observations or research results, written by the individual who actually conducted the research and made the observations
Secondary: a description or summary of another person's work, written by someone who did NOT participate in the research or observations being discussed.

5
Q

Issue with secondary sources

A
  • they could be biased or inaccurate
  • not directly from the researcher/observer
  • only pieces of the original study may be taken, perhaps reshaped to fit the secondary author's writing
  • they may share only part of the truth, which can be distorted or false
6
Q

What is an abstract?

A

a brief summary of the publication, usually about 200 words

7
Q

Do full-text databases usually contain more or less info on a subject?

A

less (full-text databases typically cover fewer sources than databases of abstracts, so they contain less information on a given subject)

8
Q

What is in an introduction?

A
  • the intro discusses previous research that forms the foundation for the current research study
  • clear statement of the problem being investigated
  • hypothesis and prediction
9
Q

What is in the methods section?

A

details concerning the participants and the procedures used in the study

10
Q

Results section

A

presents details of the statistical analysis.

Not usually important for generating new research ideas

11
Q

Discussion section

A

summarizes the results of the study, stating the conclusions and noting potential applications.

  • hypothesis supported?
  • alternate explanations/limitations
12
Q

What qualities must a hypothesis possess?

A
  • must be LOGICAL: based on observation, previous research, etc.
  • must be POSITIVE: indicates that a relationship does exist.
  • must be TESTABLE: makes a testable prediction for which supporting data can be collected
  • must be SIMPLE
  • must be FALSIFIABLE: there must be a way to prove it wrong or to support alternative hypotheses
13
Q

hypothesis vs prediction

A

hypothesis is a more general statement, while a prediction is MEASURABLE and SPECIFIC

14
Q

what is a testable hypothesis?

A

one for which all of the variables, events, and individuals can be defined and observed

15
Q

What is a refutable hypothesis?

A

one that can be demonstrated to be false, i.e., it is possible for the outcome to be different from the prediction

16
Q

What is the problem with this hypothesis?:

“For adults, there is no relationship between age and memory ability.”

A

It is not positive, i.e., it does not indicate that a relationship exists between variables.
A PREDICTION THAT DENIES EXISTENCE IS UNTESTABLE.

17
Q

A researcher designs a study to determine whether the number of syllables per word influences people’s ability to recall a list of 20 words. This study can be classified as ___ research

A

basic

18
Q

features of pseudoscience

A

  • hypothesis not falsifiable
  • scientific-sounding terminology
  • supportive evidence is anecdotal or relies on "expert" testimony
  • claims are vague and appeal to preconceived ideas
  • claims are never revised to account for new data; conflicting data are ignored
  • if tests are reported, the methodology is unscientific and the data questionable

19
Q

4 GOALS OF PSYCHOLOGICAL SCIENCE RESEARCH

A
  1. DESCRIBING BEHAVIOUR: careful observation and measurement
  2. PREDICTING BEHAVIOUR: systematically related variables/events
  3. DETERMINING CAUSE OF BEHAVIOUR: cause and effect
  4. EXPLAINING BEHAVIOUR
20
Q

CRITERIA FOR CAUSAL CLAIMS

A
  • covariation of cause and effect
  • temporal precedence
  • elimination of alternative explanations
21
Q

What does “covariation of cause and effect” mean?

A

do the two events or variables vary together?

22
Q

temporal precedence?

A

does one event happen before the other?

23
Q

in the claim: “violent crime and ice cream sales increase at the same time,” what is an alternate explanation/third variable?

A

warm weather causes an increase in both violent crime and ice cream sales

24
Q

Applied vs basic research

A

applied: solving practical problems
basic: gaining theoretical understanding

25
What is a variable?
anything that can have more than one level (e.g., intelligence, GSR, happiness, etc.)
26
Types of variable
- situational
- response
- participant
- mediating
27
Situational variable
characteristics of the setting or situation, e.g., time of day, lighting in the room
28
Response variable
- what is measured
- e.g., behaviour, pressing a button, answering a test, etc.
29
Participant variable
- what makes an individual unique
- e.g., English as a second language, age, gender, etc.
30
Mediating variable
- connects one variable to another; can serve as an alternative explanation
- e.g., poverty = lower lifespan; the mediating variable would be access to healthcare
31
What is an operational definition?
- how am I going to measure/change a variable?
- a procedure for indirectly measuring and defining a variable that cannot be observed or measured directly
- specifies a measurement procedure (a set of operations) for measuring an external, observable behaviour, and uses the resulting measurements as a definition and a measurement of the hypothetical construct
32
What is a theory in behavioural science?
A theory is a set of statements about the mechanisms underlying a particular behaviour. A good theory generates predictions about behaviour.
33
What are constructs?
hypothetical variables or mechanisms that help explain and predict behaviour in a theory, e.g., anxiety or self-esteem
34
Give an example of how a construct can be influenced by external stimuli and, in turn, can influence external behaviour
external factors such as rewards can affect motivation (a construct), and motivation can then affect performance (a behaviour)
35
how can hunger be operationally defined?
as the number of hours of food deprivation. | e.g., In an experiment, measure how much food a rat eats and that amount defines how hungry he is.
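One way to picture an operational definition is as a function from an observable measurement to a score for the construct. A minimal sketch in Python (the function names and numbers are hypothetical, not from the cards):

```python
# Sketch: two hypothetical operational definitions of the construct
# "hunger". Neither is THE definition; each is one measurement procedure.

def hunger_by_deprivation(hours_without_food: float) -> float:
    """Operationally define hunger as hours of food deprivation."""
    return hours_without_food

def hunger_by_intake(grams_eaten: float, typical_meal_grams: float = 30.0) -> float:
    """Alternative definition: amount eaten relative to a typical meal
    (a rat that eats more is assumed to have been hungrier)."""
    return grams_eaten / typical_meal_grams

print(hunger_by_deprivation(12))   # 12 hours deprived
print(hunger_by_intake(45.0))      # ate 1.5x a typical meal
```

Having two different procedures for the same construct is the same idea as card 38 below: multiple measures reduce the risk of leaving out part of the construct.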
36
Limitations of operational definitions
1. leaves out characteristics
2. adds extra characteristics
3. definitions can impact conclusions
37
most common way intelligence is operationally defined?
IQ test
38
How can you reduce the problem of leaving out important components of a construct when operationally defining it?
One way to reduce this problem is to INCLUDE TWO OR MORE DIFFERENT PROCEDURES TO MEASURE THE SAME VARIABLE.
39
Whenever the variables in a research study are hypothetical constructs, you must use ___ ___ to define and measure the variables
operational definitions
40
Positive relationship
two measurements change together in the same direction
41
Negative relationship
two measures change in opposite directions so that people who score high on one measure tend to score low on the other
42
a consistent positive relationship produces a correlation near ___, a consistent negative relationship produces a correlation near ___, and an inconsistent relationship produces a correlation near ___
a consistent positive relationship produces a correlation near +1, a consistent negative relationship produces a correlation near -1, and an inconsistent relationship produces a correlation near 0.
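These three cases can be checked numerically; a minimal sketch (the data sets are invented for illustration):

```python
# Sketch: Pearson correlations for consistent positive, consistent
# negative, and inconsistent relationships (invented data).
from statistics import correlation  # available in Python 3.10+

x = [1, 2, 3, 4, 5]
positive     = [2, 4, 6, 8, 10]    # changes in the same direction as x
negative     = [10, 8, 6, 4, 2]    # changes in the opposite direction
inconsistent = [4, 1, 5, 2, 3]     # no consistent pattern

print(correlation(x, positive))      # +1.0
print(correlation(x, negative))      # -1.0
print(correlation(x, inconsistent))  # -0.1, near 0
```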
43
an inconsistent relationship, or a correlation near zero, would indicate that a test is not a ___ measure of a variable
valid
44
what is validity?
Does the measurement procedure actually measure what it claims to be measuring?
45
validity of research design
in an experiment where x causes y, how valid is that claim?
46
6 kinds of validity of measurement:
- face validity
- concurrent validity
- predictive validity
- construct validity
- convergent validity
- divergent validity
47
Face validity
an unscientific form of validity demonstrated when a measurement procedure superficially appears to measure what it claims to measure, i.e., does the measurement technique look like it measures the variable that it claims to measure?
48
Concurrent validity
demonstrated when scores from a new measure are directly related to scores obtained from an established measure of the same variable
49
Predictive validity
does the measure predict later behaviour? When the measurements of a construct accurately predict behaviour, the measurement procedure is said to have PREDICTIVE VALIDITY
50
Construct validity
Requires that the score obtained from a measurement procedure behaves exactly the same as the variable itself. Construct validity is based on many research studies that use the same measurement procedure and grows gradually as each new study contributes more evidence, i.e., does the measure match the theoretical construct?
51
example of construct validity
a therapist gives the BDI (Beck Depression Inventory) to clients and finds that those he deemed to be depressed scored higher; the expert's judgment showed the BDI to be valid
52
Convergent validity
demonstrated by a strong relationship between the scores obtained from two or more different methods of measuring the SAME construct, i.e., are scores on the measure similar to other measures of the same construct?
53
Divergent validity
demonstrated by showing little or no relationship between the measurements of two DIFFERENT constructs
54
When is a measurement procedure said to have reliability?
if it produces identical (or nearly identical) results when it is used repeatedly to measure the same individual under the same conditions. RELIABILITY = IS THE MEASURE STABLE/CONSISTENT?
55
The concept of reliability is based on the assumption that the variable being measured is ____ or ___
stable or constant, i.e., it won't change dramatically from one day to the next
56
equation to measure error in a measurement procedure
Measurement score = true score + error
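A tiny simulation of this equation, assuming a made-up true score of 100 and normally distributed error:

```python
# Sketch: Measurement score = true score + error, simulated.
import random

random.seed(0)
TRUE_SCORE = 100.0  # the stable quantity being measured (invented)

def measure() -> float:
    error = random.gauss(0, 3)  # observer/environment/participant noise
    return TRUE_SCORE + error

print([round(measure(), 1) for _ in range(5)])
# Each observation scatters around 100 but rarely equals it exactly;
# the scatter is the "error" term in the equation.
```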
57
most common sources of error and their definitions
- OBSERVER ERROR: the individual who makes the measurements can introduce simple human error into the measurement process (e.g., millisecond errors from pressing a handheld stopwatch)
- ENVIRONMENTAL CHANGES: small changes in the environment from one trial to another may influence measurements; it is hard to replicate the exact environment multiple times
- PARTICIPANT CHANGES: the participant can change between measurements, e.g., degree of focus and attention, or feeling hungry during a test
58
3 reliability measures:
1. test-retest reliability: the same individual should score similarly on 2 successive tests
2. inter-rater reliability: degree of agreement between 2 observers who simultaneously record measurements of the behaviour
3. split-half reliability: splitting the items on a questionnaire or test in half, computing a separate score for each half, and then calculating the degree of consistency between the two scores for a group of participants; the two halves should score similarly
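As an example of the third measure, split-half reliability is just the correlation between the two half-test scores; a sketch with invented item scores:

```python
# Sketch: split-half reliability. Rows = participants, columns = six
# test items; all scores invented for illustration.
from statistics import correlation  # Python 3.10+

items = [
    [4, 5, 4, 5, 3, 4],
    [2, 1, 2, 2, 1, 2],
    [5, 5, 4, 5, 5, 4],
    [3, 2, 3, 3, 3, 2],
]
odd_half  = [sum(row[0::2]) for row in items]  # items 1, 3, 5
even_half = [sum(row[1::2]) for row in items]  # items 2, 4, 6

# A high correlation between the halves means the test is internally
# consistent. For this invented data it comes out around 0.90.
print(correlation(odd_half, even_half))
```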
59
Reliability vs validity
reliability: consistent scores
validity: measuring what is intended to be measured
60
Is it possible to have a valid measure that is not reliable?
NO
61
Is it possible to have a reliable measure that is not valid?
YES
62
What is the Barnum effect?
people tend to agree with vague personality descriptions that appear to apply to them.
63
How can the Barnum effect be cancelled?
by negative attributes (people are less ready to accept unflattering descriptions of themselves)
64
What is the accuracy of a measurement?
the degree to which the measurement conforms to the established standard
65
can a measurement process be valid and reliable, but not accurate?
yes. e.g., a car is travelling at 40 mph and the speedometer consistently (reliably) reads 50 mph; when the car is going 30 mph, the speedometer consistently reads 40 mph. The speedometer correctly differentiates different speeds, which means it is producing valid measurements of speed. It is consistent, which means it is reliable. But it is inaccurate.
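The same speedometer example as a few lines of Python (a sketch; the +10 mph bias comes from the card's numbers):

```python
# Sketch: a speedometer with a constant +10 mph bias.
def speedometer(true_mph: float) -> float:
    return true_mph + 10.0  # always reads 10 mph high

for true_mph in (30, 40, 50):
    print(true_mph, "->", speedometer(true_mph))
# Identical reading every time for a given speed (reliable), readings
# track real differences in speed (valid), but every reading misses
# the true standard by 10 mph (not accurate).
```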
66
The set of categories used for classification is called the ___ ___ ____
scale of measurement
67
4 different types of measurement scales:
1. nominal 2. ordinal 3. interval 4. ratio
68
Nominal scale
- qualitative data arranged in discrete, unordered categories
- categories have names but are not related to each other in any systematic way
69
Ordinal scale
- data arranged in order of magnitude
- e.g., small, medium, and large; letter grades; junior, sophomore, and senior year; etc.
- with measurements from an ordinal scale, we can determine whether 2 individuals are different and the direction of that difference, but not the magnitude of the difference between the two individuals
70
Interval scale
- series of equal intervals
- ZERO IS ARBITRARY, e.g., temperature in degrees Celsius, time of day, etc.
71
Ratio scale
- series of equal intervals
- ZERO MEANS SOMETHING, e.g., weight
72
Why can't you multiply or divide values in an interval scale?
because zero is arbitrary
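A worked example: the same pair of temperatures gives different ratios depending on where zero sits, so the ratio itself is meaningless on an interval scale.

```python
# Sketch: "20 °C is twice as hot as 10 °C" fails because 0 °C is an
# arbitrary zero. Kelvin is a ratio scale (0 K is a true zero).
c1, c2 = 10.0, 20.0
print(c2 / c1)                        # 2.0 using Celsius numbers
print((c2 + 273.15) / (c1 + 273.15))  # ~1.035 using Kelvin
# The "ratio" changed when we moved the zero point, so multiplication
# and division aren't meaningful on an interval scale.
```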
73
What is the issue with rating scales?
They are often treated like interval scales when they are really ordinal: researchers assume that the differences between labels are equal, e.g., when rating happiness from 1 to 5.
74
3 categories of modalities of measurement
- self-report measures
- physiological measures
- behavioural measures
75
What category of external expression used to define fear appears in each scenario:
1. individual claims to be afraid
2. individual refuses to travel
3. individual has an increased heart rate
1. self-report
2. behavioural
3. physiological
76
pros and cons of self-report measures
pros:
- can ask people (almost) everything
- useful when behaviour cannot be observed (e.g., past)
- inexpensive and easy
cons:
- unreliable memories
- untruthfulness
- unwilling to answer
- don't know how to answer
77
pros and cons of physiological measures:
pros:
- objective
- measures what people can't or don't know how to measure
cons:
- expensive
- there could be alternative explanations for physiological changes
- can sometimes be controlled (e.g., controlling heart rate in lie detector tests)
78
pros and cons of behavioural measures
pros:
- more control (e.g., in the lab)
- more generalizable/real (in naturalistic observations)
cons:
- behaviour may be temporary or situational
79
applications of barnum effect
personality tests, horoscopes, fortune telling
80
can the mode be determined for nominal, ordinal, interval, and ratio scales?
yes
81
Can the median be determined for nominal, ordinal, interval, and ratio scales?
Only for ordinal, interval, and ratio scales; not for a nominal scale. *In an ordinal scale, you can determine the "center" category, but not the differences between categories.
82
Can the mean be determined for nominal, ordinal, interval, and ratio scales?
only for interval and ratio scales
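A sketch tying the last three cards together (the data are invented):

```python
# Sketch: which central-tendency statistics are meaningful per scale.
from statistics import mean, median, mode

eye_colour = ["brown", "blue", "brown", "green"]  # nominal
shirt_size = ["S", "M", "M", "L", "XL"]           # ordinal
temps_c    = [18.0, 21.0, 21.0, 24.0]             # interval

print(mode(eye_colour))  # "brown": the mode works on every scale

order = ["S", "M", "L", "XL"]
ranked = sorted(shirt_size, key=order.index)
print(ranked[len(ranked) // 2])        # "M": the middle ordinal category
print(median(temps_c), mean(temps_c))  # 21.0 21.0: needs interval/ratio
# mean(eye_colour) would be meaningless (and raises a TypeError):
# nominal categories have no magnitudes to average.
```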
83
Range effect
a sensitivity problem that occurs when scores obtained in a research study tend to cluster at one end of the measurement scale; includes ceiling and floor effects
84
Ceiling effect
when the range of scores is restricted at the high end; usually the task is too easy
85
floor effect
clustering of scores at the low end of the scale; usually the task is too hard
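A sketch of how a ceiling effect hides real differences (the ability scores are invented): the test below maxes out at 10 points, so everyone above that becomes indistinguishable.

```python
# Sketch: a ceiling effect. True ability varies, but the test can't
# score above 10, so high scores pile up at the top of the scale.
true_ability = [7, 9, 12, 15, 18]  # invented underlying scores
observed = [min(score, 10) for score in true_ability]
print(observed)  # [7, 9, 10, 10, 10]: the top three participants are
                 # indistinguishable; the measure has lost sensitivity.
```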
86
What is an artifact in research?
a nonnatural feature accidentally introduced into something being observed. Examples: experimenter bias, demand characteristics, and participant reactivity.
87
Experimenter bias
when measurements obtained in a study are influenced by the experimenter's expectations or personal beliefs regarding the outcome of the study
88
strategies for limiting experimenter bias
- standardize or automate the experiment
- use a blind experiment (single or double)
89
single-blind research study
if the researcher does not know the predicted outcome
90
double-blind study
if both the researcher and the participants are unaware of the predicted outcome
91
demand characteristics
any of the potential cues or features of a study that (1) suggest to the participants what the purpose and hypothesis are, and (2) influence the participants' responses or behaviour in a certain way
92
Reactivity
occurs when participants modify their natural behaviour in response to the fact that they are participating in a research study, or the knowledge that they are being measured
93
the good subject role
participants have identified the hypothesis and are trying to produce responses that support the investigator's hypothesis
94
the negativistic subject role
The participants have identified the hypothesis of the study and are trying to act contrary to the investigator's hypothesis
95
The apprehensive subject role
participants are overly concerned that their performance will be used to evaluate their abilities or personal characteristics. They respond in a socially desirable fashion rather than truthfully.
96
The faithful subject role
participants attempt to follow instructions to the letter and avoid acting on any suspicions they have about the purpose of the study
97
field study
a study conducted in a place that the participant or subject perceives as a natural environment
98
ways to limit effects of demand characteristics
- use deception on participants
- use a naturalistic strategy
- add filler items to questionnaires
- conduct a post-experiment interview to see if participants figured out the hypothesis