Intro Research Methods Flashcards

1
Q

What makes Psychology a “hard” science?

A

When studying psychology, we are often dealing with fuzzy concepts that are hard to define and hard to measure. The things that make psych a “soft” science are the very things that make it a hard science.

2
Q

What are the “invisible” constructs Psychologists work with?

A

Emotion, Cognition, Perception, Aggression…

3
Q

What is an Operational Definition?

A

A definition of a construct in terms of observable, measurable, and agreed-upon criteria.
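
As a concrete illustration, here is a minimal sketch, in Python, of how a fuzzy construct like “aggression” might be operationalized as a count of predefined, observable behaviours during an observation window; the behaviour codes and data are made up for illustration.

```python
# Hypothetical operational definition: "aggression" = the number of predefined,
# observable behaviours coded during a fixed observation window.
AGGRESSIVE_BEHAVIOURS = {"hit", "kick", "push", "grab_toy"}  # assumed coding scheme

def aggression_score(coded_events: list[str]) -> int:
    """Count how many coded events match the agreed-upon behaviour list."""
    return sum(1 for event in coded_events if event in AGGRESSIVE_BEHAVIOURS)

# Example: events coded by an observer during one 20-minute recess period
observed = ["push", "share_toy", "hit", "talk", "push"]
print(aggression_score(observed))  # -> 3
```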

4
Q

Psychology’s Measurement Tools (e.g., experimental tasks, questionnaires, defined behaviours, etc…) are designed to measure an underlying ___

A

Construct

5
Q

When we talk about that underlying phenomenon, we say that we are looking at a particular ___.

A

Construct

6
Q

What is Structured Observation?

A

Observing behaviours in a controlled setting, often lab-based, where you as the researcher have a lot of say in the circumstances the child is experiencing.

7
Q

What is an example of a Structured Observation Experiment?

A

You want to study aggression in a lab setting, so you come up with a design in which kids are put in a frustrating situation and you see how they respond to it. Do they get more aggressive after being put through this experience?

8
Q

What is a benefit of Structured Observation?

A

You have a lot of control: you know that every kid in your study had the exact same experience before you started counting their behaviours. This lets you narrow in on exactly what might be going on, the “active ingredient” that might lead to increased levels of a behaviour.

9
Q

What is the downside of Structured Observation?

A

To get a lot of control, you often have to sacrifice real-world context; you have to narrow the scope of experiences so that you can make a tight comparison. We’re not seeing how the phenomena we’re studying actually play out in the child’s real world.

10
Q

Structured Observation cannot tell us everything about children’s ___

A

Real-world development

11
Q

What is Naturalistic Observation?

A

Watching and counting behaviours in a real-world setting.

12
Q

What two methods of collecting Data complement each other well?

A

Structured Observation and Naturalistic Observation

13
Q

What is an example of a Naturalistic Observation Experiment?

A

Going to a school, watching children play at recess in their everyday environment, and counting the number of aggressive behaviours they engage in within that context.

14
Q

What is the benefit of Naturalistic Observation?

A

You’re able to observe the behaviour in a real-world setting with real-world context

15
Q

What is the downside of Naturalistic Observation?

A

You have far less control over the circumstances kids are experiencing while you’re counting behaviours

16
Q

What is Self-Report Data?

A

Using questionnaires/surveys in which individuals report on their attitudes, behaviours, etc

17
Q

What are examples of Self-Report Data?

A

You want to collect data on how aggressive a child is, so you ask kids to fill out a questionnaire asking when they think it’s okay to perform a certain behaviour, for example, when it is okay to push a classmate in a given circumstance.

18
Q

What are the benefits of using Self-Report Data?

A

This type of research can be useful when the question you’re asking is about how kids view a certain situation or certain types of behaviour, for example whether they think it’s acceptable or not.

19
Q

What are the downsides of using Self-Report Data?

A

Response bias: people are generally reluctant to admit to things they are not proud of, or things that they know aren’t objectively good in some way.

20
Q

What are Standardized Tests?

A

Tasks developed to tap into different skills and abilities that children might show, administered and scored uniformly so that performance can be compared against norms for a population.

21
Q

What is an example of a Standardized Test?

A

A standardized test that is designed to measure children’s reading comprehension abilities. They are reading passages and answering questions about what they have read.

22
Q

What is a benefit to using Standardized Tests?

A

They have been widely developed and tested on large numbers of people, which makes them really useful for finding broad patterns and interpreting results across different samples of people.

23
Q

What is the downside to using Standardized Tests?

A

If we’re not mindful of differences between the norm sample and our sample, we can draw inappropriate conclusions about our sample’s performance on that test.

24
Q

In Standardized Testing, the comparison between the norms and your sample children is only appropriate if the two groups are ___

A

Similar
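
To make the norm comparison concrete, here is a rough sketch of comparing a child’s score against population norms via a z-score; the norm mean, standard deviation, and child’s score are made-up numbers, not taken from any real test.

```python
def z_score(raw_score: float, norm_mean: float, norm_sd: float) -> float:
    """How far a raw score falls from the norm-group mean, in standard-deviation units."""
    return (raw_score - norm_mean) / norm_sd

# Hypothetical reading-comprehension norms (illustrative numbers only)
norm_mean, norm_sd = 100.0, 15.0
child_score = 118.0
print(round(z_score(child_score, norm_mean, norm_sd), 2))  # -> 1.2
# The comparison is only meaningful if the child resembles the norm sample.
```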

25
Q

What was the problem with the first version of the MMPI (Minnesota Multiphasic Personality Inventory)?

A

It was standardized and normed on a very homogeneous group of people from rural Minnesota, all from the same demographic background, and this was the comparison group that everyone else in the world was put up against.

26
Q

What are Physiological Measures?

A

Measures of physiological responses, e.g., heart rate, galvanic skin response, event-related potentials, fMRI.

27
Q

What is the benefit of using Physiological Measures?

A

They can be useful for answering developmental questions and for converging on the conclusions we draw from the other approaches.

28
Q

What is the downside of using Physiological Measures?

A

We have a measurement, but we don’t always know what it means

29
Q

What is Galvanic Skin-Response?

A

Measuring the electrical conductance of the skin.

30
Q

What are Event-Related Potentials?

A

Looking at changes in the brain’s electrical activity in response to specific stimuli or events you’ve just experienced; used to get a sense of what’s going on in the brain during different experiences.

31
Q

What can fMRI measure as a Physiological Measure?

A

Changes in oxygenated blood flow through the brain, showing which parts of the brain are most active in a particular situation.

32
Q

Any measurement tool is only going to be as useful as the ___ it has

A

Characteristics

33
Q

What makes a good test?

A

Reliability & Validity

34
Q

What is Reliability?

A

The extent to which an assessment tool is consistent in its measurement

35
Q

What is Inter-rater Reliability?

A

Do evaluators come to the same answer independently? If you have two different people looking at the same situation, the same set of observations, do they come to the same conclusions about what they saw?

36
Q

What is Test–retest Reliability?

A

Are the results of the tool stable across time (where that’s appropriate)? In other words, is it consistent over time?
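
A quick sketch, with fabricated toy data, of how these two kinds of reliability might be quantified: inter-rater reliability as simple percent agreement between two observers, and test-retest reliability as the correlation between scores at two time points (more formal indices such as Cohen’s kappa or the ICC also exist).

```python
import numpy as np

# Inter-rater reliability: did two independent observers code the same behaviours?
rater_a = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # toy data: 1 = aggressive act coded
rater_b = np.array([1, 0, 1, 0, 0, 1, 0, 0])
percent_agreement = np.mean(rater_a == rater_b)
print(f"Percent agreement: {percent_agreement:.2f}")  # 7 of 8 codes agree -> 0.88

# Test-retest reliability: are scores stable when the same children are retested?
time1 = np.array([12.0, 7.0, 15.0, 9.0, 11.0, 14.0])
time2 = np.array([13.0, 6.0, 14.0, 10.0, 11.0, 15.0])
r_test_retest = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest r: {r_test_retest:.2f}")  # close to 1.0 here -> consistent over time
```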

37
Q

What is Validity?

A

The extent to which an assessment tool measures what it is supposed to measure

38
Q

What is Internal Validity?

A

The extent to which evidence can support claims within the context of a particular study. You can only be confident that your study is internally valid if you can rule out alternative explanations for your findings. Often it’s easier to achieve internal validity in a tightly controlled, structured setting, like the lab.

39
Q

What is External Validity?

A

The extent to which findings from a study can be applied to other settings (e.g., real-world contexts). Focus is on the study’s generalizability; whether its findings have relevance in the world at large.

40
Q

Often it’s easier to get external validity in a more ___ setting

A

Naturalistic

41
Q

Going for more control may ___ internal validity at the expense of external validity

A

Increase

42
Q

How should we think of each study we might see in the news?

A

As one brick in the larger wall

43
Q

What is a Correlational Research Design?

A

Research in which the goal is to describe the strength and direction of the relationship between two or more variables

44
Q

What are we trying to do in a Correlational Research Design?

A

We are trying to characterize the relationship between two or more things

45
Q

How is data often represented when looking at correlation?

A

By plotting the data in scatter plots.

46
Q

What is a Perfect Relationship?

A

Reflected by r = 1.00 (or −1.00 for a perfect negative relationship). Perfect relationships essentially never show up in real data; you would only see them between transformations of the same thing.

47
Q

What does it mean if there is No Relationship?

A

Reflected by an r = 0. Knowing how someone scores on the X-variable can’t tell you anything about how they’ll score on the Y-variable.

48
Q

What relationship are you most likely to see in Psych research?

A

A Moderate Relationship
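
To make these r values concrete, here is a small simulation sketch (numpy, made-up data): a perfect relationship (a variable correlated with a linear transformation of itself), roughly no relationship, and a moderate relationship.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500)

perfect = 2 * x + 3                        # a linear transformation of x itself
unrelated = rng.normal(size=500)           # independent noise
moderate = 0.5 * x + rng.normal(size=500)  # partly driven by x, partly by noise

for name, y in [("perfect", perfect), ("none", unrelated), ("moderate", moderate)]:
    r = np.corrcoef(x, y)[0, 1]
    print(f"{name:8s} r = {r:+.2f}")  # roughly +1.00, 0.00, and +0.45
```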

49
Q

A correlation is just an observation about how two variables are related to each other; it doesn’t give us much insight into why that ___, or what other factors might come into play

A

relationship exists

50
Q

Correlation does not equal ___

A

Causation

51
Q

What does the phrase “Correlation does not equal causation” mean?

A

Just because two things are related doesn’t mean one causes the other

52
Q

What is the Directionality Problem?

A

If a relationship is causal, does A cause B? B cause A? Both?

53
Q

What is the Third Variable Problem?

A

The apparent relationship between A and B might be caused by a third variable (or several variables) that you didn’t measure.

54
Q

What is an example of the Third Variable Problem?

A

The number of ice cream cones sold per day is significantly correlated with the number of pool drowning deaths per day; the third variable driving both is hot weather.
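
A toy simulation of this example (all numbers invented): daily temperature drives both ice cream sales and drownings, so the two outcomes correlate even though neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(1)
temperature = rng.uniform(10, 35, size=365)  # the unmeasured third variable

# Both outcomes depend on temperature, not on each other (toy coefficients)
ice_cream_sales = 20 * temperature + rng.normal(0, 50, size=365)
drownings = 0.1 * temperature + rng.normal(0, 0.5, size=365)

r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"r(ice cream, drownings) = {r:.2f}")  # clearly positive, despite no causal link
```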

55
Q

What is another name for Third Variables?

A

Confound Variables

56
Q

What are Confound Variables?

A

Variables that drive relationships but that we didn’t take into account, so we can’t figure out the extent to which they’re affecting the relationship in our data.

57
Q

What are Spurious Correlations?

A

Some relationships—even strong ones—occur simply by chance
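
A small sketch of how spurious correlations can arise by chance: correlate one short series with many completely unrelated random series, and a few of them will look “strongly” related anyway (everything here is random noise).

```python
import numpy as np

rng = np.random.default_rng(2)
n_days, n_variables = 20, 500

outcome = rng.normal(size=n_days)                    # e.g., 20 days of some measure
noise_vars = rng.normal(size=(n_variables, n_days))  # 500 unrelated random series

rs = np.array([np.corrcoef(outcome, v)[0, 1] for v in noise_vars])
print(f"Largest |r| found by chance alone: {np.max(np.abs(rs)):.2f}")
# With many comparisons and few observations, some |r| values will look impressive.
```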

58
Q

What can Experimental Research Design help us address?

A

Help us address the limitations of correlational research

59
Q

How can an Experimental Research Design help us address the Third variable problem?

A

Use random assignment to place participants in experimental conditions (or use a within-subjects design)
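
A minimal sketch of random assignment for a between-subjects experiment, using Python's random module and two hypothetical conditions ("frustration" vs. "control"); because assignment is left to chance, unmeasured third variables end up spread roughly evenly across the groups.

```python
import random

participants = [f"child_{i:02d}" for i in range(1, 21)]  # 20 hypothetical participants
conditions = ["frustration", "control"] * (len(participants) // 2)

random.seed(42)             # seeded only so the example is reproducible
random.shuffle(conditions)  # chance alone decides who gets which condition

assignment = dict(zip(participants, conditions))
for child, condition in list(assignment.items())[:5]:
    print(child, "->", condition)
```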

60
Q

How can an Experimental Research Design help us address a Directionality problem?

A

Manipulate one variable (independent variable); observe its effect on the other variable (dependent variable)

61
Q

How can an Experimental Research Design help us address Spurious effects?

A

Always a possibility in research, but statistical methods help us calibrate our confidence in patterns we observe

62
Q

What is a Between-Subjects Research Design?

A

An Experimental Research Design in which each participant experiences one condition in the experiment

63
Q

What is a Within-Subjects Research Design?

A

An Experimental Research Design in which each participant experiences all conditions in the experiment
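
A brief sketch contrasting the two designs as data layouts, with hypothetical condition names and scores: in a between-subjects design each participant has data for one condition only; in a within-subjects design each participant has data for every condition.

```python
# Between-subjects: one condition (and one score) per participant
between = {
    "p01": {"condition": "frustration", "score": 7},
    "p02": {"condition": "control", "score": 3},
}

# Within-subjects: every participant has a score for every condition
within = {
    "p01": {"frustration": 7, "control": 4},
    "p02": {"frustration": 5, "control": 2},
}

print(between["p01"]["condition"])   # 'frustration' -- p01 experienced only this condition
print(sorted(within["p01"].keys()))  # ['control', 'frustration'] -- p01 experienced both
```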