Chapter 10 (Sam) Flashcards

1
Q

case study research

A

a method of analysis that involves an in-depth investigation of a single individual, group, or event; in political science, generally used with the intent of identifying general causal principles.

2
Q

comparative research

A

a research design that seeks to compare phenomena across different political systems or cultures.

3
Q

counter-intuitive

A

a condition that occurs when a situation, event, or outcome differs from dominant theoretical expectations or common sense.

4
Q

Scope conditions

A

the explicit limits within which a particular piece of research can make valid claims.

5
Q

The theory-testing case study is appropriate in two distinct counter-intuitive conditions:

A

a phenomenon is expected to confirm a theory but refutes it (referred to as a failed most-likely case), or a phenomenon is expected to refute a theory but confirms it (referred to as a successful least-likely case).

6
Q

Process tracing

A

a research method that generates causal pathways between the independent and dependent variables of a case by connecting a series of observations.

7
Q

generalization

A

in the context of case study research, the process of extending the findings of a single case study to a wider population of cases.

8
Q

most similar systems design

A

a comparative research design in which the researcher compares very similar systems in an attempt to explain differences between them.

9
Q

most different systems design

A

a comparative research design in which the researcher compares very different systems in an attempt to explain similarities between them.

10
Q

equivalent measures

A

indicators that measure the same underlying concept in every case under study; this may mean using different indicators in different cases or working hard to ensure that the indicators chosen do, in fact, measure the same concept in all the cases under study.

11
Q

Galton’s problem

A

a problem that occurs when the units under observation are not independent of one another.

12
Q

Between-subjects design

A

an experimental design in which different subjects are randomly assigned to various treatment and control groups; causality is inferred based on post-treatment differences observed between these groups.
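
To make the logic concrete, here is a minimal Python sketch of a between-subjects comparison; the subject pool, the outcome function, and the treatment effect of 2.0 are all hypothetical values invented for illustration.

import random

# Hypothetical pool of 100 subjects, identified only by ID.
subjects = list(range(100))
random.shuffle(subjects)

# Between-subjects design: each subject is assigned to exactly one group.
treatment_group = subjects[:50]
control_group = subjects[50:]

def measure_outcome(subject_id, treated):
    # Placeholder outcome: random baseline plus a hypothetical effect of 2.0 if treated.
    return random.gauss(10.0, 1.0) + (2.0 if treated else 0.0)

treated_outcomes = [measure_outcome(s, True) for s in treatment_group]
control_outcomes = [measure_outcome(s, False) for s in control_group]

# Causality is inferred from the post-treatment difference between the two groups.
difference_in_means = (sum(treated_outcomes) / len(treated_outcomes)
                       - sum(control_outcomes) / len(control_outcomes))
print(f"Estimated treatment effect: {difference_in_means:.2f}")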

13
Q

Causal effects

A

the difference between the value of an outcome when a subject receives a treatment and when the same subject does not receive the treatment. (p. 209)

Control group: a group of subjects randomly assigned not to receive the treatment in an experiment; identical to the treatment group in all other respects.

14
Q

Double-blind design

A

a research design in which both the subjects and the research team are unaware of who receives the treatment and who receives a placebo; intended to reduce the risk of the researchers providing subjects with cues about how they should react and to control for bias in the data collection.

15
Q

Experimental (or treatment) group

A

a group of subjects exposed to the intervention of interest in an experiment; identical to the control group in all respects except that the control group does not receive the treatment.

16
Q

External validity

A

the extent to which the findings drawn from the cases under examination may be used to make generalizations about phenomena outside the original study.

17
Q

Feasibility

A

the extent to which a study is capable of being completed, given the researcher’s skills and resources.

18
Q

Field experiment

A

an experiment in which a researcher’s intervention is implemented in a subject’s natural environment.

19
Q

Fundamental problem of causal inference

A

the fact that a causal effect can never be directly observed, because the same subject cannot be observed both receiving and not receiving the treatment; causal effects must therefore be inferred rather than observed directly.
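
In the potential-outcomes notation commonly used for this problem (a standard formulation, not quoted from the chapter), the causal effect for a single subject i is

\tau_i = Y_i(1) - Y_i(0),

where Y_i(1) is i's outcome with the treatment and Y_i(0) is the outcome without it. Only one of the two potential outcomes can ever be observed for any given subject, so \tau_i itself is unobservable; with random assignment, researchers instead estimate the average effect

\hat{\tau} = \bar{Y}_{treatment} - \bar{Y}_{control}.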

20
Q

Internal validity

A

the extent to which the researcher has produced results reflective of reality, as measured within the confines of the study.

21
Q

Laboratory experiment

A

an experiment in which subjects are recruited to a common location where the researcher exerts a relatively large degree of control over the experimental setting.

22
Q

Natural (or naturally occurring) experiment

A

a course of naturally occurring events in which different conditions appear to be randomly assigned to subjects without the planned intervention of the researcher (e.g., a lottery); to qualify as a natural experiment, assignment to experimental and control groups must be allocated as if by random assignment.

23
Q

Placebo

A

a form of stimulus that does not contain the precise treatment the researcher is testing; often takes a form similar to the experimental treatment without exposing subjects to the specific ingredient that is hypothesized to have a causal effect (e.g., a sugar pill); a type of control group in that it offers the experimenter a point of reference to estimate the treatment’s effect.

24
Q

Planned intervention

A

a situation in which an experimenter purposefully manipulates one or several aspects of subjects’ conditions according to a predefined scheme.

25
Q

Quasi-experiment

A

a course of events in a subject’s environment that implies neither “as if” random assignment nor any planned intervention by the researcher.

26
Q

Random assignment

A

the process of assigning some members of a population to a treatment group and others to a control group; random assignment ensures that the two groups are identical in all respects aside from the receipt of treatment.
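
A minimal Python sketch of the idea, using an invented pre-treatment characteristic (age) to show that randomly assigned groups tend to look alike before any treatment is applied:

import random

# Hypothetical subjects, each with a pre-existing characteristic (age).
subjects = [{"id": i, "age": random.randint(18, 80)} for i in range(1000)]

# Random assignment: shuffle the pool, then split it into two groups.
random.shuffle(subjects)
treatment = subjects[:500]
control = subjects[500:]

def mean_age(group):
    return sum(s["age"] for s in group) / len(group)

# Because assignment was random, the groups should have similar average ages.
print(f"Mean age, treatment group: {mean_age(treatment):.1f}")
print(f"Mean age, control group:   {mean_age(control):.1f}")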

27
Q

Single-blind design

A

an experimental design in which subjects remain unaware of various types of information—such as whether they are part of the treatment, placebo, or control group—in order to reduce the possible bias that this information could induce.

28
Q

Statistical power

A

the ability of a research design to detect effects should such effects exist; a power analysis can give researchers an idea of what sample sizes will make an experiment sensitive enough to detect effects of certain sizes. (p. 213)
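
One way to make power tangible is a simulation-based power analysis; the effect size of 0.3 standard deviations, the significance level of 0.05, and the sample sizes tried below are arbitrary values chosen for illustration, and the sketch assumes numpy and scipy are available.

import numpy as np
from scipy import stats

def simulated_power(n_per_group, effect_size, alpha=0.05, n_sims=2000, seed=0):
    # Fraction of simulated experiments that detect a true effect of the given size.
    rng = np.random.default_rng(seed)
    detections = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(effect_size, 1.0, n_per_group)
        _, p_value = stats.ttest_ind(treatment, control)
        if p_value < alpha:
            detections += 1
    return detections / n_sims

# Larger samples make the design sensitive enough to detect a modest effect.
for n in (50, 100, 200):
    print(f"n per group = {n}: power ~ {simulated_power(n, effect_size=0.3):.2f}")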

29
Q

Stimulus/stimuli

A

the treatment(s)—often involving exposure to different forms of information—to which subjects in an experimental group are exposed.

30
Q

Survey experiment

A

an experiment implemented in the context of a survey involving the random manipulation of a part (or parts) of the survey instrument.
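
A minimal Python sketch of the idea; the two question wordings below are invented for illustration, and which wording a respondent sees is the randomly manipulated part of the survey instrument.

import random

# Two hypothetical wordings of the same survey item; the wording is the treatment.
WORDINGS = {
    "neutral": "Do you support increasing public spending on transit?",
    "framed": "Do you support increasing public spending on transit, even if it means higher taxes?",
}

def assign_wording():
    # Each respondent is randomly shown one of the two versions.
    version = random.choice(list(WORDINGS))
    return version, WORDINGS[version]

for respondent_id in range(5):
    version, question = assign_wording()
    print(respondent_id, version, question)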

31
Q

Within-subject design

A

an experimental design in which researchers evaluate subjects before and after exposure to a given treatment; may also involve comparisons made to the before and after observations of the control group; causality is inferred based on any differences observed between comparisons.
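
A minimal Python sketch of the before/after logic, including the optional comparison against a control group's before and after observations; all scores are invented for illustration.

# Hypothetical before/after scores for the same subjects (within-subject design).
treated_before = [4.1, 3.8, 5.0, 4.4]
treated_after = [5.2, 4.6, 5.9, 5.3]

control_before = [4.0, 4.2, 4.7, 3.9]
control_after = [4.1, 4.3, 4.6, 4.0]

def mean(values):
    return sum(values) / len(values)

# Change within the treated subjects after exposure to the treatment...
treated_change = mean(treated_after) - mean(treated_before)
# ...compared against the change in the control group's observations.
control_change = mean(control_after) - mean(control_before)

print(f"Treated change: {treated_change:.2f}")
print(f"Control change: {control_change:.2f}")
print(f"Difference attributed to treatment: {treated_change - control_change:.2f}")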