4. Quantitative research design Flashcards

1
Q

What is research design?

A

Research design is the overall strategy you pursue in order to answer your research question. It ensures coherence between the different steps of the research process, so the study flows in a logical manner.
* A set of instructions that governs the gathering and analysis of the data

Research designs differ based on:
* Whether they are supposed to describe or explain (what vs. why)
* The logic of comparison they utilize
* What type of variance they explain and what type of variance they control
* Between units (e.g. individuals, groups, time points)? Between units over time?
* The in-built constraints on the quality of the data they produce

2
Q

What is the difference between research design and research methods?

A

Research design is a plan to answer your research question. A research method is a strategy used to implement that plan.

Research designs include:
* Unit of observation - depends on the phenomenon we want to study
* Condition/treatment/independent variable that will influence the dependent variable
* Number of replications
* Level of analysis - depending on the theoretical framework, where does the phenomenon of interest lie? Are the DV and IVs at the individual or a higher level of analysis?

Research designs based on the type of variance they explain: experimental, cross-sectional, longitudinal

Research design is not the same as the method of data analysis. Within the scope of one project, an experimental study can utilize:
* ANCOVA - to compare the results of the treatment and control group
* Factor analysis - to create a scale from an administered questionnaire
* Logistic regression - to predict the probability of individual results within the treatment and control group
(a hedged sketch of what such analyses might look like follows below)
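
A minimal sketch of what such analyses could look like on one simulated experiment, assuming Python with numpy, pandas and statsmodels; the variables (pre, post, passed), group sizes and effect sizes are invented for illustration, and the factor-analysis step is left out because it would need multi-item questionnaire data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 200
group = rng.permutation(np.repeat(["control", "treatment"], n // 2))  # random assignment
pre = rng.normal(50, 10, n)                        # hypothetical pre-test score (covariate)
effect = np.where(group == "treatment", 5.0, 0.0)  # assumed treatment effect, illustration only
post = pre + effect + rng.normal(0, 5, n)          # hypothetical post-test score (DV)
df = pd.DataFrame({"group": group, "pre": pre, "post": post,
                   "passed": (post > 55).astype(int)})

# ANCOVA: compare treatment and control on the post-test, adjusting for the pre-test covariate
ancova = smf.ols("post ~ C(group) + pre", data=df).fit()
print(ancova.params)

# Logistic regression: predict the probability of an individual-level outcome by group
logit = smf.logit("passed ~ C(group) + pre", data=df).fit(disp=False)
print(logit.params)
```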

3
Q

Name 3 authors that have different views on research design

A

McGrath - Typology of research strategies
Saunders - Research onion
Bryman and Bell - Table crossing quantitative and qualitative approaches with experimental, cross-sectional, longitudinal, case study and comparative designs

4
Q

What is an experiment?

A

Random assignment of subjects to a treatment or control group
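
A minimal sketch of random assignment, assuming Python with numpy; the subject IDs and group sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
subjects = np.arange(40)                             # 40 recruited subjects (hypothetical IDs)
shuffled = rng.permutation(subjects)                 # shuffle the pool at random
treatment, control = shuffled[:20], shuffled[20:]    # two equally sized, randomly composed groups
```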

5
Q

What is a cross-sectional design?

A

Survey or secondary data
Observations of multiple individuals (or any other unit) at the same time
The sample should be randomly drawn from the population
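
A minimal sketch of drawing a simple random sample for a cross-sectional study, assuming Python with numpy; the population frame and sample size are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
population = np.arange(10_000)                             # hypothetical sampling frame of unit IDs
sample = rng.choice(population, size=500, replace=False)   # one observation per sampled unit, at one point in time
```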

6
Q

What is a longitudinal design?

A

Survey or secondary data
Multiple observations on multiple individuals (or any other unit) over time
* Panel dataset
* Time series dataset
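
A minimal sketch of the two data structures, assuming Python with pandas; the firms, years and sales figures are invented for illustration.

```python
import pandas as pd

# Long-format panel dataset: each row is one unit observed at one time point
panel = pd.DataFrame({
    "firm":  ["A", "A", "A", "B", "B", "B"],
    "year":  [2020, 2021, 2022, 2020, 2021, 2022],
    "sales": [1.2, 1.4, 1.5, 0.8, 0.9, 1.1],
}).set_index(["firm", "year"])

# A time series dataset is the special case of a single unit observed over time
ts = panel.loc["A"]
```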

7
Q

Name 5 areas of research design quality

A

Statistical conclusion validity
Internal validity
Construct validity
External validity
Ecological validity

8
Q

What is research validity (the quality criteria)?

A
  • Informativeness of a specific study for the development and support of the hypotheses
  • Each design has in-built flaws

(Cook and Campbell)

9
Q

Internal validity

A
  • Can we establish causality between the treatment and the effect (IV and DV)?
  • The experiment is the gold standard when it comes to internal validity
  • Controlled environment and manipulation of the treatment
  • Randomization - randomly assigning subjects to groups should produce statistically equivalent groups, with equal probability distributions of all potential confounders
  • So if the groups are assigned to different treatments, the variance in the dependent variable should be caused by the treatment, since everything else is equal (see the small simulation sketch below)
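
A minimal simulation sketch of this point, assuming Python with numpy; the confounder ("age") and all numbers are invented for illustration. Under random assignment its distribution should be roughly the same in both groups.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000
age = rng.normal(40, 12, n)                          # potential confounder, existing before assignment
assign = rng.permutation(np.repeat([0, 1], n // 2))  # random assignment: 0 = control, 1 = treatment

# Group means of the confounder should be nearly identical under randomization
print(age[assign == 0].mean(), age[assign == 1].mean())
```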
10
Q

Why is internal validity important?

A

Internal validity makes the conclusions about a causal relationship credible and trustworthy. Without high internal validity, an experiment cannot demonstrate a causal link between two variables.

  • It is the difference between descriptive and explanatory research (testing theory)
  • It can be (strongly) established only in randomized experiments
  • All other research designs are compared to the “gold standard” of experiments when it comes to causality and try to achieve a similar level by controlling/measuring as many potential confounding variables as possible
11
Q

Causality

A

A change in one variable causes a change in another variable, other things being equal.

Conditions that must be satisfied for causality:
Empirical association - covariance/correlation

  • If the IV goes up, so does the DV, and vice versa
  • This association has to be substantial (we often check this through statistical tests; a sketch of such a check follows below)
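
A minimal sketch of such a check, assuming Python with numpy and scipy; the simulated IV and DV (and the strength of their relationship) are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
iv = rng.normal(size=300)
dv = 0.5 * iv + rng.normal(size=300)   # DV partly driven by the IV (by construction)

r, p = stats.pearsonr(iv, dv)          # correlation coefficient and its p-value
print(f"r = {r:.2f}, p = {p:.3g}")     # is the association substantial and statistically significant?
```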
12
Q

Which meanings can a correlation between the IV and DV have?

A

Correlation can have multiple meanings

Examples:
* The IV has a causal effect on the DV
* The DV has a causal effect on the IV - reverse causality
* The IV and DV both have a causal effect on each other
* The IV has a causal effect on other variables that then influence the DV
* The DV has a causal effect on other variables that then influence the IV
* The IV and DV have some other cause in common (a common cause; see the sketch below)
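
A minimal sketch of the common-cause case, assuming Python with numpy and scipy; the variables are invented for illustration. The IV and DV correlate even though, by construction, neither causes the other.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
z = rng.normal(size=500)            # common cause influencing both variables
iv = z + rng.normal(size=500)
dv = z + rng.normal(size=500)

r, p = stats.pearsonr(iv, dv)       # clearly non-zero correlation without any causal link between IV and DV
print(f"r = {r:.2f}, p = {p:.3g}")
```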

13
Q

External validity

A

External validity is the extent to which you can generalize the findings of a study to other situations, people, settings, and measures

  • To what extent can we generalize?
  • The causal relationship found would apply beyond the sample that is currently studied, beyond the specific time and specific settings, OR would apply to the specific population, time and settings that the theory is targeting.

The selection of subjects, settings and time matters
* Subjects
  * Convenience samples - self-selection bias
* Settings
  * Cultural context, industry, size of the company…
* Time
  * e.g. marketing research conducted before social media

14
Q

Construct validity

A
  • Do the measures we use represent well the phenomena we want to capture?

* Manipulations/treatments (experiments) are fallible operationalizations of the conceptual variable we want to capture
* Constructs/scales/indexes (surveys) are fallible operationalizations of the conceptual variables we want to capture

15
Q

Statistical conclusion validity

A

Is the design precise and powerful enough to detect the relationship between variables if it indeed exists?

  • Violations of the assumptions of statistical tests
  • E.g. analysis of variance assumes normality and equal variances in each group
  • Independence of observations - very important at the design stage (a sketch of two such design-stage checks follows below)
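
A minimal sketch of two such checks, assuming Python with numpy, scipy and statsmodels; the effect size, alpha, power and simulated group data are illustrative assumptions, not recommendations.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

# Design-stage power: subjects per group needed to detect a medium effect
# (Cohen's d = 0.5) with 80% power at alpha = .05 in a two-group comparison
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(round(n_per_group))

# Equal-variance assumption behind t-tests/ANOVA, checked on simulated group data
rng = np.random.default_rng(13)
g1, g2 = rng.normal(0, 1, 60), rng.normal(0.5, 1, 60)
print(stats.levene(g1, g2))
```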
16
Q

Ecological validity

A
  • Also influences generalizability - would we find the same results about the phenomena/effects in real-life settings where the phenomenon naturally occurs?
  • Conclusions from ecologically invalid research are artefacts of the data collection and analytic tools
  • Nature of the stimuli/task
  • Example: eye-tracking experiments are frequently used to research people's decision-making behaviour
17
Q

Min, max and con

A

The main function of a good research design is to explain and control variance - MAXMINCON:

  • To maximize systematic variance
  • To control extraneous systematic variance
  • To minimize the error or random variance
18
Q

What is variance?

A

How spread out are the data around the mean?
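
For a sample x_1, …, x_n, this is captured by the standard sample-variance formula:

```latex
\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i,
\qquad
s^2 = \frac{1}{n-1}\sum_{i=1}^{n} \left(x_i - \bar{x}\right)^2
```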

19
Q

Why maximize?

A

Maximize the experimental variance - in all research designs we want to make sure we have as much variance caused by/associated with the interesting independent variables as possible

  • Make sure the treatments are really different
  • Make sure that the data collection occurs in a context where variance is present
20
Q

What is control?

A

Control the extraneous variables - there will always be variables that cause variance in the dependent variable whose effects we are not interested in; they need to be minimized, nullified or at least isolated so we can pull them apart

  • Eliminate any variance due to such a variable
  • Randomization
  • Build it into the design - make it a treatment variable
  • When neither is possible, "capture" it - measure it
21
Q

Why minimize?

A

Minimization of error variance
* Error variance - random errors tend to balance each other out (mean zero), but because they are not systematic they are unpredictable (it is not possible to explain them)
* Anything that we do not control ends up lumped in with the error variance
* Potential sources - individual characteristics, momentary conditions, measurement error

How to prevent/minimize error
* Increase control of conditions
* Increase the reliability of measures (see the sketch below for how measurement error hides a real effect)
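
A minimal sketch of that last point, assuming Python with numpy and scipy; the true effect size and the amounts of measurement error are invented for illustration. The same underlying relationship looks much weaker when the IV is measured unreliably.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(17)
true_iv = rng.normal(size=500)
dv = 0.6 * true_iv + rng.normal(scale=0.8, size=500)   # true relationship between IV and DV

reliable = true_iv + rng.normal(scale=0.2, size=500)   # IV measured with little error
noisy    = true_iv + rng.normal(scale=1.5, size=500)   # IV measured with a lot of error

print(stats.pearsonr(reliable, dv)[0])   # close to the true association
print(stats.pearsonr(noisy, dv)[0])      # attenuated: error variance hides the effect
```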