Research Design & Ethics + Ch4 Flashcards
Research design
A master plan that specifies the methods that will be used to collect and analyze the information needed for a research project
Research Ethics: Step 1 - Translating business questions into research questions
- Avoid conflicts of interest
- Emphasize substance over flash
- Look for what’s there, not what you want to see
Research Ethics: Step 2 - Design research program to answer research questions
Valid, appropriate experimental design
- Design experiments that are replicable
- Minimize risk and maximize benefit
- Avoid experimenter demand
- Avoid confirmation bias
Valid, appropriate sampling plans
- Respect vulnerable populations
- Avoid excluding people from participation or benefit
Experimenter demand / demand effect
The participant tries to please the experimenter by giving the result they believe the experimenter is looking for
Research Ethics: Step 3 - Collect Data
Fair, respectful, and professional treatment of participants:
- Obtain informed consent
- Participation is optional and can be discontinued at any time
- Respect privacy and confidentiality
eg: Milgram Experiment
Research Ethics: Step 4 - Analyze Research Results
Valid, appropriate data handling
- Maintain data security
- Do not omit, alter, create or otherwise tamper with data
Research Ethics: Step 5 - Interpret research findings and make recommendations
Valid, appropriate data interpretation
- Avoid biased interpretation
Research Ethics: Step 6 - Communicate recommendations to business audiences
Transparent communication
- Avoid the file drawer problem (publishing only studies with significant results while leaving non-significant studies unpublished)
- Disclose methods
- Disclose sample details
3 ways to design a research program
By source
By methodology
By objectives
Designing research programs by data source
Primary = collection of new data
Secondary = data previously collected
Designing research programs by methodology
Qualitative:
- Interviews
- Observation
- Focus groups
- Text and image analysis
- Diary studies
Quantitative:
- Panel data (secondary)
- Surveys (primary)
Designing research programs by objectives (EDC)
Exploratory (gather insights, formulate hypotheses)
Descriptive (size and characteristics of target market)
Causal/experimental (test hypotheses, response to marketing-mix changes)
Exploratory research
Seeks to define an ambiguous problem
- May be conducted as part of problem definition
- Gains background info
- Defines terms
- Clarifies problem/hypothesis
- Establishes research priorities
Methods of conducting exploratory research (SCEF)
Secondary data
Case analysis
Experience surveys
Focus groups
When is exploratory research complete?
When the problem is fully defined
- Root problems, not just symptoms
- No more “whys”
Descriptive research
Seeks to describe a defined problem
- Answers questions of who, what, where, when, and how
eg: where customers buy brands, who the customers are, how they find out about products, etc
- More rigid than exploratory research
- We want to project a study’s findings to a larger population
eg: understand the average customer
Methods of conducting descriptive research (CSLPBM)
Cross-sectional studies
Sample surveys
Longitudinal studies
Panels (continuous = same questions, discontinuous/omnibus = vary questions)
Brand-switching studies
Market-tracking studies
Causal research
Seeks to answer a defined & described problem
- Determine causality
- Most rigid
eg: does increasing sugar content affect sales?
Methods of conducting causal research
Experiments
Experiment
Where one or more independent variables are manipulated to see how one or more dependent variables are affected, while also controlling the effects of extraneous variables
Extraneous variable
All variables other than IVs that have an effect on DVs
eg: is the car 1s faster per lap because of the driver? (extraneous variables: aerodynamics, tyres)
Experimental design
A procedure for devising an experimental setting so that a change in a DV may be attributed solely to the change in an IV
Before-after testing
An experimental design in which a DV is measured before and after an IV is changed
Control group
A group whose subjects have not been exposed to the change in an IV
Experimental group
A group whose subjects have been exposed to the change in IV
Pretest
A measurement of the DV that is taken prior to changing the IV
Posttest
A measurement of the DV that is taken after changing the IV
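The before-after cards above can be sketched numerically. A minimal illustration with made-up pretest/posttest scores (not data from the course): the control group's change estimates the effect of extraneous variables, so subtracting it isolates the effect of the IV.

```python
# Hypothetical before-after design with a control group.
# All scores are made-up illustration values.
experimental = {"pretest": 50, "posttest": 65}  # exposed to the change in the IV
control = {"pretest": 50, "posttest": 55}       # not exposed

# The control group's change captures extraneous effects (time, maturation, etc.),
# so the effect attributable to the IV is the experimental change minus it.
exp_change = experimental["posttest"] - experimental["pretest"]   # 15
ctrl_change = control["posttest"] - control["pretest"]            # 5
effect_of_iv = exp_change - ctrl_change
print(effect_of_iv)  # 10
```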
Internal validity
The extent to which findings can be attributed to the IV rather than extraneous factors
- Requires large sample sizes, true randomization, and a clear cause-and-effect relationship between variables
External validity
The extent to which results generalize to other groups; requires replicating findings with different populations and in different contexts
Ecological validity
The study needs to mimic what happens in real life
Lab experiments
IVs are manipulated and measures of the DVs are taken in an artificial setting for the purpose of controlling all extraneous variables
Field experiments
IVs are manipulated, and the measurements of the DV are taken in their natural setting
Test marketing
Conducting an experiment or study in a field setting to evaluate a new product or service or other elements of the marketing mix
3 types of test markets
Standard test market
Controlled test market
Simulated test market
Standard test market
One in which the firm tests the product or marketing-mix variables through the company’s normal distribution channels
Controlled test market
One that is conducted by outside research firms that guarantee distribution of the product through prespecified types and numbers of distributors
Simulated test market (STM)
One in which companies test new products in a staged environment that mimics natural conditions
Three criteria for selecting test market region
Representativeness
Degree of isolation
Ability to control distribution and promotion
Correlation
Two variables share some kind of relationship
Causation
One variable causes something to happen in another variable
Correlation can be explained by 4 ways
One-way causality
Two-way causality
A “confound”
Spurious correlation
One-way causality
One variable is the cause of the other one
eg: change in x → change in y
Two-way causality
Both variables may be the cause of each other
eg: change in x → change in y → change in x again
A “confound”
A third variable may be responsible for correlation
eg: change in z → changes in both x and y
Spurious correlation
A mathematical relationship in which two events or variables have no causal connection
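The confound card above can be simulated. A minimal sketch with synthetic data (the variables and the pure-Python Pearson correlation are for illustration only): z drives both x and y, so x and y correlate strongly even though neither causes the other.

```python
import random

random.seed(0)

# Hypothetical confound: z influences both x and y; x and y have no
# causal link, yet they end up strongly correlated.
z = [random.gauss(0, 1) for _ in range(1000)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]

def corr(a, b):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

print(round(corr(x, y), 2))  # strong positive correlation, with no x→y causation
```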
Moderator variable
A second IV changes the effect of your manipulated variable (initial IV) on the DV
eg: amount of exercise (IV), age (MV), weight loss (DV)
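The moderator example above can be sketched as a variable that changes the strength of the IV→DV effect. The function below is a hypothetical toy model, not a real finding: age (moderator) changes how much each hour of exercise (IV) contributes to weight loss (DV).

```python
# Toy model (assumed numbers): the exercise effect is weaker at higher ages.
def weight_loss(exercise_hours, age):
    effect_per_hour = 1.0 if age < 40 else 0.5  # age moderates the IV→DV slope
    return exercise_hours * effect_per_hour

print(weight_loss(10, 30))  # 10.0 — full-strength exercise effect
print(weight_loss(10, 60))  # 5.0  — same IV level, weaker effect (moderated by age)
```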
How to determine if a question is correlational or experimental design?
If a variable cannot be manipulated (eg: whether someone is right- or left-handed), the question requires a correlational design, not an experimental one
How to control an experiment to determine causality
Manipulate one thing at a time
Randomly assign participants to the different levels (conditions) of the IV
eg: manipulate only IV, while holding all other variables constant
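The random-assignment step above can be sketched as follows (participant IDs and group labels are hypothetical):

```python
import random

# Minimal sketch of random assignment: only the IV (eg: price shown) differs
# between groups; extraneous participant traits are expected to average out
# across the random split.
participants = [f"p{i}" for i in range(20)]  # hypothetical participant IDs
random.shuffle(participants)

half = len(participants) // 2
group_a = participants[:half]   # control: sees the original price
group_b = participants[half:]   # experimental: sees the new price

assert len(group_a) == len(group_b) == 10
```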
2 types of study design
Within-subject design
Between-subjects design
Within-subjects design
All participants are exposed to all conditions of the IV
Measures the same subset of participants at different times (eg: check blood pressure before medication, check blood pressure after medication)
Between-subjects design
Measures different subsets of participants, each exposed to a different level of the IV, at the same time (eg: randomly assign participants to group A or group B, then measure both groups at the same time)
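The two design cards above can be contrasted with the blood-pressure example. All readings are made-up illustration values:

```python
# Within-subjects: the SAME participants are measured before and after medication.
within = {
    "alice": {"before": 140, "after": 128},
    "bob":   {"before": 150, "after": 139},
}
within_effects = [v["after"] - v["before"] for v in within.values()]
avg_within = sum(within_effects) / len(within_effects)  # average per-person change

# Between-subjects: DIFFERENT participants per condition, each measured once.
group_no_meds = [140, 150, 145]   # condition A (hypothetical readings)
group_meds    = [128, 139, 133]   # condition B (hypothetical readings)
avg_between = sum(group_meds) / len(group_meds) - sum(group_no_meds) / len(group_no_meds)

print(avg_within, avg_between)  # both estimate the medication's effect on the DV
```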
6 considerations to finding evidence for a causal relationship
- What is the research question?
- What are the variables of interest?
- Should you use a correlational or experimental design?
- What are the conditions?
- Should you use a within-subjects or between-subjects design?
- Should the study be in the lab or the field?