Definitions Flashcards
Learn abstract theory
Business Research
a series of well thought out activities and carefully executed data analysis that help a manager avoid, solve or minimize a problem
Applied Research
to solve a current problem that demands a timely solution. Applies to a specific company, within firms or research agencies
Fundamental Research
generate a body of knowledge by trying to understand how certain problems that occur in organizations can be solved. Research done to make a contribution to existing knowledge. (Teaching us something we didn’t know before, mainly done in universities and knowledge institutes)
Internal Research Benefits
More chance of being accepted, Less time needed to understand the structure of the organization, Less costly
Internal Research Disadvantages
Might be stereotyped, not perceived as experts by the staff, less objective findings
External Research Benefits
Has experience in working with different types of organizations, more knowledge
External Research Disadvantages
High cost and time, might not be accepted by staff
The Hallmarks of Good Research
Purposiveness, Rigor, Objectivity, Parsimony, Replicability, Generalizability
Purposiveness
A definite aim or purpose, knowing the ‘‘why’’ of your research
Rigor
Ensuring a good theoretical base and a good methodological design adds rigor to a purposive study (implies carefulness)
Objectivity
Drawing conclusions based on facts rather than on subjective ideas
Parsimony
Shaving away unnecessary details, explaining a lot with a little
Replicability
Finding the same results if the research is repeated in similar circumstances
Generalizability
Being able to apply the research findings in a wide variety of different settings
Deductive Research
Theory to data, testing theory
Inductive Research
data to theory, building theory
Seven-step Deductive Research Process
- Define the Business problem.
- Formulate the problem statement.
- Develop theoretical framework
- Choose a research design
- Collect data
- Analyze data
- Write-up
Seven-step Inductive Research Process
- Define the business problem
- Formulate the problem statement.
- Provide a conceptual background
- Choose a research design
- Collect data
- Analyze data
- Develop theory
Primary data
Information that the researcher gathers first hand through instruments such as surveys
Secondary data
Data that already exists and doesn’t have to be gathered by the researcher
Business Problem
Gap between actual and desired situation (state)
What makes a good business problem?
Feasibility and relevance
Feasibility
Is it doable?
- Is the problem demarcated? (Make smaller if it is too big)
- Can the problem be expressed in variables?
- Are you able to gather the required data?
Relevance
Is it worthwhile?
Managerial relevance
Who benefits from having my problem solved?
Academic relevance
Has the problem not already been solved in prior research?
Completely new topic (academic relevance)
No research available at all, although the topic is important
New context (academic relevance)
Prior research is available but not in the same context
Integrate scattered research (academic relevance)
e.g., different studies have focused on different IVs/moderators; consequently, their relative importance is not clear
Reconcile contradictory research (academic relevance)
Solve the contradictions through introducing one or more moderators
A good problem statement is….
- Formulated in terms of variables and relations
- Open-ended question
- Stated clearly/ unambiguously
- Managerially and academically relevant
Good research questions…
- Should collectively address the problem statement; one problem statement is translated into multiple research questions
- First theoretical, then practical research questions
- Stated clearly/ unambiguously
Theoretical Research questions
Context questions, conceptualization questions, relationship questions
Practical Research Questions
Relationship Questions, implication Questions
Relationship Questions (Practical)
To what extent does X affect Y?
What is the (relative) magnitude of the relations?
Implication Questions (Practical)
How can practitioners implement your results?
Open question
3 Types of Research Questions
- Exploratory research question
- Descriptive research question
- Causal research question
Exploratory Research Questions
Often relies on qualitative approaches to data gathering (not much is known)
Descriptive Research Questions
Obtain data that describes the topic of interest
Causal Research Questions
Studies whether or not one variable causes another variable to change
A theoretical framework consists of…
Variable definitions
Conceptual model
Hypotheses
Variables
Anything that can take on varying values
Dependent Variable
The phenomenon that you are trying to understand (the measured variable)
Independent Variable
Influences the dependent variable in a positive or negative way (manipulated variable)
Mediating variable
A variable that explains the mechanisms at work between X and Y
Full mediation
X only has effect on Y through the mediating variable
Partial mediation
X has an indirect effect on Y through the mediating variable, but also has a direct effect on Y
Moderating variable
A variable that alters the strength and sometimes even the direction of the relationship between X and Y
Quasi moderation
Moderating variable moderates the relationship between X and Y, but it also has a direct effect on Y
Pure moderation
Moderating variable moderates the relationship between X and Y, but it has no direct effect on Y
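Moderation is usually modelled as an interaction term in a regression. A minimal sketch (all coefficient values are made up for illustration):

```python
def predicted_y(x, m, b0=1.0, b1=0.5, b2=0.0, b3=0.4):
    """Regression with an interaction term: Y = b0 + b1*X + b2*M + b3*X*M.

    A non-zero b3 means M moderates the X->Y relation; b2 = 0 with
    b3 != 0 corresponds to pure moderation (no direct effect of M)."""
    return b0 + b1 * x + b2 * m + b3 * x * m

# The slope of X on Y depends on the level of the moderator M
slope_low = predicted_y(1, 0) - predicted_y(0, 0)    # b1
slope_high = predicted_y(1, 1) - predicted_y(0, 1)   # b1 + b3
print(slope_low, slope_high)
```

Setting b2 to a non-zero value instead would correspond to quasi moderation (direct effect plus interaction).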
The 4 conditions for causality (to establish that a change in the IV causes a change in the DV)
- X and Y co-occur (covary)
- A logical explanation for the effect of X on Y is needed
- X precedes Y in time
- No other cause (Z) explains the co-occurrence of X and Y
Omitted variable bias
Lack of important variables in the model
Hypotheses
A tentative statement about the coherence between two or more variables
Directional (one-sided) hypotheses
Direction of the relationship is indicated. Terms such as positive, negative, more than, less than are used
Nondirectional (two-sided) hypotheses
They postulate a relationship or difference but offer no indication of the direction
Null hypotheses
Expresses NO relationship or difference between groups and is set up to be rejected (almost never presented in research reports)
Alternate hypotheses
Expresses a relationship or difference between groups; the research hypothesis
Negative case method
To test the hypothesis, the researcher should look for data to refute it. When you find data that does not support the hypothesis, the theory needs revision.
Research Design / Plan
Plan for collection, measurement, and analysis of data
Causal Research: Experiment
A data collection method in which one or more IVs are manipulated to measure the effect of this manipulation on the DV
Critical research design decisions
- Choosing between deductive research strategies
- Choosing between statistical techniques
- Choosing between sampling designs
Causal Research Experiment: Lab experiments
- Explore cause and effect relationship in artificial environment
- One or more IVs manipulated, after which the effect on the DV is measured
- High degree of control by researcher
Causal Research Experiment: Field experiments
- An experiment is carried out in the natural environment (work/life goes on as usual)
- Manipulation/ interference possible
Deductive research strategies
Lab experiments, field experiments
Correlation Research
A form of descriptive research (describes relations between variables without manipulation)
Archival Research (Correlation research)
- Research based on data that already exists
- External: data gathered by sources outside of the firm
- Internal: existing company data
Survey Research (Correlation research)
Research based on questionnaire to which respondents record their answers, typically with closely defined alternatives
Contrived Settings
Artificial environment (lab experiment)
Non-contrived settings
Natural environment (field study)
Unit of Analysis
Individual, Dyad (two person interaction), Group, Organization, Culture
What determines the unit of analysis?
The research question
Cross sectional (time horizon)
Data gathered just once, one shot studies
Longitudinal (time horizon)
Study phenomena at more than one point in time (more time and effort, more expensive)
Mixed method research
Research question cannot be answered by qualitative or quantitative approach alone
Is more data better?
Raw data means nothing without the proper tools to analyse or interpret them
Descriptive statistics
Methods of summarizing the data in an informative way
Types of measures for descriptive statistics
Measures of central tendency: mean, mode, median
Measures of dispersion: range, standard deviation, variance and interquartile range
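A minimal sketch of these measures using Python's standard `statistics` module (the data values are made up):

```python
import statistics as st

data = [2, 4, 4, 4, 5, 5, 7, 9]

# Measures of central tendency
mean = st.mean(data)      # arithmetic average
median = st.median(data)  # middle value of the sorted data
mode = st.mode(data)      # most frequent value

# Measures of dispersion
rng = max(data) - min(data)  # range
var = st.pvariance(data)     # population variance
sd = st.pstdev(data)         # population standard deviation

print(mean, median, mode, rng, var, sd)
```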
Inferential statistics
Methods to draw conclusions
Methods to draw conclusions (inferential stats)
Mean difference test, chi square test, ANOVA, regression analysis, logit analysis etc.
4 Types of Measurement Scales
- Nominal
- Ordinal
- Interval
- Ratio
Nominal (Types of measurement scales)
No logical order (ethnicity, social security number, gender)
Ordinal (Types of measurement Scales)
Ranked and ordered (not only categorizes but also rank order them in a meaningful way) (clothing sizes, ranking)
Interval (Types of measurement scales)
Meaningful differences between values, but no natural zero point (the difference between any two neighbouring values on the scale is identical to the difference between any other two neighbouring values on the scale (e.g., thermometer, time on a 12-h clock))
Ratio (Types of measurement scales)
Meaningful differences and ratios between values due to a natural zero point (e.g., income, weight, money, blood pressure)
IV: Nominal/ordinal; DV: Nominal/ Ordinal; Statistical technique:….
Chi-square test
IV: Nominal/ ordinal; DV: interval/ ratio; Statistical technique..
T-test, ANOVA
IV: Interval/ratio; DV: Nominal/ ordinal; Statistical technique:….
Logit analysis
IV: Interval/ratio; DV: Interval/ ratio; Statistical technique:…
Regression analysis
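The four scale-to-technique cards above can be sketched as a small lookup; this is a study aid for picking a default technique, not a substitute for checking each test's assumptions:

```python
def choose_technique(iv_scale, dv_scale):
    """Pick a default statistical technique from the scale types of IV and DV.

    Nominal/ordinal scales are grouped as 'categorical';
    interval/ratio scales are grouped as 'metric'."""
    categorical = {"nominal", "ordinal"}
    iv = "categorical" if iv_scale in categorical else "metric"
    dv = "categorical" if dv_scale in categorical else "metric"
    return {
        ("categorical", "categorical"): "chi-square test",
        ("categorical", "metric"): "t-test / ANOVA",
        ("metric", "categorical"): "logit analysis",
        ("metric", "metric"): "regression analysis",
    }[(iv, dv)]

print(choose_technique("nominal", "ratio"))
```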
Popular rating scales in business research
Likert Scale, Semantic Differential (both treated as interval scales)
Sample
Subset of the population of interest
Sampling
Procedure where a given number of members from a population are selected as representative subjects of that population
The sampling process (4 steps)
- Define the target population
- Determine the sampling frame
- Determine the sampling design
- Determine the sampling size
Target population
Defining in terms of elements, geographical boundaries and time (Who are you targeting as a sample during this study?)
Sampling frame
Physical representation of the target population
Coverage error
When sampling frame does not match the population
Sampling design
Probability sampling vs. Non-probability sampling
Probability sampling
Each element of the population has a known chance of being selected as a subject
Types of probability sampling (4)
- Simple random sampling
- Systematic sampling
- Stratified sampling
- Cluster sampling
Simple random sampling (probability sampling)
Least bias, most generalizable since every element has an equal chance of being chosen
Systematic sampling (probability sampling)
Select a random starting point and then pick every i-th element (low generalizability)
Stratified sampling (probability sampling)
Divide population into groups, then apply SRS within each group
Cluster sampling (probability sampling)
Divide the population into heterogeneous groups, randomly select a number of groups and select each member within this group
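A minimal sketch of simple random, systematic, and stratified sampling with Python's `random` module (the population of 100 IDs and the two strata are made up):

```python
import random

population = list(range(1, 101))  # e.g. 100 employee IDs
random.seed(42)                   # fixed seed for reproducibility

# Simple random sampling: every element has an equal chance
srs = random.sample(population, 10)

# Systematic sampling: random starting point, then every i-th element
i = len(population) // 10
start = random.randrange(i)
systematic = population[start::i]

# Stratified sampling: split into strata, apply SRS within each stratum
strata = {"junior": population[:60], "senior": population[60:]}
stratified = {name: random.sample(group, 5) for name, group in strata.items()}

print(len(srs), len(systematic), sum(len(v) for v in stratified.values()))
```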
Non probability sampling
The elements of the population do not have a known chance of being selected as a subject
Four types of non-probability sampling
- Convenience Sampling
- Quota Sampling
- Judgement Sampling
- Snowball Sampling
Convenience Sampling (Non-probability sampling)
Select subjects who are conveniently available (lowest generalizability)
Quota Sampling (Non-probability sampling)
Fix a quota for each subgroup, select on the basis of specific criteria
Judgement Sampling (Non-probability sampling)
Select subjects based on their knowledge/ professional judgement
Snowball Sampling (Non-probability sampling)
Ask subjects to refer you to other potential subjects ("Do you know people who..?")
A larger sample size leads to….
a lower sampling error
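The link between sample size and sampling error can be illustrated with the standard error of the mean (the sd and n values below are arbitrary):

```python
import math

def standard_error(sd, n):
    """Standard error of the mean: sd / sqrt(n).

    Quadrupling n halves the standard error, i.e. a larger
    sample size leads to a lower sampling error."""
    return sd / math.sqrt(n)

for n in (25, 100, 400):
    print(n, standard_error(10.0, n))
```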
Rules of thumb for sampling
- Sample size >75 and <500 is appropriate
- Multivariate research: >10x parameters to be estimated
- Subsamples: >20-30 per subsample
Survey Research
Research based on a questionnaire to which respondents record their answers, typically with closely defined alternatives
When to use surveys
- Interest in quantitative descriptors
- Want to say something about the population, but you can’t measure the whole population
Categories of Questions
Open-ended
Closed-ended
Single-item Measures
Multi-item Measures
Open-ended Questions
Allows respondents to answer a question in any way they choose
Closed-ended
Asks the respondents to make choices among a set of alternatives
Single-item measures
When a concrete, singular object/attribute is asked about (What is your marital status? What is your profession? NOT "How diverse is the workforce of your company?", etc.)
Multi-item measures
In all other cases
Item-response scales
Comparative scales, non-comparative scales
Comparative scales ( item-response scales)
- Paired comparison
- Constant sum
- Rank ordering
Paired comparison (comparative scales)
Compare pairs of shampoo brands etc.
Constant sum (comparative scales)
Divide 100 points among the following etc.
Rank ordering (comparative scales)
Rank brands, rank companies etc.
Non-Comparative scales
Continuous rating scale
Likert scale
Semantic differentials
Continuous rating scale (non-comparative)
Rate department score from 0 to 100 etc.
Likert scale (non-comparative)
Disagree/agree, 5- or 7- point scale
Semantic differentials (non-comparative)
Good or bad, powerful or weak, modern or old-fashioned
Nominal scale measured questions…
should be mutually exclusive: only 1 answer applies
should be collectively exhaustive: the answer possibilities cover the entire realm of possible answers
Tailored Design Method
- Pre-notification
- Questionnaire (package contains: personalized cover letter, questionnaire, token of appreciation, free return envelope or reply e-mail button)
- Thank you/ reminder
- Replacement questionnaire
- Final contact
Validity measures..
Use measures that have precedence in prior research, or provide sound logic to support that considerable conceptual overlap exists between the measurement/proxy and the construct
Social Desirability Bias
Respondents may not always be willing to communicate their true response in case of sensitive issues; to minimize socially desirable responding, use deliberately leading and/or loaded questions to make the sensitive seem normal
Reliability of survey measures…
For multi-item measures use Cronbach's Alpha, which measures to what extent a set of items is inter-related; highly inter-related = high reliability
Cronbach’s Alpha
Outcome is between 0 and 1; values >0.7 are considered acceptable
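A sketch of Cronbach's Alpha computed directly from its definition (the item scores are made up; real analyses would use a statistics package):

```python
import statistics as st

def cronbach_alpha(items):
    """Cronbach's Alpha = k/(k-1) * (1 - sum(item variances) / total variance).

    items: one list of scores per item, all over the same respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # total per respondent
    item_var = sum(st.pvariance(scores) for scores in items)
    total_var = st.pvariance(totals)
    return k / (k - 1) * (1 - item_var / total_var)

# Three 5-point items answered by five respondents (invented data)
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
alpha = cronbach_alpha(items)
print(round(alpha, 2))
```

For these invented items alpha exceeds the 0.7 rule of thumb, so the set would count as reliable.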
Experimental Research
Data collection method where one or more IVs are manipulated to measure the effect on the DV, and where you control for other causes
Two main objectives of experimental studies…
- To draw valid conclusions about the effect of IVs on the DV
- To make valid generalizations towards a broader group/population
Threats to internal validity
- History effects
- Maturation
- Testing
- Instrumentation
- Statistical regression
- Selection bias
- Mortality
History effects (threats to internal validity)
Events outside the experiment have an impact on the DV during the experiment
Maturation (threats to internal validity)
Biological changes over time
Testing (threats to internal validity)
Prior testing affects the DV
Instrumentation (threats to internal validity)
The observed effect is due to a change in measurement
Statistical regression (threats to internal validity)
Extreme scores at the start become less extreme at the end (regression toward the mean)
Selection bias (threats to internal validity)
Incorrect selection of respondents
Mortality (threats to internal validity)
Drop out of respondents during the experiment
Increase internal validity by…
Randomization of participants
Design control: extra group, control group
Statistical control: measure extraneous variables, and include these in the statistical analysis
How to solve selection bias, instrumentation, history or mortality threats to validity
Randomization of participants
Measuring reliability
Cronbach’s Alpha
Lab experiment
Artificial setting to have as much control as possible over the manipulations
Field experiment
Natural environment where manipulation is possible
Lab experiment has high or low internal/ external validity?
High internal validity
Field experiment has high or low internal/ external validity?
High external validity
In field experiments, ideally…
Participants are (a) unaware that they are taking part in a study and (b) unaware of the different manipulations
Field experiments external validity…
high as it generalizes results to real-world behaviour
Advantages of Field experiments
- Real world behaviours = Real world results
- Authenticity
- Novel insights
Authenticity (advantages of field experiments)
Field experiments provide authentic (a) context, (b) treatments, (c) participants, and (d) outcomes measures
Novel insights (advantages of field experiments)
Field experiments enable (a) to answer questions that cannot be answered in the lab, (b) to check if lab results hold in real-world situations, and (c) to capture second-order and long-term effects
Disadvantages of field experiments
- Time consuming
- Challenging to implement
- Focus on observed behaviour
- High degree of noise
- Ethical consideration
Time consuming (disadvantages of field experiments)
Need to identify potential partners, convince key stakeholders, legal considerations etc.
Challenging to implement (disadvantages of field experiments)
Need to monitor procedure; address organization-specific infrastructure
Focus on observed behaviour (disadvantages of field experiments)
The focus of field experiments is limited to behaviour that can be observed; low ability to investigate underlying psychological processes
High degree of noise (disadvantages of field experiments)
Limited control over experimental procedure; several potential influences that threaten the validity of the results
Ethical consideration (disadvantages of field experiments)
Need to consider if field experiment is ethically correct in context of study
Internal validity threats experimental research
- Poor timing and unexpected situational factors
- Failure to randomize
- Non-compliance/ failure to treat
- Spill-overs & side-effects
- Insufficient sample size
Poor timing & unexpected situational factors (Internal validity threats experimental research)
Changes in the environment unrelated to the study, e.g., weather, technology, news, politics
Failure to randomize (internal validity threats experimental research)
No randomization of participants in groups (due to targeting, technical failures etc.)
Non-compliance / Failure to treat (internal validity threats experimental research)
Subjects that are supposed to receive the treatment do not receive it
Spill overs & side-effects (internal validity threats experimental research)
One participant is affected by the treatment of other participants; no consideration of unexpected side effects
Insufficient sample size (internal validity threats experimental research)
Insufficient power to detect effects (main and interaction)
A/B Testing
A randomized field experiment with two variants, A and B; it includes the application of statistical hypothesis testing ("two-sample hypothesis testing")
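A two-sample proportion z-test is a common way to run the hypothesis test behind an A/B test on conversion rates; a sketch with hypothetical traffic numbers:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates,
    using the pooled proportion under H0 (no difference)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical split: variant A converts 100/1000, variant B 120/1000
z = two_proportion_z(100, 1000, 120, 1000)
print(round(z, 2))  # |z| > 1.96 would reject H0 at the 5% level (two-sided)
```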
Archival Data
Data gathered from existing sources (secondary rather than primary data) collected for another purpose than that of the current study
Archival based research
Research that capitalizes on data that are already in existence (rather than new primary data)
Internal Archival Data
Company records and archives
External Archival Data
Commercially available data sets, publicly available data sets
Archival data is cheap, true or false?
FALSE, databases have to be paid for, at times costing more than $50k a year!
Archival data is quick, true or false?
FALSE, archival data often consists of piecing together multiple data sets, which can be more time intensive than the analysis itself
Why use archival research? (5 reasons)
- Tap into industry wisdom
- Power
- Examining effects across time
- Examining effects across countries
- Examining socially sensitive phenomena
Tap into industry wisdom (reasons to use archival research)
Learn from past successes and failures in the industry when you cannot rely on your own experience
Power (reasons to use archival research)
High likelihood of rejecting H0 when H0 is false = low likelihood of missing a real effect
Examining effects across time (reasons to use archival research)
Examine whether a phenomenon changes over time, or examine the duration of an effect
Examining effects across countries (reasons to use archival research)
Primary international research is expensive and cumbersome
Examining socially sensitive phenomena (reasons to use archival research)
Archival data= unobtrusive (what people do rather than say, minimize the opportunity of distorted responses)
Sources of measurement unreliability in survey research
- Missing observations
- Inaccurately recorded observations
- Fake observations
Big Data
Data sets that are so big and complex that traditional data processing software are inadequate to deal with them
10 Characteristics of Big Data
Good for research: 1. Big, 2. Always-on, 3. Nonreactive
Bad for research: 4. Incomplete, 5. Inaccessible, 6. Non-representative, 7. Drifting, 8. Algorithmically confounded, 9. Dirty, 10. Sensitive
Big (advantage Big Data characteristic)
Rare events, different reactions across units, heterogeneity, small effects
Always on (advantage big data characteristic)
Real-time estimates of economic activity
Non reactive (advantage big data characteristic)
Measurement in big data sources is less likely to change behaviour
Incomplete (disadvantage big data characteristic)
Leaves out missing information (demographics, behaviour on other platforms etc.)
Inaccessible (disadvantage big data characteristic)
Legal, business or ethical barriers to giving outside researchers access to data
Non-representative (disadvantage big data characteristic)
Can’t make inferences about population based on sample
Drifting (disadvantage big data characteristic)
User can change, usage changes, platform changes
Algorithmically confounded (disadvantage big data characteristic)
Platform design can influence behaviour, introducing bias and noise to study
Dirty (disadvantage big data characteristic)
Can be loaded with junk or spam (bots and trolls)
Sensitive (disadvantage big data characteristic)
Can be damaging when made public, data re-identification: matching anonymous data with publicly available data in order to identify an individual
The 4 V’s of Big data
- Volume
- Velocity
- Variety
- Veracity
Volume (4Vs big data)
Size of the data (the n x p data matrix)
Velocity (4Vs big data)
Speed of data processing; big data are usually sparse, so most elements are zero or missing
Variety (4Vs big data)
Number of types of data
Veracity (4Vs big data)
Uncertainty of data
Three ways to learn from big data
- Measuring (includes counting)
- Prediction
- Approximating experiments
Measuring (ways to learn from big data)
Which brands compete most closely with each other (example); perceptual maps: a marketing tool to display customer perceptions of competing brands, traditionally collected via surveys
Prediction (ways to learn from big data)
Can Google predict the flu? Trends in searches; traditional data has gaps whereas big data is always on; nowcasting = predicting the present
Approximating experiments (ways to learn from big data)
Big data have so many observations, they can be matched
Create pairs of observations who are the same in every way, except the variable you want to study
Big Data Ethics
Just because it’s mathematical doesn’t make it objective or fair; Audit the algorithm
Supervised learning paradigm
Model variation; don’t trust anyone who says they have a good learning algorithm unless you see results of careful cross-validation, because flexible models can lead to overfitting (= following errors too closely)
Spurious correlations
Fit well in the beginning (in-sample) but worse in the end (out-of-sample)
Construct Equivalent
Are we studying the same phenomenon in different countries?
Measurement equivalent
Are the phenomena that we study measured in the same way in terms of wording (translation equivalence) and scaling (metric equivalence)?
Obtaining metric equivalence
Pre-data collection: pictorial response scales; post-data collection: standardized variables (z-scores)
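Post-data-collection standardization (z-scores) can be sketched as follows; the two countries' ratings are invented so that they differ only by a uniform response-style shift, which standardization removes:

```python
import statistics as st

def standardize(scores):
    """Within-sample z-scores: subtract the mean, divide by the std dev."""
    mean, sd = st.mean(scores), st.pstdev(scores)
    return [(x - mean) / sd for x in scores]

# Hypothetical 5-point ratings from two countries with different response styles
country_a = [4, 5, 5, 4, 5]  # tends toward the high end of the scale
country_b = [2, 3, 3, 2, 3]  # same pattern, shifted down by two points
za, zb = standardize(country_a), standardize(country_b)

# After standardization the two response patterns coincide
print([round(x, 2) for x in za] == [round(x, 2) for x in zb])
```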
Response style bias
Extreme responding and Socially desired responding
Egoistic response tendency
Superhero, masculine countries
Moralistic Response Tendency
Saint, feminine and collectivist countries
Sampling Equivalence
Achieve representative and comparable samples
Exploratory research
To acquire an in-depth understanding when prior theory is absent; often based on qualitative data (words rather than numbers are used to build theory; the phenomena of interest involve words and language)
Fundamental Characteristics of Qualitative Data
- Open-ended
- Concrete and vivid
- Rich and nuanced
Open-ended (characteristics qualitative data)
No need to predetermine precise constructs; flexible and exploratory
Concrete and vivid (characteristics qualitative data)
See the world through the eyes of the subjects
Rich and nuanced (characteristics qualitative data)
Capture details
Sources of qualitative data
- Primary: field research, interviews
- Secondary: desk research, annual reports and other company records, blogs, websites etc.
When NOT to use exploratory research
- When results are to be generalized to the total population
- When numbers are needed to make a decision
Research strategies of exploratory research
- In-Depth interviews
- Focus Groups
- Observation
In-Depth interviews
A conversation where the researcher asks questions and listens to the respondent's answers
Focus groups
An interview on a group basis of 8 to 10 participants, chosen based on their familiarity with the topic; discussion is facilitated by the moderator
Observation
The watching and analysis of the behaviour of employees, consumers, investors etc.
Reliability Exploratory Research
Interjudge reliability
Interjudge reliability
Degree of agreement among raters/judges
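Interjudge reliability is often quantified as the observed agreement corrected for chance agreement (Cohen's kappa); a sketch with made-up codings from two raters:

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's kappa: (observed - expected agreement) / (1 - expected),
    where expected agreement is what chance alone would produce."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[k] * c2[k] for k in c1) / n ** 2
    return (observed - expected) / (1 - expected)

# Two coders labelling ten interview fragments (invented labels)
rater1 = ["pos", "pos", "neg", "pos", "neg", "neg", "pos", "neg", "pos", "pos"]
rater2 = ["pos", "pos", "neg", "neg", "neg", "neg", "pos", "neg", "pos", "neg"]
print(round(cohen_kappa(rater1, rater2), 2))
```

Kappa ranges from -1 to 1; 1 means perfect agreement and 0 means agreement no better than chance.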
Validity Exploratory Research
Interviewer biases, interviewee biases
Interviewer/interviewee biases
Loaded questions, expressing one’s own opinion and judging whilst asking questions
Selective perception
Hearing what you want to hear from the interviewee, observing what you want to observe
Obedience
Desire to please the interviewer
Conformity
Do/think what the majority does or thinks (normative social influence)