Week 4 Exam Flashcards

1
Q

Strategies associated with quantitative data:

A

Survey:
1) Often used to test theory
2) Establishes correlation, not causation
3) New data collected by the researcher (primary)
4) Quantitative data or qualitative that is quantified.

Experiments:
1) Used to establish causation and test theory
2) New data collected by the researchers
3) Quantitative data or qualitative that is quantified

Decision science:
1) Used to solve problems and improve practices/processes, to predict and forecast
2) Mostly uses mathematical modeling
3) May use any type of data (primary or secondary); quantitative often prevails, but simulated data is also used

Secondary analysis:
1) Often used to test theory
2) Data collected by others
3) Associated with quantitative data, but can also be qualitative - can test theory and also explore new topics
4) A specific strategy often used in some fields (finance,…)

2
Q

Hypotheses in theory-testing:

A

A number of hypotheses are developed based on the existing literature

A conceptual model is built, representing all hypothesized relationships (see slide 12, week 4)

A good formulation of a hypothesis includes the direction of the relationship (increases, decreases, etc.)

Data is collected to measure variables

Statistical analysis is used to test whether the hypotheses should be rejected or not.
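The last step can be sketched with a simple permutation test in plain Python. The study-hours/exam-grade numbers below are invented for illustration, and a permutation test is only one of many ways to test a hypothesized relationship:

```python
import random
import statistics

# Hypothetical data: study hours (IV) and exam grades (DV) for 8 students.
hours = [2, 4, 5, 6, 7, 8, 9, 10]
grades = [55, 60, 58, 70, 72, 75, 80, 85]

def correlation(x, y):
    """Pearson correlation coefficient."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

observed = correlation(hours, grades)

# Permutation test: shuffle the grades to simulate "no relationship" and
# count how often a correlation at least as strong appears by chance.
random.seed(42)
n_perm = 5000
extreme = sum(
    1 for _ in range(n_perm)
    if abs(correlation(hours, random.sample(grades, len(grades)))) >= abs(observed)
)
p_value = extreme / n_perm
```

A small p-value means the null hypothesis of "no relationship" is rejected; with a directional hypothesis you would also check the sign of the correlation.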

3
Q

Types of variables in hypotheses:

A

1) Independent/explanatory variable - explains the change in the dependent variable (e.g., study hours)
2) Dependent variable - changed or influenced by the IV (e.g., exam grade)
3) Moderating variable - changes the strength of the relationship between two variables (e.g., sleep quality)
4) Mediating or intervening variable - transmits the effect of one variable on another (e.g., understanding of the material)

In experiments it is particularly important to define these correctly

4
Q

Design of surveys:

A

1) Which variables to measure (incl. control variables) - can be nominal or ordinal (categorical), or interval or ratio (numeric)
2) Operationalization - check whether the constructs have been measured before; if so, use validated scales
3) Questionnaire design (professional, attractive design; question order; length); include attention checks; self-administered vs. interviewer-administered
4) How to pilot test your questionnaire
5) Which population, sampling frame, sample size, and sampling approach?
6) Combination with other methods? - e.g., do some interviews beforehand to understand what to ask, or use secondary data to measure a variable like performance.

5
Q

Methodological quality of surveys - how to improve:

A

Quality of the questionnaire (poor quality leads to risks to construct validity):
1) Do the questions cover what they are intended to (the construct)?
2) Reliability check (internal consistency) for multi-item constructs via Cronbach's alpha
3) Use of previously validated scales for constructs is recommended
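As a sketch of the reliability check, Cronbach's alpha can be computed by hand in plain Python; the Likert-style item scores below are invented, and in practice you would use a statistics package:

```python
import statistics

# Hypothetical responses: 5 respondents x 3 items measuring one construct
# (1-5 Likert scores). Rows are respondents, columns are items.
responses = [
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
]

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(rows[0])                       # number of items
    items = list(zip(*rows))               # columns = items
    item_vars = [statistics.variance(col) for col in items]
    total_var = statistics.variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

alpha = cronbach_alpha(responses)
# Common rule of thumb: alpha >= 0.7 suggests acceptable internal consistency.
```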

Quality of sampling (poor sampling leads to external validity problems, i.e., the threat of sampling bias):
1) The sampling frame is important (a sampling frame is the list or source containing all elements or people in the target population that you can actually sample from)
2) If non-random, reflect on why and how it is justifiable

Response rate and threat of non-response bias (a low response rate and bias lead to construct validity problems):
1) Length of questionnaire
2) Make efforts to maximize response rate

Response quality (poor instructions lead to construct validity problems):
1) Clarity of instructions
2) Checks for experience and knowledgeability of the respondent
3) Analysis of missing values (check if missing at random)

Internal validity problems in surveys originate from poor analysis.

6
Q

Design of experiment:

A

It is important to develop a good conceptual model (hypotheses). Be aware of internal and external validity risks.

1) Define your variables
2) It is important to identify types of variables upfront - otherwise you cannot design your experiment
3) Decide on operationalization
4) Identify other variables that may affect the measured variable (the issue of confounding variables)
5) Dealing with confounding variables: Statistical control, Randomization, Restriction, Matching.

Confounding variables are external to the model (extraneous); may be difficult to observe. They confound (distort) the “true” relationship between two variables. They affect both the dependent variable and the independent variable and hence may suggest a causal relationship that does not exist.

6) Type: lab (online or offline), field, or quasi-experiment
7) Design (setting, treatment, instructions, measurements, records…)
8) Who will your participants be, why these, and how will they be recruited?
9) Control variables
10) Test only one relationship at a time (long experiments may lead to fatigue)
11) If you need to test two or more interventions, you need different groups - e.g., to study the effect of promotional gift size on the likelihood of the next purchase: one group with a small gift, one group with a large gift, and possibly also a control group (no gift)

7
Q

Execution of experiments:

A

Simplified example experiment procedure:
1) Assign participants to groups
2) Measure dependent variable
3) Manipulate IV (in experimental group only)
4) Remeasure the DV
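Step 1 above, random assignment to groups, can be sketched like this (the participant IDs, group names, and fixed seed are all hypothetical choices):

```python
import random

# Hypothetical participant IDs; in a real study these come from recruitment.
participants = [f"P{i:02d}" for i in range(1, 21)]

def randomize(ids, groups=("control", "treatment"), seed=123):
    """Randomly assign participants to equally sized groups."""
    rng = random.Random(seed)      # fixed seed so the assignment is reproducible
    shuffled = ids[:]
    rng.shuffle(shuffled)
    size = len(shuffled) // len(groups)
    return {g: shuffled[i * size:(i + 1) * size] for i, g in enumerate(groups)}

assignment = randomize(participants)
```

Randomization (rather than letting participants choose a group) is what allows confounding variables to be balanced across groups on average.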

Maximize control and stability of the experimental environment:
1) Same instructions and time for all participants
2) If the study is repeated, do so on the same day of the week and at the same time
3) The setting is always the same
4) The experimenter's interaction is always the same

Control of participants and experimenters:
1) Participants should not know which group they belong to (blind design)
2) Experimenters should not know which group they are working with (double blind design)

The experimental logbook should be filled in constantly
Maximize consistency - no change in measurement instruments, no change in experiment, etc.

8
Q

Methodological quality of experiments:

A

Construct validity - refers to the quality of the measurement of variables
1) Pre-tests (pilots) improve construct validity as the design is improved

Internal validity - concerns the extent to which causal conclusions can be drawn
1) Lab experiments (if designed and executed properly) lead to strong internal validity
2) Good control during execution improves internal validity

External validity - a particularly big risk in management studies
1) The lab does not fully replicate the social, organizational, or behavioral conditions of a real-life setting (also referred to as "ecological validity")
2) Depending on the study, it can be replicated with different participants to increase generalizability

Reliability:
Maximize detail and transparency in documentation - for replication.

9
Q

Decision science, what is:

A

The discipline concerned with the development and application of quantitative methods and techniques to support decision-making.

Examples: Monte Carlo simulation, linear programming, decision trees
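As a hedged illustration of the first example, a Monte Carlo simulation evaluates a decision under uncertainty by averaging the outcome over many simulated scenarios. All numbers below (prices, the demand distribution, the candidate stock levels) are invented:

```python
import random

random.seed(7)

# Hypothetical decision problem: how many units to stock when demand is uncertain.
PRICE, COST = 10.0, 6.0            # sell price and unit cost (invented numbers)

def simulate_profit(stock, n_runs=10_000):
    """Monte Carlo estimate of expected profit for a given stock level."""
    total = 0.0
    for _ in range(n_runs):
        demand = random.gauss(100, 20)        # one uncertain demand scenario
        sold = min(stock, max(demand, 0.0))   # cannot sell more than stocked
        total += PRICE * sold - COST * stock
    return total / n_runs

# Compare a few candidate decisions before committing to one.
profits = {s: simulate_profit(s) for s in (80, 100, 120)}
best_stock = max(profits, key=profits.get)
```

This is exactly the "save costs" argument from card 11: different options are evaluated in the model before anything is implemented.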

10
Q

Core of decision science - decision problem:

A

A decision problem has three elements:
1) Decision - a choice between alternatives
2) Objectives - which objectives to take into account when making the decision
3) Constraints - which constraints should be respected by the decision? Which decisions are feasible?
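The three elements can be made concrete in a tiny model. The marketing example below is invented: the decision is how many ads (x) and discounts (y) to run, the objective is expected extra sales, and the constraint is a budget:

```python
# Tiny illustration of a decision problem (all numbers are invented).
BUDGET = 100

def objective(x, y):
    """Expected extra sales (an assumed linear response function)."""
    return 3 * x + 2 * y

def feasible(x, y):
    """Constraint: total cost of ads (4 each) and discounts (1 each) within budget."""
    return 4 * x + 1 * y <= BUDGET

# Decision: enumerate the alternatives and keep the best feasible one.
best = max(
    ((x, y) for x in range(0, 26) for y in range(0, 101) if feasible(x, y)),
    key=lambda d: objective(*d),
)
```

Brute-force enumeration works here only because the decision space is tiny; for millions of alternatives you would use a dedicated method such as linear programming.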

11
Q

Why use quantitative methods (in Decision science):

A

1) Save costs (evaluate different options before implementing)
2) Handle large decision space - quickly identify best decision among millions of alternatives
3) Make systematic trade-offs - when multiple objectives must be considered (e.g., volume vs. continuity of access)
4) Deal with uncertainty - identify decisions that are optimal given uncertain future
5) Avoid biases
6) Enable automated or hybrid decision-making
7) Do risk/what-if analyses - assess the impact of certain parameters

12
Q

Process for decision science:

A

1) Turn a real-life problem into a modelled problem
2) Solve the modelled problem (obtain solutions) and return decisions (or insights)

Model -> an abstract (mathematical) representation of the real-world problem;
never a 100% representation - assumptions are needed to develop it.

Steps:
1) Understand the problem: problem definition - what decisions need to be made? What are the objectives and constraints? It is important to build a conceptual model of the problem before the mathematical model

2) Formulate mathematical model - construction of a model that mimics the real-world problem

3) Select/develop a solution method - how to obtain a "good" or optimal solution for the modelled problem

4) Solve the problem – perform computational tests (called numerical analysis) by running the model on one or multiple case studies – potentially make adjustments and repeat the process

5) Present the results

Feedback throughout all the steps, making it iterative.

13
Q

Decision science in research cycle:

A

Management problem; knowledge question; review of evidence -> Understand the problem

Review of evidence; research design -> Formulate the mathematical model; select/develop the solution method

Data analysis; research outcomes -> Solve the problem

Research outcomes; recommendations -> Present the results

14
Q

Data used in Decision science:

A

To use quantitative methods, quantitative data is required:
1) Can be both primary and secondary
2) Can be both empirical and simulated: simulated data is often used to compensate for poor empirical data (to insert or replace values); it is also needed to create multiple scenarios and is especially useful for testing a solution method

To formulate the model, it is key to also use qualitative data (e.g., interviews with key stakeholders)

15
Q

Three types of questions in decision science:

A

Methodological - How can we model or solve this type of problem? Mainly about tools and methods

Theoretical - What are the features of the problem or the solution? Mainly about the properties/characteristics. Focuses on understanding the problem or method better.

Practical - What is the best solution for this specific real-world case? Mainly about real-life applications

16
Q

Methodological quality in decision-science:

A

Differs from that in the other strategies.

Typical risks are:
Misunderstanding the problem (so the built model is also wrong):
1) Demonstrate that effort was taken to understand the issue (interviews) - validity

Your model is too simple or too complex, given its purpose:
1) Demonstrate that you thought about the purpose of the model, and explain your modelling choices - practicality and validity

Your solution method does not align with its purpose
1) Demonstrate that you took effort to understand the purpose of the solution method (Should it be low-tech? Fast? Easy to explain?) and evaluate different methods - practicality

Your solution is far from the optimal solution (which may not be known yet):
1) Analyze how far your solution can be from the optimum (use a case study to test this) - practicality

Your solution is not robust (stable) - it is too sensitive to a change in a parameter:
1) Sensitivity analysis reports how much your solution deviates from the best possible (or optimal) one if input parameters change or if input data is imprecise - validity
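Sensitivity analysis can be sketched on a toy model. The one-line demand function and every number below are assumptions for illustration only: re-solve while varying an uncertain input parameter and report the deviation from the base case:

```python
# Toy model (invented): profit of a product whose demand falls as price rises.
def profit(price, demand_slope=5.0):
    demand = max(0.0, 200 - demand_slope * price)   # assumed demand model
    return price * demand

base_price = 20.0
base = profit(base_price)

# Sensitivity analysis: vary the uncertain parameter (demand_slope) by +/- 20%
# and report how much the outcome deviates from the base case.
deviations = {
    slope: profit(base_price, slope) - base
    for slope in (4.0, 5.0, 6.0)
}
```

If the deviations are large relative to the base outcome, the solution is not robust, and that should be reported alongside the recommendation.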

17
Q

What is secondary analysis:

A

Secondary analysis involves the use of existing data, collected for the purpose of a prior study, to pursue a research interest distinct from the original research

A new RQ or an alternative perspective;
A new theoretical framework or another method of statistical analysis

But it also includes collecting data about the organization or person of interest indirectly, i.e., desk research (email traces, etc.)

In sum: this definition means that secondary analysis is about using collected and organized data from someone else, plus data you collected and compiled from texts, audio, or video in an unobtrusive way

18
Q

When would you use secondary data analysis for research:

A

1) If you have a very useful dataset for your study
2) You want to study organizations or people that cannot be accessed directly
3) The topic is sensitive
4) You want to study historical activities of people or organizations that no longer exist (historical study), or you want to cover a very extended period (longitudinal study)
5) In some fields (e.g., finance) it is the primary analysis method, due to the abundance of secondary data
6) Used mainly for theory testing, but in some cases also for exploration and theory building (decision science)

In theory testing:
1) Similar logic to surveys (a managerial problem or the need to test theory; review the literature and formulate hypotheses)
2) In finance and accounting, the hypotheses might be implicit

19
Q

Advantages and disadvantages of secondary data:

A

Advantages:
1) Convenience (can be obtained faster)
2) Can be cheap or free
3) Size and range of datasets available
4) Easier opportunity for study replication (reliability)
5) You can explain change and evolution
6) More objectivity in relation to data
7) Can be high quality
8) Unobtrusive nature

Disadvantages:
1) Possible mismatch between data and RQ
2) Risk of missing, incomplete, obscure, biased data
3) Temptation to start from data rather than RQ - ethical problem
4) The time cost of learning about the new data
5) High quality data can be expensive
6) It can be time consuming to find all necessary data

20
Q

What to do if secondary data is not sufficient for the research:

A

1) Combine several data sources (merge to fill gaps, or add your own data through desk research)
2) Adjust (slightly) research focus
3) Investigate which research problems can be studied with the available data

  • You can think about what important and interesting questions can be studied using available data, but…
  • Be careful about this approach: data first, hypothesis second is NOT acceptable in research
21
Q

Quality assessment of secondary data:

A

Always look at coverage and measurement to assess whether the data is suitable for you:
1) Why does the dataset exist in the first place?
2) Does the data cover the population you intend to study, the time period, and the variables? (Variables you don't need can be dropped.)
3) Are the measures right?

Risks in validity and reliability:
1) Assess reputation of the source
2) Try to understand the methods and processes through which the data was collected
3) Consider the risk of measurement bias - deliberate or unintentional distortion of the data

  • Unclear or poor operationalization/measurement of variables = risks to construct validity
  • Data collected from wrong age group, geographical region, etc. = risks to external validity
  • Unclear process of working with secondary data, lack of documentation = risk to reliability
22
Q

Approach to working with quantitative data:

A

1) Collect
2) Structure and record
3) Clean
4) Analyze
5) Interpret analysis
6) Draw conclusions
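The six steps can be sketched as a minimal pipeline. The raw responses, the 1-5 valid range, and the choice of descriptive statistics are all invented for illustration:

```python
import statistics

# Hypothetical raw responses: some entries are missing (None) or out of range.
raw = [4, 5, None, 3, 99, 4, 2, None, 5]     # steps 1-2: collected and recorded

# Step 3: clean - drop missing values and impossible scores (valid range 1-5).
clean = [x for x in raw if x is not None and 1 <= x <= 5]

# Step 4: analyze - simple descriptive statistics.
mean_score = statistics.mean(clean)
spread = statistics.stdev(clean)

# Steps 5-6: interpretation and conclusions happen outside the code and
# must refer back to the research question, not just to the numbers.
```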