Combi Weeks 1–5 Flashcards

1
Q

What is theory-testing research and when is it appropriate?

A

It tests existing theories using quantitative methods (e.g., surveys, experiments). Appropriate when theory exists but hasn’t been empirically verified.

2
Q

What are 5 characteristics of theory-testing research?

A

Deductive logic

Based on existing theories

Hypotheses tested quantitatively

Structured design (surveys, experiments)

Aims to confirm/refute expected relationships

3
Q

What is theory-building research?

A

It develops new theoretical insights based on patterns found in data, often through qualitative methods like case studies.

4
Q

What type of research is used when the phenomenon is new and not much is known?

A

Exploratory research.

5
Q

What is decision science?

A

A type of research aiming to find the optimal solution to a managerial problem, often using modeling and simulations.

6
Q

Which logic fits theory-testing research?

A

Deductive reasoning.

7
Q

Which logic is typical for case study-based theory-building?

A

Inductive or abductive reasoning.

8
Q

What are the steps of the research cycle?

A

management problem
knowledge question
review of evidence (literature review)
research design
data collection
data analysis
research outcomes (results & discussion)
recommendations to management

9
Q

What is discussed in the ‘Discussion’ section of a paper?

A

Interpretation of findings / conclusion of the research question,
theoretical and managerial contributions,
limitations,
suggestions for future research.

10
Q

If a paragraph talks about previous studies and concepts, which section does it belong to?

A

Literature review.

11
Q

If a paragraph contains quotes from participants or numerical results, which section is it?

A

The Findings or Results section.
-> 'Findings' is the term for qualitative studies
-> 'Results' is the term for quantitative studies

This section comes before the Discussion.

12
Q

Where do we typically include implications for managers?

A

In the research outcomes, i.e., the Discussion section and the recommendations to management.

13
Q

What is construct validity and how do you improve it?

A

Whether the measurement truly reflects the concept. Improve it via validated scales, pilot testing, and expert feedback.

14
Q

What is internal validity?

A

Whether the observed effect is truly caused by the independent variable. High in experiments with randomization.

15
Q

What is external validity?

A

Generalizability of results to other contexts, populations, or times.

16
Q

How does triangulation improve validity?

A

By combining multiple sources or methods to cross-check findings, it reduces bias.

17
Q

Which types of validity are most at risk in survey studies?

A

External validity and construct validity (due to sampling and self-report issues -> misinterpreted questions).

-> internal validity also suffers here, as a consequence of poor construct validity

18
Q

What are the 4 types of data levels (scales)?

A

Nominal (category), Ordinal (order), Interval (order & equal intervals, no true zero), Ratio (like interval, but with true zero).
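As an illustrative summary (my own mapping, assuming standard statistics conventions, not taken from the course), each level permits different example variables and statistics:

```python
# Hypothetical reference table: measurement level -> example variable and
# the statistics that level conventionally permits.
levels = {
    "nominal":  {"example": "industry sector",    "stats": ["mode", "frequencies"]},
    "ordinal":  {"example": "Likert rank",        "stats": ["median", "percentiles"]},
    "interval": {"example": "temperature in C",   "stats": ["mean", "std dev"]},
    "ratio":    {"example": "revenue in EUR",     "stats": ["mean", "ratios (e.g. 2x)"]},
}

for level, info in levels.items():
    # Each level also permits the statistics of the levels above it in this list.
    print(f"{level}: e.g. {info['example']}; allows {', '.join(info['stats'])}")
```

Each level inherits the permissible statistics of the less restrictive levels before it, which is why a ratio variable supports the richest analysis.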

19
Q

Why is using validated scales important?

A

Ensures reliability and construct validity; allows for comparison with other studies.

20
Q

What is Cronbach’s alpha used for?

A

Assessing internal consistency of multi-item scales; should ideally be ≥ 0.7.
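Cronbach's alpha can be computed by hand from item scores: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal Python sketch, using made-up Likert responses (the data are illustrative, not from the course):

```python
# Minimal sketch of Cronbach's alpha for a k-item scale.
# Rows = respondents, columns = items (hypothetical 5-point Likert data).
def cronbach_alpha(scores):
    k = len(scores[0])                       # number of items
    def var(xs):                             # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

data = [
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
]
alpha = cronbach_alpha(data)
print(round(alpha, 2))  # -> 0.92 for this toy data; >= 0.7 is conventionally acceptable
```

High inter-item correlation drives alpha up, which is exactly what "internal consistency" means: the items move together as if measuring one construct.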

21
Q

What is the risk of using only one method in surveys?

A

Common method bias – artificially inflated correlations.

22
Q

What is the difference between interviews and case studies?

A

Interviews are a data collection method; case studies are a full research strategy using multiple data sources.

23
Q

What type of bias is common in interviews?

A

Informant bias (e.g., social desirability bias, recall bias) and researcher bias (interpretation).

24
Q

What is saturation in qualitative research?

A

The point at which gathering more data no longer leads to new themes, insights, or information.

25
Why use multiple case studies instead of one?
Increases generalizability, allows cross-case comparison, supports theory-building.
26
What are the 3 criteria for causality?
1. Covariation (X and Y vary together), 2. Temporal precedence (X before Y), 3. No confounders (alternative explanations ruled out).
27
What is a double-blind design?
Neither participants nor researchers know the assigned conditions. Prevents bias.
28
What is HARK-ing and why is it problematic?
Hypothesizing After Results are Known; undermines scientific integrity.
29
Name common threats to internal validity in experiments.
Confounding variables or poor randomization.
30
Why pre-test your experimental design?
To detect errors, check understanding, and improve construct validity.
31
What is the key difference between systematic search and snowballing?
Systematic = from big pool to narrow set; Snowballing = starting from a key article and expanding via references/citations.
32
When do you use academic vs. professional literature?
Academic = theory building/testing; Professional = current practices, context understanding.
33
Is using AI to fill in missing survey data ethical? and what does it violate?
No – it's data fabrication and violates research integrity.
34
What type of validity is stronger in experiments compared to surveys?
Internal validity – due to control over variables and randomization.
35
What type of validity is often better in surveys than in experiments?
External validity – surveys can reach broader populations and more realistic settings.
36
What are some advantages of using surveys?
Cost-effective, scalable, can measure many variables at once, especially latent constructs.
37
What are limitations of experiments in management research?
Limited external/ecological validity; not all phenomena can be ethically or practically manipulated.
38
What is a systematic literature search approach?
A structured, transparent and protocol-based search using defined databases, keywords, and inclusion/exclusion criteria.
39
What are the five steps in a systematic literature search?
1) Build pool, 2) Filter, 3) Assess relevance, 4) Analyze and cluster, 5) Refine or stop.
40
What is snowballing in literature search?
Starting from one article, then following backward (references) and forward (citations) to find more. Snowballing can also feed into step 1 (building the pool) of the systematic literature search approach.
41
What is the main risk of a snowballing-only strategy?
It may miss broader literature and introduce bias if the starting article is unrepresentative.
42
How can you avoid Type I and Type II errors in your literature search?
Type I (including irrelevant studies): use stricter search terms. Type II (missing relevant studies): include synonyms and broader terms.
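This trade-off can be sketched with a toy keyword filter; the article titles and search terms below are made up. Stricter terms miss relevant work (Type II risk), broader terms catch it at the cost of potential irrelevant hits (Type I risk):

```python
# Hypothetical paper titles; only the first two are relevant to the topic.
titles = [
    "Employee motivation in SMEs",
    "Worker engagement and rewards",
    "Photosynthesis in algae",
]
strict = ["motivation"]                          # narrow search terms
broad = ["motivation", "engagement", "drive"]    # adds synonyms

def hits(terms):
    # Return every title containing at least one search term.
    return [t for t in titles if any(term in t.lower() for term in terms)]

print(len(hits(strict)))  # 1: misses the 'engagement' paper (a Type II error)
print(len(hits(broad)))   # 2: finds it, though broader terms can add noise
```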
43
Is it ethical to use AI to clean or complete your data?
Only if transparently reported and done in line with ethical standards. Inventing missing data is not allowed.
44
What is the role of informed consent in research ethics?
Ensures that participants are aware of the research, its purpose, and their rights, including data handling.
45
What is stakeholder bias?
When data or interpretation is skewed to serve the interests of a stakeholder (e.g., a sponsoring company).
Example: Pfizer funds a clinical study to test the effectiveness of a new antidepressant it developed. Possible stakeholder bias:
The study design might compare the drug to a weaker alternative, making Pfizer's product look better.
Negative side effects might be downplayed in the published results.
Only studies with positive outcomes might get published, while less favorable studies are ignored (publication bias).
46
CIMO statement
Context – Intervention – Mechanism – Outcome. To clearly formulate managerial implications: “In [Context C], if [Intervention I] is applied, [Mechanism M] will lead to [Outcome O].”
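The CIMO template lends itself to a simple fill-in-the-blanks sketch; the example values below are hypothetical, not from the course:

```python
# Fill the CIMO template from the card with concrete (made-up) values.
def cimo(context, intervention, mechanism, outcome):
    return (f"In {context}, if {intervention} is applied, "
            f"{mechanism} will lead to {outcome}.")

print(cimo("fast-growing scale-ups",
           "weekly one-on-one check-ins",
           "increased perceived managerial support",
           "lower voluntary turnover"))
```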
47
What logic and strategy fit best with exploratory research?
Logic: Inductive or abductive. Strategy: Case study, interviews.
Abductive logic also fits here because of unexpected findings: suppose you are interviewing employees about motivation and unexpectedly find that some are demotivated by praise. You weren't looking for that. With abductive logic, you might hypothesize that this is due to cultural values around modesty, an explanation you can then explore further.
48
What logic and strategy fit best with theory-building research?
Logic: Inductive or abductive. Strategy: Case study (often multiple), sometimes interviews or grounded theory.
49
What logic and strategy fit best with theory-testing research?
Logic: Deductive. Strategy: Surveys, experiments.
50
What logic and strategy fit best with decision science research?
Logic: Deductive. Strategy: Mathematical modeling or simulations.
51
What is the difference between a research strategy and a data collection method?
Strategy = overall plan to answer the RQ (e.g., case study); method = how you gather data (e.g., interview).
52
Are interviews always part of a case study strategy?
No. Interviews can be standalone (strategy) or a method within another strategy (e.g., case study).
53
Is a survey a method or a strategy?
It can be both. As a strategy it refers to the overall design; as a method it refers to using a questionnaire.
54
What are key threats to validity in survey research?
Non-response bias, poor construct validity, sampling bias.
55
How can you improve reliability in interview studies?
Triangulation, consistent interview guide, researcher reflexivity.
56
What type of validity is often limited in case studies?
External validity – due to limited generalizability.
57
What are strengths of experiments in terms of validity?
High internal validity due to control and randomization.
58
What do you report in the Methods section of your paper?
Research strategy, sampling, operationalization, data collection & analysis procedures, validity precautions.
59
What belongs in the Results section?
Description of findings only, without interpretation. Use tables, quotes, graphs. ('Results' is the term for quantitative studies; 'Findings' for qualitative ones.)
60
What is the purpose of the Discussion section?
Interpret results, connect to literature, outline contributions, discuss limitations, suggest future research.
61
Where do you present your conceptual model and hypotheses?
In the Literature Review or at the end of the Theory section (not in the Results).
62
What are common biases in surveys?
Selection bias – the sample is not representative of the population. (Selection bias is part of sampling bias, together with non-response bias, exclusion bias, and volunteer bias.) Non-response bias – certain groups do not respond, which distorts results. Social desirability bias – participants give socially desirable answers. Common method bias – everything is measured via the same method (e.g., self-report). Poor construct validity – the questions do not measure the intended constructs well. Response fatigue – participants tire during long questionnaires.
63
What are common biases in experiments?
Selection bias – assignment to conditions is not random. Confounding variables – external factors influence the result. Experimenter bias – unintended influence from the researcher. Demand characteristics – participants guess the purpose of the experiment and adjust their behavior. Low ecological validity – artificial setting, far from reality.
64
What are common interview biases?
Informant bias – the respondent gives socially desirable or colored answers. Recall bias – unreliable memories in retrospective questions. Researcher bias – interpretation influenced by expectations or assumptions. Leading questions – steering questions influence answers. Inconsistent interview guide – differences between interviews reduce reliability.
65
What are common case study biases?
Selection bias – choosing 'easy' or 'successful' cases. Researcher bias – subjective interpretation of rich data. Limited data triangulation – only one source is used. Confirmation bias – only using data that confirms your hypothesis.
66
What is a Type II error?
Missing something interesting (a false negative: a real effect or relevant finding goes undetected). Mnemonic: 'Iets Interessants gemist'.
67
How can poor construct validity lead to poor internal validity?
If you don't know whether you measured X properly, you also can't be sure that X really caused Y.
68
What is construct validity?
Are you measuring what you think you're measuring?
69
What is a robustness check?
A robustness check tests whether the results of an analysis remain stable under reasonable changes in the model or data. It improves confidence in the reliability of the findings.
70
What are two examples of a robustness check?
1) Run the model with and without control variables: tests whether internal validity still holds when you remove/add controls. 2) Use a different operationalization of a key variable: tests whether your result is robust to how you define or measure that variable.
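The second example (re-operationalizing a key variable) can be sketched with toy data. Everything below (the variable names, the dichotomize-at-the-median choice) is a hypothetical illustration, not a prescribed procedure:

```python
# Robustness-check sketch: re-estimate a relationship under a different
# operationalization of the outcome and see whether the conclusion holds.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

satisfaction = [2, 4, 5, 3, 5, 1, 4]    # 5-point scale (made-up data)
hours_trained = [1, 3, 5, 2, 6, 0, 4]   # predictor

# Alternative operationalization: dichotomize satisfaction at its median.
median = sorted(satisfaction)[len(satisfaction) // 2]
satisfied_binary = [1 if s >= median else 0 for s in satisfaction]

r_original = pearson_r(hours_trained, satisfaction)
r_robust = pearson_r(hours_trained, satisfied_binary)

# If both estimates point the same way, the finding is robust to how
# the outcome variable is measured.
print(r_original > 0 and r_robust > 0)  # -> True for this toy data
```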
71
What are the four ways of being critical in reading and writing, and what does each one mean?
ORAT:
Critique of authority – question experts and commonly accepted theories.
Critique of tradition – don't follow conventional wisdom blindly.
Critique of rhetoric – examine how well claims are supported.
Critique of objectivity – reflect on and challenge bias in sources (and your own).
72
What makes academic writing good?
Message = content + purpose. Structure = logical order. Language = professional, clear, formal.
73
How can you turn limitations of a study into directions for future research?
Researchers can "flip" limitations into opportunities by showing how future research could overcome them. Examples:
A small or context-specific sample -> suggest a broader or comparative study.
Cross-sectional design -> propose a longitudinal or experimental follow-up (a cross-sectional design cannot establish causality well, because temporal precedence is missing).
Limited construct measurement -> recommend validated scales.
74
What are best practices for presenting quantitative results in tables and figures?
Use visuals to summarize, not repeat, the text.
Avoid clutter: keep tables/graphs clear and focused.
Always label axes and variables clearly.
Include titles and captions that explain the takeaway.
Only present relevant data; don't overload.
75
What four stakeholder groups must researchers consider when making ethical decisions throughout the research cycle?
Respondents – informed consent, privacy.
Sponsors – transparency, no hidden agendas.
Research team – fairness, integrity, no manipulation.
Scientific community – honest reporting, avoiding cherry-picking.
🧠 Tip: think R-S-R-S.
76
What is the difference between secondary data and secondary analysis?
Secondary data: you use data that someone else collected. Secondary analysis: you conduct a new study based on existing data.
77
What makes a literature review academically strong?
Relevant – directly linked to your question.
Up to date – includes recent studies.
Balanced – different views considered.
Critical – not just summarizing, but evaluating.
🧠 Tip: R-U-B-C.
78
What are first-level, second-level, and third-level codes in qualitative data analysis?
First-level codes: descriptive, summarize raw data (what is said).
Second-level codes: group first-level codes into themes (patterns/meanings).
Third-level codes: theoretical integration (linking themes to concepts/theories).
🧠 Tip: think of it as building a pyramid, from raw data to theory.
79
What are common threats to validity in qualitative research and which types of validity do they affect?
Informant bias, researcher bias, and idiosyncratic findings.

Informant bias → threatens construct validity.
You think you are measuring, say, "job satisfaction", but you are actually measuring what people think you want to hear, what they misremember, or how comfortable they feel being honest. The construct you want to measure is distorted by how the person answers.

Researcher bias → threatens internal validity.
Internal validity = is the causal relationship real? In qualitative research this means: does my interpretation really follow from the data, or is it colored by my own assumptions? Researcher bias means you see what you want to see (confirmation bias), focus on data that confirms your hypothesis, ask leading questions, or select and code subjectively. You think X leads to Y in your data, but in reality you read that effect into it because you expected it. The X → Y relationship seems present but is not real, so internal validity drops.

Idiosyncratic findings → threaten external validity.
The results are unique to the case and not generalizable.

🧠 Tip: IRC = Informant – Researcher – Case uniqueness = Construct, Internal, External.
80
What is p-hacking?
A form of unethical behavior: manipulating data or analyses until the results become statistically significant.
81
What is the difference between a narrative review and a systematic review? And what is a meta-analysis, and with which type does it belong?
Narrative review: sketches the context; suits theory-building. Systematic review: gives a complete overview; suits theory-testing. A meta-analysis is a quantitative method in which the results of multiple comparable empirical studies are statistically combined. A meta-analysis is only possible with a systematic review, because only then do you have a complete and transparent overview of the relevant studies.
82
What are the types of academic and professional literature?
Academic: conference/working papers, academic articles, academic books, theses. Professional: reports, professional journals, newspapers, social media.